The Times Sharply Increases Articles Open for Comments, Using Google’s Technology

Credit: David Doran

Bassey Etim is the Community Editor for nytimes.com.

On its face, the relationship between a news outlet and its readers is fairly straightforward. Readers pay for a subscription or view an ad, and the news outlet uses the funds to report the news.

The advent of community made things more complicated. News outlets wanted to engage their readers on a large scale. Readers wanted to be heard. Comment sections evolved, and readers began to discuss issues with one another directly. At their best, comment sections became places for dynamic conversation and exchange; at their worst, they turned irrelevant or filled up with spam and vitriol.

To protect our conversations from bad actors, The New York Times’s community desk reviews almost all reader submissions by hand. With 12,000 comments moderated per day, this work is labor-intensive and has forced us to close comments on stories sooner than we would like, simply because we didn’t have the resources to sort through them all. Many of our best stories are never opened for comments at all.

That’s about to change.

We have implemented a new system called Moderator, and starting today, all our top stories will allow comments for an 8-hour period on weekdays. And for the first time, comments in both the News and Opinion sections will remain open for 24 hours.

Moderator was created in partnership with Jigsaw, a technology incubator that’s part of Alphabet, Google’s parent company. It uses machine learning technology to prioritize comments for moderation, and sometimes, approves them automatically. Its judgments are based on more than 16 million moderated Times comments, going back to 2007.

If The Times has innovated in the comments space, it is by treating reader submissions like content. The community desk has long sought quality of comments over quantity. Surveys of Times readers have made clear that the approach paid off — readers who have seen our comment sections love them.

In the summer of 2016, Jigsaw was grappling with a similar issue: how to improve the quality of online conversations.

The Times struck a deal with Jigsaw that we outlined last year: In exchange for The Times’s anonymized comments data, Jigsaw would build a machine learning algorithm that predicts what a Times moderator might do with future comments. In addition, The Times, Jigsaw and a digital product partner called Instrument would collaborate to create Moderator, an application built to take advantage of the machine learning that is now a part of the Perspective project, which spots abuse and harassment online.

“Publishers often rely on advertising, and advertising relies on reader engagement,” Jared Cohen, chief executive of Jigsaw, wrote in response to questions from The Times. Jigsaw’s efforts help “platforms to create more space to engage their readers in civil discussion.”

How The Times Will Use Moderator

Our new moderation platform diverges from the most common approach to organizing user-generated content, which is to prioritize each submission in the order it was received.

In Moderator, each comment is scored by the likelihood that Times staff members would make a certain judgment about it, i.e., approve or reject it.

To the Times moderator, each comment appears as a dot on a histogram, illustrated below.

Image: A histogram of incoming comments, each plotted by its predicted likelihood of rejection.

A comment’s placement on the chart indicates the probability that it would be rejected by a Times moderator. Moderator also tries to predict why the comment would be rejected (e.g., because it is inflammatory or insubstantial).

For many stories, Times moderators will be able to check the machine learning model against an article’s comments by reading through the submissions with, say, a 15 to 20 percent likelihood of being rejected. If those comments can all be approved, then a moderator might approve every comment in the 0 to 20 percent range. On our previous platform, we had to read each of those comments individually before it could be approved.
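The band-checking workflow described above can be sketched roughly as follows. This is an illustrative assumption, not The Times’s actual code: the field names, thresholds and probabilities are all hypothetical.

```python
# Hypothetical sketch of Moderator's band check: a human reads the
# comments near the cutoff, and if they look fine, everything below
# the cutoff is approved in one batch.

def comments_to_spot_check(comments, low=0.15, high=0.20):
    """Return the comments in the band a moderator reads first."""
    return [c for c in comments if low <= c["p_reject"] < high]

def batch_approve(comments, cutoff=0.20):
    """After the spot check passes, approve everything below the cutoff."""
    return [c for c in comments if c["p_reject"] < cutoff]

comments = [
    {"id": 1, "p_reject": 0.03},   # very likely acceptable
    {"id": 2, "p_reject": 0.17},   # in the spot-check band
    {"id": 3, "p_reject": 0.55},   # needs individual review
]
print([c["id"] for c in comments_to_spot_check(comments)])  # [2]
print([c["id"] for c in batch_approve(comments)])           # [1, 2]
```

The gain over the old workflow is that only the borderline band is read by hand; the rest of the low-risk range is cleared in a single action.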

Most comments will initially be prioritized based on a “summary score.” Right now, that means judging comments on three factors: their potential for obscenity, their toxicity and their likelihood of being rejected.

As The Times gains more confidence in this summary score model, we are taking our approach a step further — automating the moderation of comments that are overwhelmingly likely to be approved.
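A minimal sketch of how a summary score might prioritize the queue and auto-approve the safest comments follows. The equal weighting, threshold and field names are assumptions for illustration; the real weights and cutoffs are internal to The Times and Jigsaw.

```python
# Illustrative only: combines three model scores into one priority
# score, then either auto-approves a comment or sends it to a human.

def summary_score(obscenity, toxicity, p_reject):
    """Combine the three factors (here a simple average; the real
    weighting is not public)."""
    return (obscenity + toxicity + p_reject) / 3

def triage(comment, auto_approve_below=0.02):
    """Auto-approve overwhelmingly safe comments; queue the rest."""
    score = summary_score(comment["obscenity"], comment["toxicity"],
                          comment["p_reject"])
    return "auto-approve" if score < auto_approve_below else "human review"

queue = [
    {"id": 1, "obscenity": 0.01, "toxicity": 0.01, "p_reject": 0.01},
    {"id": 2, "obscenity": 0.30, "toxicity": 0.60, "p_reject": 0.70},
]
# Prioritize the review queue by summary score, highest risk first.
queue.sort(key=lambda c: summary_score(c["obscenity"], c["toxicity"],
                                       c["p_reject"]), reverse=True)
print(triage(queue[0]))  # human review (id 2, score about 0.53)
print(triage(queue[1]))  # auto-approve (id 1, score 0.01)
```

Sorting by the combined score, rather than by arrival time, is what lets moderators spend their attention on the riskiest submissions first.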

“The best part about machine learning models is they get better with time,” Mr. Cohen told The Times.

Our partnership with Jigsaw and Instrument builds on work we’ve done in partnership with The Washington Post, the Knight Foundation and Mozilla on the Coral Project, an effort that helps news sites accept and manage reader submissions on a large scale.

In the long run, we hope to reimagine what it means to “comment” online. The Times is developing a community where readers can discuss the news pseudonymously in an environment safe from harassment, abuse and even your crazy uncle. We hope you join us on the journey.

The Times community team will be responding to your questions in the comments.

