But better moderation is only part of the solution to controlling trolls and bots, and it does little to help readers understand who is commenting and where they are coming from.
The Washington Post recently announced that it would launch ModBot, a new software application that uses artificial intelligence to moderate comments. If the new application has, indeed, been launched, it is hard to see that it has made the Post's comment threads any more civil.
The Post’s approach to moderation is about cutting the costs of human comment moderation. With about a million comments posted by readers every month, the task is enormous, and there are only a few ways to handle the work. The Post currently uses a third-party company to do the job, screening only a small portion of the comments that come in. It likely hopes that, going forward, a big data approach will allow it to end this practice.
The Washington Post should know it has a problem: just last year it ran a column by David Lat, founder and managing editor of Above the Law, in which Lat explained why that site decided to shut off reader comments altogether.
“Reader comments in the early days of Above the Law were a treasure trove of information, insight and humor, advancing our mission of bringing greater transparency to an often opaque profession,” Lat wrote. “Over the years, however, our comments changed. They had always been edgy, but the ratio of offensive to substantive shifted in favor of the offensive.”
And so the decision was made to no longer allow commenting. As Lat explained:
“In part, our decision was based on science. Researchers have found that when readers are exposed to uncivil, negative comments at the end of articles, they trust the content of the pieces less. (Scientists dubbed this the “nasty effect.”) A study by the Atlantic found that negative comments accompanying a news article caused readers to hold the article in lower esteem. In an increasingly competitive media environment, websites can ill afford to have their content and brands tarnished in this way.”
The big challenge for many websites is how to handle comment moderation.
One approach is to simply let the comments post unmoderated. The advantage is that digital readership grows, though much of that readership is not the kind most advertisers would value. Nonetheless, for a publisher trying to beat the NYT on traffic numbers, this is the approach to use, and it requires no resources. A variation of this approach is what the Post uses: letting a third-party company apply a light touch to a subset of all comments.
The second way is to mimic the NYT’s approach: have comment threads open for a limited time so that moderators can handle the load. The value of this approach is that it does not cut off the conversation completely, though readers are often frustrated to find a thread closed before they can get involved in the conversation.
The third approach is used by smaller websites, including TNM before it turned comments off entirely: place comments in a queue and release them only after they have been moderated. The NYT does this as well, and the combination of approaches no doubt suppresses digital readership, but it leads to better comment threads.
But moderation is only one tool a publisher can use.
The biggest problem with reader comments today involves their origin. Who exactly is commenting: a paid subscriber, a casual reader, a troll, or a bot? In this regard, the Post is the worst of the worst, as its system tells you nothing about who is commenting.
Last summer, when TNM was publishing a second website that focused on the intersection of politics and the media, I reached out to several news organizations to see if they were monitoring who was commenting and where the comments were coming from. I was seeing increased commenting from eastern European countries, especially Russia; were they seeing the same, and if so, what were they doing about it? Now, this interference in the 2016 election is all anyone can talk about, but news organizations must have seen what was happening on their own websites. Why, I wondered, were they not talking about it, and changing their comment systems to adjust to the flood of trolls and bots?
I never heard back from the Post, nor the NYT on this matter.
The Post allows a reader responding to another reader’s comment to “Reply” to it, “Like” it, “Report” it, or “Share” it.
But what it doesn’t do is key: the Post’s system does not make it easy to see other comments by the same person, and it does not tell you where that person is commenting from or whether they are a paid subscriber.
Instead, the Post is allowing readers to stay anonymous and to create sock puppets in order to dominate the conversation. If it is monitoring IP addresses to prevent this, it has not admitted it.
“We don’t want to engage less, we want to engage more,” Greg Barber, director of digital news projects at the Post told Poynter last month. “We want to spend more time with our readers — and we’ve got a lot of them — who come to The Washington Post to contribute really thoughtful feedback, or to propose their own ideas. A service like ModBot allows us to do that.”
But the Post’s approach to the problem of comments is data driven, not reader driven, and that is where the Post goes off the rails.
Today, the Post comment threads feel like they are among the most trolled on the internet. Only Breitbart News has more alt-right comments each day, and maybe this is part of the Post’s business model. Certainly it has driven up their traffic numbers, but how many readers have decided that the site is not worth paying for due to the condition of its comment threads?
Home Page Photo: Anonymous Hacker by Brian Klug, used under Creative Commons Attribution 2.0 Generic
The post The Washington Post takes big data approach to reader comment moderation appeared first on Talking New Media.