How Do We Detox Online News Comments?
May 16, 2017 at 9:02 am · Filed under jb exploits, online culture, social media and tagged: online news, Ramona Pringle, Sean Stewart, Steve Ladurantaye, SXSW
Ramona Pringle, a colleague of mine at Ryerson University, invited me to join a wonderful panel for South by Southwest (SXSW) this year that tackled the daunting problem of toxic online news comments. As Ramona put it, how can we build systems that lead to constructive conversation rather than Lord of the Flies horror stories?
One of the best things about doing a panel like this is the conversations it spawns before and after the event. Every time I mentioned this topic to people – including journalists, Austin ride-share drivers, academics, waiters, coders, you name it – they showed far more interest than I anticipated. A huge number of Americans comment on the news (you’ll find some interesting stats below) and just about everyone I talked to had some thoughts about what needed to change to make online comments more civil. I share a laundry list of those ideas at the bottom of this post, but first, here’s a bit about our conversation at SXSW:
Ramona brought to the panel her colleague at Canada’s CBC News, Steve Ladurantaye, who has been working on the frontlines of news and user-generated content (UGC) since his time as director of news and politics at Twitter Canada. This guy’s seen it all, and he was dead serious when he said that the psychological effects of being a comment moderator can be compared to PTSD.
She also brought in Sean Stewart, a game designer and science fiction author, to discuss how we might approach online commenting as a design problem. In fact, the germ of the idea for the panel had come from a conversation they’d had about how surprisingly well-behaved people were in the online comment boards for The Beast, Sean’s break-out Alternate Reality Game (ARG), which engaged the public in an international, real-time online and offline murder mystery that promoted the release of the film A.I.: Artificial Intelligence. After the snarky film news site Ain’t It Cool News caught wind of the game, The Beast website received 25 million hits in one day.
Since Ain’t It Cool News is infamous for its commenters’ anti-social behavior, everyone – the game designers and the players alike – was shocked that interactions among players of the game were collaborative and constructive, as they helped one another gather real-world clues and solve the mystery together. Everyone wanted to know, “Why are we assholes when we’re on Ain’t It Cool News but angels on The Beast?”
Sean’s theory was simple: people behave better when you give them something to do. He noticed that whenever the participants were given problems to solve they collaborated beautifully, but once they were less occupied they’d start reverting to the kind of aggressive, anti-social behavior that characterizes comments on so many news and information sites.
Rewards for certain types of behavior – particularly the type of rewards that translate into social proof – could be integrated into news comment sections, much as they have been in gaming leaderboards. This might mean giving news commenters points, badges, status, and rewards for engaging in desired actions. The key, once again, is social proof: other people need to be able to see these indicators of quality or engagement in order for them to have social currency.
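As a toy sketch of what such a system might track behind the scenes – the point values and badge names here are purely invented for illustration:

```python
# Hypothetical point values for desired actions; a real site would tune these.
POINTS = {"comment_upvoted": 1, "flag_confirmed": 5, "answer_featured": 10}

# Badge thresholds, highest first; badges are displayed next to the
# commenter's name so other users can see them (the social-proof part).
BADGES = [(100, "Trusted Commenter"), (25, "Helpful"), (5, "Newcomer")]

def badge_for(total_points):
    """Return the highest badge a commenter has earned, if any."""
    for threshold, name in BADGES:
        if total_points >= threshold:
            return name
    return None

actions = ["comment_upvoted", "flag_confirmed", "comment_upvoted"]
score = sum(POINTS[a] for a in actions)
print(score, badge_for(score))  # 7 -> "Newcomer"
```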
Steve from CBC has experimented with different ways to positively engage audiences and found some success. One method that worked was sorting comments by most recent rather than by most popular (the most popular comments are often the most polarizing). But some topics have proven too difficult to attract civilized comments: the CBC currently does not allow comments on stories about indigenous populations because the public commentary is so toxic.
Another tactic that Steve has found to work quite well is having the journalist contribute to the conversation. His impression was that most aggressive commenters back off quickly once they are reminded that the reporter they’re railing against is, in fact, a human being who is reading those cruel comments.
Let’s Look at Some Data
The Engaging News Project at UT Austin, which has done a lot of terrific research on online news comments, found support for this tactic. However, I was surprised that its December 2015 national survey of news commenters and comment readers revealed some resistance to inviting journalists into the public commenting space: 61% of commenters said they welcome factual clarifications from journalists, but only 41% said they’d like journalists to actively join the conversation, and only 26% sought their guidance there.
This sentiment was echoed in other parts of this fascinating survey, which used a nationally representative sample. Among those who participate in online news comments in one way or another, either as readers of comments or writers of them, 42% said that they don’t want any policing of comments whatsoever, and another 31% weren’t sure either way. The Wild West insanity that we witness there now seems to be A-OK, or at least acceptable, to most of the inhabitants of this world.
You might be wondering: who are these people? The survey found that half of Americans either read or write comments on news sites, and most of this activity takes place on local news sites. Among the most striking findings for me was that commenters skew more male and have lower levels of education and income than those who read comments but don’t write them.
Here’s another kicker: 40% of people who read comments (but don’t write them) say that they read comments because of their entertainment value. This was the second most popular reason, after learning about other people’s opinions (46%), suggesting that the informational aspect of news comments may not be as important as we thought (or was that just me?). While we presumably consume the news to find out accurate things about the world, we don’t necessarily consume news comments for the same reason. However, in this age of “news as entertainment,” these findings might feed a growing fear that facts are not the only things, or maybe even the main things, that audiences are looking for in news media.
The Laundry List
Here’s a quick overview of some of the tools we might use to improve civility in online news comments:
Artificial Intelligence
Jigsaw, Google’s AI research arm, just released Perspective, a tool that identifies toxic language and can flag comments that are most likely loathsome. In conjunction with human moderation, this appears to be an incredible tool. Some fear, however, that it will be left to run on its own, which could trigger the removal of perfectly civil posts and the retention of some carefully cloaked hate speech.
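For the curious, here’s a minimal sketch of what querying Perspective looks like. The endpoint and response shape below follow the v1alpha1 REST API that Jigsaw documented at launch (they may have changed since), and you’d need to request your own API key:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; request a real key from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text):
    """Return Perspective's TOXICITY score (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    result = response.json()
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Flag high-scoring comments for human review rather than deleting outright,
# since the model can miss carefully cloaked hate speech and misread civility.
if toxicity("reading the article might help you") > 0.8:
    print("queue for moderator review")
```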
Prompts
Using AI like Perspective, it would be possible to intercept posts before they’re published, warning commenters right after they hit “submit” that the language they’re using is toxic. This would give posters an opportunity to reflect on their language and resubmit (or leave the site in a huff).
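A sketch of that intercept-and-warn loop, reusing the hypothetical `toxicity()` scorer from the previous snippet (the 0.8 threshold is arbitrary and would need tuning against a site’s own moderation data):

```python
def handle_submission(text, already_warned=False):
    """Intercept a comment at submit time; warn once before publishing."""
    if toxicity(text) > 0.8 and not already_warned:
        # Hold the comment and nudge the poster to reconsider their language.
        return {"published": False,
                "prompt": "This comment may come across as hostile. "
                          "Edit it, or resubmit as-is."}
    return {"published": True, "prompt": None}
```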
Metadata
What if each comment included some relevant metadata about the commenter? One big complaint about news comments is that it’s often evident that the poster didn’t actually read the article. What if the site posted the amount of time the poster spent on the page, or the number of comments that person had posted that day? (Thanks to Cyrus for this one!)
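A sketch of what attaching that context to a comment might look like; the field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CommentMeta:
    seconds_on_page: int  # time spent on the article before commenting
    comments_today: int   # comments posted by this account today

def byline(username, meta):
    """Render the metadata next to the comment for all readers to see."""
    minutes, seconds = divmod(meta.seconds_on_page, 60)
    return (f"{username} · read the article for {minutes}m {seconds}s · "
            f"comment #{meta.comments_today} today")

print(byline("alice", CommentMeta(seconds_on_page=252, comments_today=37)))
# alice · read the article for 4m 12s · comment #37 today
```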
Social proof
This is a specific subset of metadata: information such as the number of likes and followers someone has confers status and can help create an environment in which people have some guidelines for behavior.
Ranking
There are all kinds of comment ranking systems, but the one we talked about most on the panel was Reddit’s. On that site, users can upvote or downvote posts, so universally panned posts sink out of visibility pretty quickly. Since attracting attention is a key motivator for certain flame-throwing commenters, invisibility is a painful price to pay. I was fascinated to discover that Reddit goes a bit further: its algorithm also gives a lower ranking to controversial posts that get similar numbers of upvotes and downvotes. So a post with the same number of upvotes but fewer downvotes will be more visible to users, creating (presumably) more civil discourse.
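Reddit has open-sourced its sorting code, and its “best” comment sort is widely reported to use the lower bound of a Wilson score confidence interval on the upvote fraction, which has exactly this effect: controversial posts (similar up and down counts) and barely voted posts both rank low. Here is a minimal sketch of that idea, not Reddit’s exact production code:

```python
from math import sqrt

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the 95% Wilson score interval for the true upvote
    fraction. Penalizes both controversy and small vote counts."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    return (p + z*z/(2*n) - z * sqrt((p*(1-p) + z*z/(4*n)) / n)) / (1 + z*z/n)

# A controversial post ranks below a less-voted but well-liked one,
# even though it collected three times as many upvotes overall.
print(f"{wilson_lower_bound(60, 40):.2f}")  # 0.50
print(f"{wilson_lower_bound(20, 2):.2f}")   # 0.72
```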
Kicking the bums out
The main reason people generate any kind of user-generated content is that they hope to attract some attention to their thoughts, ideas, products, weird proclivities, what have you. By depriving people of attention (kicking them off the board after X number of infractions), a news site can create a rational disincentive to bad behavior.
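A bare-bones sketch of such a policy – the threshold and the notion of a per-user strike record are assumptions for illustration:

```python
MAX_INFRACTIONS = 3   # the "X" above; a site would tune this
strikes = {}          # username -> confirmed infractions

def record_infraction(username):
    """Log a moderator-confirmed infraction; suspend at the threshold."""
    strikes[username] = strikes.get(username, 0) + 1
    if strikes[username] >= MAX_INFRACTIONS:
        suspend(username)

def suspend(username):
    # The real penalty is the loss of an audience, not the account itself.
    print(f"{username} suspended: no more posts, no more attention.")
```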
Constructive journalism
I’ve been doing research on solutions journalism – reporting that focuses on responses to social problems, not just the problems themselves – and so I really perked up when someone suggested that the kind of combative, gotcha journalism that we often encounter these days triggers the vitriolic exchanges we see in the comments section.
Formatting
The Engaging News Project ran a study comparing a one-column comment section (the typical format) to a three-column format, using the legalization of marijuana as the topic. In the three-column format, comments were clustered by whether they were pro-legalization, anti-legalization, or questions/other comments about the issue. The results were mixed, but people preferred the three-column format and were more likely to leave comments there.
As you can probably tell, I’m just scratching the surface here. I can’t wait to hear the results from the University of Connecticut, which recently received a $2 million investment from the Templeton Foundation to fund ten scholars working to improve online civil discourse. Let’s hope they come up with some game-changing ideas for re-vamping our online public sphere, where there are far too many barriers to meaningful civic engagement.
The Unintended Consequences of Technology
March 1, 2017 at 12:00 pm · Filed under artificial intelligence, jb exploits, media impact, online culture, social media and tagged: filter bubble, IEEE, Ramona Pringle, recommendation engines, Technology & Society
An article of mine on the “Technologies of Taste” has just come out in Technology & Society, a publication of the Institute of Electrical and Electronics Engineers (IEEE). It’s a fascinating special issue exploring the “Unintended Consequences of Technology.” As the guest editor, Ramona Pringle, explained to me, the focus wasn’t on “the dark side” of tech, but rather on the complicated nature of our increasingly connected lives.
The call for papers, however, emphasized the danger of not carefully examining our relationship to new technology:
With all great innovation comes responsibility; and with the exponential growth of technology, the window within which we can examine the ethics and consequences of our adoption of new technologies becomes increasingly narrow. Instead of fear mongering, how do we adjust our course, as a society, before it is too late?
My piece explores the role that recommendation systems play in our online pursuits of knowledge and pleasure. How is our personal taste affected by finely-tuned commercial algorithms that are optimized to sell us products and monetize our attention? While Eli Pariser and others have argued that these systems place us in “filter bubbles” that insulate us from new ideas, I argue that companies like Google, Amazon and Netflix have strong commercial incentives to develop recommendation systems that broaden their customers’ horizons rather than limiting them, effectively bursting filter bubbles rather than reinforcing them.
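To make the incentive argument concrete, here is a toy re-ranker that blends predicted relevance with novelty – the kind of horizon-broadening adjustment a platform could make. The weighting scheme is invented for illustration and is not any company’s actual algorithm:

```python
def rerank(candidates, novelty_weight=0.3):
    """Blend predicted relevance with novelty when ordering recommendations.

    Each candidate is (item, relevance, novelty); novelty could measure
    distance from the user's history. With novelty_weight=0, the ranking
    reinforces a filter bubble; raising it deliberately stretches taste.
    """
    def score(c):
        _, relevance, novelty = c
        return (1 - novelty_weight) * relevance + novelty_weight * novelty
    return sorted(candidates, key=score, reverse=True)

catalog = [("more of the same", 0.9, 0.1),
           ("adjacent genre", 0.7, 0.6),
           ("wildcard pick", 0.5, 0.9)]
# The novelty term lifts "adjacent genre" above the pure-relevance favorite.
for item, *_ in rerank(catalog):
    print(item)
```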
This couldn’t be a more timely argument considering that concerns about filter bubbles have grown exponentially during the last presidential election cycle. What complicates the debate about filter bubbles is that each site — whether it’s primarily an ecommerce, social media, search or content platform — has very different goals in mind and different proprietary algorithms in place to achieve them. I hope this article triggers a more thoughtful conversation when people claim that ideological insularity is the obvious outcome of filtering and recommendation technology.