Adding features to the comment section is worth considering, and there are some well-tested techniques for encouraging less hate through more speech. As mentioned in a previous piece, we had two goals in mind at the start of our project, and running experiments in the comment section was one of our tactics for reaching them. Experimenting in the comment section was not just a tactic, though: it was a way to test earlier lab findings in a real-life setting, to get a better sense of what kinds of engagement users respond to, and, last but not least, to deliver on our initial promise of more meaningful interactions with users while striving for less aggressive behaviour.

Below we describe, from our own experience, what each of the 7 experiments involved, how it worked, and what results it produced.

We start with some of the easier features to implement: comment reaction buttons and comment ranking. Such functions can engage readers by letting them voice their opinion on other users' comments, up-voting them or penalising them (if there is a down-vote button). In the best-case scenario, the most thoughtful and interesting messages rise to the top, allowing readers to go through the comment section starting with the best content. Even if the best contributions do not get recommended, a lot can be learned about one's users by looking at what kinds of comments elicit positive or negative reactions. As mentioned in a previous piece, in our examination of comment management on 69 news and sports websites in several countries, we found that 36 of them enabled the recommendation and ranking of comments by popularity.

  1. We chose to use three reaction buttons: like (represented by a thumbs-up); respect (represented by an outstretched hand); and flag. Any reader could flag a comment, which sent the message to the top of the moderation queue and colored it red in the moderation system (these buttons were not available for comments posted through Facebook). The comment database retained the number of likes, respects and flags accrued by each comment, and the system let moderators see statistics on the number of reactions a user had both issued and received.

We decided to forgo a down-vote or dislike button because it did not directly serve the purpose of promoting good content – a goal that is better served by a respect button, as previous research from the Engaging News Project has shown.

A dislike button would also risk helping silence commenters with different or unpopular views by pushing their contributions farther downstream. A respect button, by contrast, has a better chance of making people engage with different political views, as the Engaging News Project found: "Among some participants, the 'Respect' button increased rates of clicking on counter-attitudinal comments by up to 50 percent compared to 'Like' and 'Recommend.'" (Stroud, Muddiman 2014)

  2. We also introduced an alternative way of viewing the comment section: ranking comments by popularity (the number of likes and respects combined). In this "most appreciated" view, comments with positive reactions climbed to the top of the queue and were ordered by popularity, while messages with no reactions were left in chronological order. Once a comment was moderated, it could no longer climb in position relative to the other comments. The point was to encourage users to climb the ranking with their comments and take pride in making the "most appreciated" list, and to signal that the newsroom was aware of the content they generated and made an effort to promote the best contributions (see the sketch after this list).
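To make the mechanics concrete, here is a minimal sketch of how the reaction counts, the flag behaviour and the "most appreciated" ordering could be wired together. The `Comment` shape, the function names and the queue handling are our illustrative assumptions, not the project's actual code.

```typescript
// Illustrative comment model: reaction counters plus the moderation state
// that freezes a comment's position in the ranking.
interface Comment {
  id: number;
  postedAt: Date;     // drives the default chronological order
  likes: number;      // thumbs-up reactions
  respects: number;   // outstretched-hand reactions
  flags: number;      // reader flags
  moderated: boolean; // moderated comments no longer climb the ranking
}

// Popularity combines likes and respects; flags deliberately do not count.
const popularity = (c: Comment): number => c.likes + c.respects;

// "Most appreciated" view: comments with positive reactions rise to the top,
// ordered by popularity; unreacted (and moderated) comments stay chronological.
function sortMostAppreciated(comments: Comment[]): Comment[] {
  const climbing = comments
    .filter((c) => popularity(c) > 0 && !c.moderated)
    .sort((a, b) => popularity(b) - popularity(a));
  const rest = comments
    .filter((c) => popularity(c) === 0 || c.moderated)
    .sort((a, b) => a.postedAt.getTime() - b.postedAt.getTime());
  return [...climbing, ...rest];
}

// Flagging sends the comment to the top of the moderation queue, where the
// moderation interface renders it in red.
function flagComment(c: Comment, moderationQueue: Comment[]): void {
  c.flags += 1;
  moderationQueue.unshift(c);
}
```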

Another feature that is easy to introduce in the comment section is comment highlighting. It needs to be implemented alongside an internal decision about who can use it and under what circumstances.

This is also a great way to encourage higher-quality contributions, and it is popular with big news media names like The Guardian and The New York Times, both of which use staff picks to highlight the best contributions, showing them first in the comment feed or in a separate column. A user whose comment is handpicked by the newsroom staff can take pride in this achievement within the community, and this effect has been confirmed by a recent study, which found that "receiving a recommendation or being selected as a 'NYT Pick' relates to a boost in how many times a commenter posts." (Muddiman, Stroud 2017)

Highlighting should mostly be applied to comments that have not been moderated; the same study found that comments containing profanity or swear words were less likely to receive NYT Pick status, although it did occasionally happen.

In our case, we decided that both moderators and newsroom staff (such as the author of a blog post) could highlight comments that "significantly improve the discussion, adding pertinent information or arguments or offering particularly interesting points of view, without necessarily representing a position that corresponds to the newsroom's views."
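As a rough illustration, highlighting can be modelled as a role-gated flag on the comment record. The role names and the policy check below are our assumptions, since the article does not describe the implementation.

```typescript
// Hypothetical roles; in our setup only moderators and newsroom staff
// (e.g. the author of a blog post) could highlight a comment.
type Role = "reader" | "moderator" | "staff";

interface HighlightableComment {
  id: number;
  moderated: boolean;
  highlighted: boolean;
}

// Returns true if the highlight was applied. The moderated-comment check
// reflects the policy of highlighting mostly unmoderated contributions;
// as noted above, staff picks elsewhere occasionally break this rule.
function highlightComment(c: HighlightableComment, actor: Role): boolean {
  if (actor === "reader") return false;
  if (c.moderated) return false;
  c.highlighted = true;
  return true;
}
```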

Another experiment we ran is equally easy to apply and requires no work from a website developer, which is useful if one is not available. It is called journalists' intervention.

It meant that moderators took turns commenting under an official ID in the comment section: addressing a question to a specific commenter or to the entire comment section, correcting a commenter (preferably on a factual error, not an opinion), or highlighting a valuable contribution from a previous commenter. This experiment did require some planning and training, mostly because we needed it to be systematic and carried out according to research requirements. The hope was that commenters would tone down their comments when the newsroom engaged with them in the comment section, as previous studies have shown.

We went further with the experiments and, consistent with our goal, introduced tags and post-moderation messages.

The tags were applied to a moderated comment to diagnose the type of problem it exhibited. They were not visible to readers and were used mainly for research purposes, as markers of the type of content that was moderated for our analyses.

The post-moderation messages, on the other hand, appeared under moderated comments and were visible to all readers, briefly explaining why an intervention was required. Their main purpose was to teach users what kind of speech was considered offensive and unacceptable, and to further explain the logic of the moderation procedure.

These messages appeared automatically, and combinations of them were generated depending on which set of tags was applied. If no tag fitted, none was applied and a generic message about moderation was shown: "This comment was moderated because it did not abide by the rules of the website." (A sketch of this mapping follows the table below.)

The following tags and post-moderation messages were in use:

| Tags (internal use only) | Post-moderation message (visible on website) |
| --- | --- |
| Incitement to violence | This comment was moderated because it contained incitement to violence. |
| Racism; Xenophobia; Sexism; Intolerance towards persons with diseases/disabilities; Intolerance on socio-economic grounds; Intolerance with regard to sexual orientation | This comment was moderated because it contained intolerant language. |
| Libel | This comment was moderated because it contained serious accusations which are not proven. |
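In code, this tag-to-message logic amounts to a small mapping with a generic fallback. The sketch below mirrors the table above; the tag identifiers and the combination logic are our assumptions about one plausible implementation.

```typescript
// Internal tag identifiers (hypothetical names mirroring the table above).
const INTOLERANCE_TAGS: readonly string[] = [
  "racism",
  "xenophobia",
  "sexism",
  "intolerance-disability",
  "intolerance-socioeconomic",
  "intolerance-sexual-orientation",
];

const GENERIC_MESSAGE =
  "This comment was moderated because it did not abide by the rules of the website.";

// Builds the visible post-moderation message from the applied tags,
// combining reasons when several tag groups match.
function postModerationMessage(tags: string[]): string {
  const reasons: string[] = [];
  if (tags.includes("incitement-to-violence")) {
    reasons.push("it contained incitement to violence");
  }
  if (tags.some((t) => INTOLERANCE_TAGS.includes(t))) {
    reasons.push("it contained intolerant language");
  }
  if (tags.includes("libel")) {
    reasons.push("it contained serious accusations which are not proven");
  }
  if (reasons.length === 0) return GENERIC_MESSAGE; // no tag fitted
  return `This comment was moderated because ${reasons.join(" and ")}.`;
}
```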

We also invented an experiment called the "small font mask": for about two weeks, all moderated parts of a comment were shown in a very small font instead of being replaced by "***"; the rest of the comment kept its regular font. The purpose of this short experiment was to see how users would react if they could, with some effort, read the moderated portions of someone else's comment, or if they saw their own post "twisted" in this way. We also saw the small font experiment as a teaching opportunity: commenters would now see both what was moderated and why, thanks to the explanatory post-moderation messages, which were retained.
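Rendering-wise, the small font mask only requires wrapping the moderated fragments in a styled span instead of replacing them with asterisks. The markup and class name below are hypothetical.

```typescript
// A comment is split into fragments, each marked as moderated or not.
interface CommentFragment {
  text: string;
  moderated: boolean;
}

// Moderated fragments get a tiny font (e.g. .moderated-small { font-size: 0.5em; }
// in the stylesheet) while the rest of the comment keeps its regular font.
function renderWithSmallFontMask(fragments: CommentFragment[]): string {
  return fragments
    .map((f) =>
      f.moderated ? `<span class="moderated-small">${f.text}</span>` : f.text
    )
    .join("");
}
```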

And the last experiment we ran was called the "preview experiment": an attempt to decrease the level of incivility and intolerance by reminding users that real human beings read their posts and that moderators supervise the comments.

The experiment had two phases:

  1. During phase one, half of the comments on the website received a warning. Upon drafting a comment and hitting "publish", users would see, 50% of the time, a pop-up with the following text: "Are you sure you want to publish the comment in this form? If your message does not abide by the rules of the website, it will be moderated."

They then had the option of clicking "I publish the comment in its current form." or "I want to check my comment." If they chose the latter, they could edit the comment and, upon hitting publish again, it would be posted (or queued for pre-moderation). In the comments database, we recorded which button each user pressed and what both the initial and the final versions of the comment looked like.

  2. In the second phase, a randomized warning message was applied to all comments on the website, with 4 new messages added to the one mentioned above.

The 4 new messages, shown at random, were the following (a sketch of the randomization and logging follows this list):

  1. Your comment will be moderated if it does not abide by the rules of the website. Please check if you would like to change something.
  2. The other users will take your comment seriously only if you treat everyone with respect. Please check if you would like to change something.
  3. Your comment should not offend anyone, even unintentionally. Please check if you would like to change something.
  4. Your comment should demonstrate the same politeness you would expect from someone addressing you on the street.
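Here is a minimal sketch of the two phases, assuming a simple random draw on the client and a log entry per submission; the message texts come from the article, while the function and field names are ours.

```typescript
// The five warning texts: the original phase-one pop-up plus the four
// messages added in phase two.
const WARNINGS: readonly string[] = [
  "Are you sure you want to publish the comment in this form? If your message does not abide by the rules of the website, it will be moderated.",
  "Your comment will be moderated if it does not abide by the rules of the website. Please check if you would like to change something.",
  "The other users will take your comment seriously only if you treat everyone with respect. Please check if you would like to change something.",
  "Your comment should not offend anyone, even unintentionally. Please check if you would like to change something.",
  "Your comment should demonstrate the same politeness you would expect from someone addressing you on the street.",
];

// What was recorded per submission: both comment versions, the warning
// (if any) and which button the user pressed.
interface PreviewLogEntry {
  initialText: string;
  finalText: string;
  warningShown: string | null;
  buttonPressed: "publish-as-is" | "check-comment" | null;
}

// Phase one: half of the submissions see the first warning; phase two:
// every submission sees one of the five messages, chosen at random.
function pickWarning(phase: 1 | 2): string | null {
  if (phase === 1) {
    return Math.random() < 0.5 ? WARNINGS[0] : null;
  }
  return WARNINGS[Math.floor(Math.random() * WARNINGS.length)];
}
```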