Turn Bad Members into Good Members in Online Communities


Are there ‘people’ in your online community who act like angry 11-year-olds? Do they ridicule others, make ridiculous claims, cause drama, and try to be as much of a pain in the ASP (if you’re using Microsoft technologies) as possible? Social networks expert danah boyd posted some thoughts about problem users, and asked how we might fix the problems that bring out their negative behaviors. I think I have some answers…

It’s All About Attention

danah writes: “people regularly seek attention (even negative attention) in public situations and […] public forums notoriously draw in those who are lonely, bored, desperate, angry, depressed, and otherwise not in best form. Mix this with the lack of social feedback and you’ve got a recipe for disaster. There are few consequences for negative behaviors, but they generate a whole lot of attention.”

That last sentence jumped out at me the most, and I think it cuts to the core of the issue once we put “attention” into the context of ‘reward’ or ‘positive outcome’ for people exhibiting negative behaviors.

Remember What Your Mother Told You

So what if we found a way of removing (or greatly diminishing) the attention given to people who exhibit negative behaviors in a community? There are plenty of places that have a “don’t feed the trolls” policy, but that’s a policy that is either:
a) unenforced, or
b) punishing to the people who react to negative behavior (even if they react negatively), rather than the person who initiates it.

Instead of only asking other community members to ignore trolls/flamers/attention-seekers, what if we also implemented ways to make those people more ignorable? Some communities have put basic sanctions in place already. reddit, for instance, and Digg (and Slashdot before them both), don’t initially display the contents of comments that have been down-voted beyond a certain threshold by other members.
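
To make the idea concrete, here’s a minimal sketch of that kind of threshold-based collapsing, loosely modeled on how reddit and Digg hide heavily down-voted comments. The threshold value and function names are my own assumptions, not any site’s actual implementation.

```python
COLLAPSE_THRESHOLD = -4  # hypothetical: collapse comments scored at or below this


def render_comment(body: str, score: int) -> str:
    """Return the comment body, or a collapsed placeholder if its score
    has fallen below the community's threshold."""
    if score <= COLLAPSE_THRESHOLD:
        return f"[comment below threshold ({score} points) — click to show]"
    return body
```

A comment at 0 points renders normally; one at -10 renders only as a placeholder until a reader deliberately expands it.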

But clearly, that’s not enough. Despite our example comment being below the threshold, it has still garnered 5 responses, some of which are even more negative than the initial comment itself. On Digg, the situation is even worse. And in both communities, people displaying negative behaviors not only still get attention; some even develop a reputation within the community for their trolling, which encourages other members exhibiting negative behaviors to emulate and try to out-do them.

Remove the Benefits, Keep the Consequences

Let’s classify the attention people get from acting out into three categories:

  1. Responses – Members insulting them, arguing with them, or providing some other kind of opening for the troll to respond with further attention-seeking negative behavior.
  2. Disruption – Some negative behavior is caused by feelings of powerlessness. Because it’s easier to be destructive than constructive, some will take a perverse joy in disrupting the attempts of others to be productive, like killing unsuspecting team members in a video game rather than working with them to defeat the enemy. A disruptive member seeks attention in the form of causing any effect, even if it’s a negative one.
  3. Reputation – Becoming known as someone who trolls certain topics, in a certain way, or using certain words. Many trolls would rather do a lot of easy work to become infamous, than do a lot of hard work to become famous.

Assuming there’s some mechanism in place for members to assist in moderation, whether it’s up/down voting or flagging, all of the above types of attention can be removed by adjusting thresholds, or at the judgment of moderation staff if you’re lucky enough to employ moderators or have a dedicated group of volunteers.

For years, moderators of message boards have simply closed forum threads they suspected of being trolling or otherwise inappropriate. Closing comments is a valid option in moderated communities, and preventing members from responding to a troll has even more legitimacy when it’s the members themselves who have designated a comment/post as trolling. For finer-grained control:

  • Community-moderated posts could display a reminder to those who reply to “not feed the troll”.
  • Community-moderated posts could allow replies, but no replies to replies; this allows members to respond or disagree, but prevents a troll from extending the amount of attention they receive.
  • Moderator-moderated posts could delay all replies from being publicly viewable until they’ve received moderator approval. This allows a moderator to let responses that are insightful disagreements or questions get through, but prevent people who are simply lashing back from giving the troll what it wants.
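
The bullet points above amount to a small reply policy. Here’s one way it might be sketched, assuming a simple comment model with a community-applied `flagged` state; all names, and the specific rules, are illustrative assumptions rather than any existing site’s behavior.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Comment:
    id: int
    parent_id: Optional[int]
    flagged: bool  # community-moderated as likely trolling


def reply_policy(parent: Comment, grandparent: Optional[Comment]) -> str:
    """Decide how to handle a new reply, given its parent comment and
    (if any) the comment that parent was replying to."""
    # "Replies, but no replies to replies": if the flagged comment is two
    # levels up, the sub-thread is closed to further responses.
    if grandparent is not None and grandparent.flagged:
        return "closed"
    # Direct replies to a flagged comment are allowed, but the reply form
    # shows a "don't feed the troll" reminder.
    if parent.flagged:
        return "allow-with-reminder"
    return "allow"
```

Swapping the return values for a moderation queue (“hold until approved”) would give you the third option instead.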

Even better than preventing a troll from receiving attention via responses, prevent them from even being recognized for it by anonymizing disruptive comments. Most negative behavior already occurs behind a veil of pseudonyms; transform that semi-anonymity into obscurity by removing the username from sufficiently down-voted/flagged comments. Only the central system that keeps track of total points/flags/karma should still know the identity of the troll.
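
In code, the key point is that the public rendering and the internal karma accounting see different things. A sketch, with the flag threshold and field names assumed for illustration:

```python
ANONYMIZE_FLAGS = 5  # hypothetical flag count that triggers anonymization


def public_view(author: str, body: str, flags: int) -> dict:
    """What readers see: the username disappears once a comment has been
    flagged enough times."""
    shown_author = "[removed]" if flags >= ANONYMIZE_FLAGS else author
    return {"author": shown_author, "body": body}


def karma_record(author: str, flags: int) -> dict:
    """What the karma system sees: the flags still count against the
    real author, so consequences remain even though recognition is gone."""
    return {"author": author, "flags": flags}
```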

Finally, remove any negative effects caused by the comments of poorly-behaved members.

  • Disemvoweling is becoming a popular method: “the net effect of disemvowelling text is to render it illegible or legible only through significant cognitive effort — this has the advantage of not causing offense to readers who do not spend time decrypting.”
  • In addition to removing the text of comments from sight, as reddit does, move all such comments to the bottom, or into a separate area altogether (“View Removed Comments”).
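
Disemvoweling itself is trivial to implement, which is part of its appeal. A minimal sketch:

```python
def disemvowel(text: str) -> str:
    """Strip all vowels, leaving text that is legible only with effort."""
    return "".join(ch for ch in text if ch.lower() not in "aeiou")


# disemvowel("You are all idiots") -> "Y r ll dts"
```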

Moderation vs. Censorship

Communities that implement the above solutions drastically reduce the benefits of bad behavior for problem members. But communities must also make sure that their efforts to discourage trolls don’t accidentally discourage good members. In a community where the moderation is done by a specific group of moderators, the above methods, properly used, can actually make it much easier for moderators to avoid accusations of censorship or abuse of power.

In a community where the moderation is done by the community, one of the main things the community administrators must do is make clear the purpose of the rating/flagging tools community members have access to. What does a down-vote on reddit mean? Does it mean the comment is poorly phrased, inarticulate, or unclear? Does it mean the down-voter disagrees with the commenter? Does it mean the comment contains trolling or other negative behavior? If no such distinctions are made, the thresholds for sanctions must be chosen much more carefully.
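
One way to make those distinctions operational is to track vote reasons separately and give each its own sanction threshold. This is a sketch of the idea only; the reasons and numbers are assumptions, not a recommendation for specific values:

```python
# Per-reason thresholds; None means that reason alone never triggers sanctions.
THRESHOLDS = {
    "unclear": 10,     # many "unclear" votes might prompt an edit request
    "disagree": None,  # disagreement is not misbehavior
    "trolling": 3,     # trolling flags should sanction quickly
}


def sanction_due(votes: dict) -> bool:
    """Return True if any sanctionable reason has crossed its threshold."""
    return any(
        limit is not None and votes.get(reason, 0) >= limit
        for reason, limit in THRESHOLDS.items()
    )
```

Under this scheme, a comment with fifty “disagree” votes is untouched, while three “trolling” flags trigger a sanction.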

What’s a Better Way?

What do you think of the above methods? How would you make them better? What would you do in addition? What would you do instead?

I hope you’ll leave some excellent ideas in the comments. I also hope that if you found my own ideas useful, you’ll subscribe to my RSS feed for more.