Iron Blogging - Network Effects of Shared (Wrong) Belief

This post is a part of the GroupLens Iron Blogging effort, so take that for what you will.

From Ben Collins:

Every day now, Chris wakes up to find strangers’ hate on his Facebook wall that he has to personally delete. Or he’ll Google Alison to find the people he has to thank for donating to her scholarships and he’ll see, instead, another conspiracy theory YouTube video, viewed 800,000 times over, that says Alison was in on it all along, and that she’s been given a new life and maybe plastic surgery by the government.

The above paragraph describes an increasingly common scenario in which an individual is targeted by large numbers of people and is on the receiving end of their anger and vitriol. In this particular case, the target is the boyfriend of a woman who was killed while being filmed. He is being told repeatedly (and seemingly angrily) that her death, and sometimes his very existence, is a falsified story (a “false flag”) put forward by the government in order to distract from or justify totalitarian actions.

There’s a lot of conspiracy-theory weirdness in this particular story, but that doesn’t detract from the horribleness that Chris (or, in general, the person being targeted) experiences. Nor does it change the way this horribleness occurs – over the internet, leveraging the immense scale and network effects that the internet and social media enable.

If you’ve been paying attention, you can likely name a number of other instances of this sort of thing: the Dentist (and his family) who killed Cecil the Lion, or Justine Sacco, who tweeted something offensive while getting on a plane. The list goes on.

In these two cases, I think, people would generally agree that the target did something gross: hunting endangered animals and saying racist things are generally considered bad. Intelligent people can have conversations about whether the severity of the backlash fit the “crime”, and many have. I generally believe the scale was disproportionate to the act, but that’s a conversation for another time. Let’s grant for a moment that what the Dentist and Ms. Sacco did warranted some form of social response to help delineate what the collective considers acceptable behavior. Platforms can, should, and (sort of) are doing something about the severity of that response.

There’s a difference, though, when the target of this distributed backlash is being targeted because of a conspiracy theory. The difference is explicitly about the validity of the reason. When the validity of the reason for this kind of response is called into question (setting aside how proportionate the response is), perhaps there’s more that platforms ought to be doing.

This is a tricky problem. In a related space, Kate Starbird believes that censorship (or “nipping false information in the bud”, to be charitable) breaks social processes in other ways.

Regardless of how tricky the problem may be, strategies for mitigating the harmful outcomes of social-network-scale anger and vitriol seem critically important. The power of network-effect scale is both what makes the internet such an exciting communication platform and what lets it turn into a mob. It’s a bit of a double-edged sword: ideally we’d keep the positives of network effects while minimizing or getting rid of the negatives. I’m not well versed in this space, but it seems there’s an interesting, really hard set of problems at the intersection of “tools to mitigate distributed emotional attack” and “keeping the things that people use social networks for”.