Facebook logo. (JOEL SAGET/AFP/Getty Images)
One of the greatest challenges confronting the modern Web is how to address the rise of horrific content that is increasingly toxifying it. For their part, social media companies have largely pivoted from their earlier public stance that free speech mattered more than countering terrorism, but even as they publicly tout their content moderation efforts and craft an image of success, the reality is that they are doing far worse than they acknowledge. One reason is that the companies face few legal repercussions for permitting such content to be shared on their platforms and, most importantly, actually profit from its distribution. Would imposing legal consequences on the social platforms finally force them to tackle horrific content?

There are few incentives today for social media companies to remove horrific content from their platforms. In most of the world’s legal jurisdictions, there are few laws governing such material, and those that do exist are typically limited to specific bans enforced only within that country. This means social companies confront little legal liability for hosting and redistributing horrific content.

In fact, social media companies actually benefit from such material. Horrific content tends to yield much higher rates of reposting, sharing, viewing, discussion and engagement than more mundane posts. Simply by allowing graphic murder images to remain on its platform, Instagram generated very real advertising revenue and enriched its interest and behavioral profile databases.

In short, social media companies profit directly from horror.

In turn, so too do their shareholders, including their senior executives. Mark Zuckerberg directly profited from the distribution of the Bianca Devins murder images by virtue of being a major shareholder of Instagram’s parent company, Facebook.

While they have little incentive to combat the sharing of horrific content, social platforms face very real legal and financial penalties for failing to stop the illegal sharing of copyrighted content through their services.

This is one of the driving forces behind a strange duality: the same companies that have done a fairly successful job of combating the unauthorized spread of known copyrighted works through their services fail disastrously at slowing the spread of blacklisted horrific content. The former carries strong legal and financial penalties, while the latter actually earns them revenue.

Automated image filtering that stops images from being uploaded in the first place, along with similarity- and hash-based blacklists, is a mature and extremely robust technology available today through myriad commercial offerings. Yet the tools deployed by the social platforms perform abysmally compared to these commercial offerings. In some cases, the same companies generating headlines for failing to stop horrific content from spreading on their services offer separate commercial content filters that successfully detected that very content.
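
To make the approach concrete, the following is a minimal sketch of how similarity- and hash-based blacklist matching works in principle, not a description of any platform's actual system. It computes a 64-bit perceptual "difference hash" with the Pillow imaging library and flags an upload whose hash falls within a small Hamming distance of any hash on a blacklist of known violating images; the file names and threshold are illustrative.

```python
from PIL import Image


def dhash(path: str) -> int:
    """Compute a 64-bit perceptual difference hash (dHash) of an image."""
    # Shrink to 9x8 grayscale so each of the 8 rows yields 8 adjacent-pixel comparisons.
    img = Image.open(path).convert("L").resize((9, 8))
    pixels = list(img.getdata())
    bits = 0
    for row in range(8):
        for col in range(8):
            left = pixels[row * 9 + col]
            right = pixels[row * 9 + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def is_blacklisted(path: str, blacklist: set, max_distance: int = 4) -> bool:
    """Flag an upload whose hash is near any hash of known violating imagery."""
    candidate = dhash(path)
    for known_bad in blacklist:
        # Hamming distance: number of bits that differ between the two hashes.
        if bin(candidate ^ known_bad).count("1") <= max_distance:
            return True
    return False


# Hypothetical usage: hashes of previously removed images form the blacklist,
# and new uploads are checked before the post ever goes live.
# blacklist = {dhash(p) for p in ["removed_1.jpg", "removed_2.jpg"]}
# if is_blacklisted("new_upload.jpg", blacklist):
#     reject_upload()
```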

Why then are companies not doing a better job of stopping the spread of horrific content?

The sad but simple answer is that they profit from it.

It would be quite trivial for Facebook to institute a contractual obligation to its advertisers whereby it automatically refunds any revenue it earns from showing ads alongside a post that it later removes as a violation of its terms of service. Such a policy would involve nothing more than adding a single line to its policy guide and linking two databases, and it would likely be welcomed by an advertising community that is becoming ever more wary of seeing its brands associated with such horrific content.
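
As a rough illustration of how simple that database linkage would be, the sketch below joins a hypothetical ad-revenue ledger keyed by post ID with the log of posts later removed for terms-of-service violations, and totals the refund owed to each advertiser. The record structure and figures are invented for the example and do not reflect any real Facebook schema.

```python
from collections import defaultdict

# Hypothetical ledger: revenue earned from ads shown alongside each post.
ad_revenue = [
    {"post_id": "p1", "advertiser": "BrandA", "revenue_usd": 12.40},
    {"post_id": "p2", "advertiser": "BrandB", "revenue_usd": 3.10},
    {"post_id": "p1", "advertiser": "BrandB", "revenue_usd": 7.25},
]

# Posts later taken down as terms-of-service violations.
removed_posts = {"p1"}


def compute_refunds(ad_revenue, removed_posts):
    """Total the refund owed to each advertiser for ads run against removed posts."""
    refunds = defaultdict(float)
    for record in ad_revenue:
        if record["post_id"] in removed_posts:
            refunds[record["advertiser"]] += record["revenue_usd"]
    return dict(refunds)


print(compute_refunds(ad_revenue, removed_posts))
# {'BrandA': 12.4, 'BrandB': 7.25}
```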

Doing so would provide advertisers with real-time metrics on how often their brands are being associated with horrific content, helping them place greater pressure on social platforms to do a better job at stopping such material.

Yet each time it has been asked whether it would commit to such a scheme, Facebook has steadfastly declined to do so.

What if social companies went beyond merely forfeiting the profits from horrific content and actually paid a fine for each violation? A standard schedule could set the fee based on the number of shares and views a violating post received before being removed, with the proceeds paid into a fund distributed to victims’ rights groups.
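
As a purely illustrative calculation of how such a schedule might work, the snippet below computes a fine that scales with a post's reach before removal; the per-share and per-view rates are invented for the example, not figures from any actual proposal.

```python
def violation_fine(shares: int, views: int,
                   per_share: float = 1.00, per_view: float = 0.01) -> float:
    """Fine scales with how far the post spread before it was removed."""
    return shares * per_share + views * per_view


# A post shared 5,000 times and viewed 2 million times before takedown
# would owe $25,000 into the victims' fund under these illustrative rates.
print(violation_fine(shares=5_000, views=2_000_000))  # 25000.0
```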

Sadly, neither of these scenarios is likely without government intervention. Horrific content is simply too profitable for social companies to risk actually trying to combat it.

Putting this all together, social companies are unlikely to take the removal of horrific content from their platforms seriously until they face consequences for failing to do so. Until companies are forced to relinquish the profits they earn from the horrors of society, and perhaps even face legal and financial liability for such material through government intervention, there is little hope things will get better.

In the end, it is important to remember that for all their rhetoric about being societal goods connecting the world, social media companies are merely for-profit entities willingly monetizing society’s worst horrors.
