Curbing the ills of anti-social media
The incentives that drive political conversations online are in dire need of an overhaul.
We have previously explored the benefits of technologies that harness reliable citizen contributions to legislation and governance. The current state of political discourse on social media only underscores the need for such platforms. The seemingly democratic, well-intentioned ethos of these tools has been compromised, giving way to a “Wild West” of divisive noise, demagoguery and misinformation.
Meanwhile, the services rendered by Facebook, Twitter and Google form, to a tremendous extent, an eyeball-hungry oligopoly, where information and data have accrued under unchecked profit motives and with little regard for the very real consequences we now confront. The United States Congress hearings on the matter have demonstrated how vulnerable these sites have been to subversive foreign interference, accepting money from virtually anyone while facilitating the unrestrained spread of dubious political advertising.
Should these tech giants be regulated as utilities? Media outlets? Or a hybrid of all the interconnected, interdependent features they currently offer and will deliver in the future?
These are complicated questions to ask while anti-intellectual agitators continue to enjoy the benefits of widely followed Twitter accounts and Facebook pages, side by side with reputable sources. But as long as misinformation and deception are financially rewarding or easy to advertise, these channels will remain perpetually contaminated, and technology firms should commit themselves to eliminating the incentives to pollute. While participation in informational initiatives such as The Trust Project is a clear step in the right direction, these companies’ resources should be directed toward doing far more.
Instead of lax permissiveness toward conspiracies and fake news, notifications about the quality of sources and their potential biases should be implemented more thoroughly, particularly when users share links from unverified origins. Disclaimers, warnings and flagging mechanisms should also serve as brakes on the “virality” of questionable content. As fake news grows more sophisticated, future AI algorithms and applications should be developed with adaptable safeguards against the frivolous monetization of political content. Filters for spam and junk e-mail have become an industry standard: why should we accept permanent exposure to fringe actors peddling twisted facts and political enmity on social media?
To be clear: none of this should be equated with censorship or government control of the Internet, both authoritarian strategies that spread propaganda and silence dissidents. But with Facebook and Google now accounting for over 70% of Internet traffic, it would be foolish to assume that free speech principles will prevail without a reasonable set of parameters and adequate rules of engagement, especially in an environment of extreme corporate consolidation. These firms’ business models must recognize their responsibility not just for handling vast troves of information, but also for bolstering trust among users.
Until then, social media will remain polarizing, sensationalistic and fraudulent, at odds with the consensus-building that only verified facts make possible. Even as Silicon Valley insists that it can police itself, it should not underestimate the need for substantial outside help, and a significant dose of internal disruption, as part of the process.