Facebook’s announcement that its Oversight Board will decide whether former President Donald Trump can regain access to the account the company suspended, along with other high-profile moves by technology companies to address misinformation, has reignited the debate about what responsible self-regulation by technology companies should look like.
Here are three key ways social-media self-regulation can work:
Deprioritize engagement
Social-media platforms are built for constant interaction, and companies design the algorithms that choose which posts people see with the goal of keeping users engaged. Studies show that falsehoods spread faster than truth on social media, often because people find emotionally charged news more engaging, which makes them more likely to read, react to and share it.
Deprioritizing engagement in content recommendations should lessen the “rabbit hole” effect of social media, where people look at post after post, video after video. Apple CEO Tim Cook recently summed up the problem: “At a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement—the longer the better—and all with the goal of collecting as much data as possible.”
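What deprioritizing engagement could mean in practice is a ranking rule in which a post's engagement signals carry less weight than, say, the credibility of its source. The sketch below is purely illustrative; the field names, weights, and normalization cap are assumptions, not any platform's actual formula.

```python
# Illustrative sketch: rank posts so that source credibility outweighs
# raw engagement. All fields and weights here are hypothetical.

def rank_posts(posts, engagement_weight=0.2, credibility_weight=0.8):
    """Order posts with credibility dominating engagement."""
    def score(post):
        # Normalize engagement (likes + shares) into 0..1 with a simple cap.
        engagement = min(post["likes"] + post["shares"], 1000) / 1000
        return (engagement_weight * engagement
                + credibility_weight * post["source_credibility"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "viral",    "likes": 950, "shares": 40, "source_credibility": 0.2},
    {"id": "credible", "likes": 120, "shares": 5,  "source_credibility": 0.9},
]
print([p["id"] for p in rank_posts(posts)])  # credible post outranks viral one
```

With the weights flipped (engagement at 0.8), the viral post would win, which is roughly the status quo Cook's quote describes.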
Label misinformation
The technology companies could adopt a content-labeling system to identify whether a news item is verified. During the election, Twitter announced a civic integrity policy under which tweets labeled as disputed or misleading would not be recommended by its algorithms. Research shows that labeling works. Studies suggest that applying labels to posts from state-controlled media outlets, such as the Russian media channel RT, could mitigate the effects of misinformation.
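A labeling rule of this kind can be thought of as a simple policy function: flag the post, then exclude it from algorithmic recommendation. This is a hypothetical sketch, not Twitter's actual policy logic; the source list and field names are invented for illustration.

```python
# Hypothetical sketch of a labeling policy: disputed posts get a label
# and are excluded from recommendation. Names are illustrative only.

STATE_CONTROLLED_OUTLETS = {"state-outlet.example"}  # assumed placeholder

def label_post(post):
    """Return a copy of the post with a label and recommendation flag."""
    labeled = dict(post)
    if (post["source"] in STATE_CONTROLLED_OUTLETS
            or post.get("fact_check") == "disputed"):
        labeled["label"] = "disputed"
        labeled["recommendable"] = False
    else:
        labeled["label"] = "verified"
        labeled["recommendable"] = True
    return labeled
```

The key design point mirrors the policy described above: labeling and down-ranking are coupled, so a disputed label also removes the post from what the algorithm promotes.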
Crowdsource accuracy verification
Twitter recently announced that it is launching Birdwatch, a community forum for combating misinformation. While Twitter hasn’t provided details about how this will be implemented, a crowd-based verification mechanism that tallies up-votes and down-votes on trending posts, combined with newsfeed algorithms that down-rank content from untrustworthy sources, could help reduce misinformation.

The Conversation
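Since Twitter has not published Birdwatch's mechanics, the following is only a sketch of the generic idea described above: tally community votes per post, and down-rank posts whose net score falls below a threshold. The minimum-vote rule, threshold, and field names are all assumptions.

```python
# Illustrative sketch of crowd-based verification: tally up/down votes
# and push community-flagged posts to the bottom of the feed.
# Thresholds and field names are assumptions, not Birdwatch's design.

def community_score(up_votes, down_votes, min_votes=10):
    """Net approval in [-1, 1]; stays neutral until enough votes accumulate."""
    total = up_votes + down_votes
    if total < min_votes:
        return 0.0
    return (up_votes - down_votes) / total

def downrank_flagged(feed, threshold=-0.5):
    """Stable sort: posts below the community-score threshold sink last."""
    return sorted(
        feed,
        key=lambda p: community_score(p["up"], p["down"]) < threshold,
    )

feed = [
    {"id": "flagged", "up": 2,  "down": 48},
    {"id": "normal",  "up": 40, "down": 5},
]
print([p["id"] for p in downrank_flagged(feed)])
```

The minimum-vote rule guards against a handful of early voters burying a post, one of the obvious failure modes of any crowd-based mechanism.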