What Works and What Doesn’t Work?

Reddit:

In April of 2020, Reddit installed a new program aimed at curbing misinformation. As more individuals sought news and information on social media platforms and online forums, Reddit saw a mass influx of misinformation being shared and consumed on the platform. A new misinformation report category was added to each community, or "subreddit," as Reddit calls them. The platform's intent was for this report option to act as a check and balance: users who saw misinformation could tag the article for moderators to review. So why didn't this work?

The main issues were human error, disagreement among users, and a lack of knowledge. A surplus of credible information was being reported as fake news while actual misinformation continued to spread. According to data posted by Reddit, in March 2023 only 16.18% of content reported for misinformation was removed by moderators. Reddit then set stricter rules on posted content: moderators no longer strictly remove content, but instead look for opinion and bias and label each Reddit post as what it is. The company concluded that you cannot fully restrict or prevent misinformation, but you can label it for what it is.

Other journalists have taken notice of the policy change and have written about it in a positive light. For example, James Ball published an article in the Columbia Journalism Review called "The Most Civilized Place to Look at News Online? It Might Be Reddit." Ball discussed the growth of the subreddit r/worldnews, which currently houses 36 million members (Reddit data, 1). The article examines the impact of the new Reddit guidelines, how they allow credible news to spread, and how well they label misinformation and opinion pieces published by users. I think that, with time, these guidelines will adapt to the volume of information posted and begin to work better. Overall, I think this is a great start: other users acknowledge the misinformation they see, and professional monitors do not remove content but label it as opinion, bias, or misinformation. It can educate users to spot misinformation and teach them how to find it on other platforms as well.

X:

In April of 2023, X posted this statement: "You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm ('misleading media'). In addition, we may label posts containing misleading media to help people understand their authenticity and to provide additional context" (X Help Center, 1). The statement accompanied an article explaining the steps X would take to fight misinformation: any posts found to be fabricated in any way would be taken down, along with other serious misinformation on the app. This was to be regulated by monitors on the app, who would decide what is appropriate information and what is not.

Does this work? Based on subsequent reporting, it has not done much to stop the spread of misinformation (Elliott). I believe the mistake was assuming you can simply remove all of the misinformation shared on an app, especially one as big as X. There is also a newer feature that resembles what Reddit is doing: a section now appears under each post where users can "add context." This differs from Reddit in that moderators are not the ones labeling the content; instead, any user can add whatever context or information they deem necessary to the post, which in turn can lead to the spread of more misinformation.

I think trying to regulate misinformation on a platform like X can be extremely difficult due to the culture and size of the app. If I were to brainstorm an approach to fight misinformation there, I would follow Reddit's style of combating it. I would not try to take down content, constantly monitor the app, or let others add context to posts, which can completely alter what someone has already posted, truthful or not. Instead, I would have a group of monitors search posts and label them as opinion pieces, bias, or misinformation (a rough sketch of what that could look like follows below). This way we are not limiting what people can post or say, but letting other users know that the posts they are viewing are not, in fact, truthful and reliable information. This can help people learn what misinformation, and opinions hidden as fact, look like.
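To make the idea concrete, here is a minimal sketch of such a label-don't-delete system. Everything in it is hypothetical: the Label categories, the Post type, and the review_post function are my own illustration, not any platform's actual API. What it demonstrates is the design choice above: a moderator review attaches a label to a post rather than removing it.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Label(Enum):
    """Hypothetical moderator-applied labels; posts are tagged, never removed."""
    OPINION = "opinion"
    BIAS = "bias"
    MISINFORMATION = "misinformation"


@dataclass
class Post:
    author: str
    text: str
    label: Optional[Label] = None  # None means no moderator has reviewed it yet

    def render(self) -> str:
        """Always show the original text; moderation only prepends a label."""
        tag = f"[{self.label.value.upper()}] " if self.label else ""
        return f"{tag}{self.author}: {self.text}"


def review_post(post: Post, label: Label) -> Post:
    """A monitor attaches a label instead of deleting the post."""
    post.label = label
    return post


if __name__ == "__main__":
    post = Post(author="user123", text="Study proves X cures Y!")
    review_post(post, Label.MISINFORMATION)
    print(post.render())  # [MISINFORMATION] user123: Study proves X cures Y!
```

The key property is that render always shows the original text: moderation never censors the post, it only adds information for the reader to weigh.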

Rather than imposing strict content controls, a system where monitors categorize posts as opinion, bias, or misinformation could help users identify unreliable information without stifling expression. This approach encourages critical thinking and educates users on recognizing and evaluating misinformation across platforms, ultimately fostering a more informed online community.

Works Cited

Ball, James. “The Most Civilized Place to Look at News Online? It Might Be Reddit.” Columbia Journalism Review, 13 Feb. 2024, www.cjr.org/the_media_today/reddit-real-time-news-twitter-substitute.php.

Elliott, Vittoria. “Elon Musk’s Main Tool for Fighting Disinformation on X Is Making the Problem Worse, Insiders Claim.” Wired, 17 Oct. 2023, www.wired.com/story/x-community-notes-disinformation/.

jkohhey. "Updating Reddit's Report Flow." Reddit, 4 May 2023, www.reddit.com/r/modnews/comments/137ylvi/updating_reddits_report_flow/. Accessed 11 Apr. 2024.

"Synthetic and Manipulated Media Policy." X Help Center, Apr. 2023, help.twitter.com/en/rules-and-policies/manipulated-media.

