Essay Sample on Addressing Misinformation: Content-Specific Warnings

Before the growth of the Internet and social media, most people received news and information from printed newspapers and, later, television news. As the Internet has grown and matured, however, people have increasingly turned to online and social media platforms for information (Searching for Truth). According to a 2018 Pew Research study, “about two-thirds of Americans get their news from social media” (“Social Media”). The platforms most commonly used for news are Facebook, used by 36% of Americans, YouTube, used by 23%, and Twitter, used by 15%. On all of these platforms, nearly anybody, from professional news organizations to the average citizen, can publish information that reaches other users, increasing the “speed and breadth with which… disinformation can spread” (“Social Media”). In response, these companies have begun searching for solutions. To combat widespread and dangerous misinformation on their platforms, social media companies should issue content-specific warnings instead of generalized warnings.

Misinformation on social media can lead to very real consequences offline. The COVID-19 pandemic offers one example. During the pandemic's early stages, advice recommending the internal use of household disinfectants began to circulate, and calls to poison centers regarding exposure to disinfectants spiked as a result (Nelson et al.). Though claims like this one might seem ludicrous, some people clearly believe them, and acting on them can cause substantial harm. During times of widespread uncertainty, it is important that people can identify correct information so they can act accordingly. The abundance of misinformation on these platforms, however, keeps people from finding that information; instead, it leads them to hold false beliefs, such as about the correct use of disinfectants, and to disregard the public health measures vital to slowing the spread of disease.

Another notable example of misinformation producing severe consequences is the 2020 U.S. presidential election. After the results came out, Trump posted many messages on the prominent social media platform Twitter claiming that the electoral process was illegitimate and the results fraudulent. His efforts were effective: in a post-election poll, 78% of in-person voters who voted for Biden reported feeling very convinced that their votes had been counted correctly, compared with only 19% of Trump voters (Pérez-Curiel et al.). This widespread false belief, amplified on social media, had dire consequences. On January 6, 2021, thousands of Trump supporters stormed the Capitol Building to overturn the results, believing the election had been rigged. Misinformation spread across the nation through the Internet divided Americans so sharply, with nearly a four-fold gap between the beliefs of Democrats and Republicans, that it incited thousands of people to violence. It was Twitter's lenient policies on disinformation that allowed Trump to spread such a pervasive mistruth with such major consequences for the nation. As misinformation becomes more harmful and extensive, new solutions must be put in place. However, not all solutions are created equal.

Content-specific warnings are more effective at correcting false beliefs than non-specific warnings. A 2020 study examined the effectiveness of strategies that social media companies could use to counter misinformation, including general warnings and “Disputed” or “Rated false” tags (Clayton et al. 1). The general warning shown alongside news headlines to some participants consisted of a message about false information and ways to identify it, while the tags labeled the headlines as either disputed or rated false (9). According to the authors, “‘Rated false’ tags… are more effective at reducing belief in misinformation than the ‘Disputed’ tags,” and “the effect of a general warning is small compared to either type of tag” (22). Measures even more specific than those tags prove more effective still. According to a 2020 study published in the British Journal of Psychology, “short‐format refutations were found to be more effective than simple retractions after a 1‐week delay but not a one‐day delay” (Ullrich et al.). By refutations, the authors mean responses that address specifically what makes a claim false; by retractions, they mean responses that only state that a claim is not true. Therefore, specificity in responses is necessary to combat misinformation properly.

Another issue with general warnings is that invalid general warnings lead people to trust reliable sources and information less. A 2021 study attributed this to what its authors call the tainted truth effect (Freeze et al.). In the study, participants watched short recordings of speeches by U.S. House Representatives. They then read either an undetailed description with very little information, an article with true details corresponding to what they saw, or an article mixing true and false details that deliberately emulated a misinformation source. Half of the articles were accompanied by a misinformation warning and the other half were not. The study found that “invalid misinformation warnings can damage source credibility and cause people to reject accurate information that is associated with the tainted source.” A real-life example illustrates this behavior well. During his presidency, Donald Trump repeatedly asserted that The New York Times, CNN, and The Washington Post were fake news, or in other words, contained misinformation. This is a case of an authority figure warning an audience about misinformation. Correspondingly, there were “particularly notable increases in distrust” of these outlets among Republicans between 2014 and 2020 (Gramlich).

Another increasingly popular response to this problem is media literacy education, which some believe would be the most effective way of combating misinformation. Media literacy essentially describes knowing how to interact in a digital environment (Searching for Truth). A large part of it is learning critical thinking, which, proponents argue, would allow people to assess both the sources from which they get information and whether the information itself is true (De Abreu). However, media literacy education mainly targets school-age children and adolescents. Debate, for example, one of the activities promoted, is aimed primarily at middle school and high school students. Adults are affected far less by media literacy education because they are not the target of educational programs, yet they, especially the elderly, are frequently the victims of misinformation (Nelson et al.). Misinformation affects these people now and therefore must be dealt with now, and content-specific warnings remain the best solution in this regard. Though media literacy education may well help future generations, it is not practical to wait a generation for most people to become less susceptible to misinformation.

Overall, misinformation has become increasingly harmful and prevalent because of the role the Internet plays in spreading it. Its impact is evident in the public health harms of the COVID-19 pandemic and in the distrust of the electoral process during and after the 2020 presidential election. While warnings are generally effective solutions to this problem, it is important to differentiate between content-specific warnings and generalized warnings, as there are substantial differences between the two. Content-specific warnings have been shown to be more effective at correcting people's false beliefs, and generalized warnings carry the added disadvantage of damaging the credibility of valid sources and the correct information they contain. As misinformation comes to play an ever larger role in society and its crises, it is important to carefully consider every possible solution and the potential effects and implications that would shape its effectiveness.
