We spend much of our time at the Institute for Media and Public Trust studying misinformation and disinformation that is shared on social media, and part of our mission is to empower news consumers with the tools to spot false content.

We learned in 2020 that this quest remains a major challenge in our society. Technology has allowed tricksters and those who are duped by them to easily spread lies. Much of the phony content this year involved the presidential election and the COVID-19 pandemic. But misinformation and disinformation have been pushed on a variety of topics, including the false claim that Alex Trebek’s family was “left in tears” after his death because of his net worth. That wasn’t true, but the clickbait headline got people to read what was actually an advertisement.

Let’s pause to define the terms we are using. According to Dictionary.com, “misinformation” is inaccurate information spread on social media regardless of any intent to mislead, while “disinformation” is false information knowingly created and spread for nefarious reasons. The two terms are often used interchangeably, but they are not the same. Be careful when using them so you convey the meaning you intend.

Another term you will hear in discussions about spreading false information on social media platforms is “bots.” Let’s go to some experts on the subject to explain what bots do, and why they are problematic. “They reside on social media platforms, created by someone with computer programming skills, comprised of nothing but code, that is, lines of computer instructions,” according to the Center for Information Technology and Society at UC Santa Barbara. “So, bots are computer algorithms (set of logic steps to complete a specific task) that work in online social network sites to execute tasks autonomously and repetitively.”

CITS reported that in 2017, there were an estimated 190 million bots just on Twitter, Facebook and Instagram. By using bots, those pushing disinformation can overwhelm a social media platform with false content that is then retweeted by actual people.

According to a bipartisan report released by the U.S. Senate Intelligence Committee, Russian trolls used Facebook to try to persuade Black voters not to vote in the 2016 presidential election, or not to vote for Democrat Hillary Clinton. Here is the full Senate report.

Janey Lee, Ph.D., assistant professor of journalism at Lehigh University in Pennsylvania, says that in today’s world it is difficult to spot fake news content. “Even experts have a hard time weeding out fake accounts and automatic messages,” she said in this NBC News report. Bots are programmed to pick up controversial keywords and hashtags and then post false content based on that information. “Those messages can create a perception of serious political polarization and huge divisions in society,” Lee said in the 2019 news article.

False content not only can mislead Americans about public policy, which can threaten our democracy during an election, but also can provoke violence. The false 2016 “Pizzagate” rumor about a pedophilia ring, spread on social media, nearly ended in tragedy when a man showed up with guns at the District of Columbia pizza parlor named in the rumor. He was apprehended after firing at least one shot.

You may wonder how someone could actually believe such an outrageous conspiracy, but if it gets repeated thousands of times on your social media platform of choice, some people will believe there is something to it.

So let’s not be part of the problem by unknowingly spreading false information on social media. Here are some tips that we have been assembling at our Media Institute that can help you identify phony content:

  1. Look past your personal biases. This is crucial in sorting out false content. We often believe the worst about people or politicians we despise. Those biases can blind us to what we are sharing on social media, even if there are red flags that suggest the stories may not be factual.
  2. Do you recognize the source of the news item? Be skeptical if it comes from a source you’ve never heard of. That doesn’t mean it’s false; it may simply come from an obscure but legitimate news outlet. But take extra time to confirm the facts on sites you don’t recognize.
  3. Use search engines to see if anyone else is reporting this particular story. If the story is as big as the social media post makes it out to be, other news outlets will surely publish their own versions at some point.
  4. Check the link in your browser. Many fake news sites try to mimic actual news sites. The link might have a slight variation from the legitimate news site. If the link looks odd, that’s another red flag.
  5. Look at other stories on the website. Does the content pass the “smell test?” Check out the writing style. Do the stories on the site have excessive capital letters, exclamation points, obvious grammatical errors, or other oddities that suggest the content may not be reliable?
  6. Read the “Contact Us” and “About Us” links. Are they working, and do they give information that is helpful? Can you email the story’s author and get a response?
  7. Go to fact-checking sites. Use them to see what they say about the news story before you post it on social media. Try factcheck.org, snopes.com, politifact.com, or other fact-checking sites. And if you have questions about the quality of a particular fact-checking site, use multiple fact-checking sites to verify the information. There are dozens of them.
  8. Be skeptical. It will help make you a smart news consumer.
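The link check in tip 4 can be partly automated. Below is a minimal Python sketch that flags a URL whose domain closely resembles, but does not match, a trusted outlet’s domain. The trusted-domain list, the naive two-label domain extraction, and the 0.8 similarity threshold are all illustrative assumptions for this sketch, not part of the tips above:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative list of outlets the reader already trusts (an assumption for this sketch).
KNOWN_DOMAINS = ["nytimes.com", "apnews.com", "bbc.com", "reuters.com"]

def domain_of(url):
    """Naively extract a registrable domain: the last two dot-separated labels of the host."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return ".".join(host.split(".")[-2:])

def looks_like_spoof(url, known=KNOWN_DOMAINS, threshold=0.8):
    """Flag a URL whose domain is similar to, but not the same as, a trusted domain."""
    domain = domain_of(url)
    if domain in known:
        return False  # an exact match is the real site
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in known
    )

# A look-alike with "rn" masquerading as "m" is flagged; the real site is not.
print(looks_like_spoof("https://www.nytirnes.com/story"))  # True
print(looks_like_spoof("https://www.nytimes.com/story"))   # False
```

A heuristic like this only catches near-miss spellings; it says nothing about a site’s actual accuracy, so the remaining tips still apply.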

Please let us know if you have questions or suggestions about our work at the Media Institute. There is a comment section at the end of this post.