The State of the Internet – Misinformation and the Role of Fact Checking

The internet has transformed how we communicate, access information, and connect with each other, but it has also brought new challenges, particularly when it comes to distinguishing fact from fiction. As misinformation and disinformation spread rapidly across social media and other online outlets, telling truth from falsehood is becoming increasingly difficult. And while fact-checking is crucial for ensuring the accuracy of information, some argue that fact-checking and content moderation are nothing but censorship and a threat to free speech. This raises important questions about the role of online platforms in fact-checking and moderating content, and about the responsibility they have to their users. In this article, we’ll explore where fact-checking stands today, how platforms are handling content moderation, and the impact these changes are having.

Fact-Checking: The Gatekeeper of Truth

At its core, fact-checking is the process of verifying the accuracy of claims, statements, or information. Fact-checkers, whether they are independent organizations or in-house teams at media outlets, comb through sources, data, expert opinions, and other forms of evidence to determine whether something is true, partially true, or entirely false.

Fact-checking is crucial for combating the spread of misinformation (unintentional falsehoods) and disinformation (deliberate falsehoods), and for holding the responsible parties to account. It is a safeguard against the spread of false or misleading claims. Without facts, making well-informed decisions becomes difficult.

Facts are the foundation of critical thinking, enabling us to assess situations, understand context, and evaluate consequences. In a democracy, where decisions affect the collective well-being, facts are essential for holding leaders accountable, shaping policies, and ensuring that citizens make informed choices. Without accurate information, we are more vulnerable to manipulation, division, and the erosion of trust in institutions. Without facts, we lose the ability to function as an informed, engaged democracy.

Content Moderation: The Protector of Platform Standards

While fact-checking is focused on verifying specific pieces of information, content moderation addresses what content is allowed, or not allowed, on platforms. Content moderation refers to the policies, practices, and tools used by platforms to enforce community guidelines and remove content that violates those rules. These rules can cover a wide range of concerns, including hate speech, harassment, violence, abuse, explicit content, and misinformation.

Content moderation is not a perfect system. Inconsistency and bias have allowed unmoderated content to proliferate, as we saw with hate speech in Myanmar on Facebook, and more recently in Britain on Twitter/X. As platforms such as Twitter/X and now Meta move away from content moderation and instead encourage users to submit “community notes”, citizens are left exposed to reduced accountability, reinforcement of existing biases, and a wider spread of false and misleading content.

Social media platforms use algorithms to promote content that generates strong reactions – likes, shares, comments – which of course leads to higher user engagement and increased ad revenue. As a result, emotional, sensational, and polarizing content, which often includes misleading opinions disguised as facts, receives more attention than carefully researched, fact-based reporting.
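
To make that incentive concrete, here is a minimal, hypothetical sketch in Python. The post fields and reaction weights are illustrative assumptions, not any platform’s real ranking formula; the point is simply that a score built only from reactions will surface the sensational post above the carefully sourced one.

# A toy illustration (not any platform's actual ranking code) of why
# engagement-weighted feeds tend to favor sensational posts.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int
    fact_checked: bool  # accuracy plays no role in the score below

def engagement_score(post: Post) -> int:
    # Reactions are rewarded; accuracy is ignored entirely.
    return post.likes + 2 * post.shares + 3 * post.comments

feed = [
    Post("Carefully sourced climate report", likes=120, shares=10, comments=5, fact_checked=True),
    Post("Outrage-bait rumor on the same topic", likes=900, shares=400, comments=350, fact_checked=False),
]

# The rumor ranks first because only reactions are measured.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(post), post.title)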

Linking Fact-Checking with Censorship

The linking of censorship with fact-checking is damaging because it creates a false narrative that trying to stop misinformation is the same as silencing free speech. Fact-checking isn’t about censorship; it’s about making sure people aren’t misled by harmful or false information. But by framing it as censorship, we allow disinformation to thrive unchecked. This not only spreads confusion but also undermines efforts to keep the public discourse honest and informed, especially when it comes to things like climate change, elections or health.

Social media platforms are under increasing scrutiny for how they moderate content, with accusations that they either censor too much or allow too much harmful content to flourish. There has been much debate over where to draw the line between offensive speech and genuinely harmful speech, and over who has a moral and legal responsibility to moderate extreme forms of speech that cause harm to others. We rely on free speech in a free society to be able to express ourselves, but what happens when that speech causes physical or psychological harm to others?

Real World Consequences

As many platforms have scaled back their fact-checking programs and content moderation efforts, the consequences for society have become increasingly apparent. Unchecked disinformation can divide societies, harm public health, and even threaten lives. When people are denied facts, social discourse suffers. Without a common understanding of what is true, meaningful conversations become difficult, and finding common ground is nearly impossible. This creates division, making it harder to address issues and build trust.

Misinformation about climate change, often perpetuated on online platforms, has caused confusion and delay in addressing environmental issues, making it difficult for people to know what is fact and what is fiction. When people knowingly share climate disinformation, it not only distorts scientific facts but also harms citizens’ health and future economic stability. Perpetuating falsehoods about the state of the planet often serves to protect interests in the oil and gas industry. This has left communities less prepared to handle the growing risks of extreme weather events, like floods, wildfires, and heatwaves, which are becoming more frequent and severe due to climate change.

During the Cambridge Analytica scandal in 2018, Facebook initially downplayed how badly users’ personal data was misused. They claimed it was just a small issue affecting only a few users, but in reality, it impacted over 87 million people. They also tried to shift the blame onto users, framing it as a privacy issue instead of addressing deeper problems in how they allowed third parties to access data. This misinformation spread through Facebook itself and public statements from the company, causing confusion and delaying the realization of just how serious the breach was.

We cannot overlook the profound impact that the spread of misinformation and disinformation by politicians and interest groups has on our democracies. From the U.S. to Europe, South America to Asia, the spread of false narratives and misleading claims has real, far-reaching consequences for citizens’ lives. Bad actors create distorted views of reality that align with their own agendas. They stoke anger, resentment, and fear. This sows confusion about what is actually true, fuels political polarization, and erodes trust in our voting systems and institutions. In some cases, it has incited violence and even cost lives.

Avoiding Responsibility

Google and LinkedIn are scaling back their commitment to supporting the fact-checking community under the European Union’s Code of Practice on Disinformation. Fact-checking has never been a central part of Google’s content moderation process, but the company would have been required to add fact-checking to Google and YouTube search results under the EU code.

Twitter (now X) significantly reduced its fact-checking initiatives and dismantled its misinformation moderation team. In 2023, the platform ceased enforcing policies that flagged misleading information about COVID-19 and other topics, signaling a move toward prioritizing free speech over content scrutiny. Twitter/X also pulled out of the EU Code of Practice in 2023.

Meta, which runs Facebook, Instagram, and Threads, has also scaled back its content moderation in recent years and has reduced its third-party fact-checking program. Meta recently announced it is closing its fact-checking program in the US, and will likely do the same in Europe.

Snap, TikTok, and Discord have also reduced their moderation efforts.

Who is Protecting Us?

If we can’t count on platforms to protect us from harmful content, who can we rely on?

In Europe, the Digital Services Act (DSA), the Code of Practice on Disinformation, and, more recently, the Code of Conduct on Disinformation are meant to hold online platforms accountable, pushing them to take stronger action against disinformation and to be more transparent about how they moderate content. They also require big platforms to be more proactive in identifying and removing harmful content, with stricter rules for handling disinformation during elections and crises. With big companies like Meta and Google now withdrawing from their commitment to supporting fact-checking under the Code, organizations are calling on the European Commission to act resolutely.

In the U.S., Section 230 of the 1996 Communications Decency Act states that no service provider is responsible for content provided by someone else, which essentially means that social media platforms are not liable for what their users post. In 2018, a bill was passed adding exceptions for content related to sex trafficking. There have been ongoing talks and proposed bills to regulate big tech, but no major new laws have passed yet. The political divide over free speech versus regulation, combined with powerful tech industry lobbying, has made it hard to reach a consensus.

In the UK, the Online Safety Bill seeks to regulate harmful content on digital platforms, including disinformation, and to impose fines on companies that fail to protect users. In 2022, changes were made to increase protections for children, but some argue the law goes too far by allowing backdoors in end-to-end encryption and the scanning and filtering of all content. This raises questions about legislation, censorship, and privacy.

Striking a Balance

The decision to scale back content moderation and fact-checking by major online platforms has serious consequences for citizens and society. It erodes trust in reliable sources, fuels division, and puts the public and democracy at risk. The argument often arises that fact-checking may infringe on free speech, but it’s crucial to distinguish between free speech and efforts to prevent harm by stopping the spread of false information.

While platforms have a role in managing content, and governments have a role in ensuring that information is accurate and responsible, it’s just as important for us as individuals to build media literacy and critical thinking skills to better navigate the world. Sites such as Wikipedia, PolitiFact, FactCheck.org, and Snopes have become trusted sources for many people to verify facts. We all have a responsibility as citizens of democratic societies to verify facts and ensure the information we share is accurate, and that speech does not harm others.
