Twitter shared its own research this week about the platform’s algorithmic amplification of political and news content. A post on Twitter’s blog revealed that the social media platform promotes right-leaning content more often than left-leaning content. As a result, conservative posts on Twitter spark more outrage and gain greater amplification, much like content boosted by Facebook’s algorithm.
Twitter looked at millions of tweets posted between April 1st and August 15th, 2020 by elected officials in seven countries: the US, UK, Canada, France, Germany, Spain, and Japan. According to the 27-page research document, researchers found a “statistically significant difference favoring the political right wing” in every country except Germany.
Out of the seven countries examined, Germany was the only one that did not show right-leaning algorithmic bias — which could be related to Germany’s agreement with Facebook, Twitter, and Google to remove hate speech within 24 hours. Many users in other countries have even changed their country setting to Germany on Twitter to prevent Nazi imagery from appearing on their timelines.
The internal study also examined whether political content from US news organizations such as CNN, The New York Times, BuzzFeed, Fox News, and others was amplified on Twitter. Each outlet was placed in a category based on media bias ratings from two independent organizations, AllSides and Ad Fontes Media. The study found, “Right-leaning news outlets, as defined by the independent organizations listed above, see greater algorithmic amplification on Twitter compared to left-leaning news outlets.” To examine the algorithmic amplification of news outlets, Twitter analyzed millions of Tweets with links to articles shared by people during the same timeframe as the study, excluding non-political content such as recipes or sports.
Rumman Chowdhury, the head of Twitter’s Machine Learning, Ethics, Transparency and Accountability team, stated in the post, “The purpose of this study was to better understand the amplification of elected officials’ political content on our algorithmically ranked Home timeline versus the reverse chronological Home timeline.” The study explained that what users see on their Home timeline is a function of how they usually interact with the algorithmic system, combined with how the system is designed.
Twitter indicated in its research findings that it may now need to change its algorithm and that more studies need to be done to fully uncover why its Home timeline produced these specific results. Twitter emphasized that its findings could be “problematic” if certain tweets received preferential treatment as a result of how the Twitter algorithm was constructed, rather than how people interacted with it.
Chowdhury and Twitter’s research team were prompted to address algorithmic transparency following months of Facebook battling with Congress over its unregulated algorithms. Mark Zuckerberg announced that Facebook changed its corporate name to Meta in an attempt to rebrand the company’s reputation after a period of scrutiny over the harms its various platforms cause. The company has remained under intense fire for failing to share its internal research documents, which were obtained and disclosed by former Facebook product manager Frances Haugen.
Facebook’s leaked research exposed that the company knew about the dangerous effects of its amplification of misinformation and extremism — but prioritized profit over eliminating harmful content and protecting users. The revelations have increased calls for more government regulation of all major social media platforms.
The thousands of internal company documents provided to Congress by whistleblower Haugen have led 17 US news organizations to publish a series of stories called “The Facebook Papers.” The documents highlighted how the platform was used to facilitate human trafficking, hate speech, and global violence, including organizing the January 6 insurrection and sparking violence in Myanmar and Ethiopia. The data also showed the company’s automated systems deleted less than 1 percent of content that tried to incite violence, and only about 2 percent of hate speech, in 2019. Although the documents offer the deepest look yet at the harms caused by Facebook’s algorithm and the decisions made by the company, Facebook has pushed back against claims that it was aware of these harms.
Right-leaning users claim that social media platforms censor conservative posts — a claim liberals dispute, particularly after Twitter’s latest research showed the opposite. Many other Twitter users, however, are not surprised by the results of Twitter’s study, having witnessed how heavily their timelines are curated. “I’m not surprised at all by the result, but I do feel it’s a reason for a greater divide across social media,” shared Luke Tew, who works at a digital marketing company. He added, “No matter what side, stance, take or opinion on politics, anyone can join an algorithm online that only backs up their values.”
Jay Van Bavel, a psychology professor at NYU, has studied how humans form their identities by joining political parties and interacting with them on social media. He suggests in his research, “The incentive structure of social media (especially Twitter) encourages individuals to express their outrage because it feels good, signals their identity, solicits rewards (likes + retweets) and aligns with social norms.” In other words, algorithms on Twitter and Facebook reward people’s outrage by amplifying it, such as when a politician becomes a trending topic. When people share multiple angry posts, the algorithms generate more engagement, rewarding them with more feedback, follows, and further amplification of the controversial issues that ignite violence.
Twitter now plans to develop methods to make its databases available to third-party researchers who wish to extend these internal findings, in a way that protects the privacy of Twitter users. The company added, “We hope our findings will contribute to an evidence-based discussion of the role these algorithms play in shaping political content consumption on the internet.”