By: Claire Denny
In the fall of 2021, a former Facebook employee provided the United States Senate Committee on Commerce, Science, and Transportation with written testimony illustrating the harmful effects of the Facebook algorithm. She also provided the Committee with thousands of pages of leaked internal documents to substantiate her concerns. One leaked page included a slide from an internal presentation by a Facebook executive that stated, “Our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked it will feed more and more divisive content in an effort to gain user attention and increase time on the platform.” Other documents leaked by the whistleblower included internal research showing that the algorithm harmed the platform’s teenage users by promoting harmful content, and that it tends to spread misinformation, hate speech, and polarizing user content. Despite this, the Facebook algorithm remains unchanged.
The Facebook whistleblower, Frances Haugen, is an American data scientist and computer engineer. After graduating from Harvard, Haugen gained expertise in computer science and data engineering while working for several tech companies, including Google, Yelp, and Pinterest. However, Haugen was drawn to working for a social media platform because she wanted to play an active role in establishing a better social media environment online. In 2019, Haugen was hired as an algorithm product manager at Facebook and handled data science issues pertaining to the spread of misinformation and counterespionage. She described her job as being “largely focused on algorithmic products, like Google Plus Search and recommendation systems like the one that powers the Facebook news feed.”
While at Facebook, Haugen worked on a team that addressed issues of civic integrity and civic misinformation. This position gave Haugen a first-hand understanding of how the Facebook algorithm operates and how it has harmed the platform’s users. Although the algorithm was initially designed for the Facebook website, the company uses the same algorithm across all of its platforms, including Instagram. The algorithm operates in a way that maximizes engagement on the platform by promoting and amplifying content that is likely to elicit a response from users. However, the content that gets amplified, and is therefore shown more frequently to users across the platform, is often problematic and harmful. Internal research conducted by Facebook on its algorithm has shown that it expands and amplifies “hate speech, misinformation, violence inciting content, and graphic and violent content” because users are more likely to engage with and react to this type of content. Despite this, Facebook executives have consciously chosen to continue using the algorithm because it increases user engagement on the platforms and, ultimately, company profits.
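To make the mechanism concrete, here is a minimal sketch in Python of an engagement-based ranker. The post fields, weights, and scoring function are entirely hypothetical illustrations, not Facebook’s actual code; the point is only that a feed ordered by predicted engagement will surface whatever content draws the most reactions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    comments: int
    shares: int
    reactions: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: active interactions (comments, shares)
    # count more than passive reactions. Reporting on the leaked
    # documents suggests Facebook similarly favored "meaningful
    # social interactions."
    return 3.0 * post.comments + 2.0 * post.shares + 1.0 * post.reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Engagement-based ranking: order the feed purely by predicted
    # engagement, with no downranking of divisive or harmful content.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Neutral local news summary", comments=4, shares=2, reactions=30),
    Post("Outrage-inducing hot take", comments=90, shares=40, reactions=120),
])
print([p.text for p in feed])
# ['Outrage-inducing hot take', 'Neutral local news summary']
```

Because divisive posts tend to attract disproportionate comments and shares, a ranker like this amplifies them automatically; no human editor ever has to choose them.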
Facebook and Instagram have also conducted extensive internal research on the effects of social media use on teenage users. Haugen explained that Facebook’s own research shows that teenagers’ use of its platforms increases the risk of mental health issues, eating disorders, and depression. Teenage users repeatedly reported that Instagram made them feel bad about their bodies and that they were “unhappy when I use Instagram and I can’t stop.” Although Facebook executives are aware of these harmful effects, users’ accounts are still pushed toward radical or harmful content by the same algorithm that causes and perpetuates these issues. In her testimony, Haugen stated, “I saw Facebook repeatedly encounter conflicts between its own profits and our safety. Facebook consistently resolved these conflicts in favor of its own profits.”
Under current United States law, social media platforms like Facebook can continue to operate and deploy harmful algorithms because there is very little congressional oversight of these issues. Regulating social media content and use has been especially difficult because it inevitably implicates intellectual property ownership as well as First Amendment rights. In addition, drafting and implementing effective regulations requires significant expertise in computer science and data engineering.
One current federal law, Section 230 of the Communications Decency Act, shields internet companies from liability for content posted by their users. 47 U.S.C.A. § 230. Numerous individuals have called for reform of this provision in order to hold social media platforms to a higher standard of accountability. In addition, two bills before the House of Representatives aim to restrict algorithm usage by social media platforms. The first, the Filter Bubble Transparency Act, would require social media platforms to offer a version of their service that does not use engagement-based rankings to promote certain content to users; instead, users could use the platform without algorithm-generated recommendations. The second, the Algorithmic Justice and Online Platform Transparency Act, states its purpose as: “To prohibit the discriminatory use of personal information by online platforms in any algorithmic process, to require transparency in the use of algorithmic processes and content moderation, and for other purposes.”
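As a rough illustration of what the Filter Bubble Transparency Act contemplates, the sketch below adds a user-facing toggle: with engagement-based ranking switched off, the feed falls back to plain reverse-chronological order. Again, the data model and field names are hypothetical, not drawn from the bill text or any platform’s code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    engagement_score: float  # predicted engagement (hypothetical model output)
    created_at: datetime

def rank_feed(posts: list[Post], engagement_ranking: bool) -> list[Post]:
    if engagement_ranking:
        # Default mode: opaque, engagement-based ordering.
        return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
    # Opt-out mode contemplated by the bill: a transparent,
    # reverse-chronological feed with no recommendation logic.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

posts = [
    Post("Outrage-inducing hot take", 9.7, datetime(2022, 3, 1, 9, 0)),
    Post("Friend's vacation photos", 2.1, datetime(2022, 3, 1, 12, 0)),
]
print([p.text for p in rank_feed(posts, engagement_ranking=False)])
# ["Friend's vacation photos", 'Outrage-inducing hot take']
```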
A congressional report issued last year, Social Media: Misinformation and Content Moderation Issues for Congress, further explores the issue. Other proposals include breaking up big tech companies, narrowing the liability shield when promoted content causes harm, and creating a regulatory body to oversee the industry. Social media reform seems likely to remain a subject of debate as internet use continues to pervade daily life.