Tech giants such as Meta, Amazon, Alphabet, and Twitter have made significant layoffs in the teams dedicated to combating online misinformation and promoting trust and safety. These cost-cutting measures come at a time when cyberbullying and the spread of harmful content are increasing. Regulators have expressed concern over the downsizing of AI ethics and trust and safety teams, while experts warn of the potential impact on long-term user trust. Smaller companies may follow suit, and the gaming industry could suffer from reduced oversight of abusive activity.
Major tech companies, including Meta (formerly Facebook), Amazon, Alphabet (Google’s parent company), and Twitter, have significantly reduced the size of the teams focused on combating online misinformation and hate speech and on promoting internet trust and safety. The layoffs come as these companies prioritize cost-cutting and efficiency.
Meta, for instance, terminated a fact-checking project that took six months to develop as part of its widespread layoffs. The company was preparing to launch a key fact-checking tool that would allow third-party fact-checkers and credible experts to add comments to questionable articles on Facebook, thereby verifying their trustworthiness. However, CEO Mark Zuckerberg’s commitment to cost-cutting ended this ambitious effort.
The downsizing across the tech industry is occurring at a time when cyberbullying is on the rise and misinformation and violent content continue to spread, coinciding with the increased use of artificial intelligence. Layoffs in teams dedicated to trust and safety and AI ethics reflect how far companies are willing to go in the name of efficiency, even with the 2024 U.S. election season approaching.
The layoffs have affected companies in different ways. Twitter disbanded its ethical AI team and reduced its trust and safety department. Google cut about one-third of a unit addressing misinformation and radicalization. Amazon downsized its responsible AI team, while Microsoft laid off its entire ethics and social team.
The impact of these layoffs on online safety is concerning. With reduced investment in safety measures, companies struggle to keep pace with malicious activity and risk an erosion of user trust. Furthermore, the rise of chatbots and generative AI models is fueling the spread of fake accounts and toxic content, exacerbating the problem.
Regulators are closely monitoring the downsizing of AI ethics and trust and safety teams alongside the growing influence of AI. The Federal Trade Commission highlighted the paradox of cutting personnel dedicated to AI ethics and responsible engineering, stating that such reductions might raise concerns when assessing risks and mitigating harms.
While some experts argue that fewer trust and safety workers may not necessarily result in worse platforms, there is concern that critical roles in design and policy changes are being affected. Companies should consider the long-term financial benefits of maintaining trust and safety, even if they are not easily measured in short-term profits.
The repercussions of these layoffs extend beyond the major tech companies, as smaller peers and startups are likely to follow similar layoff strategies. The impact is particularly significant in gaming platforms like Twitch, where a reduced team could result in overlooking dangerous and abusive activities.
Ultimately, the tech industry’s cost-cutting layoffs in trust and safety and AI ethics teams pose risks to the fight against online misinformation and hate speech, raising concerns about the long-term effects on user trust and safety.