Executives from four of the biggest social media companies testified before the Senate Homeland Security Committee on Wednesday, defending their platforms against criticism of their respective safety, privacy and moderation failures in recent years.
Congress managed to drag in a relatively fresh set of product-focused executives this time around, including TikTok COO Vanessa Pappas, who testified for the first time before lawmakers, and longtime Meta executive Chris Cox. The hearing was convened to explore social media’s impact on national security broadly and touched on topics ranging from domestic extremism and misinformation to CSAM and China.
Committee Chair Sen. Gary Peters pressed each company to disclose the number of employees it has working full-time on trust and safety, and each company in turn declined to answer directly, even though they received the question prior to the hearing. Twitter General Manager of Consumer and Revenue Jay Sullivan offered the only numerical response, noting that the company has 2,200 people working on trust and safety “across Twitter,” though it wasn’t clear if those employees also did other kinds of work.
It’s no secret that social media moderation is patchy, reactive and uneven, largely because these companies refuse to invest more deeply in the teams that protect people on their platforms. “We’ve been trying to get this information for a long time,” Peters said. “This is why we get so frustrated.”
Sen. Alex Padilla (D-CA) steered the content moderation conversation in another important direction, questioning Meta Chief Product Officer Chris Cox about the company’s safety efforts beyond the English language.
“[In] your testimony you state that you have over 40,000 people working on trust and safety issues. How many of those people focus on non-English language content, and how many of them focus on non-U.S. users?” Padilla asked.
Cox didn’t provide an answer, nor did the three other companies when asked the same question. Though the executives pointed to the total number of workers who touch trust and safety, none distinguished between external contract content moderators and full-time employees dedicated to those issues.
Whistleblowers and others in the industry have repeatedly raised alarms about inadequate content moderation in other languages, an issue that receives too little attention due to a bias toward English-language concerns, both at the companies themselves and at U.S.-focused media outlets.
In a separate hearing on Tuesday, Twitter’s former security lead turned whistleblower Peiter “Mudge” Zatko noted that half of the content flagged for review on the platform is in a language the company doesn’t support. Facebook whistleblower Frances Haugen has also repeatedly called attention to the same issue, observing that the company devotes 87% of its misinformation spending to English-language moderation even though only 9% of the platform’s users speak English.
In another eyebrow-raising exchange, Twitter’s Jay Sullivan declined to specifically deny accusations that the company “willfully misrepresented” information given to the FTC. “I can tell you, Twitter disputes the allegations,” Sullivan said, referring to testimony from the Twitter whistleblower on Tuesday.
Source: TechCrunch