
Google is implementing new rules that require disclosures for election ads created with artificial intelligence (AI) on its Google and YouTube platforms. The policy, effective in mid-November, aims to enhance transparency in the lead-up to the 2024 elections and to counter concerns about the spread of deceptive AI-generated content. Advertisers will be required to state clearly when election ads contain digitally altered or AI-generated elements; minor changes, such as image adjustments, will not require disclosure. The policy mirrors similar efforts by Meta’s Facebook and Instagram to address misleading content, such as deepfakes, in election campaigns.
The regulations come as emerging AI tools, such as OpenAI’s ChatGPT and Google’s Bard, raise concerns about how easily misleading content can be created and spread online ahead of the 2024 presidential and congressional elections.
In response to these concerns, Google will require disclosures for digitally manipulated or AI-generated election-related content. A Google spokesperson stated, “In light of the proliferation of synthetic content creation tools, we are taking a proactive step to mandate that advertisers reveal when their election ads incorporate digitally altered or AI-generated components.” The update builds on the company’s ongoing efforts to promote transparency in political advertising and to help voters make well-informed choices.
When the policy takes effect in mid-November, election advertisers will be obliged to indicate that advertisements featuring AI-generated elements were created by computers and do not depict actual events. Minor adjustments, such as changes to image brightness or resizing, will not require disclosure.
For election ads that have been digitally fabricated or modified, advertisers will need to include disclaimers such as “This audio was computer-generated” or “This image does not represent real events.” Google’s move aligns with the broader industry trend of addressing the challenges posed by AI-generated content in election campaigns.
Google is not alone in adopting such rules. Other digital advertising platforms, including Meta’s Facebook and Instagram, have already introduced policies addressing election ads and digitally manipulated content. Google began requiring identity verification in 2018 for advertisers running election-related ads on its platforms, and in 2020 Meta announced a comprehensive ban on “misleading manipulated media,” including AI-generated deepfake videos that could deceive viewers.