As the US braces for the 2024 election season, AI is continuing to reshape the way we consume news. With the rise of generative AI, it has become harder to tell real content from fake, whether it's imagery or news articles. Social media platforms have come under fire for promoting fictitious content and spreading misinformation. However, the blame doesn't fall solely on the shoulders of TikTok and Instagram, nor does the responsibility of preventing it. Google knows this, and ahead of the 2024 US elections, it's starting to take precautionary measures.
On its company blog, Google has outlined how it intends to approach the upcoming election season in the US through several measures. The tech giant acknowledged the many issues that AI now presents, ranging from misleading advertising to altered campaign content.
To combat AI-related misinformation, Google notes that it now has several protections in place, including a policy that requires advertisers — specifically those associated with election campaigns — to disclose AI-altered ads. YouTube creators must also notify viewers when they include any AI-generated or altered content in their videos. Google has also been developing generative AI features of its own lately, and it will take responsibility for them ahead of the elections. Its Search Generative Experience tool, which uses AI to add context to search results, will be vetted for cybersecurity vulnerabilities and misinformation. The same goes for Bard, Google's AI chat service.
Google isn’t just focused on how AI may spread misinformation — it’s looking into how the technology may help manage issues as well. For instance, the company has existing policies in place to limit how manipulated media, false claims, and more impact the democratic process. Now, it will look into how to leverage Large Language Models (LLMs) to help enforce these policies.
The company also notes that its Advanced Protection Program is available to candidates, staffers, campaign workers, and political journalists. The program is intended for high-visibility individuals in the public eye, helping to keep their most sensitive online data safe. Google has also partnered with Defending Digital Campaigns, a nonprofit, nonpartisan organization that provides cybersecurity protection, to help campaigns secure their documents and communications in Google Workspace and Gmail.
These moves aren’t indicative of a recent shift in priorities for Google. For instance, the company updated its political content policy earlier in 2023. The changes were made to take generative AI into consideration when combatting the spread of misinformation. That being said, some editing techniques were exempt from the policy — specifically, any alterations that are deemed “inconsequential.”
The preparations Google is making ahead of the 2024 US elections may be in good faith. However, many of them may prove difficult to enforce from a policy standpoint. Additionally, some bad actors may simply be willing to breach these policies and accept the minimal consequences. Only time will tell how much these initiatives truly help limit misinformation.