Meta's AI election integrity plan faces first major test soon

Meta, formerly known as Facebook, is taking proactive steps to prevent the spread of misinformation and the abuse of artificial intelligence (AI) ahead of the EU Parliament elections. The company says it has invested over $20 billion in safety and security measures since 2016.

In an effort to improve transparency, Meta is adding labels to AI-generated content and implementing new ad restrictions. The company is focusing on combating misinformation, influence operations, and generative AI abuse. Meta has partnered with fact-checking organizations and publishes quarterly reports on its threat findings.

As generative AI technology advances, it raises the risk of disinformation and deepfakes impersonating political figures. To monitor AI content, Meta is partnering with independent fact-checkers to review fake, manipulated, or transformed content and demote it in users' feeds.

Additionally, Meta is developing tools to label AI-generated content and will let users disclose when material they share contains AI-generated video or audio. Advertisers will likewise be required to disclose whether they used AI to create their ads and to include a "paid for by" disclaimer on them.

Meta's Ad Library will show which ads are running, whom they target, and how much was spent on them. Advertisers will also need to complete a verification process confirming they are who they claim to be and that they reside in the EU.

Overall, Meta's efforts to combat AI abuse and misinformation ahead of the EU Parliament elections signal a commitment to election integrity and user safety. Through new labeling requirements and ad restrictions, Meta aims to give users more transparency and accountability in the digital space.
