Meta Will Begin Demanding Disclosures for AI-Manipulated Political Advertisements

Starting in 2024, advertisers who run political or issue-related ads on Meta's platforms will have to disclose when their ads are "digitally created or altered" by AI.

Advertisements about politics, elections, and social issues on Facebook and Instagram will soon require an additional step, which advertisers will complete when they submit new ads.

When an advertisement "contains a photorealistic image or video, or realistic sounding audio" that fits into one of a few categories, advertisers will have to disclose this information.

Meta's new rules are aimed at deepfakes: digitally altered media intended to deceive. The company will require disclosures for ads that were created or edited to depict someone saying or doing something they didn't.

Disclosures are also required for ads that depict photorealistic people who don't exist, realistic-looking events that never happened (including altered imagery from real events), or a "realistic event that allegedly occurred" that is "not a true image, video, or audio recording of the event."

Meta clarifies that standard digital edits such as cropping, sharpening, and other basic adjustments are not covered by the new disclosure policy. Digitally altered advertisements will be recorded in Meta's Ad Library, a searchable database of sponsored ads across the company's platforms.

Nick Clegg, President of Global Affairs at Meta, stated in a press release that "advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad."

The announcement of the new disclosure policy for social and political issue ads follows reports that Meta will also impose new restrictions on the kinds of advertisements its own generative AI tools can be used for.

The company released a suite of AI tools geared toward advertisers early last month. Among other things, the tools let advertisers quickly generate multiple versions of creative assets and automatically resize images to fit different aspect ratios.

As Reuters first reported, campaigns related to politics, elections, and social issues are not permitted to use those AI tools. This week, the company said it will also bar the tools from being used for ads in any industry dealing with "potentially sensitive topics," such as housing, employment, health, drugs, or financial services. Given the current regulatory scrutiny of artificial intelligence, the company could easily find itself in hot water with regulators in any of those areas, or in areas where Meta has already run afoul of the law, as with discriminatory housing advertisements on Facebook.

Lawmakers were already examining the intersection of AI and political advertising. Earlier this year, Senator Amy Klobuchar (D-MN) and Rep. Yvette Clarke (D-NY) introduced legislation requiring disclaimers on political advertisements created or altered using artificial intelligence (AI).

Commenting on Meta's new limits on its own in-house AI tools, Klobuchar said, "Deceptive AI has the potential to upend our democracy, making voters question whether videos they are seeing of candidates are real or fake." She added, "While Meta's decision is a positive start, we cannot rely solely on voluntary commitments."

While Meta is erecting guardrails around the use of AI in political and social issue ads, some platforms prefer to stay out of the fray entirely. TikTok bans political advertising altogether, prohibiting all forms of paid political content, including brand advertisements and paid branded content.