
Google Rolls Out New Disclosure Policy For Digitally Altered Ads To Combat Election Disinformation

Election season is in full swing, and search engine giant Google is pulling out all the stops to limit election disinformation.

The company has just updated its Political Content Policy, which covers digitally manipulated content such as pictures, videos, and audio. The new policy came into effect yesterday, and the company feels it's about time viewers knew exactly what they were looking at.

If the material shown has been digitally altered, or is entirely synthetic, viewers will now be told so. But what exactly are the criteria for the policy to apply in the first place?

According to the Android maker, the change covers any manipulated content that inauthentically depicts real people or events. That includes material showing a person saying or doing something they never actually said or did, footage of a real event that has been altered, and realistic depictions of events that never took place at all.

Meta, Facebook's parent company, rolled out a similar requirement for AI-generated political content in February of this year, and it makes sense that companies are moving to keep viewers informed.

Under the updated policy, advertisers can launch a campaign only after ticking a checkbox disclosing whether the content has been digitally altered or synthetically generated.

Google hopes to make this the standard for ad disclosures on politically themed content online, whether the ads run on mobile devices, televisions, computers, or social media platforms.

For other ad formats, advertisers can select the synthetic content option and then provide their own prominent disclosure, worded clearly and placed where users are likely to notice it.

Where the policy is violated, Google will issue a warning at least a week before taking serious action such as account suspension. The company also provides examples of acceptable disclosure language, such as noting that the audio was computer generated, that the image does not depict real events, or that the video was synthetically produced.

It's about time Google took matters into its own hands to curb the alarming spread of election misinformation. With AI making misleading content easier than ever to produce, and with the potential to sway voters and derail entire elections, the move was much needed.

Reports have already linked actors in Russia and China to this kind of manipulation aimed at influencing elections, so it's a global issue, and that's why plenty of tech giants are scrambling to contain it before it gets out of control.

Image: DIW-Aigen

