Meta Under Fire for Approving Inflammatory Ads in India

By Rahul Patel · 2 min read

Meta, the parent company of Facebook, is facing scrutiny once again after a watchdog group accused the company of approving inflammatory ads targeted at Indian audiences. The ads, which contained disinformation, calls for violence, and conspiracy theories about the upcoming elections, were reportedly approved despite violating Meta's own advertising policies. Ekō, a nonprofit watchdog organization, deliberately submitted the ads to test Meta's review systems, and 14 of the 22 were given the green light. The experiment also showed that Meta failed to flag the AI-generated images used in the ads, despite the company's stated work on systems to detect AI-generated content. The findings mark another instance of Ekō exposing flaws in Meta's ad systems, raising concerns about the company's ability to enforce its advertising policies effectively.

Key Takeaways

  • Meta approved 14 of 22 inflammatory ads targeting Indian audiences, all of which violated its advertising policies.
  • The ads contained disinformation, calls for violence, and conspiracy theories about upcoming elections.
  • Ekō, a nonprofit watchdog, submitted the ads to test Meta's ad systems, using real hate speech and disinformation prevalent in India.
  • Ads called for violent uprisings targeting Muslim minorities, spread false information, and incited violence through Hindu supremacist narratives.
  • Meta failed to detect the AI-generated images used in the ads, despite its stated commitment to detecting such content.
  • This is not the first time Ekō has exposed Meta's flawed ad systems, previously getting hate-filled ads approved in Europe.
  • Meta has yet to comment on the recent report.

Analysis

Meta's failure to enforce its advertising policies has been exposed once again, as a watchdog group managed to get 14 out of 22 inflammatory ads approved, highlighting significant flaws in the company's enforcement mechanisms.

Did You Know?

  • Ekō, the nonprofit watchdog organization: Ekō is a non-governmental organization focused on monitoring and ensuring the responsible use of technology, especially in advertising and social media platforms. They aim to protect communities from harm caused by misinformation, hate speech, and other forms of harmful content. Ekō conducts research, tests ad systems, and exposes flaws in platforms' policies and enforcement mechanisms to drive improvements and create safer online environments.
  • AI-generated images and Meta's commitment: AI-generated images are visual content created with artificial intelligence techniques, such as generative adversarial networks (GANs) and other machine learning models. Meta, the parent company of Facebook, has publicly committed to developing and deploying systems to detect AI-generated content, which can be used to spread misinformation, propaganda, and other harmful material. However, Ekō's recent experiment revealed that Meta's systems still struggle to detect such content effectively.
  • Meta's advertising policies: Meta's advertising policies are a set of rules and guidelines that govern the types of ads allowed on its platforms, including Facebook and Instagram. The policies prohibit various forms of harmful content, such as hate speech, disinformation, and content inciting violence. Meta's ad systems are designed to review and approve or reject ads based on these policies, but Ekō's recent investigation highlighted significant flaws in Meta's enforcement mechanisms, as 14 out of 22 inflammatory ads targeting Indian audiences were approved despite violating the company's rules.
