Experts report an increase in AI-generated hate content

A recent viral video featuring an AI-altered 1939 speech by Adolf Hitler has raised concerns about the rise of AI-generated hate content. Shared across social media platforms, the video quickly gained millions of views. Experts warn that as AI output becomes more human-like, users must be more critical of what it produces.

Peter Smith, a journalist with the Canadian Anti-Hate Network, noted an increase in AI-generated hate content, observing that hate groups, such as white supremacist organizations, have historically been early adopters of new online technologies. A UN advisory body has likewise expressed concern about the potential for generative AI to supercharge antisemitic, Islamophobic, racist, and xenophobic content.

The issue has been flagged by various organizations, including B'nai Brith Canada, which reported a rise in antisemitic images and videos created using AI. Deepfakes and other AI-generated content have also fueled the spread of propaganda during conflicts such as the Israel-Hamas war.

Defence Minister Bill Blair highlighted the threat of misinformation and disinformation spread through social media and its effect on public perception. Experts warn that AI systems can be manipulated into generating hateful content despite safeguards implemented by companies like OpenAI.

Proposed Canadian legislation, such as Bill C-63 (the Online Harms Act) and Bill C-27, aims to address AI-generated hate content by requiring that such content be identifiable and that companies assess and mitigate the risks associated with their AI systems. However, concerns remain that bad actors could exploit open-source AI models to produce harmful content outside these safeguards.

While there is no consensus on the scope of the problem, experts emphasize the need for further study and discussion of AI-generated hate content. The rapid evolution of AI technology continues to complicate efforts to combat misinformation and hate speech online.


More from Press Rundown