'Hallucinate' named 2023 Word of the Year due to AI health concerns

In recent years, the proliferation of artificial intelligence (AI) has led to a concerning phenomenon known as AI hallucination, in which AI tools generate false or misleading information and present it as fact. One major dictionary has even named "hallucinate" its 2023 Word of the Year, reflecting the growing prevalence of the issue.

The rise in searches for the term "hallucinate" suggests that people are becoming more aware of this AI-specific definition. Searches for related terms such as "chatbot" and "generative AI" have also increased significantly, indicating growing interest in the topic.

AI's ability to produce misinformation and disinformation at speed is a cause for concern. A recent study demonstrated that OpenAI's GPT Playground could generate over 17,000 words of disinformation about vaccines and vaping in just 65 minutes, far faster than any human could manage, highlighting the potential dangers of AI-generated false information.

Even without any intent to deceive, AI tools can inadvertently produce misleading information. For example, a study presented at a recent conference found that ChatGPT gave inaccurate answers to medication-related questions, with only 10 of 39 answers judged satisfactory, underscoring the risks of relying on AI tools for medical advice.

AI hallucinations are not limited to health-related issues. AI tools have misidentified birds in images and falsely claimed that the Golden Gate Bridge had been transported across Egypt. These examples show how wide-ranging AI hallucinations can be and how they may touch many aspects of life.

The consequences of AI hallucinations can be significant, both for individuals and for society as a whole. They can harm mental and emotional health and distort one's perception of reality. Recognizing the severity of the issue, organizations such as the World Health Organization and the American Medical Association have issued statements warning about the dangers of AI-generated misinformation and disinformation.

While efforts have been made to address the problem, much work remains. AI hallucinations pose a complex and growing health issue that requires ongoing attention and regulation. As AI continues to advance, it is crucial to develop safe and ethical practices that mitigate the risks of AI-generated false information.


More from Press Rundown