OpenAI tackles A.I. 'hallucinations' with new method

OpenAI has announced new research aimed at preventing AI "hallucinations," which occur when AI models fabricate information and present it as though it were fact.

OpenAI's new strategy is to reward AI models for each individual correct step of reasoning on the way to an answer, instead of only rewarding a correct final conclusion (a toy illustration of the difference appears at the end of this article). The approach, called "process supervision," could lead to more explainable AI, since it encourages models to follow a more human-like chain of thought.

OpenAI's researchers hope the approach will help mitigate misinformation produced by AI systems, an issue that has become more hotly debated amid the generative AI boom and the lead-up to the 2024 U.S. presidential election.

The research comes after OpenAI accelerated the generative AI boom last year with the release of ChatGPT, its chatbot powered by GPT-3.5 and GPT-4, which surpassed 100 million monthly users within two months, reportedly setting a record for the fastest-growing app.

OpenAI has also released an accompanying dataset of 800,000 human labels that it used to train the model described in the research paper.

However, skeptics have expressed concern that the research alone does little to mitigate misinformation and incorrect results in the wild. It is unclear whether the paper has been peer-reviewed or otherwise vetted, and the field of AI remains opaque enough to challenge meaningful accountability efforts, even as these systems already affect people directly.
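To make the distinction concrete, here is a minimal, hypothetical Python sketch contrasting outcome supervision (one reward based only on the final answer) with process supervision (a reward for each reasoning step). The function names, step labels, and reward values are illustrative assumptions and are not taken from OpenAI's paper or code.

# Illustrative sketch only: NOT OpenAI's implementation. It contrasts a single
# outcome-level reward with per-step rewards of the kind process supervision uses.

from typing import List


def outcome_supervision_reward(final_answer_correct: bool) -> float:
    """Outcome supervision: one reward based only on the final answer."""
    return 1.0 if final_answer_correct else 0.0


def process_supervision_rewards(step_labels: List[bool]) -> List[float]:
    """Process supervision: one reward per reasoning step.

    step_labels[i] is True when a (hypothetical) human labeler judged
    reasoning step i to be correct.
    """
    return [1.0 if correct else 0.0 for correct in step_labels]


if __name__ == "__main__":
    # Hypothetical chain of thought: the third step is wrong even though the
    # final answer happened to be marked correct.
    step_labels = [True, True, False, True]

    print(outcome_supervision_reward(final_answer_correct=True))  # 1.0
    print(process_supervision_rewards(step_labels))               # [1.0, 1.0, 0.0, 1.0]

Under outcome supervision the flawed chain of thought above would still earn full reward, whereas process supervision penalizes the incorrect intermediate step, which is the behavior the research aims to encourage.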