OpenAI CEO Sam Altman has expressed concerns about the risks of advanced artificial intelligence (AI), including the potential for "disinformation problems or economic shocks." Despite recognizing the tremendous benefits of AI, Altman empathizes with those who fear advanced AI and believes it would be "crazy not to be a little bit afraid." He also raised the possibility that large language models could influence the information and interactions social media users experience on their feeds. OpenAI has been working to teach its AI systems to avoid producing harmful content and has advised users not to rely on its products for high-stakes decision-making or for legal or health advice.
OpenAI recently released its latest model, GPT-4, which outperforms earlier versions on standardized tests, can understand and comment on images, and can teach users by engaging with them like a tutor. Companies such as Khan Academy are already tapping into the technology, using GPT-4 to build AI tools. However, OpenAI has been upfront about kinks that still need to be worked out in these types of large language models. AI models can "amplify biases and perpetuate stereotypes," and OpenAI has been working to address some of GPT-4's risks.
Altman also said the model has been trained to be more judicious, learning to decline questions seeking "illicit advice." An early version of GPT-4 had less of a filter on what it shouldn't say and was more inclined to answer questions about where to buy unlicensed guns or about self-harm; the version that launched, however, declined to answer those types of questions. Altman believes OpenAI bears responsibility for the tools it puts into the world and aims to "minimize the bad and maximize the good."