Biden administration seeks public input on AI safety standards

The National Institute of Standards and Technology (NIST) is seeking public input until February 2 on testing methods crucial to ensuring the safety of artificial intelligence (AI) systems. This effort aligns with President Biden's executive order on AI, which directs agencies to set standards for testing and to address a range of risks, including cybersecurity and chemical, biological, radiological, and nuclear threats. The goal is to develop industry standards that enable the responsible development and use of AI technology.

NIST's guidelines focus on evaluating AI, developing standards, and providing testing environments, with a particular emphasis on addressing risks. One area of concern is generative AI, which can create text, photos, and videos in response to open-ended prompts. While generative AI has sparked excitement, it has also raised fears about its potential impact on jobs, elections, and human autonomy.

NIST is seeking input from AI companies and the public on managing generative AI risks and reducing the spread of AI-generated misinformation. The agency is also developing guidelines for testing, including the use of "red-teaming" for AI risk assessment and management. "Red-teaming" refers to the practice of simulating potential attacks and vulnerabilities, an approach that has been used in cybersecurity for years.

In August, a public "red-teaming" assessment event was held, with thousands of participants attempting to identify potential risks and failures in AI systems. The event demonstrated the effectiveness of external red-teaming as a tool for identifying novel AI risks.

Overall, the Biden administration's efforts, led by NIST, aim to establish standards and guidance for the safe deployment and safeguarding of AI systems. By soliciting public input and addressing a broad range of risks, the administration hopes to ensure that the United States continues to lead in the responsible development and use of AI technology.