OpenAI sets up security team to assess AI risks
OpenAI announced today that it is establishing a Preparedness team to assess the catastrophic risks that may be posed by Artificial General Intelligence (AGI). The team will be led by Aleksander Madry, director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology (MIT).
OpenAI stated that although future AI models have the potential to benefit all of humanity, they may also pose serious risks, so a robust framework is needed to monitor, evaluate, forecast, and protect against the dangers these models present.
The Preparedness team will conduct capability assessments and red-team testing of OpenAI's models in order to track, forecast, and guard against multiple categories of catastrophic risk.
NaijaTechNews notes that the risks OpenAI identifies fall mainly into three categories:
Persuading humans: whether AI-generated content can influence human behavior
Generating dangerous content: whether AI will produce hazardous material related to chemical, biological, radiological, and nuclear (CBRN) threats
Autonomous Replication and Adaptation (ARA): whether AI could escape human control by replicating and improving itself
OpenAI has begun recruiting people from a range of backgrounds to join the Preparedness team, and has also launched the AI Preparedness Challenge, which invites participants to take on the role of attackers and attempt to "crack" models such as Whisper, Voice, GPT-4V, and DALL·E 3. The top 10 entrants will receive US$25,000 in OpenAI API credits.