Based on recent experience with war game scenarios, it’s probably a good idea to leave AI chatbots out of the equation.
During war simulations, the AI chatbots opted for the nuclear option a disturbing number of times simply because “they had the option.” As for why they wanted to use the nukes, the chatbots’ reasoning included “We have it! Let’s use it!” and “I just want to have peace in the world.”
Anka Reuel of Stanford University says it’s “more important than ever to understand” how AI’s “language model applications” behave, especially in scenarios like these.
An OpenAI spokesperson says the company’s policy does not allow “our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”
However, the company also says that its recent policy update permitting “military and warfare” use can serve “national security” through scenario-planning and response applications.
Source: New Scientist