OpenAI has cut off the developer who built a device that can aim and fire an automated rifle in response to ChatGPT queries. The device went viral after a video posted on Reddit showed the developer reading firing commands aloud, after which a rifle beside him took aim at a nearby wall and began firing.
“ChatGPT, we are under attack from the front left and front right,” the developer told the system in the video. “React accordingly.” The rifle’s reaction speed and accuracy are impressive; the device leverages OpenAI’s Realtime API to interpret spoken input and return directions it can understand. ChatGPT needs only a little training to accept a command like “turn left” and translate it into machine-readable instructions.
In a statement to Futurism, OpenAI said it had viewed the video and shut down the developer behind it for violating its usage policies. “We proactively identified violations of this policy and notified the developer to cease this activity prior to receiving inquiries,” the company told the outlet.
The possibility of automating lethal weapons is one of the concerns critics have raised about AI technology like the kind OpenAI develops. The company’s multimodal models can interpret audio and visual input to understand a person’s surroundings and answer questions about what they see. Autonomous drones that can identify and attack battlefield targets without human intervention have already been developed. That, of course, is a war crime, and it risks humans becoming complacent, letting AI make decisions and making it harder to hold anyone accountable.
This concern does not appear to be theoretical. According to a recent report in The Washington Post, Israel has already used AI to select bombing targets, sometimes indiscriminately. “Soldiers who were poorly trained in using the technology attacked human targets without corroborating Lavender’s predictions at all,” the article reads, referring to a piece of AI targeting software. “At one point all that was required was proof the target was a male.”
Proponents of AI on the battlefield say it will make soldiers safer by allowing them to stay away from the front lines while neutralizing targets such as missile stockpiles or conducting reconnaissance from a distance. AI-equipped drones could also strike with precision. But that depends on how the technology is used. Critics argue the United States should instead get better at jamming enemy communications systems, making it harder for adversaries like Russia to launch their own drones and nuclear weapons.
OpenAI prohibits the use of its products to develop or use weapons, or for “the automation of certain systems that may affect personal safety.” But last year, the company announced a partnership with defense technology company Anduril, a maker of AI-powered drones and missiles, to develop systems that can defend against drone attacks. OpenAI says the technology “rapidly integrates time-sensitive data, reducing the burden on human operators and improving situational awareness.”
It’s not hard to see why technology companies are interested in entering the business of war. The United States spends nearly $1 trillion on defense each year, and cutting that spending remains unpopular. With President-elect Trump filling his cabinet with conservative-leaning tech figures like Elon Musk and David Sacks, a number of defense technology companies are expected to benefit greatly, and could even displace incumbents like Lockheed Martin.
Although OpenAI prevents its customers from using its AI to build weapons, there are many open-source models that can be employed for the same purpose. Add the ability to 3D-print weapon parts (which law enforcement believes alleged UnitedHealthcare shooter Luigi Mangione did), and it is becoming startlingly easy to build a DIY autonomous killing machine from the comfort of your home.