OpenAI quietly eased restrictions on military applications of its technology earlier this week. In an unannounced update to its usage policies on January 10, OpenAI lifted a broad ban on using its technology for "military and warfare."

The new language still prohibits OpenAI's services from being used for more specific purposes such as developing weapons, injuring others, or destroying property, a spokesperson for OpenAI told Business Insider. The spokesperson added that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." On January 10, OpenAI rolled out its GPT Store, a marketplace where users can share and browse customized versions of ChatGPT known as "GPTs."

OpenAI's new usage policy now includes principles like "Don't harm others," which are "broad yet easily grasped and relevant in numerous contexts," as well as bans on specific use cases like developing or using weapons, OpenAI's spokesperson said.

Some AI experts worry that OpenAI's policy rewrite is too generalized, especially as AI technology is already being used in the conflict in Gaza. The Israeli military has said it used AI to pinpoint targets to bomb inside the Palestinian territory. "The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement," said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst.