The US Space Force has temporarily banned the use of web-based generative artificial intelligence tools and the so-called large language models that power them, citing data security and other concerns, according to a memo seen by Bloomberg News.

The Sept. 29 memorandum, addressed to the Guardian Workforce, the term for Space Force members, pauses the use of government data on web-based generative AI tools, which can create text, images or other media from simple prompts. The memo says such tools “are not authorized” for use on government systems unless specifically approved.

Chatbots such as OpenAI’s ChatGPT have exploded in popularity. They rely on language models trained on vast amounts of data to predict and generate new text. Such LLMs have given rise to an entire generation of AI tools that can, for example, search through troves of documents, pull out key details and present them as coherent reports in a variety of linguistic styles.

Generative AI “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, Space Force’s chief technology and innovation officer, said in the memo. But Costa also cited concerns over cybersecurity, data handling and procurement requirements, saying the adoption of AI and LLMs must be “responsible.” The memo offered no further explanation.

Experts have warned that, under some conditions, the voluminous and potentially non-public data fed into models through documents and prompts could leak into the public arena or be exposed through hacking.