The U.S. Space Force has temporarily barred its personnel from using ChatGPT and other web-based generative AI tools, citing data security concerns.
Workers may not use such AI tools, including large language models, on government systems until they receive formal approval from the service’s Chief Technology and Innovation Office, according to a memo dated Sept. 29 and sent to Guardians, the term the Space Force uses for its workforce. The temporary ban, the memo stated, was “due to data aggregation risks.”
Generative AI applications have proliferated over the past year. Tools such as OpenAI’s ChatGPT can quickly produce text, images, or video from a single prompt; they are powered by large language models trained on enormous amounts of historical data.
The technology “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” according to Lisa Costa, the Space Force’s chief technology and innovation officer.
An Air Force spokesperson confirmed the temporary ban, which Bloomberg first reported.
In the memo, Costa wrote that her office has formed a generative AI task force with other Pentagon agencies to consider how to deploy the technology in a “responsible and strategic manner.”