How intelligence agencies are cautiously embracing generative AI

ARLINGTON, Va. (AP) — U.S. intelligence agencies are scrambling to embrace the AI revolution, believing they will otherwise be smothered in data as sensor-generated surveillance technology continues to blanket the planet. They also need to keep up with competitors, who are already using AI to flood social media platforms with deepfakes.

But the technology is young and fragile, and officials are well aware that generative AI is anything but tailor-made for a profession steeped in danger and deceit.

Years before OpenAI’s ChatGPT sparked the current generative AI marketing frenzy, U.S. intelligence and defense officials were experimenting with the technology. One contractor, Rhombus Power, used it in 2019 to expose fentanyl trafficking in China at a speed that far exceeded human-only analysis. Rhombus would later predict Russia’s full-scale invasion of Ukraine four months in advance, with 80% certainty.

EMBRACING AI WILL NOT BE EASY

CIA Director William Burns recently wrote in Foreign Affairs that U.S. intelligence requires “advanced artificial intelligence models that can process massive amounts of open source and clandestinely acquired information.”

But the agency’s inaugural chief technology officer, Nand Mulchandani, warns that because generative AI models “hallucinate,” they are best treated as a “crazy, drunken friend”: capable of incredible insight, but also prone to bias-laden fabrications.

There are also security and privacy issues. Adversaries could steal the models or poison them with bad data. And the models may contain sensitive personal data that agents are not authorized to view.

Gen AI is especially good as a virtual assistant, Mulchandani says, one that can look for “the needle in the needle stack.” What it will never do, officials insist, is replace human analysts.

AN OPEN-SOURCE AI NAMED ‘OSIRIS’

While officials won’t say whether they are using generative AI for anything big on classified networks, thousands of analysts across the 18 U.S. intelligence agencies are now using a CIA-developed generative AI called Osiris. It processes unclassified and publicly or commercially available data (what is known as open source) and writes annotated summaries. It includes a chatbot so analysts can ask follow-up questions.

Osiris uses several commercial AI models. Mulchandani said the agency is not committed to a single model or technology provider. “It’s still early,” he said.

Experts believe that predictive analysis, war gaming and scenario brainstorming will be among the most important applications of generative AI for intelligence workers.

‘REGULAR AI’ ALREADY IN USE

Even before generative AI, intelligence agencies used machine learning and algorithms. One use case: alerting analysts outside office hours to potentially important developments. An analyst could instruct the AI to call their phone in the middle of the night. It might not say over the line what had happened (that would be classified), but it could say, “You need to come in and look at this.”

Major players vying for U.S. intelligence community business include Microsoft, which announced on May 7 that it is offering OpenAI’s GPT-4 for top-secret networks, though the product has yet to be accredited for use on classified networks.

A competitor, Primer AI, lists two intelligence agencies among its customers, according to documents posted online for recent military AI workshops. One Primer product is designed to “detect emerging signals of breaking events” using AI-powered searches across more than 60,000 news and social media sources in 100 languages, including Twitter, Telegram, Reddit and Discord.

Like Rhombus Power’s product, it helps analysts identify key people, organizations and locations and also uses computer vision. During a demo just days after the Oct. 7 Hamas attack on Israel, Primer executives described how their technology separates fact from fiction in the flow of online information from the Middle East.

CHALLENGES AS AI SPREADS

The most important AI challenges for U.S. intelligence officials in the near term will likely be countering how adversaries use it: breaching U.S. defenses, spreading disinformation, and undermining Washington’s ability to understand their intentions and capabilities.

The White House is also concerned that generative AI models adopted by U.S. agencies could be infiltrated and poisoned.

Another concern: ensuring the privacy of people whose personal data may be embedded in an AI model. Officials say it is currently not possible to guarantee that all such data has been scrubbed from a model.

That’s one reason the intelligence community isn’t in move-fast-and-break-things mode on generative AI, says John Beieler, the top AI official at the Office of the Director of National Intelligence.

Model integrity and security will be a concern if government agencies eventually use AI to research bio- and cyber-weapons technology.

DIFFERENT AGENCIES, DIFFERENT AI MISSIONS

How AI is applied will vary greatly from agency to agency, depending on the mission. The National Security Agency primarily intercepts communications. The National Geospatial-Intelligence Agency (NGA) is charged with seeing and understanding every inch of the planet.

Powering these missions with Gen AI is a priority, and far less complicated than, for example, how the FBI might use the technology, given legal restrictions on domestic surveillance.

The NGA in December issued a request for proposals for an entirely new type of AI model that would use the imagery it collects, from satellites and from ground-level sensors, to gather precise geospatial intelligence through simple voice or text prompts. Gen AI applications are also expected to be especially useful in cyber conflict.

MATCHING WITS WITH RIVALS

Generative AI will not easily match wits with rival masters of deception.

Analysts work with “incomplete, ambiguous and often contradictory fragments of partial, unreliable information,” notes Zachery Tyson Brown, a former defense intelligence officer. He believes intelligence agencies will be courting disaster if they embrace generative AI too enthusiastically, too quickly or too completely. The models don’t reason; they only predict. And their designers can’t quite explain how they work.

Linda Weissgold, former deputy director of analysis at the CIA, doesn’t see AI replacing human analysts anytime soon.

Quick decisions are often needed based on incomplete data. Intelligence agency customers, the most important of whom is the president of the United States, want human insight and experience at the center of the analysis presented to them, she says.

“I don’t think it will ever be acceptable for a president to have the intelligence community come in and say, ‘I don’t know, the black box just told me.’”
