The US wants to stay ahead of China in using AI to pilot fighter jets, navigate without GPS and more

WASHINGTON (AP) — Two Air Force fighter jets recently faced off in a dogfight in California. One was flown by a pilot. The other wasn’t.

That second plane was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian sitting in the front seat. It was the ultimate demonstration of how far the Air Force has come in developing a technology that has its roots in the 1950s. But it’s just a hint of the technology to come.

The United States is competing to stay ahead of China in AI and its use in weapons systems. The focus on AI has led to public concern that future wars will be fought by machines that select and attack targets without direct human intervention. Officials say that will never happen, at least not on the U.S. side. But there are questions about what a potential adversary would allow, and the military sees no alternative but to quickly deploy U.S. capabilities.

“Whether you want to call it a race or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “We both recognized that this will be a very critical part of the future battlefield. China is working on it as hard as we are.”

A look at the history of AI military development, what technologies are coming and how they will be controlled:

THE EARLY YEARS OF AI

The roots of AI in the military are actually a hybrid of machine learning and autonomy. Machine learning happens when a computer analyzes data and rule sets to come to conclusions. Autonomy arises when these conclusions are applied to take action without further human input.

This took shape in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained on a series of human-programmed if/then rules to detect and intercept incoming missiles autonomously, and faster than a human could. But the Aegis system wasn’t designed to learn from its decisions, and its responses were limited to the rules it had been given.

“If a system uses ‘if/then,’ it’s probably not machine learning, which is an area of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to assist with the Air Force’s AI development.
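The distinction Berardi draws can be sketched in a few lines of code. This is a deliberately toy illustration, not anything resembling Aegis or a real weapons system: the thresholds, data and decision labels are all invented.

```python
# Illustrative contrast: a hand-written if/then rule versus a rule
# derived from data. All numbers and labels here are invented.

def rule_based_decision(speed_mps, range_km):
    # Human-programmed rule: fixed thresholds, no learning.
    if speed_mps > 600 and range_km < 100:
        return "intercept"
    return "monitor"

def learn_threshold(samples):
    # Machine learning in miniature: derive the speed threshold
    # from labeled examples instead of hard-coding it.
    intercepts = [s for s, label in samples if label == "intercept"]
    monitors = [s for s, label in samples if label == "monitor"]
    # Place the boundary midway between the two classes.
    return (min(intercepts) + max(monitors)) / 2

# Hypothetical training data: (speed in m/s, correct decision).
data = [(700, "intercept"), (820, "intercept"),
        (300, "monitor"), (450, "monitor")]
threshold = learn_threshold(data)  # 575.0 for this data

print(rule_based_decision(700, 80))                    # fixed rule fires
print("intercept" if 750 > threshold else "monitor")   # learned rule fires
```

The first function can never do anything its author didn’t anticipate; the second changes its behavior whenever the data changes, which is the property that separates machine learning from rule-based automation.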

AI took a big step forward in 2012 when the combination of big data and advanced computing power allowed computers to analyze the information themselves and write the rule sets. It’s what AI experts call the “Big Bang” of AI.

A computer that analyzes the data and writes its own rule sets is exhibiting artificial intelligence. Systems can then be programmed to act autonomously on the conclusions drawn from those machine-written rules, which is a form of AI-enabled autonomy.

WHAT AI TECHNOLOGIES ARE COMING

Air Force Secretary Frank Kendall got a taste of that advanced warfighting experience this month when he flew in Vista, the first F-16 fighter jet piloted by AI, during a dogfight exercise over Edwards Air Force Base in California.

While that plane is the most visible sign of the AI work underway, there are hundreds of AI projects in progress across the Pentagon.

At MIT, service members worked to clean up thousands of hours of pilots’ recorded conversations to create a data set from the flood of messages exchanged between crews and air operations centers during flights, so that the AI could learn the difference between critical messages, such as a runway closing, and mundane cockpit chatter. The goal was for the AI to learn which messages are critical to surface so that controllers see them more quickly.
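The triage idea can be sketched with a very simple keyword-weighting scheme. The real MIT/Air Force work trains far richer models on far more data; the phrases, labels and scoring rule below are invented purely to show the shape of the task.

```python
# Toy sketch of message triage: score radio messages by keywords learned
# from labeled examples. All messages and labels here are invented.
from collections import Counter

def train_keyword_weights(labeled_messages):
    # Count how often each word appears in critical vs. routine traffic.
    critical, routine = Counter(), Counter()
    for text, is_crit in labeled_messages:
        (critical if is_crit else routine).update(text.lower().split())
    # A word's weight is how much more often it shows up in critical traffic.
    return {w: critical[w] - routine[w] for w in critical | routine}

def is_critical(text, weights, threshold=1):
    score = sum(weights.get(w, 0) for w in text.lower().split())
    return score >= threshold

examples = [
    ("runway two seven closed", True),
    ("divert traffic runway closed", True),
    ("good morning checking in", False),
    ("weather looks fine today", False),
]
weights = train_keyword_weights(examples)

print(is_critical("runway closed for debris", weights))   # True
print(is_critical("morning all weather fine", weights))   # False
```

Even this crude version shows why the cleanup work mattered: the model is only as good as the labeled examples it learns from.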

In another important project, the Army is working on an AI alternative to GPS satellite-dependent navigation.

In a future war, GPS satellites would likely be attacked or jammed. The loss of GPS could blind U.S. communications, navigation and banking systems and leave the U.S. military’s fleet of aircraft and warships less able to coordinate a response.

So last year, the Air Force flew an AI program — loaded onto a laptop strapped to the floor of a C-17 military cargo plane — to work on an alternative solution using Earth’s magnetic fields.

Aircraft have long been known to be able to navigate by tracking Earth’s magnetic fields, but until now that has not been practical, because each aircraft generates so much of its own electromagnetic noise that it is nearly impossible to isolate Earth’s signal.

“Magnetometers are very sensitive,” said Col. Garry Floyd, director of the Department of Air Force-MIT Artificial Intelligence Accelerator program. “If you turned on the strobe lights on a C-17, we would see it.”

The AI learned through the flights and piles of data which signals to ignore and which to follow, and the results “were very, very impressive,” Floyd said. “We’re talking about tactical airdrop quality.”

“We think we may have added an arrow to the quiver in the things we can do if we end up operating in a GPS-less environment. And we will,” Floyd said.

The AI has only been tested on the C-17 so far. Other aircraft will be tested as well, and if it works, it could give the military another way to operate if GPS goes down.
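The core idea, stripped to its simplest form, is to model the aircraft’s self-generated interference as a function of its own onboard signals, fit that model from calibration data, and subtract it to recover the Earth-field component. The single-source linear model and every number below are invented for illustration; the real system learns from many sensors and vastly more data.

```python
# Simplified sketch of magnetic-noise cancellation: model the aircraft's
# interference as linear in one onboard signal (say, strobe-light current),
# fit it by least squares, then subtract it from new readings.
# All values are invented.

def fit_interference(currents, readings):
    # Ordinary least squares for: reading ≈ earth_field + k * current.
    n = len(currents)
    mean_c = sum(currents) / n
    mean_r = sum(readings) / n
    k = sum((c - mean_c) * (r - mean_r) for c, r in zip(currents, readings)) \
        / sum((c - mean_c) ** 2 for c in currents)
    earth_field = mean_r - k * mean_c
    return earth_field, k

# Hypothetical calibration samples: (strobe current in A, reading in µT).
currents = [0.0, 1.0, 2.0, 3.0]
readings = [50.0, 53.0, 56.0, 59.0]  # 50 µT field + 3 µT per amp of noise

earth, k = fit_interference(currents, readings)
cleaned = 62.0 - k * 4.0  # new reading at 4 A, learned noise removed
print(earth, k, cleaned)  # 50.0 3.0 50.0
```

Floyd’s strobe-light remark maps directly onto this sketch: anything the aircraft does that perturbs the magnetometer becomes another term the model has to learn and cancel.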

HOW AI WILL BE CONTROLLED

Vista, the AI-controlled F-16, has significant safety rails as the Air Force trains it. Mechanical limits prevent the still-learning AI from performing maneuvers that would endanger the aircraft. There is also a safety pilot, who can take control from the AI at the touch of a button.

The algorithm can’t learn during a flight, so each time it goes up it has only the data and rule sets created from previous flights. When a new flight is over, the algorithm goes back to a simulator, where it is fed the data collected in flight to learn from, create new rule sets and improve its performance.
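That between-flight loop can be sketched in miniature. Here the “policy” is a single invented gain value and the update rule is a trivial proportional correction; the point is only the structure: frozen in the air, updated on the ground from logged data.

```python
# Minimal sketch of the fly / retrain cycle. The policy (one number),
# the target and the update rule are all invented for illustration.

def fly(policy, conditions):
    # Policy is frozen in the air: it only applies its current rule set,
    # logging (input, action) pairs for later training.
    return [(c, policy * c) for c in conditions]

def retrain_in_simulator(policy, flight_log, target_gain=2.0, lr=0.5):
    # After landing, replay the log and nudge the policy toward
    # the actions it should have taken.
    for c, action in flight_log:
        error = target_gain * c - action
        policy += lr * error * c / (c * c)
    return policy

policy = 1.0
for flight in range(3):
    log = fly(policy, conditions=[1.0, 1.0])   # policy unchanged mid-flight
    policy = retrain_in_simulator(policy, log)  # learning happens here

print(round(policy, 3))  # → 2.0
```

The safety argument in the next paragraph hinges on exactly this separation: because learning happens only in the simulator, controlling what data enters the simulator controls what the AI can become.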

But the AI learns quickly. Because of the supercomputing speed it uses to analyze data and then fly the new rule sets in the simulator, its pace in finding the most efficient ways to fly and maneuver has already enabled it to defeat some human pilots in dogfights.

But safety is still a critical concern, and officials said the most important safeguard is controlling what data is fed back into the simulator for the AI to learn from. In the jet’s case, that means making sure the data reflects safe flying. Ultimately, the Air Force hopes a version of the AI being developed could serve as the brains for a fleet of 1,000 unmanned combat aircraft being developed by General Atomics and Anduril.

In the experiment that trained the AI on how pilots communicate, the service members assigned to MIT cleaned up the recordings to remove classified information and the pilots’ sometimes salty language.

Learning how pilots communicate is “a reflection of command and control, of how pilots think. The machines have to understand that too if they want to be really good,” said Grady, the vice chairman of the Joint Chiefs. “They don’t have to learn to swear.”
