
In one example of the IC’s successful use of artificial intelligence, after exhausting all other avenues (from human spying to signals intelligence), the US was able to find an unidentified WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts employed algorithms to search and evaluate imagery of nearly every square inch of the country, according to a senior U.S. intelligence official who spoke on condition of anonymity.
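To make that approach concrete, here is a minimal sketch of broad-area imagery search: exhaustively score every tile of a country-wide grid with a detector and keep the likely hits. Everything here (the tile grid, the toy detector, the 0.9 threshold) is an invented stand-in for illustration, not a description of the IC’s actual tooling.

```python
# Minimal sketch of broad-area imagery search: score every tile of a
# country-wide grid with a detector and keep likely hits. The grid,
# detector, and threshold are invented stand-ins.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Tile:
    row: int
    col: int

def search_imagery(rows: int, cols: int,
                   score_tile: Callable[[Tile], float],
                   threshold: float = 0.9) -> list[Tile]:
    """Exhaustively score every tile; return those above threshold."""
    return [Tile(r, c) for r in range(rows) for c in range(cols)
            if score_tile(Tile(r, c)) >= threshold]

# Toy stand-in for a vehicle detector: pretend the same bus keeps
# appearing along one road between two facilities.
def toy_detector(tile: Tile) -> float:
    return 0.95 if tile.row == 3 and 2 <= tile.col <= 7 else 0.1

detections = search_imagery(10, 10, toy_detector)
print(f"{len(detections)} tiles flagged:", detections)
```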
While AI can compute, retrieve, and employ programs that perform bounded rational analyses, it lacks the calculus to properly dissect the more emotional or unconscious components of human intelligence that psychologists describe as System 1 thinking.
For example, AI can draft intelligence reports akin to newspaper articles about baseball games, which follow a structured, logical flow with repetitive content elements. However, AI has been found lacking when briefs require complex reasoning or logical arguments to justify or substantiate conclusions. When the intelligence community tested the capability, the product looked like an intelligence brief but was otherwise nonsensical, intelligence officials said.
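The gap is easy to see in miniature. A formulaic game recap is largely slot-filling over structured data, which machines handle fluently; nothing in the process weighs evidence or defends a conclusion. The template and game record below are invented purely for illustration.

```python
# Slot-filling over structured data: fluent, repetitive text with no
# reasoning behind it. Template and game record are invented.
GAME = {"winner": "Sharks", "loser": "Comets", "w_score": 5,
        "l_score": 2, "star": "J. Ruiz", "hits": 3}

TEMPLATE = ("The {winner} beat the {loser} {w_score}-{l_score} on "
            "Sunday. {star} led the way with {hits} hits.")

print(TEMPLATE.format(**GAME))
# The output reads like a wire recap, but nothing here justifies or
# substantiates a conclusion, the step where AI-drafted briefs fell short.
```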
Such algorithmic processes can overlap, adding layers of complexity to computational reasoning, but even then those algorithms cannot interpret context as well as humans can, especially with language, such as hate speech.
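A toy example of the context problem: a keyword-level filter flags a quotation, direct abuse, and a research summary identically, because it carries no representation of intent or usage. The lexicon and sentences below are invented placeholders, not any real moderation system.

```python
# A context-blind keyword filter: the same token is flagged whether it
# is quoted, abusive, or discussed. Lexicon and sentences are invented.
LEXICON = {"slurx"}  # placeholder token standing in for a real slur

def keyword_classifier(text: str) -> bool:
    """Flag text if any lexicon string appears, ignoring all context."""
    return any(word in text.lower() for word in LEXICON)

examples = [
    "the photographed graffiti read 'slurX', the report noted",    # quotation
    "get out of here, slurX",                                      # abuse
    "our study examines slurX reclamation in online communities",  # research
]
for text in examples:
    print(keyword_classifier(text), "|", text)
# All three print True: the filter has no notion of intent, quotation,
# or reclamation, which is exactly the contextual judgment humans supply.
```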
The AI’s understanding may be more akin to that of a human toddler, said Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients ranging from violence to disinformation. “For example, AI can understand the basics of human language, but the underlying models don’t have the latent or contextual knowledge to accomplish specific tasks,” Curwin said.
“From an analytic standpoint, AI has a difficult time interpreting intent,” Curwin added. “Computer science is a valuable and important field, but it is social computational scientists who are making the giant leaps in enabling machines to interpret, understand, and predict behavior.”
To “build models that can begin to replace human intuition or cognition,” Curwin explained, “researchers must first understand how to interpret behavior and translate that behavior into something that AI can learn.”
While machine learning and big-data analytics can provide predictive analysis about what might or will happen, they cannot explain to analysts how or why they arrived at those conclusions. The opacity of AI reasoning and the difficulty of vetting sources, which consist of extremely large datasets, may affect the actual or perceived soundness and transparency of those conclusions.
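A small illustration of that opacity, sketched with scikit-learn on synthetic data (both are assumptions made for illustration, not how any IC system is built): the model hands the analyst a probability with no accompanying rationale, and even its global feature importances describe averages, not the reasoning behind an individual prediction.

```python
# Opaque prediction in miniature: a black-box model emits a probability
# but no reasons. scikit-learn and the synthetic data are illustrative
# assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                  # 8 anonymous indicators
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # hidden ground-truth rule

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

event = rng.normal(size=(1, 8))                # one new "event" to assess
print("P(event) =", model.predict_proba(event)[0, 1])

# Global importances hint at which inputs matter on average, but they do
# not explain how or why this particular prediction was reached.
print("importances:", np.round(model.feature_importances_, 2))
```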
Transparency in reasoning and sourcing are requirements of the analytic tradecraft standards that govern products produced by and for the intelligence community. Analytic objectivity is also statutorily required, prompting calls within the U.S. government to update such standards and laws in light of AI’s increasing prevalence.
Some intelligence practitioners also consider machine learning and algorithms, when used for predictive judgments, to be more art than science. That is, they are prone to biases and noise, and can rest on unsound methodologies that lead to errors similar to those found in the criminal forensic sciences and arts.