The Power and Pitfalls of AI for US Intelligence

In one example of the IC’s successful use of artificial intelligence, after exhausting all other avenues – from human spies to signals intelligence – the United States was able to find an unidentified WMD research and development facility in a major Asian country by tracking a bus that traveled between it and other known facilities. To do so, analysts used algorithms to search and evaluate images of nearly every square inch of the country, according to a senior U.S. intelligence official who spoke on condition of anonymity.

While AI can compute, retrieve, and run programs that perform limited rational analysis, it lacks the capacity to properly dissect the more emotional or unconscious components of human intelligence that psychologists describe as system 1 thinking.

AI can, for example, draft intelligence reports that are akin to newspaper articles about baseball, which follow a structured, formulaic flow with repetitive content elements. But when briefs require complex reasoning or logical arguments that justify or demonstrate conclusions, AI has proven to be lacking. When the intelligence community tested the capability, the intelligence official said, the product looked like an intelligence brief but was otherwise nonsensical.

Such algorithmic processes can be made to overlap, adding layers of complexity to computational reasoning, but even then those algorithms cannot interpret context as well as humans, especially when it comes to language, such as hate speech.

AI’s understanding may be more analogous to that of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients, from violence to disinformation. “For example, AI can understand the basics of human language, but foundational models don’t have the latent or contextual knowledge to accomplish specific tasks,” Curwin says.

“From an analytical perspective, AI has a hard time interpreting intentions,” Curwin adds. “Computer science is a valuable and important field, but it is social data scientists who are taking the big leaps to enable machines to interpret, understand and predict behavior.”

To “build models that can begin to replace human intuition or cognition,” Curwin explains, “scientists must first understand how to interpret behavior and translate that behavior into something AI can learn.”

Although machine learning and big-data analysis can deliver predictive assessments of what might or will likely happen, they cannot explain to analysts how or why they arrived at those conclusions. That opacity in AI reasoning, and the difficulty of vetting sources that consist of extremely large data sets, can undermine the actual or perceived soundness and transparency of those conclusions.

Transparency of reasoning and sourcing is required by the analytic tradecraft standards that govern products produced by and for the intelligence community. Analytic objectivity is also legally required, a standard that has triggered calls within the U.S. government to update such standards and laws in light of AI’s increasing prevalence.

Machine learning and algorithms, when used for predictive assessments, are also considered by some practitioners to be more art than science. That is, they are prone to bias and noise, and may rest on methodologies that are unsound and lead to errors similar to those found in the criminal forensic sciences and arts.
