The power and pitfalls of AI for US intelligence

In one example of a successful use of the IC's AI, after exhausting all other avenues – from human espionage to signals intelligence – the US was able to find an undeclared WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts employed algorithms to search and evaluate images of nearly every square inch of the country, according to a senior US intelligence official who spoke on background with the understanding of not being named.

While AI can compute, retrieve, and programmatically perform limited rational analyses, it lacks the capacity to accurately analyze the more emotional or unconscious components of human intelligence, described by psychologists as System 1 thinking.

For example, AI can compose intelligence reports that are similar to baseball articles, which contain a structured, logical flow and repetitive content elements. However, when summaries require complexity or logical arguments to justify or prove conclusions, AI has been found lacking. When the intelligence community tested the capability, the intelligence official said, the product looked like an intelligence summary but was otherwise nonsensical.

Such algorithmic processes can be made to overlap, adding layers of complexity to computational reasoning, but even then those algorithms cannot interpret context as well as humans can, especially when it comes to language, such as hate speech.

AI can understand the basics of human language, but foundational models don't have the implicit or contextual knowledge to complete specific tasks, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients ranging from violence to disinformation.

“From an analytics perspective, AI has difficulty interpreting intent,” Curwin adds. “Computer science is a valuable and important field, but it is social computational scientists who are making great strides in enabling machines to interpret, understand, and predict behavior.”

In order to “build models that can begin to replace human intuition or cognition,” Curwin explains, researchers must first understand how to interpret behavior and translate that behavior into something the AI can learn.

While machine learning and big-data analytics can provide predictive analysis of what might or will happen, they cannot explain to analysts how or why they arrived at those conclusions. The opacity of AI reasoning and the difficulty of vetting sources, which consist of extremely large data sets, can affect the actual or perceived soundness and transparency of those conclusions.

Transparency in reasoning and sourcing are requirements of the professional analytic standards for products produced by and for the intelligence community. Analytic objectivity is also statutorily required, sparking calls within the US government to update such standards and laws in light of AI's growing prevalence.

Machine learning and algorithms, when employed for predictive judgments, are also regarded by some intelligence practitioners as more art than science. That is, they are prone to biases and noise, can be accompanied by methodologies that are unsound, and can lead to errors similar to those found in the criminal forensic sciences and arts.
