
[January 7, 2021]

What AI Can And Cannot Do For The Intelligence Community

A realistic appraisal of artificial intelligence shows limits but real promise.

DEFENSE ONE | BY ZIGFRIED HAMPEL-ARIAS AND JOHN SPEED MEYERS | JANUARY 5, 2021

A seasoned intelligence professional can be forgiven for raising her eyebrows about artificial intelligence, a nascent and booming field in which it can be hard to sort real potential from hype. Addressing that raised eyebrow — and helping senior leaders understand how to invest precious time and money — will take more than vague generalities and myopic case studies. We therefore offer a hypothesis for debate: AI, specifically machine learning, can help with tasks related to collection, processing, and analysis — half of the steps in the intelligence cycle — but will struggle with tasks related to intelligence planning, dissemination, and evaluation.

When we talk about AI’s prospective value in intelligence work, we are generally talking about the specific field of deep learning, a term that refers to multi-layer neural network machine learning techniques. Deep learning tools have made tremendous progress in fields such as image recognition, speech recognition, and language translation. But there are limits to their abilities.

Deep learning excels at “tasks that consist of mapping an input vector to an output vector and that are easy for a person to do rapidly,” wrote three of the field’s leading lights — Apple’s Ian Goodfellow and University of Montreal professors Yoshua Bengio and Aaron Courville — in their 2016 textbook Deep Learning. “Other tasks, that cannot be described as associating one vector to another, or that are difficult enough that a person would require time to think and reflect in order to accomplish the task, remain beyond the scope of deep learning for now.”

To recast these observations in simpler terms, these scholars are suggesting that modern AI can achieve extraordinary performance on what might be called “thinking fast” tasks but not on “thinking slow” tasks, to trade on the memorable terminology of Daniel Kahneman’s Thinking, Fast and Slow. “Thinking fast” tasks, for this essay, refer to tasks that involve a human or machine quickly and intuitively associating an input with an output, like spotting and recognizing planes. “Thinking slow” tasks are deliberate and cannot be reduced to matching an input with an output, like determining the wisdom of purchasing a particular satellite.
