
Opinion: DARPA’s XAI Future

The popular scenario has AI deploying autonomous killer military robots as storm troopers. The mission of DARPA is to create the cutting edge of weaponized technology. So when a report contends that the Pentagon is now using Jade Helm exercises to teach Skynet how to kill humans, it is not simply a screenplay for a Hollywood blockbuster.

The report describes “AI quantum computing technology that can produce the holographic battle simulations” and that, in addition, “has the ability to use vast amounts of data being collected on the human domain to generate human terrain systems in geographic population centric locations” as a means of identifying and eliminating targets – insurgents, rebels or “whatever labels that can be flagged as targets in a Global Information Grid for Network Centric Warfare environments.”

While this assessment may alarm the most fearful, Steven Walker, director of the Defense Advanced Research Projects Agency, presents a far more sedate viewpoint in “DARPA: Next-generation artificial intelligence in the works.”

“Walker described the current generation of AI as its ‘second wave,’ which has led to breakthroughs like autonomous vehicles. By comparison, ‘first wave’ applications, like tax preparation software, follow simple logic rules and are widely used in consumer technology.

While second-wave AI technology has the potential to, for example, control the use of the electromagnetic spectrum on the battlefield, Walker said the tools aren’t flexible enough to adapt to new inputs.

The third wave of AI will rely on contextual adaptation — having a computer or machine understand the context of the environment it’s working in, and being able to learn and adapt based on changes in that environment.”

Here is where the XAI model comes into play. In “DARPA’s XAI seeks explanations from autonomous systems,” the authoritative publication Janes reports that, according to DARPA, XAI aims to “produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”.

Mr. David Gunning, DARPA’s XAI program manager, provides insight into why Explainable Artificial Intelligence (XAI) is the next development.


“XAI is one of a handful of current DARPA programs expected to enable ‘third-wave AI systems’, where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena.

The XAI program is focused on the development of multiple systems by addressing challenge problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system to perform a variety of simulated missions. These two challenge problem areas were chosen to represent the intersection of two important machine learning approaches (classification and reinforcement learning) and two important operational problem areas for the DoD (intelligence analysis and autonomous systems).”

The FedBizOpps government site provides this synopsis: “The goal of Explainable AI (XAI) is to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems.”
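That synopsis can be made concrete with a toy example. The sketch below (Python with scikit-learn, invented purely for illustration and in no way DARPA’s actual tooling) shows one long-standing form of “explainable model”: a shallow decision tree, where every prediction can be traced to a readable chain of rules, the trade-off being the raw accuracy that deeper, more opaque models can reach.

# A minimal, hypothetical illustration of an inherently explainable model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# The "explanation": the learned rules, rendered as human-readable text.
print(export_text(model, feature_names=list(data.feature_names)))

# One prediction plus the leaf it landed in, so a user can see exactly
# which branch of the rule set produced the answer.
sample = data.data[:1]
print("prediction:", data.target_names[model.predict(sample)[0]])
print("leaf id:", model.apply(sample)[0])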

The private sector is involved in these developments. A question that gets lost involves national security, since Xerox is being bought by Fujifilm. But why worry over such mere details when the machines are on a path to becoming self-directed networks?

“PARC, a Xerox company, today announced it has been selected by the Defense Advanced Research Projects Agency (DARPA), under its Explainable Artificial Intelligence (XAI) program, to help advance the underlying science of AI. For this multi-million dollar contract, PARC will aim to develop a highly interactive sense-making system called COGLE (COmmon Ground Learning and Explanation), which may explain the learned performance capabilities of autonomous systems to human users.”

With the news that the Xerox sale to Fuji has been called off, could the PARC component of the deal have been a deal-breaker?

As for trusting the results of the technology, just ask the machine; it will tell the human user what to believe. Another firm involved with XAI is Charles River Analytics. The stated objective is to overcome the current limitations of the human interface.

“The Department of Defense (DoD) is investigating the concept that XAI — especially explainable machine learning — will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.”

The Defense Department is developing just such a capability, as reported in “New project wants AI to explain itself.”

“Explainable Artificial Intelligence (XAI) … looks to create tools that allow a human on the receiving end of information or a decision from an AI machine to understand the reasoning that produced it. In essence, the machine needs to explain its thinking.

More recent efforts have employed new techniques such as complex algorithms, probabilistic graphical models, deep learning neural networks and other methods that have proved to be more effective but, because their models are based on the machines’ own internal representations, are less explainable.

The Air Force, for example, recently awarded SRA International a contract to focus specifically on the trust issues associated with autonomous systems.”
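The opacity the quoted passage describes can also be illustrated, along with one common workaround. Rather than opening up a deep network’s internal representations, post-hoc attribution methods measure how sensitive its output is to each input feature. The Python sketch below is purely illustrative: the tiny two-layer network and its random weights are invented for the example, and it reflects nothing about SRA’s or DARPA’s actual systems.

# A hedged sketch of gradient-based saliency, one simple post-hoc
# explanation technique for otherwise opaque models.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # hidden layer (toy weights)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer (toy weights)

def forward(x):
    h = np.tanh(x @ W1 + b1)            # hidden activations
    return h, (h @ W2 + b2).item()      # scalar score

def saliency(x):
    # d(score)/d(x), backpropagated through tanh by hand: the larger the
    # magnitude, the more that input feature moved the output.
    h, _ = forward(x)
    dh = W2[:, 0] * (1.0 - h ** 2)      # chain rule through tanh
    return dh @ W1.T                    # gradient w.r.t. each input

x = rng.normal(size=4)                  # a made-up input vector
_, score = forward(x)
print("score:", round(score, 3))
print("feature attributions:", np.round(saliency(x), 3))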

It would be a mistake to equate an AI system with just an advanced autopilot that navigates an aircraft. While the stated objective of creating AI that communicates through a human interface sounds reassuring, the actual risk of generating an entirely independent computerized decision structure is mostly being ignored.

Just look at the dangerous use of AI at Facebook, as asked in “AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?” Can DARPA be confident that it can control a self-generating, thinking Artificial Intelligence entity that may very well see a human component as unnecessary? Imagine a future combat regiment that sees its commanding officer as inferior to the barking of a drill-sergeant computer terminal. In such an environment, where would a General Douglas MacArthur fit in?

XAI rests on an overly optimistic belief that humans can always pull the plug on a rogue machine. Well, such a conviction would first need to be approved by the AI cloud computer.

H/T BATR

