DARPA trying to make autonomous systems more trustworthy
- By Susan Miller
- Feb 25, 2019
The Defense Advanced Research Projects Agency wants autonomous systems to be able to evaluate how well they're doing a specific task and explain that to their human partners.
In a broad agency announcement issued Feb. 19, DARPA outlined the Competency-Aware Machine Learning (CAML) program, which aims to transform autonomous systems from tools into trusted partners by virtue of the systems' ability to evaluate their effectiveness and communicate that information to humans.
With CAML, machines will be able to match their behavior to human expectations and allow their operators to quickly understand how they are operating in complex, changing, high-stakes environments.
Today's machine-learning-trained systems can adapt their behavior to circumstances similar to those they were trained on, but they cannot communicate how they plan to carry out a task, how adequate their training is for the job or other factors that could affect their probability of success. That means humans must help the systems make choices -- a poor use of resources in a combat environment.
CAML plans to create a machine learning framework for object recognition, robotic navigation, action planning and decision-making that will significantly improve teaming capabilities between humans and autonomous systems.
The 48-month program will address three areas:
- Self-knowledge of experiences, in which machine learning systems capture and encode experiences acquired during tasks so that they can recall previous operations.
- Self-knowledge of task strategies, in which systems analyze task behaviors, summarize them into general patterns and identify task dependencies.
- Competency-aware learning, which establishes a learning framework through which descriptions of task strategies and expected performance can be communicated in a human-understandable competency statement.
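The BAA does not prescribe an implementation, but the idea of a competency statement can be illustrated with a minimal, hypothetical sketch: a system that records its own task outcomes per operating condition and summarizes expected performance for a human operator. All class names, conditions and wording below are illustrative assumptions, not DARPA's design.

```python
# Hypothetical sketch of a "competency statement": a system tracks its
# own success rate per operating condition and reports it in plain
# language. All names and conditions here are illustrative.
from collections import defaultdict

class CompetencyTracker:
    def __init__(self):
        # condition -> [successes, attempts]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, condition, success):
        stats = self.history[condition]
        stats[0] += int(success)
        stats[1] += 1

    def competency_statement(self, condition):
        successes, attempts = self.history[condition]
        if attempts == 0:
            return f"No experience with '{condition}'; performance unknown."
        rate = successes / attempts
        return (f"In '{condition}' conditions I have succeeded "
                f"{successes} of {attempts} times "
                f"(~{rate:.0%} expected success).")

tracker = CompetencyTracker()
for outcome in [True, True, False, True]:
    tracker.record("clear daylight", outcome)
print(tracker.competency_statement("clear daylight"))
print(tracker.competency_statement("night fog"))
```

The key property the sketch mirrors is that the statement covers both what the system has done (its experiences) and what it has never seen, so an operator knows when not to rely on it.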
Initial testing will be performed on proposers' machine learning platforms and applications of choice, but the systems will eventually be tested on Defense Department platforms using realistic vignettes that evaluate the accuracy of the machine's communication of its competency, not the human's perception of that competency. Examples of potential platforms include autonomous ground resupply vehicles, unmanned aerial vehicle intelligence, surveillance and reconnaissance platforms and mission planning systems.
Scenarios might include autonomous vehicles "following the leader" while monitoring environmental conditions for necessary changes, or a human asking several image recognition systems which is best at identifying a specific type of object, such as cars or trucks.
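The second vignette can be sketched in a few lines, assuming each recognizer self-reports a per-class competency score and the operator simply picks the most competent system for the target class. The system names and scores below are invented for illustration.

```python
# Hypothetical sketch of the "which system is best" vignette: each image
# recognizer self-reports a per-class competency score (invented values),
# and the operator selects the most competent one for the target class.
recognizers = {
    "system_a": {"car": 0.92, "truck": 0.61},
    "system_b": {"car": 0.70, "truck": 0.88},
}

def most_competent(target_class):
    # Pick the system with the highest self-reported score for the class;
    # a system with no experience of the class defaults to 0.0.
    return max(recognizers,
               key=lambda name: recognizers[name].get(target_class, 0.0))

print(most_competent("car"))    # system_a reports higher car competency
print(most_competent("truck"))  # system_b reports higher truck competency
```

Note that this only works if the self-reported scores are accurate -- which is exactly why CAML's evaluation focuses on the accuracy of the machine's communicated competency rather than the human's impression of it.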
Responses are due April 22.
Susan Miller is executive editor at GCN.
Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG’s ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia’s Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.
Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.
Connect with Susan at firstname.lastname@example.org or @sjaymiller.