DARPA Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) by Admin

Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON)

Program Goals and Objectives

Current AI systems excel at tasks defined by rigid rules – such as mastering the board games Go and chess with proficiency surpassing world-class human players. However, AI systems aren’t very good at adapting to constantly changing conditions commonly faced by troops in the real world – from reacting to an adversary’s surprise actions, to fluctuating weather, to operating in unfamiliar terrain. For AI systems to effectively partner with humans across a spectrum of military applications, intelligent machines need to graduate from closed-world problem solving within confined boundaries to open-world challenges characterized by fluid and novel situations.

To attempt this leap, DARPA today announced the Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program. SAIL-ON intends to research and develop the underlying scientific principles and general engineering techniques and algorithms needed to create AI systems that act appropriately and effectively in novel situations that occur in open worlds. The program's goals are to develop scientific principles to quantify and characterize novelty in open-world domains, to create AI systems that react to novelty in those domains, and to demonstrate and evaluate these systems in a selected DoD domain.

The anticipated SAIL-ON program will require performers to characterize and quantify types and degrees of novelty in open worlds, to construct software that generates novel situations at distinct levels of a novelty hierarchy in selected domains, and to develop algorithms and systems that are capable of identifying and responding to novelty in multiple open-world domains.

“Imagine if the rules for chess were changed mid-game,” said Ted Senator, program manager in DARPA’s Defense Sciences Office. “How would an AI system know if the board had become larger, or if the object of the game was no longer to checkmate your opponent’s king but to capture all his pawns? Or what if rooks could now move like bishops? Would the AI be able to figure out what had changed and be able to adapt to it?”

Existing AI systems become ineffective and are unable to adapt when something significant and unexpected occurs. Unlike people, who recognize new experiences and adjust their behavior accordingly, machines continue to apply outmoded techniques until they are retrained.

Given enough data, machines are able to do statistical reasoning well, such as classifying images for face-recognition, Senator said. Another example is DARPA’s AI push in self-driving cars in the early 2000s, which led to the current revolution in autonomous vehicles. Thanks to massive amounts of data that include rare-event experiences collected from tens of millions of autonomous miles, self-driving technology is coming into its own. But the available data is specific to generally well-defined environments with known rules of the road.

“It wouldn’t be practical to try to generate a similar data set of millions of self-driving miles for military ground systems that travel off-road, in hostile environments and constantly face novel conditions with high stakes, let alone for autonomous military systems operating in the air and on sea,” Senator said. 

If successful, SAIL-ON would teach an AI system how to learn and react appropriately without needing to be retrained on a large data set. The program seeks to lay the technical foundation that would empower machines, regardless of the domain, to go through the military OODA loop process themselves – observe the situation, orient to what they observe, decide the best course of action, and then act. 

“The first thing an AI system has to do is recognize the world has changed. The second thing it needs to do is characterize how the world changed. The third thing it needs to do is adapt its response appropriately,” Senator said. “The fourth thing, once it learns to adapt, is for it to update its model of the world.” 
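Senator's four steps can be sketched as a toy loop. The class below is purely illustrative and not part of SAIL-ON; it flags an observation as novel with a simple z-score test against a running world model, characterizes the direction of the shift, switches to a cautious policy, and then updates the model.

```python
import statistics

class NoveltyAwareAgent:
    """Toy sketch of the four-step loop: recognize change,
    characterize it, adapt the response, update the world model."""

    def __init__(self, baseline, threshold=3.0):
        # World model: the running history of observations seen so far.
        self.history = list(baseline)
        self.threshold = threshold

    def is_novel(self, obs):
        # Step 1: recognize that the world has changed (z-score test).
        mean = statistics.mean(self.history)
        stdev = statistics.stdev(self.history) or 1e-9
        return abs(obs - mean) / stdev > self.threshold

    def step(self, obs):
        if self.is_novel(obs):
            # Step 2: characterize how it changed (direction of the shift).
            direction = "up" if obs > statistics.mean(self.history) else "down"
            # Step 3: adapt the response (here, switch to a cautious policy).
            action = f"cautious-{direction}"
        else:
            action = "nominal"
        # Step 4: update the model of the world with the new observation.
        self.history.append(obs)
        return action
```

A real open-world system would need far richer change detection than a univariate z-score, but the control flow (detect, characterize, adapt, update) is the point of the sketch.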

SAIL-ON seeks expertise in multiple subfields of AI, including machine learning, plan recognition, knowledge representation, anomaly detection, fault diagnosis and recovery, probabilistic programming, and others. A Broad Agency Announcement (BAA) solicitation is expected to be posted in the near future and will be available on DARPA’s FedBizOpps page: http://go.usa.gov/Dom

More Information and Further Reading

Information about SAIL-ON was taken from DARPA.



DARPA Competency Aware Machine Learning Proposer's Day (CAML) by Admin

The goal of the DARPA CAML program is to build trusted human-machine partnerships by developing technology that allows autonomous systems to assess their own competence and communicate it to human teammates. I attended the Proposers Day, and the following article represents my notes and observations.

The opening session was presented by Dr. Valerie Browning of the Defense Sciences Office (DSO) at DARPA. The mission of DSO is to identify and pursue high-risk, high-payoff research initiatives across a broad spectrum of science and engineering disciplines and to transform these initiatives into disruptive technologies for U.S. national security. DSO's investments represent a large fraction of DARPA's current portfolio.

Important Dates for DARPA CAML

  • Abstract Due Date: March 6, 2019

  • FAQ submission deadline: April 5, 2019

  • Full proposal Due Date: April 22, 2019

DARPA CAML Strategy and Insight

The increasingly complex, technologically sophisticated, and dynamic military operational environment presents multiple fundamental challenges in how DARPA assists agencies in designing and planning for future military needs, ranging from materials to platforms to systems. To address these challenges, a thrust in complex social systems and behavioral sciences is a main alignment for DARPA DSO.

The CAML program is part of a broader, agency-wide activity to develop the next generation of AI technologies. DARPA has been investing in foundational AI technology since the inception of the field, and the AI Next campaign, of which the Competency-Aware Machine Learning program is a part, continues that investment with fundamental research in the science that underpins the technology.

Finding Funding through DARPA Competency-Aware Machine Learning (CAML)

Writing Your Abstract

DARPA operates on the principle that generating big rewards requires taking big risks. But how does the Agency determine what risks are worth taking? George H. Heilmeier, a former DARPA director (1975-1977), crafted a set of questions known as the "Heilmeier Catechism" to help Agency officials think through and evaluate proposed research programs. During the abstract phase of the proposal, the questions should be answered in an abbreviated format.

  • What are you trying to do? Articulate your objectives using absolutely no jargon. 

  • How is it done today, and what are the limits of current practice? 

  • What is new in your approach and why do you think it will be successful? 

  • Who cares? If you are successful, what difference will it make? 

  • What are the risks? 

  • How much will it cost? 

  • How long will it take? 

  • What are the mid-term and final “exams” to check for success?

Partner For Success

If your company wants to find funding through research projects, solid partnerships with companies that can prime the award are imperative. There is an art and a science to finding these relationships, and identifying potential priming partners is much easier these days with business social networking like LinkedIn. In a future article I'll get into more detail around partnering for OTAs vs. grants, and the different types of funding and how they drive a partnership.

Notes from the Contracting Officer

The content and quality of your abstract and proposal submission determine whether you will do business with DARPA. The scientific review panel evaluates each conforming proposal, and following the proposal instructions allows the program manager to fully understand your technical approach and how you will achieve the program milestones. A conforming proposal also allows an award to be made more quickly and gives your proposal a better chance of success.

Keys to success for working with DARPA

  • Read the entire BAA, and understand what objectives, milestones, and metrics the program manager is trying to achieve.

  • Read the attachments included with the BAA, and follow the instructions closely.

  • Submit all attachments and closely follow the proposal instructions; DARPA will evaluate only a conforming proposal.


Notes from the Program Director


A key ingredient in effective teams – whether athletic, business, or military – is trust, which is based in part on mutual understanding of team members’ competence to fulfill assigned roles. When it comes to forming effective teams of humans and autonomous systems, humans need timely and accurate insights about their machine partners’ skills, experience, and reliability to trust them in dynamic environments. At present, autonomous systems cannot provide real-time feedback when changing conditions such as weather or lighting cause their competency to fluctuate. The machines’ lack of awareness of their own competence and their inability to communicate it to their human partners reduces trust and undermines team effectiveness.

To help transform machines from simple tools to trusted partners, DARPA today announced the Competency-Aware Machine Learning (CAML) program. CAML aims to develop machine learning systems that continuously assess their own performance in time-critical, dynamic situations and communicate that information to human teammates in an easily understood format.

“If the machine can say, ‘I do well in these conditions, but I don’t have a lot of experience in those conditions,’ that will allow a better human-machine teaming,” said Jiangying Zhou, a program manager in DARPA’s Defense Sciences Office. “The partner then can make a more informed choice.” 

That dynamic would support a force-multiplying effect, since the human would know the capabilities of his or her machine partners at all times and could employ them efficiently and effectively.

In contrast, Zhou noted the challenge with state-of-the-art autonomous systems, which cannot assess or communicate their competence in rapidly changing situations.

“Under what conditions do you let the machine do its job? Under what conditions should you put supervision on it? Which assets, or combination of assets, are best for your task? These are the kinds of questions CAML systems would be able to answer,” she said.

Using a simplified example involving autonomous car technology, Zhou described how valuable CAML technology could be to a rider trying to decide which of two self-driving vehicles would be better suited for driving at night in the rain. The first vehicle might communicate that at night in the rain it knows if it is seeing a person or an inanimate object with 90 percent accuracy, and that it has completed the task more than 1,000 times. The second vehicle might communicate that it can distinguish between a person and an inanimate object at night in the rain with 99 percent accuracy, but has performed the task less than 100 times. Equipped with this information, the rider could make an informed decision about which vehicle to use.
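One way to make such reports comparable is to temper each vehicle's raw accuracy by how many times it has performed the task. The sketch below is an illustration, not part of CAML: it uses the Wilson score lower bound, a standard conservative estimate of a true success rate given an observed rate over n trials.

```python
import math

def wilson_lower(p_hat, n, z=1.96):
    """Lower bound of the 95% Wilson score interval: a conservative
    estimate of true accuracy from an observed rate p_hat over n trials."""
    denom = 1 + z**2 / n
    centre = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom

# Vehicle 1: 90% accuracy over 1,000 runs; Vehicle 2: 99% over 100 runs.
v1 = wilson_lower(0.90, 1000)
v2 = wilson_lower(0.99, 100)
```

With Zhou's numbers the second vehicle's lower bound still exceeds the first's, but if it had only a few dozen trials its bound would drop below that of the better-tested vehicle, which is exactly the kind of experience-aware comparison a CAML-style report would enable.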

DARPA has scheduled a pre-recorded CAML Proposers Day webcast for potential proposers on February 20, 2019. Details are available at: https://go.usa.gov/xE9aQ

The CAML program seeks expertise in machine learning, artificial intelligence, pattern recognition, knowledge representation and reasoning, autonomous system modeling, human-machine interface, and cognitive computing. To maximize the pool of innovative proposal concepts, DARPA strongly encourages participation by non-traditional proposers, including small businesses, academic and research institutions, and first-time Government contractors.