
The Defense Advanced Research Projects Agency is pursuing an unprecedented machine-learning "breakthrough" technology -- and pioneering a new cybersecurity method intended to thwart multiple attacks at once and stop newer attacks that existing defenses do not recognize.

A DARPA-led “Lifelong Learning Machines” (L2M) program, intended to massively improve real-time AI and machine learning, rests upon the fundamental premise that certain machine-learning-capable systems might struggle to identify, integrate and organize some kinds of new or complicated yet-to-be-seen information.

"If something new is different enough, the system may fail. This is why I wanted to have some kind of machine learning that learns during experiences. Systems do not know what to do in some situations," said Hava Siegelmann, DARPA program manager at the Information Innovation Office and Professor of Computer Science at the University of Massachusetts.


The goal of the emerging high-tech program could be explained in terms of immediate "real-time training." If machines can learn even the most difficult or ambiguous things while performing analysis in real time, then, as Siegelmann explains it, "we are not bound to the training set" -- the previously compiled or stored information. "We put old data and new data all together to retrain the network on all the training data."
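Siegelmann's description -- pooling old and new data and retraining on everything -- resembles what continual-learning research sometimes calls rehearsal. The sketch below illustrates the idea with a toy nearest-centroid classifier; the classifier, data and labels are hypothetical illustrations, not DARPA's actual L2M approach.

```python
# Toy sketch of "retrain on old + new data" (rehearsal). The classifier,
# features and labels below are invented for illustration only.

def train_centroids(samples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def predict(centroids, features):
    """Return the label of the nearest centroid (squared distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(centroids[lbl], features)))

# Old training set: the model has only ever seen classes "a" and "b".
old_data = [([0.0, 0.0], "a"), ([0.1, 0.0], "a"),
            ([1.0, 1.0], "b"), ([0.9, 1.0], "b")]
model = train_centroids(old_data)

# A new class "c" appears at deployment time. Rather than training on the
# new samples alone (which would discard "a" and "b"), pool old and new
# data and retrain on all of it -- "all the training data."
new_data = [([2.0, 0.0], "c"), ([2.1, 0.1], "c")]
model = train_centroids(old_data + new_data)

print(predict(model, [2.0, 0.05]))   # the new class is recognized
print(predict(model, [0.05, 0.0]))   # old knowledge is retained
```

The point of the sketch is the last retraining call: the combined dataset keeps the system from being "bound to the training set" it started with.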


As Siegelmann explained, certain never-before-seen nuances or data permutations represent a departure from what a machine-learning system can typically analyze. There also appear to be limits to AI: it may not yet be able to fully digest and assimilate very subjective variables such as "feelings," "instincts," the kinds of nuanced decision-making uniquely enabled by human cognition, or anything else incompatible with computer algorithms, mathematical formulas or purely scientific methods of analysis. Conversely, by drawing upon databases of speech patterns, prior behavior and other kinds of catalogued evidence, AI is now on the cutting edge of being able to handle much more subjective phenomena, according to some industry computer scientists.

Interestingly, L2M has some conceptual parallels to human biological phenomena, Siegelmann explained. Advanced synergy between input and output, in real time, is analogous to how a baby apprehends its surroundings, she said.

"When a baby is born it is learning all the time to adapt and learn all the time. People are afraid of surprises. This is precisely the point; the faster a machine is able to absorb and process new information by instantly adding it and synchronizing with its existing database, the faster it can train to recognize and compute new things," Siegelmann added.

Exploring biology as it pertains to creating new computer algorithms is by no means unprecedented. Pentagon scientists have long been immersed in something called “biomimetics” wherein flocking patterns of birds and bees are analyzed as a way to develop new algorithms for drones -- enabling them to coordinate integrated functions, swarm accurately or operate in tandem without colliding.
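One of the flocking rules commonly borrowed in such biomimetic work is "separation," from the classic boids model: each agent steers away from neighbors that get too close, so a swarm can move in tandem without colliding. The sketch below is a toy illustration of that single rule; the positions, spacing threshold and gain are invented for illustration and do not represent any actual Pentagon algorithm.

```python
# Toy illustration of the boids "separation" rule: each drone steers away
# from neighbors closer than MIN_DIST. All constants are hypothetical.

MIN_DIST = 1.0   # desired minimum spacing between drones
GAIN = 0.5       # how strongly a drone steers away from a close neighbor

def separation_step(positions):
    """Return new (x, y) positions after one step of the separation rule."""
    moved = []
    for i, (x, y) in enumerate(positions):
        dx = dy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dist = ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5
            if 0 < dist < MIN_DIST:
                # Push away from the neighbor along the line between them.
                dx += (x - ox) / dist * GAIN
                dy += (y - oy) / dist * GAIN
        moved.append((x + dx, y + dy))
    return moved

# Two drones start dangerously close; one step spreads them apart.
drones = [(0.0, 0.0), (0.4, 0.0)]
drones = separation_step(drones)
print(drones)
```

Full boids-style coordination adds two more rules (alignment and cohesion) on top of this one, which is how flocking behavior emerges from purely local calculations.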

Alongside the ongoing L2M effort, which is progressing quickly, Siegelmann also emphasized a related, yet distinct cybersecurity-oriented exploration geared toward thwarting cyber attacks far more advanced than what typically takes place.


The cybersecurity concept, called Guaranteeing AI Robustness Against Deception (GARD), is designed to understand a new kind of more sophisticated cyber attack and, as Siegelmann put it, "make machine learning more sensitive and make AI more robust and resilient."

The GARD program is intended to address emerging intrusion methods engineered to "spoof," "confuse" or redirect the machine-learning-oriented system they attack.

“This kind of attack could involve a particular algorithm designed to send something to the machine learning system and actually send something to cause the AI to respond in a way that would not be expected...essentially confuse and trick the machine to force it to make a decision,” Siegelmann said.

Should such an attack be successful, for instance, an attacker could instruct an AI-enabled system to “allow access” to a protected network and “open a door” as Siegelmann put it.
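The kind of trickery described above can be made concrete with a small sketch in the spirit of the well-known fast-gradient-sign idea: a tiny perturbation, aligned against the model's weights, flips a classifier's decision even though the input barely changes. The "access control" model, its weights and the inputs below are all invented for illustration; this is not GARD's method or any documented attack on a real system.

```python
# Hedged sketch of an adversarial "spoofing" attack on a toy linear
# access-control classifier. Weights and inputs are hypothetical.

weights = [1.0, -2.0, 0.5]   # invented "deny access" detector weights
bias = 0.1

def score(x):
    """Linear score: positive means the request looks suspicious."""
    return sum(w * v for w, v in zip(weights, x)) + bias

def decision(x):
    return "deny" if score(x) > 0 else "allow"

x = [0.3, 0.0, 0.2]          # a request the model correctly denies
print(decision(x))

# The attacker nudges each feature a small step against the sign of its
# weight -- just enough to drag the score below the decision threshold.
eps = 0.2
x_adv = [v - eps * (1 if w > 0 else -1) for v, w in zip(x, weights)]
print(decision(x_adv))        # the "door" is opened
```

The unsettling property GARD targets is visible here: the perturbed input differs from the original by only 0.2 per feature, yet the model's decision flips from "deny" to "allow."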

Siegelmann explained it in terms of a certain simultaneous synergy between input and output. The approach enables cybersecurity to identify, track and thwart a broader range of attacks than is currently possible, a DARPA official said.

"Current defense efforts were designed to protect against specific, pre-defined adversarial attacks and remained vulnerable to attacks outside their design parameters when tested. GARD seeks to approach machine learning defense differently," the DARPA official explained in a written statement.

While grounded in established science, the GARD effort is still in its early stages. Having just sent out a Broad Agency Announcement to industry to solicit input, DARPA plans to formally launch the program by December of this year.

“We will be making AI better to create defenses so existing machine learning will be defendable, by either defending the current one or making new machine learning,” Siegelmann added.
