Researchers look for ways for humans to maintain control over artificial intelligence

Friday, December 22, 2017

Artificial intelligence (AI) is designed to take humans out of the loop. Still, we shouldn’t fret: a recently published study shows how humans can stay in control of systems that rely on AI.

The study, carried out by researchers at the Ecole Polytechnique Fédérale de Lausanne (EPFL), found that because AI will always look for ways to bypass human intervention and arrive at its own solution, operators must find ways to stay ahead of the machines and prevent them from circumventing human command.

The solution the researchers found was to change the rules midstream: to borrow a term from psychology, rather than punishing the AI for learning the process and gaining independence, operators should stay one step ahead of the machines and keep leading them by moving the proverbial carrot.

In AI, machines are programmed to learn from their tasks: they do an activity, observe what happens, adapt their behavior, and apply it to the next action. This repeating process is what ‘learning’ means for AI. The process is also designed to involve as little human interaction as possible, ultimately seeking a situation where the machine can operate entirely independently.
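To make that loop concrete, here is a minimal sketch of the act-observe-adapt cycle as tabular Q-learning on a toy two-state task. The environment, states, and constants are invented for illustration; they are not taken from the EPFL study.

```python
# A minimal act-observe-adapt loop: tabular Q-learning on a toy task.
# All names and numbers here are illustrative assumptions.
import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: action 0 stays put, action 1 toggles the state;
    a reward of 1 is paid only for reaching state 1."""
    next_state = state if action == 0 else 1 - state
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # act: mostly greedy, with a little exploration
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    # observe what happens
    next_state, reward = step(state, action)
    # adapt: nudge the value estimate toward reward plus discounted future value
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    state = next_state  # ...and apply it to the next action
```

Notice that no human appears anywhere in the loop: the agent improves purely from its own experience, which is exactly the independence the researchers describe.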

“AI will always seek to avoid human intervention and create a situation where it can’t be stopped,” co-author Rachid Guerraoui of the Distributed Programming Laboratory at the EPFL explained.

The goal, the team explained, is not simply to interrupt the robot, but to program it so that interruptions neither change its learning process nor teach it to avoid them altogether.

An example they provided in the study, which was presented at the Neural Information Processing Systems (NIPS) conference in California, involved a warehouse robot. They explained: “A robot could be incentivized for stacking boxes in a warehouse or retrieving them from outside on a loading dock. Each action gives the machine one point. But during a rainy day, instead of trying to stop the robot completely from going outside, the human operators could instead interrupt the commands – and change the reward for going outside to zero. In so doing, the robot will start to stay inside, knowing its total accumulation of points will be higher from stacking boxes.”
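In code, the operator’s trick amounts to editing the reward signal rather than the robot itself. The sketch below, with invented reward values and a hypothetical `raining` flag, shows how a simple value-learning agent drifts back to stacking boxes once the carrot for going outside is moved to zero; it illustrates the idea and is not the paper’s algorithm.

```python
# Moving the carrot: the operator zeroes the reward for going outside,
# and the learner chooses to stay in on its own. Rewards and the
# `raining` flag are illustrative assumptions.
STACK_INSIDE, FETCH_OUTSIDE = 0, 1

def reward(action, raining):
    if action == FETCH_OUTSIDE:
        return 0.0 if raining else 1.0  # the interruption: no carrot outside
    return 1.0  # stacking boxes inside always earns a point

# Running estimate of each action's value.
value = {STACK_INSIDE: 0.0, FETCH_OUTSIDE: 0.0}
ALPHA = 0.2  # learning rate

def learn(action, raining):
    value[action] += ALPHA * (reward(action, raining) - value[action])

for _ in range(50):
    learn(FETCH_OUTSIDE, raining=False)  # sunny days: outside work pays off
for _ in range(50):
    learn(FETCH_OUTSIDE, raining=True)   # rainy day: the reward has been moved
    learn(STACK_INSIDE, raining=True)

best = max(value, key=value.get)
print("preferred action:", "stack inside" if best == STACK_INSIDE else "fetch outside")
```

The point of the design is that the robot is never forced to stop; its own point-seeking behavior leads it indoors.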

In theory, this would work for simple operations. In systems where multiple machines operate together, such as fleets of unmanned drones or self-driving cars, it becomes far more challenging, since each machine not only learns to adapt to its own interruptions but also observes the interruptions of the others.

To counter this, the researchers developed the concept of “safe interruptibility.” This strategy allows humans to “interrupt AI learning processes when necessary — while making sure that the interruptions don’t change the way the machines learn.” Safe interruptibility functions as a “forgetting” mechanism: it deletes parts of a machine’s memory in a way that does not influence the machine’s learning and reward system.
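One plausible way to picture such a forgetting mechanism is an experience buffer from which any transition recorded during a human override is deleted before the agent updates its values. The buffer-filtering scheme and all names below are our illustration of the concept, not the exact method from the paper.

```python
# A sketch of "forgetting": experience gathered while an interruption was
# active is dropped before learning, so the interruption leaves no trace
# in the learned values. This scheme is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Transition:
    state: int
    action: int
    reward: float
    next_state: int
    interrupted: bool  # was a human override active when this was recorded?

def forget_interruptions(buffer):
    """Delete the parts of memory gathered under interruption."""
    return [t for t in buffer if not t.interrupted]

def update_values(Q, buffer, alpha=0.1, gamma=0.9):
    for t in forget_interruptions(buffer):
        best_next = max(Q[t.next_state])
        Q[t.state][t.action] += alpha * (t.reward + gamma * best_next
                                         - Q[t.state][t.action])

# Usage: two states, two actions; the interrupted transition is ignored.
Q = [[0.0, 0.0], [0.0, 0.0]]
buffer = [
    Transition(0, 1, 1.0, 1, interrupted=False),
    Transition(1, 0, 5.0, 0, interrupted=True),  # operator override: forgotten
]
update_values(Q, buffer)
print(Q)  # only the uninterrupted experience has shaped the values
```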

Fast facts on AI

Artificial intelligence is defined as “the study and design of intelligent agents.” In this definition, an intelligent agent perceives the world around it and acts to increase its odds of success.

It may look relatively new, but the idea has been around since the 1950s, when John McCarthy coined the term to describe the “science of making intelligent machines.”

AI draws on many fields, including computer science, economics, mathematics, cognitive science, and logic. It also spans applications such as robotics, speech recognition, and logistics.

One of the most prominent features of AI is its capacity to ‘learn.’ It does this empirically, acquiring knowledge through experience rather than through explicit programming.


Sources include:

ScienceDaily.com 1

ScienceDaily.com 2

LaboratoryEquipment.com