Study warns that the growth of AI cannot be fully understood or measured, and that humans should “proceed with caution”


Some consider artificial intelligence to be a boon to mankind, while others believe it will be the destruction of civilization. A team of Canadian researchers warns that it will be very difficult to predict how AI will develop in the future – and they do not rule out the possibility of the much-touted technology utterly failing to perform as expected.

Many organizations are banking on the ever-increasing capabilities of AI to enable their long-term plans for industrial automation and for innovation in science and technology. But the University of Waterloo researchers say that this faith in machine learning could end in a letdown.

After examining how machine learning tools tackle mathematical problems, the researchers concluded that there is no mathematical method that can reliably predict whether an AI will succeed at solving a given problem. Their findings appear in the journal Nature Machine Intelligence.

“We have to proceed with caution,” said Waterloo researcher Shai Ben-David, the study’s lead author. “There is a big trend of tools that are very successful, but nobody understands why they are successful, and nobody can provide guarantees that they will continue to be successful.” (Related: In two decades half of all jobs predicted to be lost to automation.)

Mathematical frameworks are used to predict the learning ability of AI

One of the central goals of machine learning theory is to figure out what an AI can learn. Developers start by selecting a mathematical framework within which the learning process can be described.

The chosen framework must cover a broad range of learning problems, so that conclusions drawn within it apply to many different kinds of tasks.

The framework also serves as the means by which developers can characterize learnability: within it, they can mathematically determine whether, and how well, an AI will learn a given task.
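
As a concrete illustration of what such a framework looks like – the article does not name one, but the standard example is Valiant’s “probably approximately correct” (PAC) model – learnability is defined roughly as follows:

```latex
% PAC learnability (agnostic version), a standard textbook definition:
% a class H is learnable if some learner, given enough samples, almost
% always finds a hypothesis nearly as good as the best one in H.
A class $\mathcal{H}$ is PAC learnable if there is a function
$m_{\mathcal{H}} : (0,1)^2 \to \mathbb{N}$ and a learner $A$ such that
for all $\varepsilon, \delta \in (0,1)$ and every distribution
$\mathcal{D}$ over $X \times \{0,1\}$, given
$m \ge m_{\mathcal{H}}(\varepsilon, \delta)$ i.i.d. samples from
$\mathcal{D}$, the learner $A$ outputs some $h \in \mathcal{H}$ with
\[
  \Pr\Big[\, L_{\mathcal{D}}(h) \le \min_{h' \in \mathcal{H}}
      L_{\mathcal{D}}(h') + \varepsilon \,\Big] \ge 1 - \delta,
  \qquad
  L_{\mathcal{D}}(h) = \Pr_{(x,y) \sim \mathcal{D}}\big[h(x) \neq y\big].
\]
```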

This approach has proven itself many times in the past. However, it meets its match in the learning model the researchers studied, which resists every math-based attempt to settle whether learning will succeed.

“In situations where just a yes or no answer is required, we know exactly what can or cannot be done by machine learning algorithms,” Ben-David said. “However, when it comes to more general setups, we can’t distinguish learnable from un-learnable tasks.”
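
The “yes or no” setting Ben-David refers to is binary classification, where the question is fully settled by a classical result not stated in the article, the fundamental theorem of statistical learning: a class of classifiers is learnable exactly when its Vapnik-Chervonenkis (VC) dimension is finite.

```latex
% Fundamental theorem of statistical learning (binary classification):
\[
  \mathcal{H} \text{ is PAC learnable}
  \iff \mathrm{VCdim}(\mathcal{H}) < \infty,
\]
% and when the VC dimension d is finite, the number of samples needed
% (in the agnostic setting defined above) is
\[
  m_{\mathcal{H}}(\varepsilon, \delta)
  = \Theta\!\left( \frac{d + \log(1/\delta)}{\varepsilon^{2}} \right).
\]
```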

Despite mathematical predictions, a machine learning tool can still fail at a task

The Waterloo researchers studied the “estimating the maximum” (EMX) learning model, which encompasses many tasks for machine learning tools. One such task is selecting the best locations for distribution facilities so that the buildings are easy to reach for expected future customers. This and other EMX tasks can be learned and performed by humans.
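
Formally, EMX is roughly the following problem (a sketch of the paper’s setup; the facility-location story is one instance of it): from samples of an unknown distribution, choose the set in a fixed family that captures as much of the probability mass as possible.

```latex
% EMX ("estimating the maximum"), sketched from the paper's formulation:
Given a family $\mathcal{F}$ of subsets of a domain $X$ and an unknown
distribution $P$ over $X$, a learner receives i.i.d. samples from $P$
and must output some $F \in \mathcal{F}$ satisfying, with probability
at least $1 - \delta$,
\[
  P(F) \ge \sup_{F' \in \mathcal{F}} P(F') - \varepsilon .
\]
% For X the unit interval and F the family of its finite subsets,
% Ben-David et al. show that whether EMX is learnable is independent
% of the standard (ZFC) axioms of set theory: it can be neither
% proved nor refuted from them.
```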

Their analysis showed that mathematical methods cannot tell whether an artificial intelligence-based tool will be able to tackle that task: the question turns out to be independent of the standard axioms of mathematics, so it can be neither proved nor disproved. This led the researchers to conclude that there is no dimension-like quantity – analogous to the VC dimension used for classification – that puts a solid estimate on learnability in this setting.

Before this study, most experts believed that whether a machine learning algorithm can learn and perform a task can always be calculated – as long as the task has an accurate description. However, it turned out that even the most detailed mathematical framework can draw a blank on the expected performance of an AI.

Ben-David and his team said that the problem lies in how learnability is currently defined: as the mere existence of a learning function. Instead, they argue, it should be defined by the existence of an actual learning algorithm.
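
To make the contrast concrete (our own formalization, not notation from the paper): the standard definition asks only that a suitable learning function exist as a mathematical object, whereas a constructive definition would demand a procedure that can actually be carried out.

```latex
% Non-constructive definition (status quo):
$\mathcal{F}$ is learnable if there \emph{exists} a function
$A : \bigcup_{m \ge 1} X^{m} \to \mathcal{F}$ meeting the
$(\varepsilon, \delta)$ guarantee above.
% Constructive alternative (the direction the authors point to):
$\mathcal{F}$ is algorithmically learnable if, in addition, $A$ is
computable, i.e. realized by a concrete algorithm.
```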

Sources include:

NewsWise.com

Nature.com


