Aug 08, 2018

DeepMind

For years, one of the biggest worries about the growing use of AI has been how it will handle reasoning. As humans, we make many irrational choices. So how can we expect AI to make up for our own failings? If we program the framework ourselves, how can we make sure the AI is not prone to the same limitations that we are?

One organization has a rough idea of how we might get around this: DeepMind. An AI-focused subsidiary of Google, it recently produced a paper laying out answers drawn from measuring an AI’s ability to reason.

To do so, they used the same kind of reasoning test that we would take as people: an IQ test. After all, why not hold the AI to the same standards as a person?

DeepMind created new software that could generate unique matrix problems, in the style of Raven’s Progressive Matrices, which the AI would then need to solve. For those worrying that AI might be about to take over the world, fear not: the results aren’t exactly going to see us overpowered and overthrown just yet.
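To get a feel for what “generating unique matrix problems” means, here is a minimal toy sketch in Python. It is emphatically not DeepMind’s generator (their problems are visual and far richer); it only illustrates the idea of procedurally producing a puzzle whose hidden rule the solver must infer, plus distractor answers.

```python
import random

# Toy, hypothetical generator for Raven-style matrix problems.
# Each row of the 3x3 grid follows a hidden rule (here, a simple
# arithmetic progression in shape count); the bottom-right cell is
# hidden and the solver must pick it from a set of choices.

def make_problem(seed=None):
    rng = random.Random(seed)
    start = rng.randint(1, 4)
    step = rng.randint(1, 3)
    # Cell (r, c) holds a shape count that grows by `step` along each row.
    grid = [[start + r + c * step for c in range(3)] for r in range(3)]
    answer = grid[2][2]
    grid[2][2] = None  # hide the bottom-right cell
    # Distractor choices: plausible but wrong counts near the answer.
    choices = {answer}
    while len(choices) < 4:
        choices.add(answer + rng.randint(-3, 3))
    return grid, sorted(choices), answer

if __name__ == "__main__":
    grid, choices, answer = make_problem(seed=0)
    for row in grid:
        print(row)
    print("choices:", choices, "correct:", answer)
```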

‘Better luck next time’

The results were, to say the least, not impressive. The system managed to fare quite well when the same factors appeared in both the training and the test problems, reaching success rates of around 75% when the test and training sets were more or less the same.

All it took to upset this, though, was a single change. The AI would start to produce much poorer and more inconsistent scores as soon as the test conditions shifted. If the test set differed from the training set, the AI simply had no real answer.

This held even when the change was extremely small. A simple change in color tone was enough to leave the AI struggling badly to make sense of the information in front of it. So, for those worried that AI might be getting beyond the point of safety, this should assuage some of your fears.
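To make that failure mode concrete, here is a runnable toy sketch of the kind of train/test split that exposes it. The “model” below is a deliberately brittle stand-in, not DeepMind’s neural network: it memorizes answers keyed on the full observation, including an irrelevant color attribute, so its accuracy collapses the moment that one attribute changes.

```python
import random

# Problems: complete an arithmetic progression. The "color" field is a
# purely cosmetic surface attribute that carries no information.

def make_problems(color, n, seed):
    rng = random.Random(seed)
    problems = []
    for _ in range(n):
        start, step = rng.randint(1, 5), rng.randint(1, 3)
        problems.append({"seq": (start, start + step),
                         "color": color,
                         "answer": start + 2 * step})
    return problems

# Brittle "training": memorize answers keyed on (sequence, color),
# i.e. the model latches onto the irrelevant surface attribute too.
def train(problems):
    table = {(p["seq"], p["color"]): p["answer"] for p in problems}
    return lambda p: table.get((p["seq"], p["color"]), -1)

def accuracy(model, problems):
    return sum(model(p) == p["answer"] for p in problems) / len(problems)

train_set = make_problems("dark", 5000, seed=1)
same_dist = make_problems("dark", 500, seed=2)   # same attributes as training
shifted   = make_problems("light", 500, seed=3)  # only the color differs

model = train(train_set)
print("same distribution:", accuracy(model, same_dist))  # high
print("shifted by color :", accuracy(model, shifted))    # collapses to 0
```

Real networks generalize better than a lookup table, of course, but the pattern DeepMind reported is the same shape: strong scores when test attributes match training, a sharp drop when even one attribute shifts.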

The AI IQ test immediately shows something that we already thought we knew: even the most advanced AIs at present are not quite ready to match the human mind. If we have not ‘trained’ the AI to be fully ready for a highly specific test, it can struggle quite significantly.

For now, then, we are some way away from being able to build a general AI that can think, feel and reason through complex challenges. At least we now have a means of seeing how AI is improving and changing over time.

However, this should, at the very least, show people that the replacement of humans with machines is still quite a way off.

