Key Players Talk about the Future of Artificial Intelligence

Oct 26, 2016

Today, Artificial Intelligence (AI) is no longer just a movie premise or an abstract concept. Recent breakthroughs in AI and machine learning extend well beyond mobile phones and drones into many other areas. These developments are poised to accelerate, as the biggest names in the industry are said to be working together on issues that include safety, privacy and the relationship between AI and human beings.

Big players like IBM, Microsoft, Facebook, Amazon and DeepMind, which was acquired by Google in 2014, have been the most vocal about the role of AI and the direction it is heading. Their consortium is known as the Partnership on Artificial Intelligence to Benefit People and Society, or simply the Partnership on AI. As its name suggests, the goal of these five technology companies is to promote best practices and conduct research. The group also aims to publish its research under an open license, covering fairness and inclusivity, transparency, privacy, interoperability, robustness, trustworthiness and reliability.


In practice, the five founding members will communicate and work with one another, discussing advances in Artificial Intelligence even as they continue to compete on products and services.

According to University of Montreal professor Yoshua Bengio, AI offers companies and organizations alike a plethora of opportunities, and its development is fast and continuous. He also raised concerns about how that development will proceed, but noted that bringing these companies together should unite the five players behind shared goals and objectives that serve the common good.


What to Expect

The group said that it has no plans to lobby government bodies. Instead, its board will include equal representation of corporate and non-corporate members. Moreover, talks are already under way between the Partnership on AI and organizations such as the Allen Institute for Artificial Intelligence and the Association for the Advancement of Artificial Intelligence. As for its potential, AI is expected to benefit many aspects of life, including education, entertainment, manufacturing, transport, home automation and healthcare.

All five founding corporate members have AI research teams, and some of their systems have become well known, such as IBM's Watson, Google's DeepMind and Amazon's Alexa. London-based DeepMind made headlines in March after its machine defeated a world-class human player of Go, the ancient Asian board game.

Microsoft's managing director of research called the partnership historic and said it will have a major influence on people, while IBM ethics researcher Francesca Rossi described the collaboration as "a vital voice in the advancement of the defining technology of this century".

Meanwhile, the absence of Apple and of OpenAI, the non-profit group founded by Elon Musk, did not escape enthusiasts' notice, although Apple is said to be enthusiastic about the project.




U.C. Berkeley to Create Human-Compatible Artificial Intelligence

Sep 07, 2016

A team at U.C. Berkeley, led by artificial intelligence (AI) expert Stuart Russell, has created the new Center for Human-Compatible Artificial Intelligence, which focuses on ensuring that AI systems will be beneficial to human beings. Russell is the co-author of "Artificial Intelligence: A Modern Approach", regarded as the standard text in the AI field, and an advocate for incorporating human values into AI design. The center was launched with a $5.5 million grant from the Open Philanthropy Project, with additional grants from the Future of Life Institute and the Leverhulme Trust.

As for the imaginary threat of sentient, evil robots from science fiction, Russell quickly dismissed the issue, explaining that machines as currently designed in fields such as robotics, operations research, control theory and AI will literally pursue whatever objectives humans give them. Citing the Cat in the Hat robot, he said that domestic robots are told to perform tasks without understanding the hierarchy of values behind those tasks.


The center aims to work on solutions that guarantee the most sophisticated AI systems, which might perform essential services for people or be entrusted with control over critical infrastructure, will act in ways aligned with human values. Russell said, "AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own. This means we need cast-iron formal proofs, not just good intentions."

One solution that Russell and his team are exploring is so-called "inverse reinforcement learning", which lets robots learn human values by observing human behavior. By watching how people act in their daily lives, robots would infer what people actually value. Explaining this, Russell states, "Rather than have robot designers specify the values, which would probably be a disaster, instead the robots will observe and learn from people. Not just by watching, but also by reading. Almost everything ever written down is about people doing things, and other people having opinions about it. All of that is useful evidence."
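To make the idea concrete, here is a minimal, hypothetical sketch of the feature-matching intuition behind inverse reinforcement learning: the robot never sees a reward function, only trajectories of an expert's behavior, and infers linear reward weights that make the states the expert chooses to visit look valuable. The toy gridworld, its features, and the simple difference-of-expectations update are illustrative assumptions, not the center's actual method.

```python
import numpy as np

def feature_expectations(trajectories, phi):
    """Average feature vector over every state the expert visits."""
    visited = [phi[s] for traj in trajectories for s in traj]
    return np.mean(visited, axis=0)

def infer_reward_weights(trajectories, phi):
    """Feature-matching IRL sketch: the inferred reward weights point
    from the average state toward the states the expert prefers."""
    mu_expert = feature_expectations(trajectories, phi)
    mu_uniform = phi.mean(axis=0)          # baseline: visiting states at random
    w = mu_expert - mu_uniform
    return w / (np.linalg.norm(w) + 1e-8)  # normalized for readability

# 4 states, 2 binary features per state: [is_goal, is_hazard]
phi = np.array([[0., 0.],   # state 0: start
                [0., 1.],   # state 1: hazard
                [0., 0.],   # state 2: corridor
                [1., 0.]])  # state 3: goal

# Observed expert behavior: heads to the goal, never enters the hazard.
expert_trajs = [[0, 2, 3], [0, 3]]

w = infer_reward_weights(expert_trajs, phi)
rewards = phi @ w
print(int(rewards.argmax()))  # prints 3: the goal state is inferred as most valuable
```

The key point the sketch illustrates is the one Russell makes: the values are never hand-coded, they are recovered from observed behavior, so conflicting or noisy demonstrations directly translate into uncertain reward estimates.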

However, Russell acknowledged that this will not be an easy task, since people vary widely in their values and are far from perfect in putting those values into practice. He said this can cause problems for robots trying to learn exactly what people want, as different individuals will have conflicting desires. Summing up in his article, Russell said, "In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think."

The principal investigators helping Russell at the new center include cognitive scientist Tom Griffiths and computer scientist Pieter Abbeel.