What You Must Know About Google TensorFlow, and Why It Could Be AI’s Future

Jun 28, 2017
 

If you’ve been reading updates about the Google I/O conference (which took place on May 17 to 19, 2017), you’re probably impressed by the announcements made during the event. Google Assistant’s real-world analysis, Google Home’s call-making abilities, Google Photos’ new shared libraries: these are some of the new and exciting features the tech giant revealed.

But if you take the event as a whole, you might notice two things: Google is no longer hyper-focused on mobile technology, and it’s now setting its sights squarely on artificial intelligence.

AI isn’t exactly a newcomer to Google. Many of the tools and features offered by the company, such as image search and speech recognition, are powered by artificial intelligence. However, Google and its CEO Sundar Pichai point out that AI isn’t just a tool for providing better services to users; it’s the foundation on which the company’s future (and the rest of the world’s) will be built. The fact that the Mountain View-based tech giant readily admits this should make the tech community stand up and take notice.

 

Google’s AI Powerhouse

Google has already started to make artificial intelligence a central part of its service offering. One of the biggest steps it has taken so far was to release TensorFlow as open-source software back in 2015. By doing this, Google made it easier for developers and software engineers to incorporate AI into their projects.

But what exactly is TensorFlow? Technically, it’s a software library that makes building and training machine learning models faster and easier. It stemmed from DistBelief, a machine learning system built by Google Brain in 2011 to assist Google engineers in their research and product development work. Several computer scientists worked on DistBelief to make it faster and more robust, eventually turning it into TensorFlow.
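To make the idea of a “software library for machine learning” concrete, here is a minimal, illustrative sketch that trains a tiny linear model. It is written against the TensorFlow 1.x graph-and-session API that was current when this post was published; the toy data and parameter values are invented for the example.

```python
# A minimal sketch using the TensorFlow 1.x graph/session API (current in 2017).
import numpy as np
import tensorflow as tf

# Toy data: y = 3x + 2 plus a little noise (made up for this example).
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 2.0 + np.random.normal(0, 0.05, 100).astype(np.float32)

# Model parameters to be learned.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = w * x_data + b

# Mean squared error loss and a gradient-descent training step.
loss = tf.reduce_mean(tf.square(y_pred - y_data))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_step)
    # The learned values should approach the 3.0 and 2.0 used to generate the data.
    print(sess.run([w, b]))
```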

The library was made open to the public on November 9, 2015. Early releases could only be deployed on single machines, which was one of its few limitations, though Google has since added support for distributed training across multiple machines. TensorFlow also supports multiple CPUs and GPUs and can be used on 64-bit Windows, Mac, and Linux computers as well as Android and iOS mobile devices.
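As a rough illustration of that multi-device support, the sketch below pins parts of a computation to specific devices using the TensorFlow 1.x API. The soft-placement flag is there so the same code falls back to the CPU on machines without a GPU.

```python
import tensorflow as tf

# Pin one part of the graph to the CPU and (if present) another to the first GPU.
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])

with tf.device('/gpu:0'):
    b = tf.matmul(a, a)

# allow_soft_placement lets TensorFlow fall back to the CPU when no GPU is available.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(b))
```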

 

Leading the Way

It’s important to note that TensorFlow is not the only machine learning library available. There are several other options out there, such as the Swiss-built Torch, the Berkeley-developed Caffe (and its Facebook-backed successor, Caffe2), and Keras (whose creator has become a Google engineer).

Despite these options, many developers and businesses choose TensorFlow for several reasons. Some like the fact that TensorFlow scales easily, while others love how readily it integrates with the rest of Google’s services (which makes it easier for developers to ship their products). Another good thing about TensorFlow: you can either build and customize your own algorithms or use pre-built, off-the-shelf components if you’re squeezed for time or don’t have enough resources to go the DIY route.
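As an example of the “off-the-shelf” route, the sketch below uses one of TensorFlow’s pre-built (canned) estimators instead of hand-writing a model. The feature name, layer sizes, and synthetic data are arbitrary, and the estimator API shown is the one available in later 1.x releases; treat it as an illustration rather than a recipe.

```python
import numpy as np
import tensorflow as tf

# A "canned" estimator: a ready-made deep classifier, no custom graph code required.
feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3)

# Tiny synthetic dataset standing in for real training data.
x_train = np.random.rand(120, 4).astype(np.float32)
y_train = np.random.randint(0, 3, 120)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': x_train}, y=y_train, batch_size=16, num_epochs=None, shuffle=True)

# Training, checkpointing, and logging are handled by the estimator itself.
classifier.train(input_fn=train_input_fn, steps=200)
```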

With all this in mind, it’s clear that TensorFlow is central to Google’s foray into artificial intelligence. In fact, the tech giant has recently announced a lightweight version of the library called TensorFlow Lite, along with an API designed to interface with future AI-powered smartphone processors. This is just the beginning, though: we can expect to see more developments that will expand Google’s AI empire and help it dominate the artificial intelligence space.

 

Key Players Talk about the Future of Artificial Intelligence

Oct 26, 2016
 

Today, artificial intelligence (AI) is no longer just a movie title or an abstract concept. Recent breakthroughs in AI and machine learning go well beyond the AI built into mobile phones and drones. Now the biggest names in the industry are said to be working together to address issues such as safety, privacy and the relationship between AI and human beings.

Big players like IBM, Microsoft, Facebook, Amazon and DeepMind (acquired by Google in 2014) have been the most vocal about the role of AI and the direction it is heading. Their consortium is known as the Partnership on Artificial Intelligence to Benefit People and Society, or simply the Partnership on AI. As its name suggests, the goal of these five technology companies is to promote best practices and conduct research. The group also aims to publish its research under an open license, covering topics such as fairness and inclusivity, transparency, privacy, interoperability, robustness, trustworthiness and reliability.


In practice, the five founding members will communicate and collaborate with one another and discuss advances in artificial intelligence, even as they continue to compete to build the best products and services.

According to University of Montreal professor Yoshua Bengio, AI offers companies and organizations alike a wealth of opportunities, and its development is fast and continuous. He also raised concerns about how that development will be managed, but noted that the coming together of these companies should align the five players around goals and objectives that serve the common good.

 

What to Expect

The group said that it has no plans to lobby government bodies. Instead, its board will have equal representation of corporate and non-corporate members. Moreover, talks are already ongoing between the Partnership on AI and organizations such as the Allen Institute for Artificial Intelligence and the Association for the Advancement of Artificial Intelligence. As for the potential of AI, the group expects it to benefit many aspects of life, such as education, entertainment, manufacturing, transport, home automation and healthcare.

All five pioneer corporate members have AI research teams, and some of their projects have become household names, such as IBM’s Watson, Google’s DeepMind and Amazon’s Alexa. London-based DeepMind made headlines in March 2016 when its system beat a world-class human player at Go, the ancient Asian board game.

Microsoft’s managing director of research called the partnership historic and said it would have a major influence on people, while IBM ethics researcher Francesca Rossi described the collaboration as “a vital voice in the advancement of the defining technology of this century”.

Meanwhile, the absence of Apple and OpenAI, the non-profit group co-founded by Elon Musk, did not escape the eyes of enthusiasts, although Apple is said to be enthusiastic about the project.

 

Reference

http://www.bbc.co.uk/news/technology-37494863

 

U.C. Berkeley to Create Human-Compatible Artificial Intelligence

Sep 7, 2016
 

A team at U.C. Berkeley, led by artificial intelligence (AI) expert Stuart Russell, has created the new Center for Human-Compatible Artificial Intelligence, which focuses on ensuring that AI systems will be beneficial to human beings. Russell is the co-author of “Artificial Intelligence: A Modern Approach”, regarded as the standard text in the AI field, and an advocate for incorporating human values into AI design. The center was launched with a $5.5 million grant from the Open Philanthropy Project, with additional grants from the Future of Life Institute and the Leverhulme Trust.

As for the imaginary threat posed by the sentient, evil robots of science fiction, Russell quickly dismissed the issue, explaining that the machines currently designed in fields such as robotics, operations research, control theory and AI will literally take the objectives humans give them. Citing the Cat in the Hat robot, he said that domestic robots are told to do tasks without understanding the hierarchy of values behind the tasks they are programmed to perform.


The center aims to work on solutions that guarantee the most sophisticated AI systems, which might perform essential services for people or be entrusted with control over critical infrastructure, will act in ways aligned with human values. Russell said, “AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own. This means we need cast-iron formal proofs, not just good intentions.”

One solution that Russell and his team are exploring is “inverse reinforcement learning”, which allows robots to learn about human values by observing human behavior. By watching how people behave in their daily lives, robots would infer what we value. Explaining this, Russell states, “Rather than have robot designers specify the values, which would probably be a disaster, instead the robots will observe and learn from people. Not just by watching, but also by reading. Almost everything ever written down is about people doing things, and other people having opinions about it. All of that is useful evidence.”
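To give a flavor of how inverse reinforcement learning works, here is a heavily simplified sketch in the spirit of the feature-expectation-matching approach to apprenticeship learning associated with Abbeel and Ng. The tiny “world”, its features and the demonstration trajectories are all invented for illustration and are not from the center’s actual research.

```python
# Illustrative sketch: infer reward weights from observed behavior rather than
# specifying them by hand, following the feature-expectation-matching intuition.
import numpy as np

np.random.seed(0)

n_states, n_features = 5, 3

# Each state is described by a feature vector (e.g. "dirt cleaned", "vase intact", ...).
features = np.random.rand(n_states, n_features)

# Pretend "human" demonstrations: sequences of states an expert actually visited.
expert_trajectories = [
    [0, 2, 2, 4],
    [0, 2, 4, 4],
    [2, 2, 4, 0],
]

# Expert feature expectations: average features of the states the human visits.
mu_expert = np.mean([features[s] for traj in expert_trajectories for s in traj], axis=0)

# Feature expectations of a uniform-random "baseline" policy over all states.
mu_random = features.mean(axis=0)

# First projection step: reward weights point from the baseline toward the expert.
w = mu_expert - mu_random
w /= np.linalg.norm(w)

# The recovered reward ranks states the expert favored above the ones it avoided.
recovered_reward = features @ w
print("reward per state:", np.round(recovered_reward, 3))
```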

However, Russell acknowledged that this will not be an easy task: people vary widely in their values and are far from perfect at putting those values into practice. This can cause problems for robots trying to learn exactly what people want, since different individuals will have conflicting desires. Summing things up in his article, Russell said, “In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

The principal investigators helping Russell at the new center include cognitive scientist Tom Griffiths and computer scientist Pieter Abbeel.

 

Reference

https://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/