Is Google LaMDA the Omega?

Jun 25, 2022
 

Google LaMDA AI

When we talk about artificial intelligence, there is always going to be a comparison to science fiction. From the likes of The Terminator film franchise to the likes of The Matrix, we have many fictional examples of AI gone wrong. Yet, for many, the creeping move towards the use of AI in ever more diverse ways is cause for unease. Could we actually be facing the potential for a sentient AI coming to life in the near future? According to one Google engineer, this has already happened.

Recently, the internet was shaken by reports that a Google engineer was suspended from his role for, in essence, claiming that one of the company's AI bots had become sentient. With the engineer claiming it had the sentience and intelligence of a young child, people were naturally worried about what was being said. Could we really have reached a level where AI sentience is not just a pipedream, but a reality?

The engineer suggested that the AI, LaMDA, was able to communicate that it had human-like emotions, including fear. Given that it supposedly spoke about a fear of being turned off, sci-fi buffs will know where this usually leads. LaMDA is a language model that has been developed to analyse the use of language.

This means that it can carry out cool tasks like predicting the next word, and even predicting full word sequences. Given that LaMDA is one of the few models to be trained on dialogue rather than ordinary written text, there are some growing concerns about what this actually means.
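To make "predicting the next word" concrete, here is a minimal sketch of the idea using a toy bigram model. This is not how LaMDA works internally (LaMDA is a large neural network trained on vast amounts of dialogue); it only illustrates, under that simplifying assumption, what it means for a model to learn which word is likely to come next.

```python
from collections import Counter, defaultdict

# Toy dialogue corpus; a real model trains on enormous amounts of conversation.
corpus = [
    "how are you today",
    "how are you doing",
    "how is the weather today",
    "the weather is nice today",
]

# Count which word follows which (a bigram model, the simplest possible case).
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("how"))   # "are" follows "how" twice, "is" only once -> "are"
```

Predicting a full word sequence is just this step repeated: feed the prediction back in as the next input word and keep going.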

LaMDA is currently able to generate conversations in a manner that is not restrained by a fixed task. In the past, task-based responses would limit just how far a conversation could go. With LaMDA, though, that is not the case. This means that the conversation can actually bounce from topic to topic, just as it would if you were talking to another person.

Can LaMDA actually understand and converse, though?

Given all of the facts and details now available about LaMDA, the burning question is also the simplest: can it actually talk?

In essence, it is trained to judge whether or not its responses make sense within the context of the discussion. This means that it can keep up with a conversation and provide the sensation, if not the reality, that it is listening and responding to the conversation in front of it. It is, though, still based on a three-tier approach that scores the safety, the quality, and the groundedness of the responses being given.
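A rough sketch of that filter-then-rank idea is below. The scoring functions here are hypothetical stand-ins invented for illustration; the real system uses learned classifiers for each criterion, not word lists or word-overlap counts.

```python
# Hypothetical sketch: filter candidate replies for safety, then rank the
# survivors by quality plus groundedness. All three scorers are crude
# stand-ins, NOT the actual LaMDA classifiers.

BLOCKLIST = {"rude"}  # stand-in for a real safety classifier

def safety(response):
    return 0.0 if any(w in BLOCKLIST for w in response.lower().split()) else 1.0

def quality(response):
    # Crude proxy: longer, more specific answers score higher (capped at 1).
    return min(len(response.split()) / 10, 1.0)

def groundedness(response, known_facts):
    # Crude proxy: fraction of the reply's words backed by retrieved facts.
    words = set(response.lower().split())
    facts = set(" ".join(known_facts).lower().split())
    return len(words & facts) / len(words)

def pick_response(candidates, known_facts, safety_threshold=0.5):
    safe = [c for c in candidates if safety(c) >= safety_threshold]
    return max(safe, key=lambda c: quality(c) + groundedness(c, known_facts))

facts = ["the eiffel tower is in paris"]
candidates = [
    "that is a rude question",
    "the eiffel tower is in paris and opened in 1889",
    "no idea",
]
print(pick_response(candidates, facts))
```

The point of the sketch is the shape of the pipeline: unsafe candidates are removed outright, and the remaining ones compete on how good and how well-grounded they are.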

Given that it was developed using human examples and rating systems, then, it makes sense that LaMDA can create a human-ish conversation. It also used a search engine to try and add more authenticity to the topics it can converse on and the things that it can bring up. That is very interesting. However, LaMDA is not sentient; there is no existing evidence that it is actually alive.

It might be able to go beyond the level of conversation one might expect from an AI in 2022, but it is not sentient. There is enough published research about LaMDA to dismiss this theory, as exciting as it might sound.

So, we can stand down for now – we are not, it appears, on the brink of creating truly sentient AI. LaMDA is impressive, but it is not a person.

 

Posted at 10:57 am

3 Types of AI

Nov 05, 2019
 

AI computer board

For years, artificial intelligence has been a major part of the way that we go about our lives. From making our computing easier to helping us drive more safely, AI is becoming something that we rely upon on a regular basis. However, are you aware that there is more than one kind of AI? AI is diverse, unique and growing all the time. That's why, if you are looking to learn more about AI, we recommend that you read on.

Here, we break down the three types of AI that are worth knowing about. If you are intent on knowing more about AI, then these three types of artificial intelligence are the place to start. Where, then, should you begin your journey?

Artificial General Intelligence

Usually known as AGI, this is the kind of AI that we hope to see made in the future: AI that can match the intellectual capacity and quality of a human. While not present today, it may well become a reality one day, but the fact that we still know so little about how the human brain functions means that this is hardly a scientific endeavour which is just around the corner.

For that reason, it pays to look at AGI as something emerging, and something very important, too. AGI has become a big reason for us to keep investing in AI, as the potential benefits on offer are simply immense.

Artificial Narrow Intelligence

Another form of AI is known as Artificial Narrow Intelligence, or ANI. It's become a very popular form of AI as it's used to focus on one task in particular. It's also the only AI that we have in working condition today, so it's the one worth understanding first, as it is quite literally what you will be encountering day to day.

ANI is typically the kind of AI that we come across in things like chatbots. Its sole use is to try and understand the input being given to it, before giving some form of human-like response to what has been said. For those looking for AI they can interact with, this is the only kind that you'll find available today.
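The narrow, single-task character of ANI can be sketched with a toy chatbot. The rules and replies below are invented for illustration; production chatbots use trained intent classifiers rather than keyword matching, but the shape is the same: map the input to one known intent, return a canned human-like reply.

```python
# Hypothetical keyword-rule chatbot: a minimal example of ANI's pattern of
# doing exactly one narrow task (matching input to a known intent).

RULES = [
    ({"hello", "hi"}, "Hello! How can I help you today?"),
    ({"hours", "open"}, "We are open 9am to 5pm, Monday to Friday."),
    ({"bye", "goodbye"}, "Goodbye! Have a great day."),
]

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message):
    words = set(message.lower().split())
    for keywords, response in RULES:
        if words & keywords:  # any keyword present triggers the rule
            return response
    return FALLBACK

print(reply("hi there"))
print(reply("when are you open"))
print(reply("tell me a joke"))  # no rule matches, so the fallback fires
```

Note how brittle this is outside its narrow lane: anything not covered by a rule falls straight through to the fallback, which is exactly the "narrow" in Artificial Narrow Intelligence.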

Artificial Super Intelligence

ASI, though, is normally the kind of intellect that cynics of AI fear: AI that can far exceed the thinking process of a human. That would be something that we would need to be wary of; would it be wise to build a computer system that is broadly better than us in almost every conceivable way? It hardly sounds like something that would be well advised.

However, given that we're still so far away from creating an AI on a par with us, making one that is ahead of us in any respect seems a touch far-fetched. Still, don't underestimate the speed of human progress; even basic AI like our much-vaunted voice assistants would have been deemed a pipedream 10-15 years ago.

 

Citation

https://interestingengineering.com/the-three-types-of-artificial-intelligence-understanding-ai

Posted at 9:12 am

Could AI Retina Scanning Be Used to Help Diabetic Patients?

Sep 12, 2018
 

AI Retinal Scan

For years, diabetes has been one of the most challenging and debilitating illnesses someone can suffer from. With symptoms that are hard to spot and a condition that is harder still to manage, diabetes has become a major problem for so many people. However, a technological advancement could finally help make the diagnosis and management of diabetic life so much easier. Thanks to the help of this new AI retinal scanning option, eye screening could change rapidly.

With diabetes being a leading cause of blindness worldwide, this new technology could offer an amazing way to tackle the problem in years to come. Retinal scans are presently on offer, and are a recommended part of diabetic diagnosis and care. However, with uptake sometimes falling to just 33-50%, it's not a widely used solution.

Thanks to the work of Dr. Simon Kos, the Chief Medical Officer at Microsoft, that could be about to change. By improving both the quality and accuracy of retinal scanning, Kos believes that this could help to optimize uptake. “[In the US,] patients have to turn up to the ophthalmologist’s office and it takes two to three hours because they dilate their eyes with these drops and you can’t drive afterwards. From a patient experience perspective, it’s a real inconvenience, hence the poor compliance.”

His latest program, then, Iris, might offer something entirely new: “I’ve been working with a business partner here in the US called Iris [Intelligent Retinal Imaging System] and they have created an ophthalmic visit in a box. It’s actually a combined hardware and software appliance; you pop your chin into a chin strap and a little voice guides you through taking a perfect picture of the back of your eye in a few minutes.”

Greater accuracy

At present, your retinal scan would be looked at by a single expert in ophthalmology. Now, it can be sent to the cloud, where a team of experts can look at and understand the issues involved. This results in faster planning for a solution, and can deliver an answer within hours rather than days.

By working alongside AI, the aim is for the image data-sets interpreted by ophthalmologists to eventually teach the machine to pick up on the crucial details itself. Accuracy rates with Iris are improving all the time, and the hope is that it will provide a lasting and genuine solution to eye issues in diabetics.

With an accuracy of 97%, against an average of 92% for a human expert, it's safe to say that Iris is learning quickly; in just a year it has improved by 12%. With more accuracy and greater speed of response, it's easy to see why so many diabetics might be more inclined to make the most of this interesting new development backed by Microsoft.

Alongside other machine-learning inspired medical tools, Iris offers another glimpse into a world where we can use technology to solve even the most specific conditions in the shortest space of time imaginable.

 

Posted at 9:40 am