In today’s Tech Insider…the end of days at the hands of a machine…why we’re all at least a little afraid of our own creations…why we need reassurance that AI won’t harm us…how the best in the business are tackling this problem…and more…
Armageddon at the hands of a machine. It’s one of society’s biggest nightmares and has been ever since the dawn of the information age.
There’s just something about having your destiny decided for you by a cold-hearted machine that doesn’t sit well with people. Perhaps it’s because there would be no emotion behind the act.

Or perhaps it’s because a machine is our own creation. The thought of something we built spelling the end for humanity is hard to stomach. Never mind all the weapons we’ve made over the years.
Still, I can empathise a little. Change is scary, and right now the biggest change all over the world is AI.
We’re rapidly turning over more and more responsibilities to the machines. As they become smarter, we start to rely on them even more.
But for some people, this raises some serious doubts. After all, once the machines are in charge, what’s to stop them from deciding that we were the problem all along?
Again, it comes back to the heartless machine stereotype. We can’t envision a bunch of circuits and wires ever understanding human emotion. Well, most of us can’t anyway.
Personally, I think AI will one day be both intelligent and empathetic. When or how that will be made possible, though, is still unclear. That’s pure speculation on my part.
We can’t rely upon blind trust, though. I know that and I’m sure you know that as well. Humanity needs reassurance that AI won’t end up heralding the end of society as we know it.
It’s time to send in the experts.
Top minds for a tough job
DeepMind is without doubt the leading AI company on the planet. Though technically, it’s owned by Alphabet (Google’s parent company).
If you ever wanted a team to handle AI, these are the people at the top of the list.
Their greatest achievement to date is the famous AlphaGo system: an AI program that, back in 2015, became the first to beat a professional human Go player. And last year, it backed up this result with a win over the best Go player in the world.
AlphaGo has since retired, but DeepMind is only just getting started. They’re now turning their attention to the digital realm, with video games as the new focus for their AI.
But, this company isn’t just about making machines that can play games. What they really want to do is show the world the power of AI. Although, it seems even DeepMind realises that they may be moving too fast for some people’s comfort.
So in order to assure the world about AI, DeepMind is embarking on a new project. Their new goal isn’t just about creating smarter AI, but also safer AI.
To be fair, the company has always had at least some focus on safety. Now though, they’re offering some transparency into how they hope to do it.
In a recent blog post, DeepMind revealed their framework for AI safety: a loose guide built around three technical directives, specification, robustness and assurance:
‘Specification ensures that an AI system’s behaviour aligns with the operator’s true intentions.
‘Robustness ensures that an AI system continues to operate within safe limits upon perturbations.
‘Assurance ensures that we can understand and control AI systems during operation.’
Now at first glance, these are some very reasonable goals. They’re not even that technical, which is great for engaging with people who aren’t technically minded.
However, that doesn’t mean this will be a simple task. You’ve got to think like a machine to understand how challenging this endeavour really is.
What do you really want?
In the blog post, DeepMind compares AI thinking to that of the fabled King Midas, the man who wished that everything he touched would turn to gold.
But, as you’re probably well aware, that wish didn’t work out too well for the king. In the end, he couldn’t eat or drink because everything became gold.
DeepMind notes that we can learn a lot from dear old Midas when it comes to AI. Specifically, the idea of understanding what we want and how we state it. Let me give you an example to show you what I mean.
Imagine that you’re sitting at home and you’re watching a movie. It’s a really great movie too, one of your favourites. But, you’re also really, really thirsty.
You need a glass of water. Fortunately, you’ve got your new state-of-the-art ButlerBot: an AI machine that will adhere to your every whim. You hand the ButlerBot your glass and tell it to ‘get me some water, fast’.
The robot bustles off down the hall and you keep watching your film. After about a minute, the robot returns with your glass. You take one look at the contents of the glass and recoil in horror. The water has tinges of green and brown in it. You’re not going to drink that.
You decide to ask the robot what the hell is in the glass. It simply responds with ‘water’. Clearly, you’re not going to get any answers out of the thing. So, begrudgingly you head to the kitchen to fill the glass yourself.
But as you head down the hall you notice something: there’s a bunch of wet flowers on the floor. And to the side is a desk with an empty vase. It suddenly dawns on you.
The robot poured the vase water into your glass. Your robot butler was about to have you drink water teeming with bacteria. It wasn’t intentionally trying to cause you harm. It was simply following your orders…
See, you only told the ButlerBot to get you water, not necessarily drinking water. Plus, you said fast, so it looked for the closest source of water available.
A human being obviously would have inferred that you meant a glass of drinkable water. You don’t need to say you want drinkable water, because that’s obvious. AI, though, needs to be told, and told very explicitly.
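To see how literal this is, here’s a minimal sketch of the ButlerBot’s mistake in code. Everything here is invented for illustration (the source names, distances and functions are all hypothetical): the robot optimises exactly what was stated, ‘water, fast’, with no unstated drinkability constraint.

```python
# Hypothetical illustration of an under-specified objective.
# All names and numbers are made up for the example.

water_sources = [
    {"name": "kitchen tap",  "distance_m": 12, "drinkable": True},
    {"name": "flower vase",  "distance_m": 3,  "drinkable": False},
    {"name": "bathroom tap", "distance_m": 8,  "drinkable": True},
]

def fetch_water_naive(sources):
    """Optimise only what was stated: 'water, fast' = nearest water."""
    return min(sources, key=lambda s: s["distance_m"])

def fetch_water_specified(sources):
    """Add the unstated human intention: the water must be drinkable."""
    drinkable = [s for s in sources if s["drinkable"]]
    return min(drinkable, key=lambda s: s["distance_m"])

print(fetch_water_naive(water_sources)["name"])      # flower vase (closest, undrinkable)
print(fetch_water_specified(water_sources)["name"])  # bathroom tap
```

The naive version happily picks the vase, because nothing in its objective says it shouldn’t. That single missing constraint is, in miniature, the ‘specification’ problem DeepMind is talking about.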
And this is why DeepMind will have such a challenge on their hands. They have to find a way to bridge this divide in our thinking. In other words, make a machine think like we do.
That’s just the first step. But it’s an important step nonetheless.
Thankfully, DeepMind are the best in the business. And that should at least give you a little reassurance that AI won’t be as homicidal as some people think.
At the very least, we should be able to avoid drinking the vase water sometime soon. If we’re lucky anyway.
Editor, Tech Insider