Hello Ryan,
Thanks for the well-grounded comment on AI below. Clearly we are at the start of a strange and long journey, one that will test our sense of self-worth and survival.
A thought that has arisen for me on this topic, not that I give it much thought, is this...
Science has so far been unable to quantify the human mind, as to scope and location. Therein lies a big clue as to where we should focus our attention.
Science is doing a good job of slowly unravelling the mysterious workings of the human brain, but that is a whole other matter than 'Mind'.
The quintessential problem that science has, as I see it, is that it is inherently atheistic, a product of 17th-century scientific rationalism that is yet to accept the working reality of a Quantum universe. By atheistic I don't simply mean a rejection of standard and erroneous religious belief, but a complete rejection of the Universal Divine Intelligence that has created everything and is the substance of life itself and the grist of Human Mind. Until we cast off these Godless Newtonian welding goggles we will not be able to perceive the gross error in our presumption that AI is anything more than a fancy video game, albeit a potentially dangerous one.
The piece that is missing in this puzzle is a proper understanding of the Human Mind itself - not to be confused with the brain, which is just a very clever cytoplasmic computer.
I take comfort from the fact that this wannabe noble obsession is on the one hand taken so seriously and on the other hand is so far adrift from Quantum reality as to lay bare the spiritual poverty of its proponents. The funny thing is that it may well draw upon Quantum physics to achieve its goals and hopefully by doing so may finally realize its gross misconception. I find it comforting that when the spiritual lights go on and the AI quest loses its lustre, we will return to where we once started who knows how many eons ago. The Garden of Eden experience is not just a metaphor, but a long forgotten dream, one that we are destined to recover probably very soon. If you have spent any time in deep meditation, as I have done, you will know that this is not just a 'pipe dream'.
The pendulum has now swung so very far in the other direction, so far in fact that we have forgotten our connection to Source. And our historically compromised religions are struggling to stay relevant. Until the bipolar schism between Scientism and Religionism is finally resolved such that science and religion become one and the same, i.e. Truth, we will continue to struggle with phantasms such as AI.
There is only one Universal Mind and we all share equally in it. It is only the blindness of our egotistical disconnection from Source that stands in our way of developing our unlimited and unexplored, God-given divine potential that is the Human Mind. Where then the need for the boondoggle of Artificial Intelligence?
Robert Eady
E.  ready1nz@yahoo.com
On Tuesday, October 16, 2018, 7:30:31 PM GMT+13, Tech Insider <techinsider@techinsider.com.au> wrote:
Three Rules to Stop Armageddon

16 October 2018, by Ryan Clarkson-Ledward, Albert Park, Australia

In today’s Tech Insider…the end of days at the hands of a machine…why we’re all at least a little afraid of our own creations…why we need reassurance that AI won’t harm us…how the best in the business are tackling this problem…and more…

Armageddon at the hands of a machine. It’s one of society’s biggest nightmares and has been ever since the dawn of the information age.

There’s just something about having your destiny decided for you by a cold-hearted machine that doesn’t sit well with people. Perhaps because there would likely be no emotion behind the act.

Or perhaps it’s because a machine is our own creation. The thought of something we built spelling the end for humanity is especially unsettling. Never mind all the weapons we’ve made over the years.

Still, I can empathise a little. Change is scary, and right now the biggest change all over the world is AI.

We’re rapidly turning over more and more responsibilities to the machines. As they become smarter, we start to rely on them even more.

But for some people, this raises some serious doubts. After all, once the machines are in charge, what’s to stop them from deciding that we were the problem all along?

Again, it comes back to the heartless machine stereotype. We can’t envision a bunch of circuits and wires ever being able to understand human emotion. Well, most of us can’t, anyway.

Personally, I think that AI will be both intelligent and empathetic one day. When or how it will be made possible though, is still unclear. This is just pure speculation on my part.

We can’t rely upon blind trust, though. I know that and I’m sure you know that as well. Humanity needs reassurance that AI won’t end up heralding the end of society as we know it.

It’s time to send in the experts.




Top minds for a tough job

DeepMind are without doubt the leading AI company on the planet, though technically they are owned by Alphabet (Google’s parent company).

If you ever wanted a team to handle AI, these are the people at the top of the list.

Their greatest achievement to date is the famous AlphaGo system: the first AI program to beat a professional human Go player, back in 2015. And last year, it backed up this result with a win over the best Go player in the world.

AlphaGo has since retired, but DeepMind are only just getting started. They’re now turning their attention to the digital realm: video games are DeepMind’s new focus.

But this company isn’t just about making machines that can play games. What they really want to do is show the world the power of AI. Although it seems even DeepMind realise they may be moving too fast for some people’s comfort.

So in order to assure the world about AI, DeepMind is embarking on a new project. Their new goal isn’t just about creating smarter AI, but also safer AI.

To be fair, the company has always had at least some focus on safety. Now though, they’re offering some transparency into how they hope to do it.

In a recent blog post, DeepMind revealed their framework for AI safety, a loose guide defined by three technical directives: specification, robustness and assurance:

Specification ensures that an AI system’s behaviour aligns with the operator’s true intentions.

Robustness ensures that an AI system continues to operate within safe limits under perturbations.

Assurance ensures that we can understand and control AI systems during operation.

Now at first glance, these are some very reasonable goals. They’re not even that technical, which is great for engaging with people who aren’t technically minded.

However, that doesn’t mean this will be a simple task. You’ve got to think like a machine to understand how challenging this endeavour really is.

What do you really want?

In the blog post, DeepMind compares AI thinking with that of the fabled King Midas, the man who wished for everything he touched to turn into gold.

But, as you’re probably well aware, that wish didn’t work out too well for the king. In the end, he couldn’t eat or drink because everything became gold.

DeepMind notes that we can learn a lot from dear old Midas when it comes to AI. Specifically, the idea of understanding what we want and how we state it. Let me give you an example to show you what I mean.

Imagine that you’re sitting at home and you’re watching a movie. It’s a really great movie too, one of your favourites. But, you’re also really, really thirsty.

You need a glass of water. Fortunately, you’ve got your new state-of-the-art ButlerBot: an AI machine that will adhere to your every whim. You hand the ButlerBot your glass and tell it to ‘get me some water, fast’.

The robot bustles off down the hall and you keep watching your film. After about a minute, the robot returns with your glass. You take one look at the contents of the glass and recoil in horror. The water has tinges of green and brown in it. You’re not going to drink that.

You decide to ask the robot what the hell is in the glass. It simply responds with ‘water’. Clearly, you’re not going to get any answers out of the thing. So, begrudgingly you head to the kitchen to fill the glass yourself.

But as you head down the hall you notice something, there’s a bunch of wet flowers on the floor. And to the side is a desk with an empty vase. It suddenly dawns on you.

The robot poured the vase water into your glass. Your robot butler was about to have you drink water filled with bacteria. It wasn’t intentionally trying to cause you harm. It was simply following your orders…

See, you only told the ButlerBot to get you water, not necessarily drinking water. Plus, you said fast, so it looked for the closest source of water available.

A human being obviously would have inferred that you meant a glass of drinkable water. I don’t need to tell you that I want drinkable water, because that’s pretty obvious. AI does need to be told though, very explicitly.
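This specification gap is easy to sketch in code. The snippet below is purely illustrative (the `ButlerBot` scenario and all names are hypothetical, not anything from DeepMind's actual framework): an agent that optimises the literal objective it was given ("water, fast") picks a different source than one whose objective encodes the operator's true intent ("drinkable water, then fast").

```python
# A toy illustration of the specification problem. The agent picks
# whichever water source best satisfies its stated objective.

def fetch_water(sources, objective):
    """Return the source with the lowest objective score."""
    return min(sources, key=objective)

# Each source: (name, distance_in_steps, is_drinkable)
sources = [
    ("kitchen tap", 20, True),
    ("flower vase", 5, False),
]

# The literal objective the ButlerBot was given: minimise distance only.
literal = lambda s: s[1]

# The intended objective: drinkable water first, then distance.
# (False sorts before True, so `not is_drinkable` prefers drinkable sources.)
intended = lambda s: (not s[2], s[1])

print(fetch_water(sources, literal)[0])    # the vase wins on speed alone
print(fetch_water(sources, intended)[0])   # the tap wins once intent is encoded
```

The point of the toy: nothing is "wrong" with the agent in either case. The behaviour changes only because the objective it was handed changes, which is exactly what DeepMind's specification directive is about.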

And this is why DeepMind will have such a challenge on their hands. They have to find a way to bridge this divide in our thinking. In other words, to make a machine think the way we do.

That’s just the first step. But it’s an important step nonetheless.

Thankfully, DeepMind are the best in the business. And that should at least give you a little reassurance that AI won’t be as homicidal as some people think.

At the very least, we should be able to avoid drinking the vase water sometime soon. If we’re lucky anyway.


Ryan Clarkson-Ledward,
Editor, Tech Insider



Comment by Rainbow Cat on October 17, 2018 at 8:18

Good letter, Robert, hope you get a good reply.
