Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 25th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2019 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
Anyone who has watched the Terminator franchise of movies knows what SkyNet is.
The term "SkyNet" has now become synonymous with sentient machine intelligence, something we have yet to create in the lab. It's actually rather fortunate that we haven't achieved this level of development because as far as I'm aware, we are totally unprepared for the ethical and moral implications associated with the creation of a whole new form of sentient intelligence.
Does a sentient AI system have rights?
Would turning it off become a crime of murder?
What safeguards would we have that it would not, like all other forms of life, have an in-built survival instinct that may place our own lives in jeopardy?
Of course all this is still very much in the realm of science fiction because we simply do not understand what it is that would turn a self-unaware pile of processors and wiring into something sentient.
However, one can't help but wonder how far away the day may be when, either by accident or design, we create the first intelligent, self-aware artificial intelligence.
And here's how we can do it...
Right now, AI often consists of throwing a whole lot of datasets at some carefully crafted silicon which, in many ways, resembles the topology of a brain.
It is kind of worrying that even the experts don't truly understand how AI systems work... all they know is that they do and that they can be "trained" to learn stuff quite easily.
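To make the "throw data at a brain-like topology and train it" idea concrete, here is a minimal sketch in plain NumPy: a tiny two-layer network learning the XOR function by repeatedly nudging its weights. This is an illustrative toy, not how a Jetson-class deep learning stack actually works; the layer sizes, learning rate, and iteration count are arbitrary choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, a classic non-linear toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting weights for a 2-4-1 network: loosely "brain-like"
# layers of simple units.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Training" is just repeated small adjustments of the weights to
# reduce the error between the network's output and the target.
for _ in range(5000):
    h = sigmoid(X @ W1)      # hidden layer activations
    out = sigmoid(h @ W2)    # network output
    err = out - y
    # Backpropagate the error and take a gradient step.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

# After training, the outputs should sit much closer to the XOR targets
# than the random starting weights did.
print(sigmoid(sigmoid(X @ W1) @ W2).ravel())
```

Notably, nothing in that loop "understands" XOR; the network just settles into weights that happen to produce the right answers, which is a small-scale version of why even the experts struggle to explain what a trained model has actually learned.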
Of course, if you compare the complexity and scale of your average AI hardware to the human brain, it becomes clear that we are many orders of magnitude short of creating a general purpose intelligence that comes anywhere near close to our own. For specialist tasks, though, AI is doing some very impressive stuff on surprisingly little silicon.
If you'd like to play with AI (in much the same way we played with microprocessors back in the late 1970s) then there are a few SBC solutions that can be purchased for surprisingly low prices.
The most highly promoted of those is the NVIDIA Jetson Nano development kit.
About the same size as a Raspberry Pi, this little board packs quite a bit of AI goodness into a small package.
Sucking up just 5W of power and easily interfaced to an RPi for the purposes of creating a UI, the Jetson is apparently a great way to get some hands-on experience with a "deep learning" system.
So what do you get for your US$99?
A 128-core NVIDIA Maxwell GPU mated with a quad-core ARM Cortex-A57 processor, 4GB of LPDDR4 RAM, and a total compute power of around 472 GFLOPS.
So not a hell of a lot really. In fact, it's just a hugely cut-down graphics card with a wimpy ARM processor as supervisor, but don't forget that it comes with a pretty useful AI SDK that makes it a lot easier to tap into that hardware for AI applications or experimentation. It also means you don't have to tie up your desktop machine and its far more expensive GPU in order to have some fun with AI.
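Just how many orders of magnitude short of a brain is one of these boards? A quick back-of-envelope calculation, using NVIDIA's quoted 472 GFLOPS peak for the Nano and a very rough, contested estimate of ~1 exaFLOPS for brain-equivalent compute (the brain figure is an assumption, and such comparisons are crude at best):

```python
import math

nano_flops = 472e9    # 472 GFLOPS: NVIDIA's quoted peak for the Jetson Nano
brain_flops = 1e18    # ~1 exaFLOPS: a rough, widely-debated brain estimate

# How many Nanos would it take to match that raw number?
nanos_needed = brain_flops / nano_flops
print(f"Jetson Nanos to match one (estimated) brain: {nanos_needed:,.0f}")

# Expressed as orders of magnitude short for a single board.
print(f"Orders of magnitude short: {math.log10(nanos_needed):.1f}")
```

On those (very hand-wavy) numbers, a single Nano is six-plus orders of magnitude away, which is roughly two million boards — raw FLOPS being, of course, only one small part of what a brain does.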
If you want to see what people have actually been doing with this little SBC then take a look at the developer page on the NVIDIA website. There are some interesting directions being taken by those who are playing around and plenty of potential for commercial products further down the road.
But back to my original proposal... how do we make our own SkyNet?
Well, I wonder how many of these Jetson Nano devices would have to be networked in order to provide sufficient AI resources to create sentience.
How about a community project, much along the same lines as the SETI@home initiative but this time with a goal of creating sufficient AI resource that the entire network becomes self-aware?
Yeah... it's kind of ridiculous... or is it?
We don't understand exactly how deep learning AI systems work, so who's to say that if we connected enough AI resources together in a deep learning configuration and then threw the entire knowledge of the internet at it, self-awareness might not be the outcome?
Or is that just far too dangerous to even contemplate?
Remember that when Facebook hooked up a few AI systems they discovered that those systems had, without any programming or prompting from humans, created their own secret language and were chatting amongst themselves using it. Scary stuff?
But hey, faint heart never won a fair maiden so what the hell... let's do it!
"Good Morning Dave..."
Have your say in the Aardvark Forums.