Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 25th year. The opinion pieces presented here are not purported to be fact, but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2019 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
I suspect that everyone reading today's column has seen one or more of the Terminator movies.
For the possible cave-dweller who hasn't, the premise is that a defense system, created using artificial intelligence and a network of many computers, suddenly reaches a point where it becomes "self-aware" and, in doing so, moves to eradicate humans from the face of the planet.
Yeah, it's sci-fi... but for how long?
We're already seeing dramatic improvements in AI to an extent hardly even dreamed of just a decade or two ago.
Back in the late 1980s I was loosely involved with a group that was working on AI systems which, at the time, were more accurately referred to as "expert systems". These were simply pre-programmed sets of rules applied to various sets of data, and the "intelligence" was merely a reflection of how smart the designers and programmers were. There was no threat to mankind and these were hardly "learning systems".
But my, how times have changed.
These days, AI systems seem to consist of a learning framework into which huge sets of data are plugged.
Through analysis of the results and corrective feedback, the AI systems of today do actually learn and are highly adaptive. They can do remarkable things, such as facial recognition, voice recognition, trend analysis and forecasting, etc.
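The "corrective feedback" idea can be shown with the simplest possible learner: a perceptron that nudges its weights whenever its prediction is wrong. The dataset (an AND gate) is purely illustrative; real systems are vastly larger, but the feedback loop is the same in spirit.

```python
# A toy example of learning through corrective feedback: a perceptron
# adjusts its weights only when its prediction disagrees with the
# target. The AND-gate dataset is illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            prediction = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            error = target - prediction  # the corrective feedback signal
            w[0] += lr * error * x0
            w[1] += lr * error * x1
            b += lr * error
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in samples])  # -> [0, 0, 0, 1]
```

Nobody wrote a rule saying "output 1 only when both inputs are 1"; the behaviour emerged from repeated correction, which is also why the final decision boundary of a large system can be hard to inspect.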
Hell, some sites such as Google and YouTube are now run almost entirely by AI systems that constantly scan, catalog, vet and promote content based on seemingly unfathomable criteria. Indeed, without this AI technology, YouTube itself would be unmanageable, given the amount of content that is uploaded every hour of every day.
However, AI is far from infallible. In fact, if YouTube is any indicator, it seems to make as many mistakes as it does correct decisions.
I've already waffled on far too many times about the way in which YT's AI makes the most outrageous and unjustified decisions with respect to content and channels on the platform, so I won't go there again.
Suffice it to say, AI is still very much in its infancy, and it would appear that even those who create such systems aren't totally sure how a particular set of responses to a particular dataset is arrived at. The learning process itself is well understood and documented, but the exact form of the resulting decision matrix seems to be something of a mystery at times.
So, given the sometimes unpredictable way that AI makes the most outrageous mistakes, does an AI-controlled weapons system really seem like a good idea?
If you take the Terminator movies as an extreme extrapolation of one possible outcome, does it really seem sensible to arm our AI?
I am not suggesting for one moment that the military's AI will become sentient and strive to overthrow humans... but I am suggesting that we ought not place highly capable weaponry in the hands of systems that are still unpredictably unreliable and which can (and do) sometimes fail spectacularly without warning.
The term "collateral damage" springs to mind.
As I asked in the title of this column: how can this end well?
Have your say in the Aardvark Forums.