Back in the 1980s, programmers and researchers started working on systems that would supposedly create a degree of artificial intelligence.
The buzzword of the day was "expert systems".
The concept was fairly simple... just load up a database with all the knowledge needed to address a particular topic and then teach the computer how to access that data in response to queries, sometimes even using natural language.
The most promising way of "teaching" the computer how and what to access was by way of neural networks. A neural network created a kind of "fuzzy logic" which was not driven by clinical 0s and 1s but by a scheme in which logic functions took multiple inputs, each with its own weighting factor determining how much influence it had.
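For the curious, a single "neuron" in one of these networks is really just a weighted sum of its inputs squashed into a range between 0 and 1. The little Python toy below is purely illustrative (my own sketch, not code from any actual 1980s system), but it shows how each input's weighting factor decides its influence on the output:

import math

# A toy "neuron": each input has its own weighting factor that decides
# how much influence it has on the final, "fuzzy" output.
# Purely illustrative -- not taken from any real expert system.
def neuron(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashes to 0..1

# Three inputs, each with a different weighting.
print(neuron([1.0, 0.5, 0.0], [0.8, -0.3, 1.5]))   # roughly 0.66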
Early systems were both promising and impressive whilst also being disappointing.
Sometimes these systems would appear to be quite miraculous in the way they responded to queries and in the way they analysed and interpreted the data fed to them.
Such systems could sometimes pull off impressive feats of processing that convinced some that they were ready for the big-time.
It was promised that pretty soon doctors, mechanics, engineers, teachers and all sorts of other professionals would use these "expert systems" to make life easier, improve productivity and deliver better results.
Although these expert systems did have some applications, they were not the silver bullet they promised to be, and the whole concept of AI seemed to fall from favour in the mid-1990s, perhaps eclipsed by the internet.
Well, the internet bubble came along and likely soaked up most of the funding that might otherwise have been poured into these early attempts at AI -- but then that bubble burst.
Now, it seems, investors have rediscovered AI and thanks to huge increases in raw computing power as well as chips designed specifically for neural processing, artificial intelligence is once again the darling child of the investor community.
Billions and billions of dollars are being poured into a growing number of AI startups, all of which are promising to change the world (hopefully for the better).
Now we have LLMs (large language models) driven by AI that can easily pass the Turing test and have, in some cases, replaced help-desk personnel by way of chatbots and even voice-interface systems that are virtually indistinguishable from real wetware.
We're told that we are getting quite close to creating an AI that is as powerful as the human brain and that systems actually capable of reasoning are just around the corner.
The pace of AI's growth has now reached the point where some big players are actually considering partnering with nuclear power companies, simply so they can be sure of having enough energy to run the massive amounts of hardware involved.
Personally, however, I wonder whether we're in the middle of an AI bubble.
Just like in the 1980s and early 1990s, today's AI seems very impressive at some things. It can create incredible videos and images, and even come up with popular music tracks that sound indistinguishable from real bands and singers.
But is this simply a bit of an illusion?
Is AI, at this point in time, really little more than a fancy data-access and retrieval system that is more sizzle than steak?
When I use Google's Gemini I discover that it constantly makes really basic mistakes -- with great confidence. When corrected it says "of course you are correct...", which makes me wonder why, if it knows I'm correct, it actually dished out false info in the first place.
I can't help but get the feeling that what we're seeing is just a very sophisticated modern-day version of Eliza, a version of which I recall running on a microprocessor back in the late 1970s.
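For anyone who never saw Eliza: it did little more than scan your sentence for keywords and parrot back a canned response. The few lines of Python below are my own toy rendition of the trick (not Weizenbaum's original code), but they give a flavour of just how shallow that "understanding" really was:

import random

# A toy flavour of Eliza-style keyword matching -- my own sketch,
# not the original program.
RULES = {
    "mother":  ["Tell me more about your family.",
                "How do you feel about your mother?"],
    "always":  ["Can you think of a specific example?"],
    "because": ["Is that the real reason?"],
}

def eliza_reply(sentence):
    for keyword, responses in RULES.items():
        if keyword in sentence.lower():
            return random.choice(responses)
    return "Please go on."   # default canned response

print(eliza_reply("My mother always criticised me"))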
Will modern-day AI really be able to deliver on the more extreme promises being made for it?
Or will it remain just a (briefly) entertaining diversion in the form of a chatbot or creator of graphic art and video?
Might some other "next big thing" come along and take the wind out of AI's sails, with investors spotting greener pastures elsewhere if AI fails to deliver?
Only time will tell, I guess, but at least for the time being I really don't think that AI is a form of "intelligence". I think we're still a long way from creating a machine that can truly "think" and "reason" in a way that denotes true intelligence.
But I could be wrong... maybe I should ask ChatGPT.
Carpe Diem folks!