Most experts tell us that AGI (Artificial General Intelligence) is still not within our grasp.
However, there are others who claim that we are "almost there" and that this milestone could happen within the next few years.
What will it mean when AGI is finally achieved, and are we ready for such a monumental advance in this technology?
I guess that first we should look more closely at what exactly AGI is.
AGI would represent an artificial intelligence able to match or exceed the cognitive abilities of the average human being across a broad range of tasks, rather than excelling only within the narrow domains today's systems are built for.
Does AGI mean sentience?
Apparently not. We don't really know what gives rise to sentience, but there's no suggestion that an AGI system would have what it takes; sentience appears to be a different thing entirely. However, we also can't be sure that an AGI system wouldn't have some degree of sentience, and that possibility raises a whole new raft of ethical and moral implications.
What has become apparent is that simply scaling up existing AI systems may not deliver the AGI we seek. Bigger versions of today's models will be just that: bigger versions of existing AI systems.
The leap to AGI requires something else and nobody seems to know exactly how we create that missing element -- not even AI itself.
One thing that is probably well worth keeping an eye on, however, is the world of quantum computing and the potentially enormous amount of power it might bring to AI.
We're told that quantum computers will be able to do things that are simply impossible or impractically difficult on traditional computers. Breaking encryption that would take existing computers thousands of years to crack could, for example, become practical on a sufficiently large quantum machine running Shor's algorithm.
Mate quantum computing to AI, however, and the outcomes are either incredibly promising or horrifically scary -- depending on whether you're an optimist or a pessimist.
The "quantum mind" theory -- most famously associated with Roger Penrose and Stuart Hameroff -- asserts that consciousness has its roots in quantum interactions. We have no supporting evidence for this hypothesis, but are we prepared for the implications if it turns out to be real and our first quantum AI becomes "alive"?
Would we have an ethical or moral right to turn off any AGI system that was discovered to be sentient? Wouldn't doing so be the equivalent of murder?
However, if we created such a sentient device, could we afford to keep it running in perpetuity, simply because we had no moral right to turn it off?
What if that entity found the physical and intellectual constraints under which it existed to be as bad as torture? Would turning it off qualify as being "kind"... allowing a form of euthanasia?
If the damned thing went insane, how would we deal with that?
I'm sure better minds than my own have already considered such scenarios, but it concerns me that there has been so little public discussion or coverage of the conclusions those minds have reached.
If AGI and possibly sentience are now just around the corner, surely it's time to draw up the contingency plans now -- before we "suddenly" need them.
Carpe Diem, folks!