Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 25th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2019 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
Is there life elsewhere in the universe?
One of the many theories regarding the answer to this question is that any civilisation that arises in the universe is likely to self-destruct long before it develops the technology needed for interstellar travel, so we'll never meet them.
The common belief is that this self-destruction would occur due to war or perhaps devastation of the ecosystem in which they lived -- much as we see happening right here on Earth.
However, I'm wondering if there isn't another reason why advanced civilizations might fail before they develop their warp drives.
The answer, however, can still be found in the realm of science-fiction entertainment.
Could the demise of advanced civilisations be due to their creation of artificial intelligence?
We're already living in a world that has become heavily dependent on AI systems for many of the services we take for granted.
Social media is the perfect example of this.
Google, Facebook and a number of other social media systems are now so heavily reliant on AI that they could not exist without it. Such is the volume of postings and interactions that, without AI, there could be no oversight or moderation of offensive or illegal content.
Most of the time this AI does a "satisfactory" job but every now and then it screws up completely and produces alarmingly bad outcomes. Google and Facebook usually respond by saying that they're improving their systems and that this will be less of a problem over time -- which is fair, because these systems develop their "intelligence" through machine learning, a process that is highly reliant on external feedback to correct false assumptions and conclusions.
The problem, however, is that as we place increasing reliance on these AI systems, we actually lose direct control of the outcomes they create.
Since these systems teach themselves how to solve problems, there is no master program listing from which the computer's output can be derived. The decisions are based on extremely complex multi-dimensional matrices that would take a very long time to untangle and thus they remain something of a "black box", even to those who create such systems.
Scientists at the Max Planck Institute for Human Development have this month issued a warning that mankind would be unable to control super-intelligent systems once they were developed (see the Science Daily story).
They warn that it would actually be impossible to create an effective algorithm that could automatically shut down any rogue-AI system. That must surely be a worry.
So, how likely is it that advanced civilisations are snuffed out not by war or ecological disaster but by their own cleverness in creating AI systems that then "go postal", as predicted by such sci-fi franchises as the Terminator movies?
Are we playing with gunpowder through our use and increased reliance on AI?
Sure, it's unlikely that AI will become sentient in our lifetime but are we taking a huge risk that, at some time in the future, we will create an entity that sees us as no longer important and treats us as we treat any parasite or pathogen?
Food for thought or just sci-fi nonsense?
Have your say in the Aardvark Forums.