Aardvark Daily
New Zealand's longest-running online daily news and commentary publication, now in its 25th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.
Content copyright © 1995 - 2019 to Bruce Simpson (aka Aardvark); the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk
Stephen Hawking, who was one of the world's leading physicists, once stated that "The development of full artificial intelligence could spell the end of the human race".
He was joined in a chorus of caution by many other well-known names from within the ranks of science and industry.
The first thought that probably pops into your head on hearing these warnings is of some kind of supercomputer-based intelligence taking control of key infrastructure and disabling it or using it to eliminate the nasty biological virus that is mankind.
Well, AI is unlikely to turn The Terminator franchise retrospectively into a prophetic documentary series. However, it is becoming clear that there are some very real dangers arising from the misuse of AI by "bad actors" (no, not Arnie).
Recently, deepfake images and video in which singer-songwriter Taylor Swift appears in pornographic content have forced AI service providers and social media platforms into damage-control mode. The ease with which such material could be conjured up and disseminated has left these companies with a lot of egg on their faces.
Whilst that material may be embarrassing to Ms Swift, a far more financially painful example of AI deepfakes being used surfaced last week.
According to several media reports, a deepfake scammer has stolen $25m from a Hong Kong-based company.
Allegedly, the scammer used AI deepfake technology to populate a video conferencing session with facsimiles of people from the company's executive team, including the CFO.
The deepfakes directed the only real staff member in the call to transfer funds to various bank accounts the scammer had access to.
Full points for audacity!
This takes spoofing to a whole new level. It will likely force companies to rethink the levels of authorisation required before any action is taken as the result of an online video conference, or even a one-on-one interaction via the internet or phone.
This is where the real danger from AI exists.
It's not some super-AI going rogue, it's good AI being harnessed for nefarious purposes by someone with evil intent. Computers can do bad things but to be truly evil requires some wetware in the loop.
Although the risks to any one individual may be low, my wife and I have agreed on a "keyword" that we will use if we ever need to verify our actual identity over some kind of electronic communications system, and I recommend that everyone do the same with friends and family. It sounds kind of paranoid, but there are already far too many reports of people who've been fleeced out of thousands when scammers send txts or emails purporting to be from loved ones saying they're in dire need of money in an awkward situation and "can you please help?"
In such cases, including the keyword provides authentication that the request is real and, in an era when you can fake *anyone* saying *anything*, it's cheap insurance against being tricked.
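The keyword scheme is really just a shared-secret check: both parties agree on a secret in advance, and the request is trusted only if the caller can produce it. As a minimal sketch (a hypothetical `verify_keyword` helper, not anything from the article), it might look like this in Python, using a constant-time comparison so the check itself doesn't leak information about the secret:

```python
import hmac

# Hypothetical helper illustrating the shared-secret "keyword" idea.
# Both parties agree on a keyword ahead of time; a request is only
# trusted if the sender can supply it. hmac.compare_digest performs a
# constant-time comparison, avoiding timing side-channels.
def verify_keyword(supplied: str, agreed: str) -> bool:
    # Normalise whitespace and case so "Wombat " still matches "wombat".
    return hmac.compare_digest(supplied.strip().lower(), agreed.strip().lower())

# Example: the family keyword is "wombat" (made up for illustration).
print(verify_keyword("Wombat", "wombat"))   # genuine request
print(verify_keyword("please?", "wombat"))  # scammer guessing
```

In practice nobody would run code for this, of course; the point is simply that a pre-arranged secret turns "trust the voice/face you see" into "trust only what the real person could know".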
Carpe Diem, folks!