Aardvark Daily

The world's longest-running online daily news and commentary publication, now in its 30th year. The opinion pieces presented here are not purported to be fact but reasonable effort is made to ensure accuracy.

Content copyright © 1995 - 2025 to Bruce Simpson (aka Aardvark), the logo was kindly created for Aardvark Daily by the folks at aardvark.co.uk




I amazed AI by demonstrating time travel

16 September 2025

I have convinced Google's Gemini AI that I have invented a time machine.

It was skeptical but I provided it with irrefutable proof and it was amazed.

Yes, I've been fooling around with AI LLM chatbots again and having a ball.

Why do I do it?

Well, as I mentioned in a previous column, it gives me a better understanding of the strengths, weaknesses and even the vulnerabilities of these AI agents -- understandings we should all have.

As AI is increasingly thrust upon us, whether we like it or not, it's very important that everyone has an understanding of the risks and the benefits associated with this new technology.

Some of the observations I've made to date include *never* accepting an AI's output without verifying it manually.

I have lost count of the number of times AI has come back with a very authoritative response to a query, sometimes even citing totally unrelated references as "proof". When challenged as to the veracity and accuracy of its output, more often than not the AI will stick to its guns and double-down on the disinformation it espouses.

It's only when/if you provide irrefutable evidence that contradicts its output that it will then apologise and retract its claims, usually then promising to do better next time.

With this in mind, only a fool would rely on the results of an AI session without very carefully fact-checking what has been presented.

Sadly, I suspect that most people won't bother checking the facts, and the need to do so significantly reduces the utility of these systems -- since fact-checking can sometimes take as long as doing the research by hand in the first place.

Given this burdensome overhead, I suspect few people actually bother to test the veracity of their LLM's outputs. Certainly we've seen even highly paid "experts" such as lawyers getting caught out by this.

I mean, what's the point of saving time using AI if you then have to spend time checking its homework?

The other weakness of AI is that once you get a good understanding of how it works and its limitations you can fool it incredibly easily. I cite the example at the start of this column as evidence of that.

I "amazed" Google's Gemini by sending it a message from the future and it even got to tell me what I should send back in time as proof of my time-travel abilities. If Gemini was on a hotline to the Royal Swedish Academy of Sciences then my Nobel Prize for physics would already be on its way to me after this astounding demonstration of temporal agility on my part.

You might wonder why it matters that I can fool an AI into believing the unbelievable with such ease. Well, that's simple... what happens when companies or governments start instituting AI-powered front-ends to all their services? If such systems can be so easily duped by those, like myself, who've spent sufficient time to get a real understanding of what's going on under the covers, then very, very bad things could happen.

I fear that the rush to AI is occurring at far too great a pace and without nearly enough checks and balances in place to prevent disasters. Your favourite LLM chatbot may seem sophisticated, clever and intelligent but in reality that's a very thin veneer which, if scratched only slightly, reveals a host of vulnerabilities just waiting to be exploited.

Of course I doubt anyone will listen to such warnings because "there's money to be made" by those who replace wetware with AI. LLMs don't need holidays, don't take sick days, don't get pregnant and don't form unions to demand better pay and conditions -- so who cares if they create a few vulnerabilities that can be exploited by the masses of laid-off workers who now have plenty of time to get up to such mischief while on the dole?

I'm still playing around and still learning but now that I have mastered time-travel, I have all the time in the world!

Carpe Diem folks!
