Aardvark Daily





Time to break out the assembler?

14 January 2020

There was an interesting piece on Slashdot this morning in which it was suggested that, as computers hit a brick wall performance-wise, we'll need to go back to more efficient programming.

Anyone who was programming back when microprocessors first came along will recall that all the "fast" code was written in assembly language and run as raw machine code. Even today it's quite surprising just how fast and efficient that old code was, given that we were running those programs on 8-bit processors with less than 64KB of RAM and clock speeds measured in just a few MHz.
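For anyone who never saw that era's code, here's a minimal sketch of why it was so lean. The C routine below is what a modern programmer would write; the Z80 mnemonics in the comment are from memory and purely illustrative, not lifted from any real 1979 program -- but they show how an entire copy loop could collapse into a single hand-placed instruction.

    #include <stddef.h>

    /* A block copy, 2020-style. On a 1979 Z80, with the pointers and
     * count hand-placed in registers, the whole inner loop was one
     * instruction (sketch only):
     *
     *     LD  HL, src   ; source address
     *     LD  DE, dst   ; destination address
     *     LD  BC, len   ; byte count
     *     LDIR          ; copy (HL) -> (DE), BC times
     */
    void copy_block(unsigned char *dst, const unsigned char *src, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[i] = src[i];
    }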

Believe it or not, despite the incredibly puny hardware, we had very practical and useful word processors, spreadsheets and databases that were fast and fun to use.

So how is it that today, in an age when clock speeds are three orders of magnitude faster, memory is hundreds of thousands of times more plentiful and data buses are eight times wider, the core functionality of most programs is only a few times faster than in 1979?
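To put rough numbers on that claim (the 1979 and 2020 figures below are assumed typical values, not measurements of any particular machine):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed baselines: a ~2 MHz 8-bit micro with 64KB of RAM in
         * 1979, versus a ~3 GHz 64-bit desktop with 32GB in 2020. */
        double clock_ratio = 3e9 / 2e6;                              /* ~1,500x   */
        double ram_ratio   = 32.0 * (1 << 30) / (64.0 * (1 << 10));  /* ~524,000x */
        double bus_ratio   = 64.0 / 8.0;                             /* 8x        */

        printf("clock: %.0fx, RAM: %.0fx, bus: %.0fx\n",
               clock_ratio, ram_ratio, bus_ratio);
        return 0;
    }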

It's the way we write software, of course.

A piece of software that was coded by hand and assembled into raw machine code had none of the overhead that slows today's highly abstracted, virtualised code.

Instead of talking directly to the CPU, almost all modern applications must weave their way through multiple levels of abstraction and virtualisation to achieve their ends.

In the name of portability and compatibility, many of today's most popular languages are either interpreted or compiled via JIT systems, and this imposes additional overhead on every program written in them.
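To see where that overhead lives, consider a toy bytecode interpreter (the instruction set is made up for illustration and doesn't correspond to any real VM). Every "instruction" pays for a fetch, a decode and a dispatch branch before any useful work happens:

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[16], sp = 0;
        for (int pc = 0; ; ) {
            switch (code[pc++]) {          /* fetch + decode + dispatch */
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* Computes 2 + 3: five trips around the dispatch loop for what
         * native code does in a single ADD instruction. */
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }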

Sure, modern compilers can be pretty damned smart and apply optimisations that hand-coders would never have the time or skills to perform -- but they can also be pretty dumb, and the overheads associated with support libraries and OS interfaces can be very burdensome.
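On the "smart" side, mainstream optimisers will happily replace an entire loop with arithmetic. The function below is one that gcc and clang have been known to reduce to a closed form at -O2 -- though whether any given compiler version actually does so is not guaranteed, so treat this as an illustration rather than a promise:

    /* A naive sum of 0..n-1. Optimising compilers commonly recognise
     * this pattern and emit the closed form n*(n-1)/2 -- no loop at all. */
    unsigned long sum_to(unsigned long n)
    {
        unsigned long total = 0;
        for (unsigned long i = 0; i < n; i++)
            total += i;
        return total;
    }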

So if, as some experts suggest, we're only a few chip generations away from the serious roadblocks that the laws of physics impose on Moore's Law, just how will we manage to write faster software if we don't start going back to basics and cutting out all the dross that has slowly been added to our systems?

Also, what would it mean to productivity if we started leaning more heavily on bare-metal code rather than the highly virtualised stuff that makes it so much easier to turn a system spec into a commissioned project?

It should be remembered that whilst assembler code is fine for small projects, it can be a nightmare to write, debug and maintain once you start dealing with larger, more complex systems. In fact it was this very problem that spawned some of the first high(er)-level languages (HLLs), such as C and BASIC. Using an HLL, development time could be slashed to a tiny fraction of that required to achieve the same result in assembly code.

Time is money, and computers were getting faster every year, so it was only natural that code-houses were keen to shift to more productive development environments and take advantage of newer hardware. "What's that? Our code is running too slowly? You need to upgrade your computers!" was the easy solution.

However, what happens when computers stop getting faster?

What happens when the laws of physics make it impossible to increase chip-density and clock-speeds?

Perhaps "the next big thing" will be AI systems that simply analyse code and work out ways to make it smaller and faster automatically -- maybe even by automatically rewriting large swathes of that code in ways that programmers might never be able to.

In fact, will programmers themselves become redundant, once we teach computers how to turn a system spec into a reliable, efficient, running application?

What an exciting world we live in, and the future looks to be even more so!


Have your say in the Aardvark Forums.
