Saturday, December 20, 2014

PC revolution, Altair 8800: 40th anniversary

Forty years and one day ago, the PC revolution started when Micro Instrumentation and Telemetry Systems (MITS) released its Altair 8800 personal computer.

In 1994, this guy, Bill Gates, said a few words about his and Paul Allen's decision to write BASIC for that machine (which shipped in early 1975). Note that BASIC itself was invented as an accessible language at Dartmouth College in New Hampshire in 1964.

I was just one year and two weeks old when the model was introduced. But even for those of you who are older, it must feel like some mysterious pre-history of the PC, because almost no one bought it. It used the Intel 8080 microprocessor, whose descendants – up to relatively minor variations accumulated along the x86 line – are still around in Windows PCs. That microprocessor was introduced in 1974, two years after the Intel 8008, Intel's first 8-bit microprocessor (with a rather restricted instruction set).

I find the modesty and innocence of those early computers touching and remarkable, given that pretty much all of the gadgets we own today grew out of variations of those early attempts. It's fun to watch a few videos about the Altair 8800. For example, I enjoyed this demonstration in which a rudimentary text game of a sort is loaded and started.

You had to enter the programs in machine code by flipping the switches on the ugly box. Gates and Allen decided that a more modern way to enter the code was needed, and they delivered it rather easily. Once Altair BASIC from the small company Micro-Soft became a part of the computer, writing code got easier. I suppose the machine was later sold with Altair BASIC and with the kind of TV output and keyboard that we knew from the era of Sinclairs and Commodores.

You had something like 4 kB or 8 kB of memory for the software and the data, for everything, and one could still do nontrivial things. Even 64 kB on the Commodore 64 (of which only about 38 kB was actually free for BASIC programs) seemed "more than enough" for many graphics-non-intensive projects.

Compare it with the technology in 2014.

The cheapest cell phones still have at least half a gigabyte of memory – roughly 100,000 times more than the 1974 personal computer. Today, even the tiny programs you download from Google Play, the App Store, or the Windows Phone Store are slightly below 1 MB, and you routinely download programs that are closer to 1 GB. The internal flash memory is bigger still. The overall increase of the memory is comparable to a factor of one million. With that increase, of course, comes a lot of wasted memory. It's likely that most programs we use today could be shrunk to 1/10 of their size or less without crippling their essential functionality. But people are no longer forced to save memory.

The frequency of the 1972 Intel 8008 was 0.5 MHz. These days, we have microprocessors running at several GHz, i.e. many thousands of times faster. You may notice that the increase of the memory was more substantial – it is comparable to the "square" of the relative increase of the frequency. That's roughly compatible with the idea that the bits are still encoded in a two-dimensional array (hence the second power), because the length scale of the microprocessors has shrunk (almost) 1,000-fold, much like the frequency grew: from the 10 microns of the Intel 8008 to the roughly 10 nanometers of state-of-the-art Intel and other microprocessor technologies.
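The two-dimensional scaling argument can be checked with back-of-the-envelope arithmetic; here is a minimal Python sketch using the rough figures quoted in this post (10 microns for the 8008 era, about 10 nm today):

```python
# Order-of-magnitude check of the 2D scaling argument: if the linear
# feature size shrinks by a factor L, a planar array of bits gets L**2
# times denser, while the clock frequency grows roughly like L.
# The figures are the rough ones quoted in the post.

feature_1972_nm = 10_000   # Intel 8008 era: ~10 microns = 10,000 nm
feature_2014_nm = 10       # ~10 nm class processes (rough)

L = feature_1972_nm // feature_2014_nm   # linear shrink factor

print(L)       # 1000 -> frequency grew roughly 1,000-fold
print(L ** 2)  # 1000000 -> 2D bit density (memory) grew ~a million-fold
```

The factor-of-a-million memory growth and the factor-of-a-thousand frequency growth thus follow from a single linear shrink factor.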

If you use just the two data points, 1972 and 2014, to estimate the rate of the Moore's-law exponential growth, you see that 42 years, or about 500 months, was needed to shrink the distances 1,000 times – let's say 1,024 times. But 1,024 is 2 to the tenth power, so every 50 months (4 years or so), the frequency doubles, the length scale gets halved, and the memory quadruples (i.e. the memory doubles every 2 years or so). And despite these improvements, we seem to pay a slightly decreasing price for the not-really-the-same thing! This is somewhat slower progress than we often hear about in the context of Moore's law, but it may be the more realistic figure for recent years.
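The doubling-time arithmetic above can be reproduced in a few lines; a Python sketch, where the 1,024-fold shrink over 42 years is the post's own two-point estimate:

```python
import math

# From the two data points (1972 and 2014): the linear feature scale
# shrank roughly 1,024x = 2**10 over 42 years (~504 months), so one
# halving of the length scale takes ~50 months; under 2D scaling the
# memory quadruples per halving, i.e. doubles about every 25 months.

months = 42 * 12                  # 504 months
halvings = math.log2(1024)        # 10.0 halvings of the length scale
months_per_halving = months / halvings
months_per_memory_doubling = months_per_halving / 2

print(months_per_halving)          # 50.4
print(months_per_memory_doubling)  # 25.2
```

So the frequency doubles roughly every 50 months and the memory roughly every two years, as stated in the text.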

By the way, the non-increasing price makes it funny to think about the "economics" of those computers. We pay a part of the price of PCs and cell phones for the RAM, the hard disk, and so on. Just imagine how much some extra 8 kB would cost today: a thousandth of a penny. And it was comparable to tens or hundreds of (more valuable) dollars in the past. Clearly, "1 kB of memory" isn't a good entry for a basket used to quantify inflation. The deflation rate that you would get from this memory-and-disk basket would be remarkable, something like minus 30% per year or even more.
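As a hedged worked example of that deflation rate: assume 8 kB of memory cost about $100 around 1975 (the post says tens to hundreds of dollars) and about a thousandth of a penny today. Both endpoint prices are illustrative assumptions, and the result is sensitive to them:

```python
# Implied annualized price change ("deflation") of memory over ~40
# years, under the assumed prices below. Both endpoint prices are
# guesses in the spirit of the post, not measured data.

years = 40
price_then = 100.0    # assumed price of ~8 kB of RAM in 1975, in USD
price_now = 0.00001   # "a thousandth of a penny" today

annual_change = (price_now / price_then) ** (1 / years) - 1
print(f"{annual_change:.1%}")   # roughly -33% per year
```

Starting from $10 instead of $100 gives roughly minus 29% per year, so the qualitative conclusion survives the uncertainty in the starting price.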

The same, just slightly less extreme, comments apply to code. Consider how important Altair BASIC was for Bill Gates – and it was a few kilobytes of machine code. How much money does an average programmer get for a few kilobytes of code today?

Can Moore's law continue?

Well, using the same technologies, we surely can't add another 40 years of the same exponential progress, which means, as the Age of Stupid "movie" correctly calculates LOL, that the doom must arrive by 2055. ;-) The microprocessors would have to go from 10 nanometers to 10 picometers, which is already smaller than atoms. ;-) What we have today, 10 nanometers, is already just 100 atomic radii or so. It means that we only have about "20 years" of the progress in shrinking dimensions if we extrapolate from the past.
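The remaining runway can be estimated from the same historical rate (one halving of the length scale per roughly 50 months); a Python sketch, which lands in the same couple-of-decades ballpark as the "20 years" quoted above:

```python
import math

# How much shrinking is left before feature sizes hit the atomic scale:
# from ~10 nm (2014) down to ~0.1 nm (an atomic radius) is a factor of
# 100 ~ 2**6.64. At one halving per ~50 months (the historical rate
# estimated from the 1972-2014 data), that is roughly 28 more years.

remaining_factor = 100                       # 10 nm / 0.1 nm
halvings_left = math.log2(remaining_factor)  # ~6.64 halvings
years_left = halvings_left * 50 / 12         # 50 months per halving

print(round(years_left))   # 28
```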

I would actually be surprised if the 14 nm (2014) or 10 nm (2016) technology nodes were ever followed by anything below 1 nm.

Along with that, the memory in our gadgets may increase less than 10,000 times if it remains stuck in two-dimensional arrays – and my guess is that it won't jump more than 100-fold in devices of the same size. I would actually still recommend everyone to try to switch to three dimensions: microprocessors and memories should have lots of (interconnected?) layers. Even though the shrinking of the feature dimensions will essentially stop in the coming decades, there will still be some room for shrinking the overall size of the microprocessors – and therefore increasing the frequency – if the three-dimensional construction is used somewhat effectively. And there will be some extra increase of the memory. The physical nature of the microscopic components may keep on changing (it hasn't changed much for the microprocessors so far).

But it's hard to see how we could go beyond that. In 40 years, the electronics we use today will probably be everywhere. Every bottle of beer will be intelligent, remembering whom it belongs to, whether it's fresh, asking to be saved from an overly hot environment, and many other things. But it's plausible that electronics of a size comparable to today's won't exceed today's memories and CPU power by more than 1,000 times – and I would even say 100 times.

Quantum computers may be constructed – I think it is guaranteed that they are physically possible – but I don't really believe that they will change the world of the devices we have today. They only offer an improvement for some rather specific computational tasks and represent no progress when it comes to the "mundane" activities that dominate the cell phones etc. today – video editing is among the most complex and CPU- and memory-demanding tasks that we need today.

It's also fun to think about Moore's law in the context of biology. Many of the principles of evolution may be much more analogous to the world of electronics than most people think.


snail feedback (12) :

reader student said...

off topic question:

Hi Luboš. Is the Wheeler-DeWitt equation tied to the canonical approach to quantizing gravity, or do modern physicists believe that it has some importance?

reader Luboš Motl said...

Dear S., the Wheeler-DeWitt equation in one form or another is almost certainly necessary for a description of the most general geometries within the quantum mechanical framework.

Because of its connection to the low-energy effective field theory, it inherits all the problems of naively quantized GR and can't be used at the multi-loop level etc., at least not according to current knowledge.

And the equation hasn't played an important role in string theory so far because all sufficiently well-defined and well-understood descriptions of string/M-theory deal with some fixed asymptotic conditions of the background, so at least at infinity, the "right" choice of the time coordinate "t" is determined in every description we use.

But in the end, there should exist a WdW-like way to describe the bulk dynamics of any quantum theory of gravity, I think. This is a potential that is still here and hasn't gone away, and I think most experts would agree – but the decades since the equation was first written down haven't brought us results about the equation as clear as some of the results built on completely different approaches.

reader Uncle Al said...

Composition's smallest lump is the crystallographic unit cell. Less than 5 nm architecture is not obvious. When areal compression is exhausted, one builds upward, now having to fight geometric error propagation and heat dissipation. The "transferred resistance" valve must yield to lower-dissipation discriminations, all of which currently require large real estate.

Two futures are evident, asymptotes and breakthroughs. Washington-impressed US education plus professional business management offer a third and more likely outcome: collapse.

reader physicsnut said...

i doubt that Bill Gates wrote BASIC from scratch – see the lawsuit, DEC v. Microsoft. All you had to do was change the backslash to a colon and you could run DEC BASIC programs on your Level II BASIC TRS-80 or whatever; I think it was missing the matrix ops.
There was a guy who had an Altair – static RAM and all that – and was brought to tears when I got 16k for 300 bucks for the TRS.
but what the heck – I knew a guy who could toggle in a boot loader on the front panel switches – which he had memorized! An assembler was a big improvement – but not as much fun as a disassembler.

reader physicsnut said...

btw – did you see the article that HP is developing a new memory based on memristors and is working on a new operating system – see
Otherwise I would think going to 3D connections would be reasonable.

reader Michael said...

The principal difficulty in making 3D chips work is the need to cool them.

reader Vangel said...

So what? The price is set by the marginal producer, and that was more than $100 not that long ago. Note that Bill Gates can afford to buy gasoline at $1,000 a gallon, but because the price is also set by the marginal consumer, he pays the same as the rest of us.

I am sorry my friend but you are making some very simple errors.

You need to learn a bit about the Law of Supply, which states that as the market price of a good goes up, the producers of that good will offer a greater quantity of it. This means that as the price of oil goes down you will see the marginal producers shut down wells that are not profitable, and the supply of oil will go down. In the case of the Saudis, they are just being foolish. By flooding the market with oil they are pushing fields that are already showing depletion issues and lowering the revenues that they can get for their products. Now, I do understand that they are using their position to kill the US shale industry, but that was unnecessary, because there have not been any primary shale producers that can generate positive cash flows from the development of shale formations. While some of the wells are fabulous and offer great returns, most are destroyers of capital, which is why many of the latecomers were writing off billions in investment long before the oil price crashed.

I find it fascinating that the analysts have not caught on and have not pointed to what the data have been clearly telling us: that the only reason why we had a shale bubble was access to cheap loans thanks to the Fed's liquidity injections into the financial system. Obama's error was to push the Saudis to play the oil card against Putin. Not only are they risking damaging their own fields, they are also killing the fake recovery in the US. We have already seen drilling permits fall and capital investment collapse in the sector. Unless there is a quick recovery in the price, we will see the shale bubble pop for good and could see the USD go down with it.

reader Luboš Motl said...

Well, aside from the marginal producer, the price of oil is also set by the marginal consumer, right?

reader Gene Day said...

You are probably right in a technical sense. It’s an axiom in engineering that you can either get it done or get credit. People like Gates, Steve Jobs, Elon Musk, Edward Teller, Thomas Edison and many others have mastered the art of putting isolated pieces together in order to create great products. This does not, in my view, detract from their wonderful accomplishments.

reader Gene Day said...

HP Labs has been developing new memory methods for at least fifty years.

reader Vangel said...

Absolutely. I thought that I had made it clear in my postings, but I may have missed it.

What we have today is weakening demand due to a faltering economy at the same time as cheap credit has caused a huge bubble in the shale sector. The producers did not need to be economic because the game was one of land speculation. The balance sheets of companies carried assets that have no economic value but were treated as sound. That can only work as long as the players who acquired their positions when leases were cheap are able to sell off some of their holdings at a high price, so that they can close the funding gaps that appear when the revenues from the sale of oil are insufficient to cover the cost of producing it. That is a game that would have ended eventually, but now, with oil prices falling, it is a near certainty that much of the high-yield shale paper is about to go into default. Once that happens, we will see the shale bubble exposed as a scam and one of the supports for the USD will collapse. And once the myth of the recovery becomes exposed, another support will be pulled away.

This is exactly why you should ignore the pro-fiat rhetoric and be busy buying gold and silver for as long as the price remains low. (I would be buying oil and coal assets as well if I had a longer time horizon.)

Note that I have been asking the optimists to provide me with a single primary shale producer that has been able to produce positive cash flows from its shale wells. While I do not expect that from the newer players, there are plenty of hyped-up companies that have been producing for more than a decade. If they could not be economic when oil prices were at $100, why would I expect them to survive a $40–$80 range, particularly when we have high transport costs, high royalty fees, and wells that lose 90% of their production after two years of operation?

Also note that the Fed is insolvent on a mark-to-market basis. That will matter eventually.

reader john said...

Dear Vangel,

Sorry for late reply, I hope you will see it.

I can't make sense of your first paragraph. When a market is an oligopoly and the sellers form a cartel, the price is higher than in a free market and the total surplus is lower than in a free market. Where is my mistake?
