Thursday, January 20, 2011

Intel Sandy Bridge is here

In recent years, I was disappointed by the rate at which microprocessor speeds were increasing. While Moore's law held up very well - the number of transistors on a chip keeps doubling every two years - and memory (and flash disk) capacity was increasing nicely as well, the clock frequencies of the processors you could actually buy were barely changing. Of course, I realize there was a transition to multi-core processors, but even so, progress looked slow.

Finally, Intel has begun to sell its Sandy Bridge processors, and they are becoming a part of many computers sold across the world, including my homeland. They use the 32-nanometer process: finally some progress.

In 1971, computers were built on a 10-micron process. Every five years or so, the feature size dropped by a factor of two. When I was leaving America in 2007, they were getting ready for the 45-nanometer process, which was launched in 2008. Finally, we have another step.

The computers with the 32 nm chips consume less power - 17-55 W for laptops and 35-95 W for desktops - and of course, the speed may get a bit better, too. The jump to 22, 16, and 11 nanometers is planned for this year, 2013, and 2015, respectively. There could be a few more steps after that, but it is not guaranteed: they are getting damn close to the size of the atom.
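As a sanity check of the "halving every five years or so" claim above, here is a minimal sketch that projects when a strict halving schedule starting from 10 microns in 1971 would reach the 45 nm node. (The strict five-year period is an idealization; the real cadence has varied.)

```python
import math

# Starting point from the text: 10 microns in 1971, feature size
# halving roughly every five years.
start_year, start_nm = 1971, 10_000   # 10 microns expressed in nanometers
halving_period = 5                    # years per factor-of-two shrink

def feature_size(year):
    """Projected feature size (nm) under the idealized halving model."""
    return start_nm / 2 ** ((year - start_year) / halving_period)

# How many halvings does it take to get from 10 um down to 45 nm,
# and in roughly what year would that land?
halvings = math.log2(start_nm / 45)
year_45nm = start_year + halvings * halving_period
print(f"{halvings:.1f} halvings -> ~{year_45nm:.0f}")  # ~2010
```

The model predicts roughly 2010 for the 45 nm node, reasonably close to the actual 2008 launch, so the "factor of two every five years or so" rule of thumb holds up well over four decades.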

I am generally against subsidies, but this is one of the contexts in which I could imagine some room for them. I think that lots of slower and less efficient microprocessors will be produced in the future, leaving consumers less satisfied than they could be and forcing them to pay more for electricity than necessary.

It seems to me that much of the slowdown is due to patents and copyrights, which lead the fast companies to keep selling their slower products and prevent the slower companies from following in the footsteps of their faster competitors.

Of course, patents are very important for keeping research profitable. On the other hand, I can imagine that under some controllable and logical rules, governments could buy these patents and release them to the public domain, at least in individual countries such as the U.S.

The justification would be that if a newer product X2 saves time and electricity worth P relative to an older product X1, the government can afford to pay up to P for the upgrade from X1 to X2. I am sure that my fellow free-marketeers are going to burn me with anger :-) but still, I am curious what people think about these matters.
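To make the break-even argument concrete, here is a back-of-the-envelope sketch of the maximum buyout price P. Every number below (unit count, power saving, usage, electricity price) is a made-up illustration, not real market data.

```python
# Hypothetical break-even calculation for the proposed patent buyout:
# the state can justify paying up to the total value P that the newer
# product X2 saves relative to the older X1. All inputs are assumptions.

units_sold = 50_000_000   # processors upgraded from X1 to X2 (assumed)
watts_saved = 20          # average power saving per unit (assumed)
hours_per_year = 2_000    # typical usage per year (assumed)
years_of_use = 4          # product lifetime (assumed)
price_per_kwh = 0.12      # electricity price in USD (assumed)

# Lifetime energy saving per unit, in kilowatt-hours.
kwh_saved_per_unit = watts_saved * hours_per_year * years_of_use / 1000

# Total value of the upgrade: the ceiling on what the buyout may cost.
P = units_sold * kwh_saved_per_unit * price_per_kwh
print(f"Savings per unit: {kwh_saved_per_unit:.0f} kWh; "
      f"max justified buyout P = ${P / 1e9:.2f} billion")
```

With these illustrative inputs, each unit saves 160 kWh over its lifetime and P comes out just under a billion dollars, which gives a feel for the scale at which such a buyout would have to be weighed.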


  1. The strategy of removing the incentive to accelerate progress has not had good success where it has been tried. Indeed, Intel's huge margins provide the umbrella under which less capable firms such as AMD can survive profitably. It is very easy to make cheap processors, as the numerous ARM licensees demonstrate. The cost is in the software to make them do useful work. Unfortunately, writing good software is a rare skill, somewhat akin to that required to write a good novel (or perhaps a good blog ;) ). Software is the long pole in the data-processing productivity tent. Cheaper CPU chips would actually make things worse, in my opinion, as parallel programming is even more difficult than regular sequential logic.

  2. 32 nm is the process node; Intel has the "Tick-Tock" strategy: every year they alternately change either the process node or the architecture. 32 nm started with Westmere (a die shrink of the Nehalem architecture), and they are now moving to the Sandy Bridge architecture, which they will carry to 22 nm. Regarding alternatives to silicon at smaller geometries, there are no credible ones yet. Plenty of individual fast and small transistors have been demonstrated (graphene included), but the hard part is putting a billion-plus of them on a chip relatively cheaply.

  3. Having worked at Intel for 7 years, until just a few years ago, I can share this: they are moving forward as fast as they physically can. My role was not in microprocessor architecture but in motherboard development, providing the infrastructure for the product. Intel runs a tight ship. We would often implement new designs, possibly including new board fabs, on turns as short as one or two weeks. Procuring new parts, revising testing, running a pilot, and evaluating those new boards in that time was MISERABLE for employees. And we would do it over and over and over to get it right.

    Though clock speed hasn't been increasing much, performance has continued to climb remarkably. Tom's Hardware is a great place to read about this.

    AMD began winning the war of overall value, offering equal performance at lower power consumption, so clock speed ceased to be the marketing driver. Then we were looking at power consumption across all components and moving to designs that leveraged everything from the reliability of compatible subsystems to heat management (to cut cooling costs) and just every conceivable thing. Deliver more for less.

    AMD was quite spry and formidable as a competitor. Complacency wasn't survivable. When I left, Intel had doubled down on its resolve and commitment to literally outwork AMD to the finish line. Contrary to what you might hear in the news, I NEVER heard ANYONE within Intel talk about any method except hard work to gain the competitive edge. I've worked for many Fortune 500 companies, and none of them even comes close to Intel for effort.

  4. Lubos:

    You should always trust your right-wing instincts.

    Your suggestion to subsidize firms by buying patents and releasing them to the public domain is interesting, but it is also unnecessary and wrong.

    Patents that allow for greater efficiency and smaller transistor sizes have always been around. And this doesn't prevent anyone from benefiting, because you can always pay a licensing fee to the patent owner. This benefits the inventor - as it should - but also the inventor's competitors, who manage to make money from technology they did not invent, and ultimately society at large.