That's process technology. The other way to improve computer performance is processor architecture. As advances in process technology become more expensive and less frequent, architecture plays an increasingly important role. It's always been important, and in the last 20 years microprocessors have taken on innovations that previously appeared only in big iron: things like microcode, RISC, pipelining, caching of instructions and data, and branch prediction.
Every time process technology hits a bump in the road, it's a boost for parallelism. In the 1980s, a lot of start-ups tried to build massively parallel computers. I was a fan of Thinking Machines in Cambridge, having read Danny Hillis's PhD thesis. The premise of these machines was to make thousands of processors, individually fairly feeble, arranged in a broadcast architecture. The Transputer chip was another effort in a similar direction. One issue then was that people wanted compilers that would automatically parallelize code written for serial processors, but that turned out to be an intractable problem.

Meanwhile, over the last five years the scientific community has realized that, despite their unsavory association with video games, GPUs now offer the most bang for the buck in commodity computing hardware. Reading about NVIDIA's CUDA technology just makes me drool. The claim is that for scientific computation, an inexpensive GPU represents a speed-up of 20x to 100x over a typical CPU.
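To make that concrete, here is a minimal sketch of the kind of program CUDA is meant for. It's my own illustrative example, not anything from NVIDIA's materials: a SAXPY kernel in which thousands of GPU threads each handle one array element, which is the data parallelism behind those speed-up claims.

```cuda
// Illustrative CUDA sketch: y = a*x + y over a million elements,
// with one GPU thread per element.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];                     // one element per thread
}

int main() {
    const int n = 1 << 20;                          // a million elements
    const size_t bytes = n * sizeof(float);

    // Host arrays
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device arrays
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch ~4096 blocks of 256 threads each
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                   // expect 5.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The actual speed-up on real scientific codes depends on how well the problem maps onto this many-threads-doing-the-same-thing pattern; the 20x to 100x figures come from problems that map onto it very well.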
When I set out to write this, GPUs seemed to me like the historically inevitable next step. Having now recalled some of the earlier pendulum swings between process technology and processor architecture, I see that would be overstating the case. But GPU architecture and development will certainly be important for those of us whose retirements are still a few years off.