Friday, July 10, 2009

Moore's Law and GPUs

Way back when, Gordon Moore of Intel came up with his "law" that the number of transistors that could be packed onto a chip would double roughly every two years (the figure usually quoted is 18 months). Currently chip manufacturers use a 45 nm process, and are preparing to move to a 32 nm process. Each full node shrink, 45 nm down to 32 nm for instance, roughly halves the area per transistor ((45/32)^2 is about 2), which is where one doubling comes from. There is an International Technology Roadmap for Semiconductors that lays all this out. As feature sizes shrink, we need progressively more exotic technology to fabricate chips. The ITRS timeframe for a 16 nm process is 2018, well beyond the expectation set by Moore's Law. There is a lot of punditry around these days about how Moore's Law is slowing down.

That's process technology. The other way to improve computer performance is processor architecture. As advances in process technology become more expensive and less frequent, architecture plays an increasingly important role. It's always been important, and in the last 20 years microprocessors have taken on innovations that had previously appeared only in big iron: things like microcode, RISC, pipelining, caching of instructions and data, and branch prediction.

Every time process technology hits a bump in the road, it's a boost for parallelism. In the 1980s, a lot of start-ups tried to build massively parallel computers. I was a fan of Thinking Machines in Cambridge, having read Danny Hillis's PhD thesis. The premise of these machines was to make thousands of processors, individually fairly feeble, arranged in a broadcast architecture. The Transputer chip was another effort in a similar direction. One issue then was that people wanted compilers that would automatically parallelize code written for serial processors, but that turned out to be an intractable problem.

Given the slowing of Moore's Law these days, it's good to be a GPU manufacturer. The GPU guys never claim to offer a parallelizing compiler -- one that can be applied to existing code written for a serial computer -- instead they just make it very easy to write new parallel code. Take a look at nVIDIA's GPU Gems, and notice there's a lot of math and very little code. Because you write GPU code in plain old C, they don't need to spend a lot of ink explaining weird syntax.
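Just to give a flavor of what that C looks like, here is a minimal sketch of a CUDA kernel for the classic SAXPY operation (y = a*x + y). The names and sizes here are mine, not taken from GPU Gems or any NVIDIA sample, and error checking is omitted.

    #include <cuda_runtime.h>

    /* Each thread computes one element: y[i] = a * x[i] + y[i]. */
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                        /* a million elements, just for illustration */
        float *x, *y;

        cudaMalloc((void **)&x, n * sizeof(float));   /* allocate on the GPU */
        cudaMalloc((void **)&y, n * sizeof(float));
        /* real code would cudaMemcpy input data onto the device here */

        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, x, y);    /* launch one thread per element */
        cudaDeviceSynchronize();

        /* ... and cudaMemcpy the results back before freeing */
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

You compile that with nvcc, which splits out the host and device pieces for you; apart from the __global__ qualifier and the triple-angle-bracket launch syntax, it reads like ordinary C.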

Meanwhile the scientific community has realized over the last five years that, despite the unsavory association with video games, GPUs nowadays offer the most bang for the buck of any commodity computing hardware. Reading about nVIDIA's CUDA technology just makes me drool. The claim is that for scientific computation, an inexpensive GPU can deliver a speed-up of 20x to 100x over a typical CPU.

When I set out to write this, GPUs seemed to me like the historically inevitable next step. Having now recalled some of the earlier pendulum swings between process technology and processor architecture, I see that would be an overstatement of the case. But certainly GPU architecture and development will be important for those of us whose retirements are yet a few years off.
