Tuesday, July 21, 2009

Building a GPU machine

I've been reading lately about what NVIDIA has been doing with CUDA, and it's quite impressive. CUDA is a programming environment for their GPU boards, available for Windows, Linux, and Mac. I'm putting together a Linux box with an NVIDIA 9600GT board to play with this stuff. The 9600GT cost me $150 at Staples; eventually I intend to replace it with a GTX280 or GTX285, both of which have 240 processor cores to the 9600GT's 64. I purchased the following from Magic Micro for about $300 including shipping:
Intel Barebones #2


* Intel Pentium Dual Core E2220 2.4 GHz, 800FSB (Dual Core) 1024K
* Spire Socket 775 Intel fan
* ASRock 4Core1600, G31, 1600FSB, Onboard Video, PCI Express, Sound, LAN
* 4GB (2x2GB) PC6400 DDR2 800 Dual Channel
* AC 97 3D Full Duplex sound card (onboard)
* Ethernet network adapter (onboard)
* Nikao Black Neon ATX Case w/side window & front USB
* Okia 550W ATX Power Supply w/ 6pin PCI-E



I scavenged a DVD-ROM drive and a 120 GB hard drive from an old machine, plus a keyboard, mouse, and 1024x768 LCD monitor. I installed Slackware Linux, then went to the CUDA download site and picked up the driver, the toolkit, the SDK, and the debugger.
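Something like the following minimal vector add is enough to check that nvcc and the driver are talking to the card. This is a quick sketch of my own, not one of NVIDIA's SDK samples:

// vecadd.cu -- a minimal sanity check that nvcc and the driver work.
// Build and run with:  nvcc -o vecadd vecadd.cu && ./vecadd
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers with easy-to-check contents.
    float *ha = (float *) malloc(bytes);
    float *hb = (float *) malloc(bytes);
    float *hc = (float *) malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = i; hb[i] = 2.0f * i; }

    // Device buffers.
    float *da, *db, *dc;
    cudaMalloc((void **) &da, bytes);
    cudaMalloc((void **) &db, bytes);
    cudaMalloc((void **) &dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    // Element i should come back as 3*i.
    printf("c[12345] = %.1f (expect %.1f)\n", hc[12345], 3.0f * 12345);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}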

This is the most powerful PC I've ever put together, and it was a total investment of just a few hundred dollars. For many years I've drooled at the prospect of networking a number of Linux boxes and using them for scientific computation, but now I can do it all in one box. It's a real live supercomputer sitting on my table, and it's affordable.

I am really starting to like NVIDIA. They provide a lot of support for scientific computation, they're good about sharing their knowledge, and they post lots of videos of scientific uses for their hardware.

NVIDIA's SDK includes several demos, some of them visually attractive: an n-body simulation, smoke particles, a Julia set, and a fluid dynamics demo. While running the n-body demo, the 9600GT reports about 125 gigaflops.
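For a back-of-the-envelope check on a number like that, multiply body-pair interactions per second by flops per interaction. The body count, frame rate, and 20-flops-per-interaction figure below are illustrative assumptions, not measurements from my run:

// flops_estimate.cu -- rough arithmetic behind an n-body gigaflops figure.
// The body count, frame rate, and flops-per-interaction values here are
// assumptions for illustration, not numbers taken from the demo.
#include <cstdio>

int main()
{
    double bodies = 16384;          // bodies in the simulation (assumed)
    double fps = 23.0;              // frames rendered per second (assumed)
    double flops_per_pair = 20.0;   // per-interaction flop count (assumed convention)

    // All-pairs n-body: bodies^2 interactions per frame.
    double gflops = bodies * bodies * flops_per_pair * fps / 1e9;
    printf("~%.0f gigaflops\n", gflops);   // ~123 with these numbers
    return 0;
}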
A few more resources...

6 comments:

Jesper said...

Have you looked at OpenCL? It's like CUDA, but portable (not NVIDIA-specific).

Will Ware said...

Thanks for the pointer to OpenCL. I'll download it and take it for a spin.

If you have an interest in molecular modeling, another interesting non-NVIDIA-specific project is OpenMM. It was started by one of the Folding@Home people, and it has an architectural layer, not specific to molecular modeling, on which other kinds of computations can run.

I hope to find time to use OpenMM to tinker with simplified mechanical/electrostatic models of proteins, to see whether they can be used in large-scale simulations where the regions of interest use more detailed models such as those in GROMACS.

Will Ware said...

NVIDIA's OpenCL implementation is here. PyOpenCL is here, from the same guy who did PyCUDA.

Will Ware said...

More comments on Reddit.

Psychsoftpc said...

You may be interested in our Tesla / CUDA resources here: www.psychsoftpc.com/tesla_cuda_resources.htm

Will Ware said...

Thanks! I haven't been doing anything with this stuff for quite a while, but maybe this will be the kick in the pants to change that situation. I'd be curious to see if Clojure's transactional model can somehow play nice with CUDA.