Friday, May 29, 2009

Molecular modeling with Hadoop?

Hadoop is Apache's implementation of the MapReduce distributed computing scheme pioneered by Google. Amazon rents out Hadoop services on its cluster, and it's fairly straightforward to set up Hadoop on a cluster of Linux boxes yourself. Having a long-standing interest in distributed computing approaches to molecular modeling, I have been trying to figure out how Hadoop could be applied to very large-scale molecular simulations.

MapReduce is great for problems where large chunks of computation can be done in isolation. The difficulty with molecular modeling is that every atom is pushing or pulling on every other atom on every single time step, so the problem doesn't partition nicely into large isolated chunks. One could run a MapReduce cycle on each time step, but that would be horribly inefficient: on each time step, every map task needs as input the position and velocity of every atom in the entire simulation.
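
To make the inefficiency concrete, here's a toy sketch of what a single time step would look like phrased as a map and a reduce pass. Everything in it is my own invention for illustration (a real job would go through Hadoop Streaming or the Java API, and the "force field" is a crude pairwise repulsion rather than anything physical); the thing to notice is the all_positions argument that every mapper would need shipped to it on every single time step.

```python
# Toy sketch of one molecular-dynamics time step phrased as map and reduce.
# Purely illustrative; not a real Hadoop job and not a real force field.

DT = 1e-3  # integration time step, arbitrary units

def map_step(my_atoms, all_positions):
    """Mapper for one time step, responsible for one chunk of atoms.

    The painful part: besides its own chunk, every mapper needs
    all_positions -- the coordinates of every atom in the simulation --
    delivered as side input on every single time step.
    """
    for atom_id, (pos, vel) in my_atoms.items():
        force = [0.0, 0.0, 0.0]
        for other_id, other_pos in all_positions.items():
            if other_id == atom_id:
                continue
            d = [p - q for p, q in zip(pos, other_pos)]
            r2 = sum(x * x for x in d) + 1e-12
            for k in range(3):
                force[k] += d[k] / r2          # stand-in for a real potential
        new_vel = [v + f * DT for v, f in zip(vel, force)]
        new_pos = [p + v * DT for p, v in zip(pos, new_vel)]
        yield atom_id, (new_pos, new_vel)      # emitted key-value pair

def reduce_step(mapped_pairs):
    """Reducer: gather everyone's updated state to seed the next time step."""
    return {atom_id: state for atom_id, state in mapped_pairs}
```

Even in this toy form you can see the problem: the per-step MapReduce overhead (job launch, shuffle, writes to the distributed filesystem) gets paid millions of times over the course of a simulation.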

There are existing solutions like NAMD, which uses DPMTA for the long-range forces between atoms. For a cluster of limited size these are the appropriate tools. For large clusters with hundreds or thousands of machines, the rate of hardware failures becomes a consideration that can't be ignored.

MapReduce provides a few principles for working in the very-large-cluster domain:
  • Let your infrastructure handle hardware failures, just as the Internet invisibly routes around dead servers.
  • Individual machines are anonymous. You never write application code that directly addresses an individual machine.
  • Don't waste too much money trying to make the hardware more reliable. It won't pay off in the end.
  • Use a distributed file system that reliably retains the inputs to a task until that task has been successfully completed.

Could the tasks that NAMD assigns to each machine be anonymized with respect to which machine they run on, and the communications routed through a distributed filesystem like Hadoop's HDFS? Certainly it's possible in principle. Whether I'll be able to make any reasonable progress on it in my abundant spare time is another matter.
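
Here's roughly the shape I have in mind, with an ordinary shared directory standing in for HDFS and spatial cells standing in for NAMD's patches. None of this uses a real Hadoop or NAMD API, and the paths and polling are invented; the point is only that a work unit is addressed by what it computes (cell, time step), never by which machine it happens to run on.

```python
# Sketch: tasks are identified by (cell, step), never by hostname.
# A shared directory stands in for HDFS; real code would use the Hadoop
# filesystem API and a smarter notification scheme than polling.

import json, os, time

SHARED = "/shared/md-run"          # pretend this is an HDFS mount point

def path(cell, step):
    return os.path.join(SHARED, f"step-{step:08d}", f"cell-{cell}.json")

def publish(cell, step, atoms):
    """Write this cell's atom state where any machine can read it."""
    p = path(cell, step)
    os.makedirs(os.path.dirname(p), exist_ok=True)
    with open(p, "w") as f:
        json.dump(atoms, f)

def gather(cells, step, poll=0.5):
    """Block until the named cells have published state for this step."""
    out = {}
    for c in cells:
        while not os.path.exists(path(c, step)):
            time.sleep(poll)       # the filesystem, not a hostname, is the rendezvous
        with open(path(c, step)) as f:
            out[c] = json.load(f)
    return out

def run_task(cell, neighbors, step, integrate):
    """One anonymous work unit: it neither knows nor cares which host it's on."""
    state = gather([cell] + neighbors, step - 1)
    new_atoms = integrate(state, cell)     # force evaluation plus integration
    publish(cell, step, new_atoms)
```

If a machine dies halfway through a task, nothing is lost: the inputs for (cell, step) are still sitting in the filesystem, and the scheduler can hand the same task to any other machine.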

Thursday, May 28, 2009

More thinking about compensation models

I've been watching some of The Hunt for Gollum. The quality is quite good, and some of the camera effects are surprisingly clever.

I am interested in the question: how do you release a work so that it ultimately ends up in the public domain, but first make some money from it (perhaps a lot)? And how do you do that when your customer base is fully aware that, in the long run, the work will be available for free?

Back in the Eighties, Borland sold their Turbo Pascal development system for only $30 when competing products sold for hundreds. They did nothing in hardware or software to implement any sort of copy protection, while their competitors scrambled after complicated but unsuccessful schemes to combat piracy. Borland's approach to copy protection was simply the honor system, plus making the product cheap enough that nobody minded paying for it.

The machinima Red vs. Blue is released serially as episodes. Those guys have an interesting approach:
Members of the official website can gain sponsor status for a fee of US$10 every six months. Sponsors can access videos a few days before the general public release, download higher-resolution versions of the episodes, and access special content released only to sponsors. For example, during season 5, Rooster Teeth began to release directors' commentary to sponsors for download. Additionally, while the public archive is limited to rotating sets of videos, sponsors can access content from previous seasons at any time.
They are smart guys who have been doing this for years now, so it's likely they've hit upon as optimal a solution as is practical. Of course it helps that they have a great product that attracts a lot of interest. They are following the Borland approach: sponsorship is inexpensive and there is no attempt at copy protection.

Computer performance vibes

There are a number of topics pertaining to computer performance that I want to learn more about. As an ex-EE, I should be keeping up with this stuff better.

Processors are fast, memory chips are slow. We put a cache between them so that the processor need not go out to memory on every read and write. There is a dense body of thought about cache design and optimization. I might blog about this stuff in future. It's a kinda heavy topic.
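
To give a flavor of what that body of thought deals with, here's a toy direct-mapped cache model. The line size and cache size are typical-ish numbers I picked for illustration; real caches are set-associative and also worry about writes, coherence, and prefetching, none of which appears here.

```python
# Toy direct-mapped cache: map an address to a cache line, count hits and misses.

LINE_SIZE = 64          # bytes per cache line (a common choice, assumed here)
NUM_LINES = 512         # 512 lines * 64 bytes = a 32 KB cache

class DirectMappedCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        line = (addr // LINE_SIZE) % NUM_LINES   # which slot this address maps to
        tag = addr // (LINE_SIZE * NUM_LINES)    # which block of memory owns the slot
        if self.tags[line] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[line] = tag                # evict whatever was there before

# Walking memory sequentially is kind to the cache: each 64-byte line that is
# fetched on a miss then serves the next 15 four-byte reads for free.
cache = DirectMappedCache()
for addr in range(0, 1 << 20, 4):
    cache.access(addr)
print(cache.hits, cache.misses)   # 15 hits for every miss
```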

One way to make processors run really fast is to arrange steps in a pipeline. The CPU reads instruction one from instruction memory, and then it needs to read something from data memory, do an arithmetic operation on it, and put the result in a register. While reading from data memory, the CPU is simultaneously reading instruction two. While doing arithmetic, it's reading instruction three, and also doing the memory read for instruction two. And so forth, so that the processor is chopped up into a sequence of sub-processors, each busy all the time.



Apple has a nice, more detailed discussion here.
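
If you like seeing it laid out, here's a toy script that prints the classic pipeline diagram. The five stage names are the textbook fetch/decode/execute/memory/writeback split rather than any particular chip, the five instructions are placeholders, and it assumes no hazards at all, which is exactly the complication that comes next.

```python
# Print a toy pipeline diagram: which instruction occupies each stage on each
# cycle. Ignores hazards entirely.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]       # fetch, decode, execute, memory, writeback
program = [f"i{n}" for n in range(1, 6)]       # five placeholder instructions

cycles = len(program) + len(STAGES) - 1
print("cycle  " + "  ".join(f"{s:>4}" for s in STAGES))
for c in range(cycles):
    row = []
    for s in range(len(STAGES)):
        i = c - s                              # instruction i reaches stage s on cycle i + s
        row.append(program[i] if 0 <= i < len(program) else "--")
    print(f"{c:5d}  " + "  ".join(f"{x:>4}" for x in row))
```

Once the pipeline fills, one instruction completes every cycle, even though each individual instruction takes five cycles to get all the way through.
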
But there is a complication with pipelining. Some of these instructions are branch instructions, which means that the next instruction could be either of two different ones. That's potentially a mess, because you've already got the pipeline full of stuff when you discover whether or not you're taking the branch, and you might find that the instructions you fetched were the wrong ones, so you have to go back and do all those same operations with a different sequence of instructions. Ick.

The work-around is branch prediction. The CPU tries to figure out, as accurately as it can, which way each branch will go, and if all goes well, it does so early enough to fill the pipeline correctly the first time. It doesn't have to be perfect, but it should try to guess correctly as often as possible.
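
One classic guessing scheme (the textbook one, not necessarily what any particular CPU ships) is a table of two-bit saturating counters indexed by the branch address: a branch that has recently been taken is predicted taken, and it takes two wrong guesses in a row to flip the prediction. The table size and the trivial indexing below are choices I made for illustration.

```python
# Two-bit saturating-counter branch predictor, the textbook scheme.
# Counter values 0 and 1 predict not-taken; 2 and 3 predict taken.

TABLE_SIZE = 1024

class TwoBitPredictor:
    def __init__(self):
        self.counters = [1] * TABLE_SIZE       # start out weakly not-taken

    def predict(self, pc):
        return self.counters[pc % TABLE_SIZE] >= 2

    def update(self, pc, taken):
        i = pc % TABLE_SIZE
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

# A loop branch that is taken 99 times and then falls through: the predictor
# is wrong only on its very first guess and on the final loop exit.
p = TwoBitPredictor()
outcomes = [True] * 99 + [False]
correct = 0
for taken in outcomes:
    correct += (p.predict(0x400) == taken)
    p.update(0x400, taken)
print(correct, "of", len(outcomes), "predicted correctly")   # 98 of 100
```
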
A couple more things CPUs are doing these days. One is performing several memory transfers per clock cycle. Another is something Intel calls hyper-threading, where some of the CPU's registers are duplicated so that one core can behave like two separate CPUs. This can be a win when one of the two hardware threads is stalled waiting for a memory access; the other one just plows ahead.

That's all the vaguely intelligent stuff I have to say on this topic at the moment. Maybe I'll go into more detail in future, no promises.

Friday, May 01, 2009

Fan-made movie: The Hunt for Gollum

The Hunt for Gollum is a 40-minute high-definition movie made by fans of the Lord of the Rings trilogy in general, and of the Peter Jackson movies in particular. The trailers look beautiful, and the cinematography looks about as good as that of the three feature films. The whole thing is being done on a purely non-profit basis, and the entire movie will be released free to the Internet on Sunday, May 3rd.

I kinda wish these guys had tried to make money with this, for a couple of reasons. First, they should be rewarded for such a monumental effort. No doubt many of the primary organizers will get their pick of sweet jobs, just as the primary developers of Apache, Linux, Python, etc. have done after releasing useful free software, but the other participants might also have gotten some compensation for their time and effort.

Second, there was an opportunity here to experiment with compensation models whose endgame is the release of a work into the public domain. I've often wondered whether a big movie could be made by an independent group, released to the public domain, and still bring in significant money. My first idea was a ransom model: each frame of the movie would be encrypted and distributed, and the encryption keys for frames or groups of frames would be published as various amounts of the total desired donation were reached. There would probably be a clause that the entire key set would be released unconditionally at some future date.
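
Just to pin the idea down, here's a sketch of the mechanics. The Fernet recipe from Python's cryptography package stands in for whatever cipher would really be used, and the milestone amounts are invented numbers.

```python
# Sketch of the frame-ransom idea: encrypt each group of frames under its own
# key, publish the encrypted blobs immediately, and hand out keys as donation
# milestones are crossed. Illustrative only.

from cryptography.fernet import Fernet

MILESTONES = [10_000, 25_000, 50_000, 100_000]   # dollars, hypothetical

def encrypt_groups(frame_groups):
    """frame_groups: list of byte strings, one per group of frames."""
    keys, blobs = [], []
    for group in frame_groups:
        key = Fernet.generate_key()
        keys.append(key)                           # held back by the creators
        blobs.append(Fernet(key).encrypt(group))   # distributed right away
    return keys, blobs

def keys_to_release(keys, total_donations):
    """Release one key per milestone crossed; all of them on the drop-dead date."""
    crossed = sum(1 for m in MILESTONES if total_donations >= m)
    return keys[:crossed]
```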

On further reflection, I think I have a better idea. Before the public-domain release, find some buyers who are willing to pay for the privilege of getting the movie early. The buyers need to be told that a public-domain release will occur, and on what date, so that they understand that their window to make any money off the movie will be limited.

Another possibility is a ransom model with a linearly-decreasing donation threshold, with the public-domain release scheduled for when the donation threshold reaches zero. If the total donations cross the threshold before then, the release occurs at that time.
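
Numerically (with invented numbers): if the threshold starts at $100,000 and slides linearly to zero over a year, the release happens on whichever comes first, the day donations cross the current threshold or day 365.

```python
# Linearly decreasing ransom threshold. The starting amount and the period
# are knobs the creators would choose; these defaults are invented.

def threshold(day, start_amount=100_000, period_days=365):
    """Dollars still required on a given day; reaches zero at day period_days."""
    return max(0.0, start_amount * (1 - day / period_days))

def release_now(day, total_donations, **kw):
    """True once donations meet the sliding threshold (always true after the period)."""
    return total_donations >= threshold(day, **kw)
```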

Anyway, kudos to the people who made "The Hunt for Gollum", thanks for your efforts, and I am eagerly looking forward to seeing the movie.