Thursday, November 16, 2006

What's a four-letter word starting with "R" for "Computer technology developed by D. Patterson"?

It all depends on your interests: if data storage is your thing, the answer is RAID; if you like compilers and microprocessor architecture, you're probably looking for RISC. If, on the other hand, you're taken by what the future might hold for parallel computing, the correct acronym is RAMP: Research Accelerator for Multiple Processors. If you want to continue to be impressed, look over the short version of D. Patterson's biography; he also has a slightly longer version, illustrating that impressiveness can scale linearly.

I had the great pleasure of attending Patterson's talk at PARC a week ago on the view from Berkeley. I strongly encourage you to watch the video linked from the above page or look over the accompanying slides. In them the audience is treated to the old and new conventional wisdoms on processor design, an explanation of the sudden π-turn the industry has made regarding parallelism, and visions of a future with thousands of processors (or cores, as some call them) on a single chip. This scenario (termed "many-core" by Patterson) is a far cry from the so-called "multi-core" chips emerging on the market right now.

This is good news for the computational science community: for one thing, it means that the "brick wall" of power consumption, memory latency, and diminishing instruction-level parallelism need not halt the exponential march toward ever-greater computational power. For another, it means there's plenty of work to be done in adapting the fundamental algorithms behind today's computational science to run well on so many cores.
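Why does adapting the algorithms matter so much? One standard back-of-envelope argument (not specific to Patterson's talk, but a common way to frame it) is Amdahl's law: if a fraction p of a program parallelizes perfectly and the rest stays serial, the serial part caps the speedup no matter how many cores you add. A minimal sketch:

```python
# Amdahl's law: overall speedup on n processors when a fraction p
# of the work parallelizes perfectly and (1 - p) remains serial.
def amdahl_speedup(p, n):
    """Speedup relative to one processor."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel program is capped at a speedup of 20x
# (the limit of 1/(1 - p) as n grows), so a 1000-core chip buys
# little unless the serial fraction itself is attacked.
for n in (4, 64, 1024):
    print(f"{n:>5} cores: {amdahl_speedup(0.95, n):.1f}x")
```

The takeaway for many-core: the hard work is in driving the serial fraction toward zero, which is exactly why existing algorithms need rethinking rather than mere recompilation.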

And what does it mean for the general, non-scientific computer-using population? At first, it may seem that computers are fast enough, hard drives large enough, etc., for most of what anyone could ever need. However, all it takes is some ingenuity to put all that power to good use; another great example of this sort of imagination was already discussed. If you've ever edited photos, audio, or video, or waited an hour to extract the music from a CD, you've wanted the power that many-core processors may be able to deliver. When asked "How fast is fast enough?", L. Ellison answered, "In the blink of an eye." We clearly have a long way to go before the current generation of applications meets that standard; hopefully, many-core technology will carry us a long way down that road.

2 comments:

Anonymous said...

I think by now it has got to be considered an axiom that computing applications will expand to make full use of any imaginable increase in available computing power or storage.

Ten years ago, 233MHz and 2GB seemed more than anyone could rationally use up in one lifetime. Of course, that was before iTunes, and burning your entire CD collection onto your hard drive, not to mention audiobooks, TV shows and feature-length movies. A couple of years later, I read an article suggesting that a chip manufacturer (I don't remember which) had developed a prototype processor on the order of 7GHz or so, but didn't think it was marketable outside of videogame consoles, which wouldn't be a big enough market to make production profitable. It didn't take long to prove just how shortsighted that viewpoint was.

Twenty years ago, it took my Commodore 64 over five minutes just to boot up, which was frustrating enough, but expectations of computers were pretty low. These days, some rendering jobs on my powerbook take quite a lot longer than that. Still frustrating, but I think the old C64 would have broken down weeping at what it would have had to consider magic.

P. Sternberg said...

And further underscoring how dramatically the landscape can change--video games are now a 30 billion dollar industry, and will only continue to grow as the major players get better at finding new untapped markets. Probably worth the development dollars, I would guess.

And to make matters even more interesting, researchers have found that GPUs can provide better performance than CPUs for some applications, and we find folding@home running on some unconventional hardware. Of course, it would be better if more than 400,000 of those machines were in circulation...