Thursday, September 25, 2008

Vive la (Killer App) Révolution!

I've pointed to finding applications that are embarrassingly parallel (EP), meaning it is essentially trivial for them to use parallel hardware, as a key way to avoid problems. Why should that be the case?

Actually, what I'm trying to find are not just EP applications, but rather killer apps for mass-market parallel systems: mass-market applications that force many people to buy a "manycore" system because it's the only thing that runs them well. In fact, I'm thinking now that EP is probably a misstatement, because that's really not the point. If a killer app's parallelism is so convoluted that only three people on the planet understand it, who cares? The main requirement is that the other 6.6 billion minus 3 of us can't live without it (or at least a sufficiently large fraction of that 6.6B-3). Those three might get obscenely rich. That's OK, just so long as the rest of us don't go down the tubes.
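For reference, since the argument above is that EP-ness isn't really the point: "embarrassingly parallel" just means the work splits into independent pieces that need no coordination. A minimal Python sketch of that shape, with score() as a hypothetical stand-in for the real per-item work:

    from multiprocessing import Pool

    def score(item):
        # Hypothetical stand-in for the real per-item work; in an EP
        # application each item is processed with no communication
        # between workers.
        return item * item

    if __name__ == "__main__":
        items = range(100_000)
        with Pool() as pool:  # one worker per core by default
            results = pool.map(score, items, chunksize=1_000)
        print(sum(results))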

I'm thinking here by analogy to two prior computer industry revolutions: the introduction of the personal computer, and the cluster takeover. The latter is slightly less well-known, but it was there.

The introduction of the PC was clearly a revolution. It created the first mass market for microprocessors and related gear, as well as the first mass market for software. Prior to that, both were boutique products – a substantial boutique, but still a boutique. Certainly IBM's adoption of the PC spurred this; it meant software developers could rely on there being a product that would last long enough, and be widespread enough, for them to afford to invest in product development. But none of that would have mattered if, at the time this revolution began, microprocessors hadn't gotten just (barely) fast enough to run a killer application – the spreadsheet. Later, with a bit more horsepower, desktop publishing became a second killer app (and boosted Macs substantially), but the first big bump was the spreadsheet. It was the reason businesses bought PCs in large numbers, which kicked everything into gear: the fortuitous combination of a particular form and level of computing power and an application using that form and power.

Note that nobody really lost in the PC revolution. A few players, Intel and Microsoft in particular, really won. But revenue for other microprocessors didn't decline; they just didn't see the enormous increases those two did.

The takeover of servers by clusters was another revolution, ignited by another conjunction of fast-enough microprocessors and a killer app. This time, the killer app was the Internet. Microprocessors got just fast enough to provide adequate turnaround time for a straightforward "read this page" web request, and once that happened you just didn't need single big computers any more to serve up your website, no matter how many requests came in: just spray the requests across a pile of very cheap servers. This revolution definitely had losers. It was a big part of what was referred to within IBM as "our near-death experience"; it's hard to fight a cost difference of about 300X (which is what I heard estimated within IBM development for mainframes vs. commodity clusters, for cases where the clusters were adequate – big sore point there). Later Sun succumbed to this too, even though they started out being one of the spears in IBM's flanks. Once again, capability plus application produced revolution.
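To illustrate why that scaled so trivially, here is a toy sketch of the "spray it across cheap servers" pattern. The server names are made up, and a real front end would be a hardware or software load balancer rather than a Python loop; this just shows the shape of the work:

    import itertools

    # Each "read this page" request is independent, so a front end can
    # simply rotate through a pool of identical, cheap back ends.
    servers = ["web01", "web02", "web03", "web04"]
    rotation = itertools.cycle(servers)

    def dispatch(url):
        server = next(rotation)
        print(f"{url} -> {server}")
        return server

    for i in range(8):
        dispatch(f"/page/{i}")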

We have another revolution going on now in the turn to explicit parallelism. But so far it's not the same kind of revolution: the sea change in computing power is there, but the killer app is not – or at least it hasn't yet been widely recognized.

We need that killer app. Not a simpler means of parallel programming. What are you going to do with that programming? Parallel PowerPoint? The mind reels.

(By the way, this is perfect – it’s exactly what I hoped to have happen when I started this blog. Trying to respond to a question, I rethought a position and ended with a stronger story as a result. Thanks to David Chess for his comment!)

7 comments:

Anonymous said...

Greg, I find your central premise very reasonable -- that capability plus an application with mass appeal can together produce a computing revolution.

I understand that you are blogging to help yourself write your new book. If I may make one suggestion concerning language, I would not use "Internet" as a synonym for "World Wide Web." From the context, it seemed like you were referring to the WWW as the killer app of cluster computing -- as you already know, the Internet is merely the communication network which connects the client and server machines on which the WWW application software runs. I don't mean to come across as pedantic, but I think it's important to make this distinction in a technical book on parallel computing.

Wishing you success with your latest book,

Fred Chapman
Bethlehem, PA

Greg Pfister said...

Good catch, Fred. Obviously, I really meant the web. I'd edit it in place, but that somehow seems inappropriate; it makes your comment a dangling reference.

terry freeman said...

Will there be a mass market for massive parallelism? Yes and no. If it weren't for games, most people would be perfectly happy with today's quad-core computers plus lots of cheap RAM and storage, far into the foreseeable future. There will be a demand for smaller, more energy-efficient, quieter replacements. I think it's interesting that computationally light UMPCs have experienced such strong demand. For the near future, games will drive consumers to buy more computer power.

The next "killer app", which convinces people that 1000s of processors are really cool, is likely to be a cross between a game and an expert tutor. It will have a a great deal of mathematical and scientific information, will be able to explain those topics well, and will produce large, interesting game/simulations to illustrate the concepts.

Another "killer app" would manage your photo collection in interesting ways. You take 100 snapshots at a party; tag the first few as "Aunt Susan", "Joe", "Tom", and so forth, and face recognition would suggest applying the same tags to other pics. You tag the location "Grand Canyon", and again, pattern recognition suggests the same tag be applied to other pictures. Clustering algorithms present an attractive collage, with similar pictures grouped together. Smart photo processing algorithms automatically crop and filter and clean up pictures. Want to make Cousin Eddy disappear, even when he sneaks into the background?

Lastly, optimization. Why shouldn't my computer introspect about my usage of Firefox, determine which bits are seldom or never of use, and optimize the code accordingly? Sometimes mathematical proofs could show that certain branches are dead; others could migrate to a sort of lazy dynamic link. Perhaps I almost never use the SVG capabilities - why load them before I need them? LLVM claims to do such run-time optimization and between-run optimization; a thousand processors might scrunch code down to much smaller sizes - especially if the analyzer is permitted to ask "how much do you really want capability X in this program?"
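One piece of that, loading a capability only when it is actually used, is easy to sketch. Here is a toy lazy-loading proxy in Python; the json module stands in for some rarely used capability like SVG rendering, and this illustrates only the general idea, not what LLVM actually does:

    import importlib

    class LazyModule:
        # Defer importing a module until the first attribute access, a
        # toy version of the "lazy dynamic link" idea: seldom-used
        # capabilities cost nothing until they are actually exercised.
        def __init__(self, name):
            self._name = name
            self._module = None

        def __getattr__(self, attr):
            if self._module is None:
                self._module = importlib.import_module(self._name)
            return getattr(self._module, attr)

    svg_like = LazyModule("json")  # stand-in for a rarely used capability
    # No import cost has been paid yet; the module loads on first use.
    print(svg_like.dumps({"loaded": "on first use"}))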

My favorite game - Go - could easily suck up the capability of 1000s of processors; I'm just waiting for the cost to drop.
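The Go engines of this era burn cores on Monte Carlo playouts: play many random games from the current position and average the outcomes, with each playout independent of all the others. A minimal sketch of that structure, with random_playout() as a crude stand-in for a real playout engine:

    import random
    from multiprocessing import Pool

    def random_playout(seed):
        # Crude stand-in for a real Go playout: pretend the game is
        # decided by a biased coin. Each playout is independent, which
        # is why these engines scale to as many cores as you can buy.
        rng = random.Random(seed)
        return 1 if rng.random() < 0.52 else 0  # 1 = win for side to move

    if __name__ == "__main__":
        n = 100_000
        with Pool() as pool:
            wins = sum(pool.map(random_playout, range(n), chunksize=1_000))
        print(f"estimated win rate: {wins / n:.3f}")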

Philip Machanick said...

What is really needed is a killer app that addresses some real mass-market need. The history of computing tells us that technology packaged for low cost eventually overtakes technology packaged for speed, first on price:performance and eventually on performance.

More at my blog.

Steve Rogers said...

Embedded systems could soak up a lot of low-power multiprocessor chips. At least a couple of start-ups are working on them: Intellasys has a very low-power 40-core chip, and XMOS has a four-core chip with eight hardware threads per core.

stu said...

Isn't the killer app 'information retrieval'? I.e., Google search, or commoditized parallel data warehousing and mining?
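The commoditized parallel data warehousing and mining stu mentions is roughly the shape that MapReduce-style frameworks packaged up: a map over independent records, a shuffle by key, and a per-key reduce. A minimal single-machine sketch of that shape, with made-up documents:

    from collections import defaultdict
    from multiprocessing import Pool

    def map_phase(doc):
        # Map: each (doc_id, text) record is processed independently.
        doc_id, text = doc
        return [(word.lower(), doc_id) for word in text.split()]

    def build_index(docs):
        with Pool() as pool:
            mapped = pool.map(map_phase, docs)
        # Shuffle + reduce: group postings by word into an inverted index.
        index = defaultdict(set)
        for pairs in mapped:
            for word, doc_id in pairs:
                index[word].add(doc_id)
        return index

    if __name__ == "__main__":
        docs = [(1, "parallel systems need a killer app"),
                (2, "search was the killer app for clusters")]
        index = build_index(docs)
        print(sorted(index["killer"]))  # -> [1, 2]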

Arkadius said...

@Terry

Adobe Photoshop CS5 has exactly that function. It can make people "disappear" from a picture and creates what it "thinks" would be there if the person hadn't been in the picture to begin with.

The tagging feature based on scanning similar faces is a feature at Facebook.

Your ideas were good, and they have already been implemented in software. The algorithms don't need parallelism; it just makes the whole "fun" faster.

http://www.youtube.com/watch?v=vfkjHnsAsvg

Cheers.
