Sunday, August 30, 2009

Parallelism Needs a Killer Application

Gee, do you think?

That's the startling conclusion of a panel at the latest Hot Chips conference, according to Computerworld.

(Actually, it doesn't need a killer app – for servers. But servers don't have sufficient volume.)

The article also says Dave Patterson, UC Berkeley, was heard to say "There's no La-Z-Boy approach to programming." This from someone I regard as a super hardware dude. If he's finally got the software message, maybe hope is dawning.

The article has this quote from a panelist: "Threads have to synchronize correctly." Wow, do my car tires have to be inflated, too? This has to have been taken out of a more meaningful context.

Here are some links to posts in this blog that have been banging on this drum for nearly two years: here, here, here, here, here, here. Actually, it's pretty much the whole dang blog; see the history/archives.

Maybe there will be more meaningful coverage of that panel elsewhere.

Ha, my shortest post yet. It's a quickie, dashed off just before I finished another one. I just couldn't believe that article. I guess this is another "my head exploded" post.

5 comments:

Anonymous said...

Hopefully this is a sign that people's understanding of multi-core is getting more realistic.

Igor
P.S.: Your "for servers" link is broken.

Greg Pfister said...

That is putting a positive spin on it. I wish I had thought to say so. My gut reaction was just too strong, I guess.

Hopefully that link is fixed now. Thanks!

Greg

Anonymous said...

Might wanna check out the videos from the 2009 Par Lab Boot Camp: http://parlab.eecs.berkeley.edu/bootcampagenda

Greg Pfister said...

Thanks for the link. There's good data to mine in Patterson's intro. But he's still doing parallelism "because there is no choice" and hopes this time it will succeed "because there is no alternative." I count that as hardware-guy thinking. Also, the example app areas are all server apps, which can't be "killer" by my reckoning. Exception: a parallel browser, which is interesting and might be useful for ultra-low-power handhelds -- is that the killer direction? Hmmm. Maybe.

Greg

Anonymous said...

Gosh, I think even more so than you, Greg, that the article is rubbish.

I'm not sure what they even mean by "killer app." This used to be something like e-mail or VisiCalc: a tool that users use directly that they just have to have. The users don't care about the implementation.

I was going to say that parallel programming is a programming technique, not an application, but on further reflection even that's not correct: concurrent programming is a technique (or a vast set of techniques, actually); parallelism is an execution model.
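
Just to make that distinction concrete, here's a trivial pthreads sketch of my own (not from the article): the program expresses two concurrent tasks, and nothing in it says whether they actually run in parallel; that's decided by the hardware and the scheduler underneath.

    /* Two concurrent tasks expressed with pthreads. Whether they execute
       in parallel is up to the hardware and the OS scheduler -- the
       program is written the same way either way. */
    #include <pthread.h>
    #include <stdio.h>

    static void *task(void *arg) {
        printf("task %s running\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, task, "A");
        pthread_create(&b, NULL, task, "B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }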

Seen in that light, parallelism is already used pretty much everywhere and has been for many years. How many CPUs in the last decade, even small, single-core, single-thread CPUs, don't have multiple execution units (say, one or two integer ALUs and a floating point unit) that can operate in parallel, and a control unit capable of dispatching instructions to take advantage of the available parallelism?
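
As a sketch of what I mean (my own example, nothing from the article), even this perfectly sequential loop keeps several of those units busy at once:

    /* A plain sequential loop that still exercises hardware parallelism:
       within an iteration the loads and the index/address arithmetic
       (integer ALUs) are independent of the multiply-add (FPU), so a
       superscalar dispatcher can issue them side by side without the
       programmer writing anything "parallel". */
    double dot(const double *x, const double *y, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += x[i] * y[i];   /* floating-point work on the FPU */
        }                         /* index update, compare, branch: integer units */
        return sum;
    }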

Now, concurrent programming models can be useful in their own right in many applications, but parallelism is quite orthogonal to that: I can't think of a single application where you'd care that something's running in parallel on several slower execution units rather than the whole running at the same speed on a much faster sequential execution unit.

The "killer application" for parallelism, i.e., what will drive developers to use it, is not an application or even a programming technique at all; it's just a desire for better performance than current sequential systems can provide. But that doesn't make use of parallelism a good or desirable thing; it makes it a necessary evil.

So all we really want here is to strike a balance between an easy programming model and cheap hardware. Clearly the hardware folks are telling us that several slower execution units are cheaper than one faster one, and there's a lot of research being done and that needs to be done on programming models that can better take advantage of this. But that's been an issue, well, forever: we were dealing with this decades ago when we programmers were debating whether to try to stuff some data in core or spool it out to magtape to be read in again later. (The hardware guys gave us the choice: would you rather have a megabyte of core, or a quarter meg of core and four tape drives storing 40 MB each?)

I suppose my summary is very curmudgeonly: nothing's really changed, except the details?
