The announcement of this blog in LinkedIn’s High Performance and Supercomputing group produced some responses indicating that I need to clarify what this blog is about. Thanks for that discussion, folks. One of my (semi-?) writer’s-block problems with this book is clearly explaining exactly what the issue is.
Here’s a whack at that, an overall outline of the situation as I see it. Everything in this outline requires greater depth of explanation, with solid numbers and references. This is just the top layer, and I’m really unsatisfied with it. Hang in there with me, prod me with questions if there are parts you are really itchy about, and I’ll get there.
First, I don’t mean to imply that there is some dastardly conspiracy to force the industry to use explicit parallelism. The basic problems are (1) physics and (2) the exhaustion of other possibilities. There simply isn’t any choice. No choice to do what? To keep increasing performance.
This industry has lived and breathed dramatically increasing year-to-year single-thread performance since the 1960s. The promise has always been: Do nothing, and your programs run 45% faster this time next year, on cheaper hardware. That’s an incredible situation, the only time it’s ever happened in history, and it’s the very bedrock of any number of industry business and technical assumptions. Now we are moving to a new paradigm: To actually achieve the additional performance available in new systems, programmers must do something. What they must do (explicitly parallelize their programs) is a major effort; it’s not clear how broadly applicable it is; and it’s so difficult that few can master it. That, to put it mildly, is a major paradigm shift. (And it’s not being caused by some nefarious cabal; as I said, it’s physics.)
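To make the compounding concrete, here’s a back-of-the-envelope sketch, a few lines of Python; the 45%-per-year rate is the figure above, and the ten-year horizon is just an assumption for illustration. At that rate, the same untouched program ends up roughly 41 times faster after a decade:

```python
# Back-of-the-envelope: the compound effect of 45%/year single-thread gains.
# The 45% rate comes from the text; the 10-year span is illustrative.
rate = 0.45
speedup = 1.0
for year in range(1, 11):
    speedup *= 1 + rate
    print(f"Year {year:2d}: same program runs {speedup:5.1f}x faster")
# The final line prints roughly 41.1x -- with zero programmer effort.
```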
This is not a problem for server systems. Those have been broadly parallel (clusters, farms, warehouses) since the mid-90s, and server processing has arguably been parallelized since the mid-70s, on mainframe SMPs. Transaction monitors, web server software like Apache, and other infrastructure have all enabled this parallelism (though the applications had to be there to exploit it).
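Why do servers get this almost for free? Because requests are independent of one another, the infrastructure can spread them across processors while each request handler remains ordinary sequential code. Here’s a minimal sketch of that division of labor, with a thread pool standing in for Apache or a transaction monitor (the handler and its workload are hypothetical):

```python
# A minimal sketch of why server workloads parallelize "for free":
# requests are independent, so infrastructure (here, a thread pool
# standing in for Apache or a transaction monitor) spreads them
# across cores while each handler stays ordinary sequential code.
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    # Ordinary sequential application logic; no locks, no explicit
    # parallelism. Each request touches only its own data.
    return f"response to request {request_id}"

with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(handle_request, range(100)))
print(f"served {len(responses)} independent requests in parallel")
```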
Unfortunately, servers alone don’t produce the semiconductor volumes that keep an important fraction of this industry moving forward. It’s a fraction, not all of the industry, but it’s certainly the part that’s arguably key, to say nothing of the loudest, and it will make the biggest ruckus as it increasingly runs into trouble. The industry is going to be a very different place in the foreseeable future.
At this point, it’s reasonable to ask: if this is actually true, why aren’t the combined voices of the blogosphere, the industry rags, and the stock analysts shouting it from the housetops? Why should these statements be believed? Where’s the consensus? Are we in conspiracy-theory territory?
No, we’re not. I think we’re in the union of two territories. First and foremost, we’re in “left hand doesn’t know what the right hand is doing” territory, aggravated by how few people have the cross-domain expertise needed to fully understand the problem. There was a flare-up of concern that flashed through the blogosphere and the technical news outlets back in 2004, but it was focused (almost correctly) on crying “Moore’s Law Is Ending!” So it was squashed when the technorati high priests responded, “No, Moore’s Law isn’t ending, you dummies,” which it isn’t, in the original literal sense. But saying that is a fine case of not seeing the forest for the veins on the leaves of the trees, or, in this case, not seeing a chasm because you’re a botanist, not a geologist.
That is the main territory, or at least I hope so. But having lived in the industry for quite a while, I understand that there’s also a form of denial (or hope) going on. There have definitely been occasions, well remembered by management and business leaders, when those excitable, barely comprehensible techno-nerds cried wolf, said the sky was falling, and it didn’t fall. That history produces a reaction like: Nah, this can’t really be that big a deal; surely it’s the wrong answer; obviously we should not panic, since something has always come along to make such issues just go away. Also, aren’t they really posturing for more funding?
Unfortunately, they’re not. And the truth of the situation is starting to come across, with some industry funding for parallel programming research, and pleas for a killer application that can actually put all that parallel hardware to work.
My choice of where to look is applications that are “embarrassingly parallel”: there’s no search for the parallelism (it’s obvious) and little need for explicit parallel programming. There are a few possibilities there (I’m partial to virtual worlds), but I’m far from certain that they’ll be widespread enough to pick up the slack in the client space. So I fear we may be in for a significant, long-term downturn for companies whose business relies on the replacement cycle for client systems. This is not a pleasant prospect, since those companies are presently at the heart of the industry.
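For concreteness, here’s a minimal sketch of what “embarrassingly parallel” means in practice; the per-item function is a hypothetical stand-in (picture one pixel of a rendered frame, or one independent object in a virtual-world scene). The parallelism is the data itself, so nobody has to go hunting for it:

```python
# "Embarrassingly parallel" in miniature: every item is independent,
# so the work splits across cores with no locks and no coordination.
# simulate_item is a hypothetical stand-in for per-pixel or per-object
# work in something like a virtual world.
from multiprocessing import Pool

def simulate_item(item: int) -> int:
    # Depends only on its own input; that independence is the whole trick.
    return item * item

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(simulate_item, range(100_000))
    print(f"processed {len(results)} independent items in parallel")
```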