The Coalition for Academic Scientific Computation recently held its 20th-anniversary celebration symposium, and I was invited to participate on a panel with the topic HPC – The Next 20 Years. I thought it would be interesting to write down here what I said in my short position presentation. Eventually all the slides from the talks will be available; I’ll update this post when I know where.
First, my part.
Thank you for inviting me here today. I accepted with misgivings, since futurists give me hives. So I stand here now with a kind of self-induced autoimmune disorder.
I have no clue about what high-performance computing will look like 20 years from now.
(Later note: I was rather surprised that none of the other panelists said that themselves; they all did agree with it.)
So, I asked a few of my colleagues. The answers can be summarized simply, since there were only three, really:
A blank stare. This was the most common reaction. Like “Look, I have a deadline tomorrow.”
Laughter. I understand that response completely.
And, finally, someone said: What an incredible opportunity! You get to make totally outrageous statements that you’ll never be held accountable for! How about offshore data centers, powered by wave motion, continuously serviced by autonomous robots with salamander-level consciousness, spidering around replacing chicklet-sized compute units, all made by the world’s largest computer vendor – Haier! [They make refrigerators.] And lots of graphs, all going up to the right!
There’s a man after my own heart. I clearly owe him a beer.
And he’s got a lot more imagination than I have. I went out and boringly looked for some data.
What I found was the chart below, from the ITRS, the International Technology Roadmap for Semiconductors, a consortium sponsored by other semiconductor consortia for the purpose of creating and publishing roadmaps. It’s their 2008 update, the latest published, since they meet in December to do the deed. Here it is:
Oooooo. Complicated. Lots of details! Even the details have details, and lesser details upon ‘em. Anything with that much detail obviously must be correct, right?
My immediate reaction to this chart, having created thousands of technical presentations in my non-retired life, is that this is actually a transparent application of technical presentation rule #34: Overwhelm the audience with detail.
The implied message this creates is: This stuff is very, very complicated. I understand it. You do not. Therefore, obviously, I am smarter than you. So what I say must be correct, even if your feeble brain cannot understand why.
It doesn’t go out a full 20 years, but it does go to 2020, and it says that by then we’ll be at 10 nm feature sizes, roughly a quarter of what’s shipping today. Elsewhere it elaborates on how this will mean many hundreds of processors per chip, multi-terabit flash chips, and other wonders.
But you don’t have to understand all of that detail. You want to know what that chart really means? I’ll tell you. It means this, in a bright, happy, green:
Everything’s Fine!
We’ll just keep rolling down the road, with progress all the way. No worries, mate!
Why does it mean that? Because it’s a future roadmap. Any company publishing a roadmap that does not say “Everything’s Fine!” is clearly inviting everybody to short their stock. The enormous compromise that must be performed within a consortium of consortia clearly must say that, or agreement could not conceivably be reached.
That said, I note two things on this graph:
First, the historical points on the left really don’t say to me that the linear extrapolation will hold at that slope. They look like they’re flattening out. Another year or so of data would make that clearer one way or the other, but for now it doesn’t look too supportive of the extrapolated future predictions.
Second, a significant update for 2008 is noted as changing the slope from a 2.5-year cycle to a 3-year cycle of making improvements. In conjunction with the first observation, I’d expect future updates to increase the cycle length even more, gradually flattening out the slope, extending the period over which the improvements will be made.
The implication: Moore’s Law won’t end with a bang; it will end with a whimper. It will gradually fade out in a period stretching over at least two decades.
I lack the imagination to say what happens when things really flatten out; that will depend on a lot of things other than hardware or software technology. But in the period leading up to this, there are some things I think will happen.
First, computing will in general become cheaper – but not necessarily that much faster. Certainly it won’t be much faster per processor. Whether it will be faster taking parallelism into account we’ll talk about in a bit.
Second, there will be a democratization of at least some HPC: Everybody will be able to do it. Well before 20 years are out, high-end graphics engines will be integrated into traditional high-end PC CPUs (see my post A Larrabee in Every PC and Mac). That means there will be TeraFLOPS on everybody’s lap, at least for some values of “lap”; lap may really be pocket or purse.
Third, computing will be done either on one’s laptop / cellphone / whatever, or out in a bloody huge mist/fog/cloud-like thing somewhere. There may be a hierarchy of such cloud resources, but I don’t think anybody will get charged up about what level they happen to be using at the moment.
Those resources will not be the high-quality compute cycles most of the people in the room – huge HPC center managers and users – are usually concerned with. They’ll be garbage computing; the leftovers when Amazon or Google or Microsoft or IBM are finished doing what they want to do.
Now, there’s nothing wrong with dumpster-diving for computing. That, after all, is what many of the original clusters were all about. In fact, the first email I got after publishing the second edition of my book said, roughly, “Hey, love the book, but you forgot my favorite cluster – Beowulf.” True enough. Tom Sterling’s first press release on Beowulf came out two weeks after my camera-ready copy was shipped. “I use that,” he continued. “I rescued five PCs from the trash, hooked them up with cheap Ethernet, put Linux on them, and am doing [some complicated technical computing thing or other, I forget] on them. My boss was so impressed he gave me a budget of $600 to expand it!”
So, garbage cycles. But really cheap. In lots of cases, they’ll get the job done.
Fourth, as we get further out, you won’t get billed by how many processors, how much memory, or how many racks you use – but by how much power your computation takes. And possibly by how much bandwidth it consumes.
Then there’s parallelism.
I’m personally convinced that there will be no savior architecture or savior language that makes parallel processing simple or easy. I’ve lived through a good four decades of trying to find such a thing, with significant funding available, and nothing’s emerged. For languages in particular, take a look at my much earlier post series about there being 101 Parallel Languages, none of which are in real use. We’ve got MPI – a package for doing message-passing – and that’s about it. Sometimes OpenMP (for shared memory) gets used, but it’s a distant second.
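For concreteness, here is roughly what that message-passing style looks like – a minimal sketch using the mpi4py Python binding, which I’m picking purely for illustration (most production HPC codes call the C or Fortran MPI interfaces directly):

# Minimal MPI message-passing sketch (illustrative only).
# Run under an MPI launcher, e.g.: mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                    # which process am I?

if rank == 0:
    comm.send({"payload": 42}, dest=1, tag=0)    # rank 0 explicitly sends...
elif rank == 1:
    data = comm.recv(source=0, tag=0)            # ...and rank 1 explicitly receives
    print("rank 1 received", data)

All the data motion is spelled out by hand like that, which is a big part of why people keep wishing for a savior language.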
That lack of a savior is the bad news. The good news is that it doesn’t matter in many cases, because the data sets involved will be absolutely humongous. Genomes, sensor networks, multimedia streams, the entire corpus of human literature will all be out there. This will offer enormous amounts of the kinds of parallelism traditionally derided as “embarrassingly parallel” because it didn’t pose any kind of computer science challenge: There was nothing interesting to say about it because it was too easy to exploit. So despite the lack of saviors in architecture and languages, there will be a lot of parallel computing. There are people now trying to call this kind of computation “pleasantly parallel.”
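To show just how little ceremony this kind of parallelism needs, here’s a toy sketch using Python’s standard multiprocessing module. The word-counting “analysis” is made up for illustration; the point is that each chunk of data is processed completely independently, with no communication between workers at all:

# Pleasantly parallel: independent chunks, no communication between workers.
from multiprocessing import Pool

def count_words(chunk):
    # Stand-in for whatever per-chunk analysis you actually care about.
    return len(chunk.split())

chunks = [
    "one small piece of a huge corpus",
    "another completely independent piece",
    "and another",
]

if __name__ == "__main__":
    with Pool() as pool:
        counts = pool.map(count_words, chunks)   # farm the chunks out to worker processes
    print(sum(counts))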
Probably the biggest challenges will arise in getting access to the highest-quality, most extensive exemplars of such huge data sets.
The traditional kind of “hard” computer-science-y parallel problems may well still be an area of interest, however, because of a curious physical fact: The amount of power consumed by a collection of processing elements goes up linearly with the number of processors, but it goes up roughly as the square of the clock frequency. So if you can do the same computation, in the same time, with more processors that run more slowly, you use less power. This is much less macho than traditional massive parallelism. “I get twice as much battery life as you” just doesn’t compete with “I have the biggest badass computer on the planet!” But it will be a significant economic issue. From this perspective, parallel office applications – such as parallel browsers, and even the oft-derided Parallel PowerPoint – actually make sense, as a way to extend the life of your cell phone battery charge.
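For anyone who wants the back-of-the-envelope arithmetic behind that claim, here’s a tiny sketch, assuming (as above) that power per processor scales roughly with the square of its clock frequency, and ignoring all the constants, voltage details, and leakage a real analysis would need:

# Toy model: n processors each clocked at f/n, so total throughput stays constant.
# Per-processor power ~ clock**2, so total power ~ n * (f/n)**2 = f**2 / n.
def relative_power(num_procs, full_clock=1.0):
    per_proc_clock = full_clock / num_procs
    return num_procs * per_proc_clock ** 2

print(relative_power(1))   # 1.0   -- one processor at full clock
print(relative_power(2))   # 0.5   -- two at half the clock: same work per second, half the power
print(relative_power(4))   # 0.25  -- and so on

Double the processors at half the clock and, under this toy model, you halve the power for the same delivered computation – which is the whole economic argument for parallel browsers and Parallel PowerPoint.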
Finally, I’d like to remind everybody of something that I think was quite well expressed by Tim Bray, when he tweeted this:
Here it is, 2009, and I'm typing SQL statements into my telephone. This is not quite the future I'd imagined.
The future will be different – strangely different – from anything we now imagine.
* * * *
That was my presentation. I’ll put some notes about some interesting things I heard from others in a separate post.
2 comments:
I like your writing style. Comfortable, conversational, humorous.
I think this is spot on. I think I agree with nearly every word you said.
However... Although you make one allusion to it, this misses a huge point or topic area. What about the "high-quality compute cycles"?
In part, since that's where the money is, that's what I find interesting. Those "high-quality compute cycles" are corporate IT. It's also interesting since it's not as well defined. I think the whole Grid, HPC, Utility Computing and Web evolution into Cloud is fairly well defined and the roadmap is even fairly predictable and stable.
Far less obvious is what's going to happen to the back-end infrastructure... the databases and corporate infrastructure. Those "high-quality compute cycles" you spoke of. Certainly cloud and virtualization will continue their evolutionary march but their impact on Corporate IT is going to be fascinating to watch. This will literally be a clash of cultures with the corporate stodginess and risk aversion severely challenged by the compelling advantages of new technologies. There has always been evolution in IT but I think we'll see even more of it over the coming years.
Thanks for an enjoyable blog. GGK
Thanks for the kudos!
A little miscommunication here, though.
In the context of that post, I'd classify most corporate IT as "garbage cycles."
That was a talk to people who get the funding for big academic HPC "supercomputing centers." What they mean by high-quality involves really high memory bandwidth, major FP support, very high-bandwidth low-latency interconnects, etc.
Not the usual categorization, in which, say, zSeries would count as high quality. That's "high quality" along a different axis.
Thanks for reading and commenting.
Greg