I recently had a glorious flashback to 2004.
Remember how, back then, when you got a new computer you would be just slightly grinning for a few weeks because all your programs were suddenly so crisp and responsive? You hadn't realized your old machine had a rubbery-feeling delay responding to your clicks and key-presses until zip! You booted the new machine for the first time, and wow. It just felt good.
I hadn't realized how much I'd missed that. My last couple of upgrades have been OK. I've gotten a brighter screen, better graphics, lighter weight, and so on. They were worth it, intellectually at least. But the new system zip, the new system crispness of response – it just wasn't there.
I have to say I hadn't consciously noticed that lack because, basically, I mostly didn't need it. How much faster do you want a word processor to be, anyway? So I muddled along like everyone else, all our lives just a tad more drab than they used to be.
Of course, the culprit denying us this small pleasure has been the flattening of single-thread performance wrought by the half-death of Moore's Law. Used to be, after a couple of years' delay you would naturally get a system that ran 150% or 200% faster, so everything just went faster. All your programs were rejuvenated, and you noticed, instantly. A few weeks later you were of course used to it. But for a while, life was just a little bit better.
That hasn't happened for nigh unto five years now. Sure, we have more cores. I personally didn't get much use out of them; none of my regular programs perked up. But as I said, I really didn't notice, consciously.
So what happened to make me realize how deprived I – and everybody else – have been? The Second Life client.
I'd always been less than totally satisfied with how well SL ran on my system. It was usable. But it was rubbery. Click to walk or turn and it took just a little … time before responding. It wasn't enough to make things truly unpleasant (except when lots of folks were together, but that's another issue). But it was enough to be noticeably less than great. I just told myself, what the heck, it's not Quake but who cares, that's not what SL is about.
Then for reasons I'll explain in another post, I was motivated to reanimate my SL avatar. It hadn't seen any use for at least six months, so I was not at all surprised to find a new SL client required when I connected. I downloaded, installed, and cranked it up.
Ho. Ly. Crap.
The rubber was gone.
There were immediate, direct responses to everything I told it to do. I proceeded to spend much more time in SL than I originally intended, wandering around and visiting old haunts just because it was so pleasant. It was a major difference, on the order of the difference I used to encounter when using a brand-new system. It was like those good old days of CPU clock-cranking madness. The grin was back.
So was this "just" a new, better, software release? Well, of course it was that. But I wouldn't have bothered writing this post if I hadn't noticed two other things:
First, my CPU utilization meter was often pegged. Pegged, as in 100% utilization, where flooring just one of my two CPUs reads only 50%. When I looked a little deeper, I saw that the single SL process was regularly over 50%. I've not looked at any of the SL documentation on this, but from that data I can pretty confidently say that this release of the SL client can make effective use of both cores simultaneously. It's the only program I've got with that property.
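(For the curious: what I'm describing – one process lighting up every core – is what any embarrassingly parallel, CPU-bound workload looks like on a utilization meter. Here's a minimal sketch in Python of that pattern, purely illustrative; it is not Second Life's actual code, and the work function is a made-up stand-in for real rendering or simulation work.)

```python
# Minimal sketch: one process saturating all cores, the pattern
# a per-process CPU meter shows as >50% on a two-core machine.
# Illustrative only -- not the SL client's actual implementation.
import multiprocessing as mp

def busy(n):
    # Stand-in for CPU-bound work: no I/O, no shared state.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    cores = mp.cpu_count()
    # One worker process per core; while pool.map runs, a system
    # utilization meter would read near 100% across all cores.
    with mp.Pool(processes=cores) as pool:
        results = pool.map(busy, [1_000_000] * cores)
    print(len(results), "chunks done on", cores, "cores")
```

The point of the sketch is just that saturating cores takes independent chunks of work and one scheduler handing them out – which is exactly what most desktop apps of that era didn't have.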
Second, my thighs started burning. Not literally. But that heat tells me when my discrete GPU gets cranking. So, this client was also exercising the GPU, to good effect.
Apparently, this SL client actually does exploit the theoretical performance improvements from graphics units and multiple cores that had been lying around unused in my system. I was, in effect, pole-vaulted about two system generations down the road – that's how long it's been since there was a discernible difference. The SL client is my first post-Moore client program.
All of this resonates for me with the recent SC09 (Supercomputing Conference 2009) keynote of Intel's Justin Rattner. Unfortunately it wasn't recorded by conference rules (boo!), but reports are that he told the crowd they were in a stagnant, let us not say decaying, business unless they got their butts behind pushing the 3D web. (UPDATE: Intel has posted video of Rattner's talk.)
Say What? No. For me, particularly following the SL experience above, this is not a "Say What?" moment. It makes perfect sense. Without a killer application, the chip volumes won't be there to keep down the costs of the higher-end chips used in non-boutique supercomputers. Asking that audience for a killer app, though, is like asking an industrial assembly-line designer for next year's toy fashion trends. Killer apps have to be client-side and used by the masses, or the volumes aren't there.
Hence, the 3D Web. This would take the kind of processing in the SL client, which can take advantage of multicore and great graphics processing, and put it in something that everybody uses every day: the browser. Get a new system, crank up the browser, and bang! you feel the difference immediately.
Only problem: Why does anybody need the web to be 3D? This is the same basic problem with virtual worlds: OK, here's a virtual world. You can run around and bump into people. What, exactly, do you do there? Chat? Bogus. That's more easily done, with more easily achieved breadth of interaction, on regular (2D) social networking sites. (Hence Google's virtual world failure.)
There are things that virtual worlds and a "3D web" can, potentially, excel at; but that's a topic for a later post.
In the meantime, I'll note that in a great crawl-first development, there are real plans to use graphics accelerators to speed up the regular old 2D web, by speeding up page rendering. Both Microsoft and Mozilla (IE & Firefox) are saying they'll bring accelerator-based speedups to browsers (see CNET and Bas Schouten's Mozilla blog) using Direct2D and DirectWrite to exploit specialized graphics hardware.
One could ask what good it is to render a Twitter page twice as fast. (That really was one of the quoted results.) What's the point? Asking that, however, would only prove that One doesn't Get It. You boot your new system, crank up the browser and bam! Everything you do there, and you do more and more there, has more zip. The web itself – the plain, old 2D web – feels more directly connected to your inputs, to your wishes; it feels more alive. Result?
The grin will be back. That's the point.
4 comments:
Greg wrote:
---
Of course, the culprit denying us
this small pleasure has been the
flattening of single-thread
performance wrought by the half
death of Moore's Law.
---
Hi Greg! I'm thinking that a significant factor might also be the long-flat curve of disk seek time vs. calendar year. Certainly the Windows systems I run seem to be disk-bound when they're at their slowest (not to say that flattening of the single-core curve isn't an important and less well-understood issue too).
Turning around, one might say it's interesting that the ratio of disk performance to CPU performance isn't changing as quickly as it once was (i.e., because neither of them is progressing very much! Maybe SSDs will invert the relative rates of progress; not sure).
Cheers.
Noah
Hi, Noah.
Yes, certainly disk seek time has kept PCs (& Macs) from being zippier, and I hope SSDs do help. It alone might make a bigger real difference than several old generations of silicon.
But there was still zip despite disk relative slowness in the 90s and early 2000s. That zip was lost. We need post-Moore programs to get it back.
Greg
Well, try the Intel G2 SSD on your system and you will find that everything is _much_ faster. In fact I have never experienced an upgrade that made such a big difference as this SSD has.
For me, the Ho. Ly. Crap apps on multicore are dmake and 7zip - in fact, I use 7zip (on Windows it integrates cleanly with Explorer, CLI on other OSs) whenever possible now. Compressing and decompressing are fast as blazes, as is compiling. As for zippier, if you use Eclipse you'll definitely notice a difference.
Photoshop CS4 really flies. I can't speak for Adobe's other apps, but the integration with Bridge is threaded, so batch-processing images uses all cores. Nvidia's drivers are tailored to the 3D capabilities of the CS4 suite (on all cards, not just Quadro), and I'm running a GTX-285 GPU. I don't do 3D professionally, but playing around with it is surreal compared to my previous desktop-PC experience.
I realize this isn't most people's experience with a multi-core upgrade. But I do believe that it will be, within a few short years, as more applications catch up.
It's been a few months now and I'm still grinning like an idiot, since it turns out most of the major applications I use were already good-to-go for multicore. The five years since my last PC upgrade feel like a decade, performance-wise.