Saturday, September 4, 2010

Intel Graphics in Sandy Bridge: Good Enough


As I and others expected, Intel is gradually revealing how much better the graphics in its next generation will be. AnandTech got an early demo part of Sandy Bridge and checked out the graphics, among other things. The results show that the "good enough" performance I argued for in my prior post (Nvidia-based Cheap Supercomputing Coming to an End) really will be good enough to sink third-party low-end graphics chipsets. So it's good enough to hurt Nvidia's business model and to make their HPC products fully carry their own development burden, raising prices notably.

The net is that for this early chip, with early device drivers, at a low but usable resolution (1024x768), there's adequate performance on games like "Batman: Arkham Asylum," "Call of Duty MW2," and a bunch of others, significantly including "World of Warcraft." And it'll play Blu-ray 3D, too.

AnandTech's conclusion is "If this is the low end of what to expect, I'm not sure we'll need more than integrated graphics for non-gaming specific notebooks." I agree. I'd add desktops, too. Nvidia isn't standing still, of course; on the low end they say they'll do 3D, too, and save power. But integrated graphics are, effectively, free. They'll be there anyway. Everywhere. And as a result, everything will be tuned to work best on them among the PC platforms; that's where the volumes will be.

Some comments I've received elsewhere on my prior post have been along the lines of "but Nvidia has such a good computing model and such good software support – Intel's rotten IGP can't match that." True. I agree. But.

There's a long history of ugly architectures dominating clever, elegant architectures that are superior targets for coding and compiling. Where are the RISC-based CAD workstations of 15+ years ago? They turned into PCs with graphics cards. The DEC Alpha, MIPS, Sun SPARC, IBM POWER, and others, all arguably far better exemplars of the computing art, have been trounced by x86, which nobody would call elegant. Oh, and the IBM zSeries, also high on the inelegant ISA scale, just keeps truckin' through the decades, most recently at an astounding 5.2 GHz.

So we're just repeating history here. Volume, silicon technology, and market will again trump elegance and computing model.



Postscript: According to Bloomberg, look for a demo at the Intel Developer Forum next week.

10 comments:

Anonymous said...

I suppose low-end chipsets really compete for the notebook market; the desktop is debatable. But is the notebook market really the market of the future? Netbooks and tablets are invading it like crazy, so a good move from Intel, but too late?

Noah Mendelsohn said...

I think you're somewhat underrating the "beauty" of the original System/360 instruction set. Well, let me temper that: there certainly were some bizarre warts on the side (Edit and Mark?), and things like memory-to-memory instructions proved a mess performance-wise. Nonetheless, I think it's fair to say that the core of the Register-to-Register and Register-to-Memory instruction set is reasonably orthogonal and clean. It certainly scaled for performance much better than supposedly nicer architectures like the VAX.

Now, what was done with that instruction set in the later generations includes some real architectural abominations.

Cheers.

RPG said...

@Daniel

It's the volume of chips, not IP cores, that matters. And few ARM-based SoCs (targeted at desktop/HPC) can stand up against x86 when compared on volume.

Anonymous said...

This blog brought to you by Intel?
I get the impression of an underlying anti-GPU, anti-NVIDIA theme running through many of these blog postings.

Greg Pfister said...

Nah, just following what the technology will inevitably produce. AMD's Fusion will do the same; it's just that Intel has a proof point right now. And it also means the GPUs will be built in, available to everybody to use all the time.

Actually, getting this comment is pretty interesting, given that I spent 31 years working in parts of IBM that considered Intel (and AMD) the enemy. It shows that I've apparently overcome some ground-in biases.

J said...

In light of your recent post about Larrabee not being dead, this seems rather sad...

At a group panel discussion with the Intel Fellows, essentially some of the smartest people at Intel, Intel's head graphics honcho was quizzed about Larrabee.

When asked if they ever expected to see a Larrabee-based graphics part coming out at all, the entire panel looked directly at Piazza, as he hunkered down in his stool.

"I honestly thought i'd get through two days without someone asking me that..." he said, followed by a simple, "I don't think so."


Read more: http://www.techradar.com/news/computing-components/graphics-cards/intel-larrabee-was-impractical--716960#ixzz0zkxC2ebe

J said...

Hey Greg, I don't recall deleting that comment. I visited your blog today and wondered where it was, thinking that possibly you were "in the know" and were trying to suppress the comment, which, of course, is at your discretion.

For your other readers, again, if Google lets me:


TechRadar quotes an Intel Architecture Group Director (there seem to be several), "I just think it's impractical to try to do all the functions in software in view of all the software complexity," he explained. "And we ran into a performance per watt issue trying to do these things. Naturally a rasterizer wants to be fixed function."

Greg Pfister said...

Hi, Broc.

Turns out Blogspot auto-magically decided your comment was spam. I informed it otherwise, so the original is there now, too, along with your second one.

Anyway, thanks for pointing me to that article. The Larrabee crew have clearly been repurposed to HPC, it looks like.

However, the blanket statement that "Naturally a rasterizer wants to be fixed function" seems a bit off the deep end for me. All new-ish graphics support is programmable -- DirectX demands it, as I understand the situation.

Performance per watt problems, though, may point to a more general issue: the inherent inefficiency of MIMD compared with SIMD, in those cases where SIMD can be used efficiently.
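To make that concrete, here's a minimal C sketch (illustration only; the SSE intrinsics and array size are just convenient assumptions, nothing Intel- or Nvidia-specific). The SIMD loop fetches and decodes one instruction for four data elements at a time, so the control overhead is amortized across lanes; a MIMD approach gets its parallelism from many independent instruction streams, each paying its own fetch, decode, and control cost.

    #include <stdio.h>
    #include <xmmintrin.h>  /* SSE intrinsics */

    #define N 1024

    /* Scalar version: one element per trip through the instruction stream.
       A MIMD machine parallelizes this by running many such streams,
       each with its own fetch/decode/control overhead. */
    static void add_scalar(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* SIMD version: one instruction stream drives four lanes at once,
       amortizing the fetch/decode/control energy over four elements. */
    static void add_simd(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; i += 4) {   /* assumes n is a multiple of 4 */
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
        }
    }

    int main(void)
    {
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        add_scalar(a, b, c, N);  /* same arithmetic... */
        add_simd(a, b, c, N);    /* ...but a quarter of the instructions issued */
        printf("c[10] = %f\n", c[10]);  /* prints 30.000000 */
        return 0;
    }

Of course, when the work doesn't line up that neatly -- divergent branches, irregular data -- the SIMD advantage evaporates, which is exactly the "where you can efficiently use SIMD" caveat.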

Greg

Anonymous said...

Of course we'll never see Larrabee implemented. Intel changed its name to Knights Ferry. Down the memory hole with Larrabee. Intel was never developing a new GPU, only Knights Ferry.

All hail Knights Ferry!

Greg Pfister said...

@DanielVS, @RPG -
Yes, ARM and similar things are certainly super-high volume. Intel is also clearly heading that way, judging from the recent IDF (http://goo.gl/XFGn). As noted, that's Nvidia's target, too. Not sure what AMD might be doing in that space.

But all that doesn't affect whether GPGPUs will continue to be subsidized. I don't think they will. Prices will rise to typical supercomputer numbers -- in fact, they already have, judging from various offerings at Nvidia's GPU technical conference. They're comparable to Cray.
