Tuesday, May 17, 2011

Sandy Bridge Graphics Disappoints


See the update at the end of this post: new drivers released.
Well, I'm bummed out.

I was really looking forward to purchasing a new laptop that had one of Intel's new Sandy Bridge chips. That's the chip with integrated graphics which, while it wouldn't exactly rock, would at least be adequate for games at midrange settings. No more fussing around comparing added discrete graphics chips, fewer scorch marks on my lap, and other associated goodness would ensue.

The pre-ship performance estimates and hands-on trials said that would be possible, as I pointed out in Intel Graphics in Sandy Bridge: Good Enough. This would have had the side effect of pulling the rug out from under Nvidia's volumes for GPUs, forcing the HPC market to pull its own weight, meaning it would have to carry traditional HPC price tags (see Nvidia-based Cheap Supercomputing Coming to an End). That would have been an earthquake, since most of the highest-end HPC systems now get their peak speeds from Nvidia CUDA accelerators, a situation due in no small part to their (relatively) low prices, which arise from high graphics volumes.

Then TechSpot had to go and do a performance comparison of low-end graphics cards, and later, just as a side addition, throw in measurements of Sandy Bridge graphics, too.

Now, I'm sufficiently old-fashioned in my language that I really try to avoid even marginally obscene terms, even if they are in widespread everyday use, but in this case I have to make an exception:

Damn, Sandy Bridge really sucks at graphics.

It's the lowest of the low in every case. It's unusable for every game tested (and they tested quite a few), unless you're on some time-dilation drug that makes less than 15 frames per second seem zippy. Some frame rates – at medium settings – are in single digits.

With Sandy Bridge, Intel has solidly maintained its historic lock on the worst graphics performance in the industry. This, by the way, is with the Intel i7 chips overclocked to 3.4GHz. That should also overclock the graphics (unless Intel is doing something I don't know about with the graphics clock).

Ah, but possibly there is a "3D" fix for this coming soon? Ivy Bridge, the upcoming 22nm shrink of Sandy Bridge (the Intel "tick" following the Sandy Bridge "tock"), has those wondrous new much-promoted transistors. Heh. Intel says Ivy Bridge will have – drum roll – 30% faster graphics than Sandy Bridge.

See prior marginal obscenity.

Intel does tend to sandbag future performance estimates, but not by enough to lift 30% up to 200-300%; that's what would be needed to produce what people were saying Sandy Bridge would do. Is that all we get from those "3D" transistors? The way the Intel media guys are going on about 3D, I expected Tri-Gate (which can be two- or five- or whatever-gate) to give me an Avatar-like mind meld or something.
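For a quick back-of-envelope sanity check, here's the arithmetic in a few lines of Python. The frame rates are made-up illustrations, not measurements; I'm just picking a baseline in the single-digit-to-mid-teens range TechSpot reported.

    # Hypothetical frame rates, just to show the scale of the gap.
    sandy_bridge_fps = 10                 # roughly what TechSpot saw at medium settings
    playable_fps = 30                     # a common rule of thumb for smooth gameplay

    ivy_bridge_fps = sandy_bridge_fps * 1.30              # Intel's promised ~30% uplift
    needed_uplift = playable_fps / sandy_bridge_fps - 1   # uplift needed to reach 30 fps

    print(ivy_bridge_fps)    # 13.0 fps - still nowhere near playable
    print(needed_uplift)     # 2.0, i.e. a 200% improvement

Thirteen frames per second is still not a game anyone wants to play.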

All that stuff about on-chip integrated graphics taking over the low-end, high-volume market for discrete graphics just isn't going to happen this year with Sandy Bridge, or later with Ivy Bridge. As further salt in the wound, Nvidia is even seeing a nice revenue uptick from selling discrete graphics add-ons to new Sandy Bridge systems. It's not that I have anything against Nvidia. I just didn't think that uptick, of all things, was going to happen.

This doesn't change my opinion that GPUs integrated on-chip will ultimately take over the low-end graphics market. As the real Moore's Law – the law about transistor densities, not clock rates – continues to march on, it's inevitable that on-chip integrated graphics will be just fine for low- and medium-range games. It just won't happen soon with Intel products.

Ah, but what about AMD? Their Fusion chips with integrated graphics, which they call APUs, are supposed to be rather good. Performance information leaked on message boards about their upcoming A4-3400, A6-3650 and A8-3850 APUs makes them sound as good as, well, um, as good as Sandy Bridge was supposed to be. Hm.

Several years ago I heard a high-level AMD designer say that people looking for performance with Fusion were going to be disappointed; it was strictly a cost/performance product. Things could have changed since then, but chip design lead times are still multi-year.

In any event, this time I think I'll wait until shipped products are tested before declaring victory.

Meanwhile, here I go again, flipping back and forth between laptop specs and GPU specs, as usual.

Sigh.


UPDATE May 23, 2011

Intel has just released new drivers for Sandy Bridge. The press release says they provide “up to 40% performance improvements on select games, support for the latest games like Valve’s Portal 2 and Stereoscopic 3D playback on DisplayPort monitors.”

At this time I don't know of test results that would confirm whether this really makes a difference, but if it’s real, and applies broadly enough, it might be just barely enough to make the Ivy Bridge chip the beginning of the end for low-end discrete graphics.


17 comments:

Anonymous said...

So glad to see you post again, Greg... is there a way to collect your articles from the new site, or at least post links here?

Anonymous said...

Nice that you're admitting you've been talking nonsense – as you do most of the time...

Ed Trice said...

The Sandy Bridge systems that we build all hit 5.0 GHz+, and we add the GTX 580 graphics card to the mix to really provide outstanding performance for video. See LiquidNitrogenOverclocking.com for some options that will probably change your mind about Sandy Bridge systems.

Ryan said...

Ed, I don't think Greg's opinion on Sandy Bridge's integrated graphics performance will be changed by your systems, which ADD AN EXTERNAL GRAPHICS CARD in order to improve performance. This is doubly true since what Greg really cares about is laptop performance.

Anonymous said...

Greg,

Love your posts. Do you have any thoughts on the architecture below for accelerating graphics?

http://www.hpcwire.com/hpcwire/2011-05-03/startup_launches_manycore_floating_point_acceleration_technology.html

Totally irrelevant or a possibility if integrated on the same die?

Greg Pfister said...

@Anonymous\1 - Collecting my posts is easy to do, since other than the one pointed to by my immediately prior post, there weren't any. It's been a fallow period, caused by a lack of interesting events and the arrival of two puppies who limited concentration; they're now house-trained and also (hopefully!) trained not to yap incessantly at every blown leaf, so at least that distraction has been subdued. More posts will come, fairly soon.

Greg Pfister said...

@Anonymous\2 - Thank you for your support.

Greg Pfister said...

@Ed & @Ryan - I agree with Ryan (mostly). A 5GHz Sandy Bridge is pretty spectacular, granted. However, I'm interested in how on-chip integration of decent graphics wipes out an existing volume market in GPUs, with implications that go up the food chain -- which is why my initial post on this was titled "Nvidia-based Cheap Supercomputing Coming to an End".


The laptop issue is just for my personal use / enjoyment, not the whole story.

Greg Pfister said...

@Anonymous\3 - I did see that one. In fact, that article has inspired another post, possibly my next. Not for the tech, which I don't have enough detail on to evaluate (but it seems dubious). The writeup pushed one of my sensitive buttons.

Anonymous said...

(I'm "@Anonymous\2"): Just to clarify, I was referring to all your posts discussing CUDA, and your "Nvidia-based Cheap Supercomputing Coming to an End" post in particular. Sorry if you're offended, but I plain don't like people talking about stuff they don't know, especially if despite of this they write pompously on topic, like you do. If you got your hands dirty at least a bit on GPGPU programming, you'd readily realize that considering Sandy Bridge as some sort of NVIDIA killer (especially after Larabee fiasco), then talking about how NVIDIA is stupid not to adopt OpenCL, etc. is just plain crap. Now, note that I'm not any sort of NVIDIA fanboy, but the plain fact is that they own GPGPU market at the moment, and that they are keeping working hard to improve their solutions. So, like it or not, anyone planning to do any sort of serious HPC-related work in coming years will just have to stick to NVIDIA, period.

ghostmonk said...

Anonymous\2 is a reminder of why open and free can often be a complete pain in the ass.

Greg, thanks for your comments on Sandy Bridge... I'm buying this chip pretty soon, and wasn't aware of this information.

Anonymous said...

@ghostmonk: If you pointed out where Greg was right about CUDA, and where I was wrong in stating that his write-ups are off the mark, then your opinion on my comments would have some credibility; but by just calling for censorship, not so.

Greg Pfister said...

@Anonymous\2 - From your response, I believe I've not communicated to you what I'm getting at with these posts.

I'm not saying OpenCL is better than CUDA, or Intel's graphics architecture is superior to Nvidia's, or Nvidia doesn't all but own the HPC side of GPGPU use. I agree with you that all those things are false.

All I am saying is that right now, users of Nvidia GPGPUs get a tremendous price break because their products are effectively subsidized by high-volume low-end products. And that high-volume, low-end section of the market is going to get sucked up into "free" on-chip integrated graphics.

Nvidia & CUDA aren't going away. They're just going to get more expensive, possibly a lot more expensive. Eventually. And not overnight. It's not going to start happening now with Sandy Bridge, as I originally thought, but it just might start (see update to post) with Ivy Bridge.

Greg Pfister said...

Oh, and @Anonymous\2 - I didn't mean my initial response to be dismissive. I count just reading my stuff as support. Agreement - feh, nice sometimes. And I'm glad you expanded on why you disagreed, since it helped me find something to clarify.

Kue said...

Greetings, Mr. Pfister. I ran across your blog while googling the A6-3400M. Good stuff. I am glad to see someone else is disappointed by Sandy Bridge (though I am thoroughly biased). Anyway, I ran across this article at AnandTech that rates the A8's performance. Thought it might lift your spirits when it comes to the Fusion APUs.

http://www.anandtech.com/show/4444/amd-llano-notebook-review-a-series-fusion-apu-a8-3500m/11

Greg Pfister said...

Thanks, Kue. Looks like A8-3500M + 6630M can make a very usable system. Now to search for a laptop that has it and otherwise does what I want. :-)

Admin said...

Thank you for your post, I appreciate your findings! I'm using my 2500K Sandy Bridge to do testing with OS X Lion etc., so it's going to be rather interesting to see how the Apple drivers hold up to WoW or something like that with the integrated graphics of Sandy Bridge.
