Tuesday, January 11, 2011

Intel-Nvidia Agreement Does Not Portend a CUDABridge or Sandy CUDA


Intel and Nvidia reached a legal agreement recently in which they cross-license patents, stop suing each other over chipset interfaces, and oh, yeah, Nvidia gets $1.5B from Intel in five easy payments of $300M each.

This has been covered in many places, like here, here, and here, but Ars Technica in particular originally led with a headline about a Sandy Bridge (Intel GPU integrated on-chip with CPUs; see my post if you like) using an Nvidia GPU as its graphics engine. Ars has since retracted that (see the web page referenced above), replacing the original page. (The URL still reads "bombshell-look-for-nvidia-gpu-on-intel-processor-die.")

Since that's been retracted, maybe I shouldn't bother bringing it up, but let me be more specific about why this is wrong, based on my reading of the actual legal agreement (redacted, meaning a confidential part was deleted). Note: I'm not a lawyer, although I've had to wade through lots of legalese over my career; so this is based on an "informed" layman's reading.

Yes, they have cross-licensed each others' patents. So if Intel does something in its GPU that is covered by an Nvidia patent, no suits. Likewise, if Nvidia does something covered by Intel patents, no suits. This is the usual intention of cross-licensing deals: Each side has "freedom of action," meaning they don't have to worry about inadvertently (or not) stepping on someone else's intellectual property.

It does mean that Intel could, in theory, build a whole dang Nvidia GPU and sell it. Such things have happened historically (IBM mainframe clones, X86 clones), usually without cross-licensing, but they are uncommon. As a practical matter, wholesale inclusion of one company's processor design into another company's products is a hard job. There is a lot to a large digital widget not covered by the patents: scores of undocumented, implementation-specific corner cases that can mess up full software compatibility, without which there's no point. Finding them all is a massive undertaking.

So switching to a CUDA GPU architecture would be a massive undertaking, and furthermore it's a job Intel apparently doesn't want to do. Intel has its own graphics designs, with years of the design / test / fabricate pipeline already in place; and between the ill-fated Larrabee (now MIC) and its own integrated GPUs and media processors, Intel has demonstrated that it really wants to do graphics in house.

Remember, what this whole suit was originally all about was Nvidia's chipset business – building stuff that connects processors to memory and IO. Intel's interfaces to the chipset were patent protected, and Nvidia was complaining that Intel didn't let Nvidia get at the newer ones, even though they were allegedly covered by a legal agreement. It's still about that issue.

This makes it surprising that, buried down in section 8.1, is this statement:

"Notwithstanding anything else in this Agreement, NVIDIA Licensed Chipsets shall not include any Intel Chipsets that are capable of electrically interfacing directly (with or without buffering or pin, pad or bump reassignment) with an Intel Processor that has an integrated (whether on-die or in-package) main memory controller, such as, without limitation, the Intel Processor families that are code named 'Nehalem', 'Westmere' and 'Sandy Bridge.'"

So all Nvidia gets is the old FSB (front side bus) interfaces. They can't directly connect to Intel's newer processors, since those interfaces are still patent protected, and those patents aren't covered. They have to use PCI Express, like any other IO device.

So what did Nvidia really get? They get bupkis, that's what. Nada. Zilch. Access to an obsolete bus interface. Well, they get bupkis plus $1.5B, which is a pretty fair sweetener. Seems to me that it's probably compensation for the chipset business Nvidia lost when there was still a chipset business to have, which there isn't now.

And both sides can stop paying lawyers. On this issue, anyway.

Postscript

Sorry, this blog hasn't been very active recently, and a legal dispute over obsolete busses isn't a particularly wonderful re-start. At least it's short. Nvidia's Project Denver – sticking a general-purpose ARM processor in with a GPU – might be an interesting topic, but I'm going to hold off on that until I can find out what the architecture really looks like. I'm getting a little tired of just writing about GPUs, though. I'm not going to stop that, but I am looking for other topics on which I can provide some value-add.

5 comments:

Sam Martin said...

They get $1.5 billion to kick start their new venture with ARM, which as we now know will run Windows 8. Taken together looks like a very good partnership and excellent windfall to get them going. I wouldn't call that bupkis.

DanielVS said...

I remember commenting that the way to go for Nvidia was ARM – the volumes and a promising market to sustain HPC development. With a fresh $1.5B and strong will, I really believe we will see a PC with ARM sooner than previously expected.

It was intriguing, though, to read Intel's statement about this ARM race on Ars Technica: "Intel on the Windows/ARM port: it could be a good thing for us". My interpretation is that they are indeed conceding direct competition from ARM cores in their (Intel's) market, not the contrary (which was their original plan with Atom, I suppose).

DanielVS said...

Ouch, comment was cut somehow...

Anyway, what was written: good to read new content here, no matter how long (or sooner) it takes. Great writing, Greg!

Cheers,

Greg Pfister said...

One thing about Nvidia & ARM:

People seem to forget that Nvidia already has a good ARM business, with a good ARM implementation. In particular, Nvidia's Tegra 2 is a dual-core ARM that powers the Motorola ATRIX and LG Star (and, I think, others).

Oh, and @Sam Martin -- Bupkis. That's humor. At least for me. Especially when I say Bupkis + $1.5B.

Anonymous said...

I don't know what the power draw for ARM + Nvidia GPU chips will be, but I worked on a design which used an ARM11 variant running at 1 GHz. Intel sold the ARM stuff to Marvell, and replaced the ARM core in that chip with an IA32 core. The power draw went from 2.5 watts to 12.5 watts.

Nothing like an instruction set that is state of the art, 1977, to really get the electrons flowing.
