
Intel's 'Larrabee' Graphics Chip is Really a Bunch of CPUs?


Jason Dunn
08-04-2008, 09:00 PM
http://news.cnet.com/8301-13924_3-10005391-64.html?hhTest=1&tag=nefd.top

"Intel has disclosed details on a chip that will compete directly with Nvidia and ATI and may take it into unchartered technological and market-segment waters. Larrabee will be a stand-alone chip, meaning it will be very different than the low-end--but widely used--integrated graphics that Intel now offers as part of the silicon that accompanies its processors. And Larrabee will be based on the universal Intel x86 architecture."

[image: http://images.thoughtsmedia.com/resizer/thumbs/size/600/dht/auto/1217874904.usr1.jpg]

I was wondering what Intel was up to, and it seems they're doing what Intel does best: CPU stuff. Larrabee is really going to be a bunch of CPUs on a card, not the GPU-based solution that early speculation thought it was going to be. CPUs tend to be vastly less efficient at 3D graphics than dedicated GPUs, which is why the industry has adopted dedicated GPUs as the solution of choice for 3D gaming. Intel's approach is quite interesting - on the one hand, it seems foolish to ask a CPU to do a GPU's job, and having eight CPUs on a card might be an expensive, power-hungry approach.

On the other hand, if Larrabee could be leveraged to do other CPU-related tasks (such as video encoding), suddenly we'd have a graphics solution that could have a huge impact on the overall performance of the system. Could you imagine encoding H.264 video, using the proper software, with 12 CPU cores? That would result in some serious performance. I'm really curious about what sort of price points Larrabee is going to come in at, and how easily these cards will function in the computer systems we have today. Looks like we have a while to wait: Larrabee won't hit until 2009 or 2010.
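Just to illustrate the kind of parallelism being talked about here (this is not Larrabee code, and encode_frame below is a made-up stand-in for real encoder work), here is a minimal C++ sketch of farming frames out to however many cores the machine reports:

// Minimal sketch: split "frame encoding" across all available cores.
// encode_frame() is a hypothetical stand-in for real H.264 encoder work.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

static void encode_frame(int frame) {
    // Placeholder for the expensive per-frame work a real encoder would do.
    volatile double x = 0;
    for (int i = 0; i < 1000000; ++i) x += frame * 0.000001;
}

int main() {
    const int total_frames = 240;
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        // Each worker takes every cores-th frame (frames c, c+cores, c+2*cores, ...).
        workers.emplace_back([=] {
            for (int f = static_cast<int>(c); f < total_frames; f += cores) encode_frame(f);
        });
    }
    for (auto& w : workers) w.join();

    std::printf("Encoded %d frames on %u cores\n", total_frames, cores);
    return 0;
}

The more cores the runtime reports, the more frames are in flight at once, which is exactly why a card full of x86 cores is attractive for this kind of workload.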

Hooch Tan
08-05-2008, 01:34 AM
ArsTechnica has a good article discussing some technical aspects of Larrabee here (http://arstechnica.com/news.ars/post/20080804-larrabee-intels-biggest-leap-ahead-since-the-pentium-pro.html).

Some things of note:
1) Larrabee will interface with the computer via PCIe and a software "driver."
2) The card will contain its own memory, separate from system memory, much like most discrete video cards have dedicated memory.
3) It will be Direct3D and OpenGL compatible! (See the sketch below.)
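By way of illustration (this is just generic OpenGL, nothing Larrabee-specific, and it assumes you have GLFW and an OpenGL driver installed), an application would see the card the same way it sees any other GPU exposed through a driver; it can simply ask the API who is doing the rendering:

// Minimal sketch: from the application's side, a Direct3D/OpenGL-compatible
// card is just whatever the driver reports. Requires GLFW plus an OpenGL driver.
#include <cstdio>
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;

    // Create a small hidden window just to get a current OpenGL context.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    // The driver fills these strings in; on a Larrabee-style card they would
    // presumably name Intel's part instead of an ATI or NVIDIA GPU.
    std::printf("Vendor:   %s\n", (const char*)glGetString(GL_VENDOR));
    std::printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER));
    std::printf("Version:  %s\n", (const char*)glGetString(GL_VERSION));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}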

I recall seeing some videos about the capabilities of Larrabee, and it definitely looks like it's going to be something fun that competes quite effectively with ATI and NVIDIA. I guess parallelism is going to be the hot topic for the next few years.

Felix Torres
08-05-2008, 02:14 PM
Oh, how jaded we've become that a cluster-on-a-chip no longer excites us...;)

There's a bit more to Larrabee than just a bunch of cpus on a card, as you well know.
Check the follow-up from the yellow rag:
http://news.cnet.com/8301-13924_3-10006617-64.html?hhTest=1&tag=nefd.top

Anyway, the comment about cpu vs gpu reminded me of the days of the RISC vs CISC wars. May I suggest that a processor is a processor is a processor? That the real issue is not what you call it, but rather *how* the transistors on the chip are used?

If you look at the basic architecture of Larrabee, you may notice an abundance of vector processing units (one per core, in fact). These units roughly correspond to the shader units in a more traditional gpu, so thinking of Larrabee as a gpu with on-board scalar processing support would be about as right as calling it a bunch of cpus with vector processor support.

To me it seems Larrabee is a pure 50/50 hybrid of a scalar, general-purpose cpu and a vector-optimized programmable gpu. Both Nvidia and ATI are headed that way by adding more and more general-purpose programmability to their vector units; it's just that Intel got there first (with the announcement, at least), and instead of some funky proprietary instruction set (cough*Nvidia*cough) they've pre-empted AMD's likely approach of using x86 instructions for the general-purpose coding of the processor.

Looking at the Larrabee architecture, what if we think of each vector unit as a shader? Then you have a gpu with 48 shaders. Sounds decent enough, no? Now, each of those shaders has an associated scalar unit and a dedicated level-one cache, and the clock rate is likely to run at something like 3-4 times the clock rate of current gpus, so those 48 "shaders" might be as effective as 192 dedicated shaders. Remember, what we call shaders are really just special-purpose vector processors; the terminology may be different and the designs specialized, but a vector processor is a vector processor no matter what you call it (shader, SPU, whatever), and ultimately it's all about transistor counts and ops per cycle.
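To make the "a shader is just a vector processor" point concrete, here is the same multiply-add written once as scalar code and once with 4-wide vector intrinsics. This is plain SSE on a desktop CPU, not Larrabee's own (wider) vector instruction set; it's only a sketch of the idea:

// Sketch of scalar vs. vector (SIMD) execution of the same multiply-add.
// Uses 4-wide SSE for illustration; Larrabee's vector units were to be wider.
#include <cstdio>
#include <xmmintrin.h>  // SSE intrinsics

// Scalar: one element per loop iteration.
void madd_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = a[i] * b[i] + 1.0f;
}

// Vector: four elements per loop iteration, one vector op each.
void madd_vector(const float* a, const float* b, float* out, int n) {
    const __m128 one = _mm_set1_ps(1.0f);
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), one));
    }
}

int main() {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float s[8], v[8];

    madd_scalar(a, b, s, 8);
    madd_vector(a, b, v, 8);  // n must be a multiple of 4 in this sketch

    for (int i = 0; i < 8; ++i)
        std::printf("%.0f %.0f\n", s[i], v[i]);
    return 0;
}

The work done is identical; the vector version simply retires four results per instruction, which is exactly the economy a shader unit (or one of Larrabee's vector units) is built around.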

If (big if) the performance of the individual vector units/faux shaders approaches the performance of a hardwired shader, graphics-mode performance should be nothing to sneer at. But, of course, Larrabee is supposed to be about more than graphics; there are all those scalar units...

Intel is taking an interesting road here. It may pay off or it may not, but it's going to be very educational. Anyway, do consider some of the oddities of the design; namely, that the scalar cores started out as *original* Pentium cores, the last x86 that required in-order coding. (It seems the premium paid for out-of-order execution doesn't pay off on many-core designs.) So, first Intel backtracked from P4 to P3 design principles when it introduced the Core architecture, and now they're going way back to the future on a separate timeline with a 64-bit Pentium 1 design. (Over-simplifying, I know...)

Simpler and cleaner is better in the highly parallel world, and the Pentium 1 is the most RISC-like core design Intel had in inventory, so it makes sense to start there. But it does make you wonder what Larrabee could do to Itanium in the workstation/compute-engine space. After all, those aren't hardwired shader units; those are full general-purpose vector units. A whole gang of semi-independent processors linked by an efficient ring data path (very much like a network, no?)...

Isn't that, in effect, a cluster-on-a-chip?:cool:
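As a toy model of that "cluster linked by a ring" picture (purely illustrative: threads standing in for cores, a shared slot standing in for a ring stop, nothing to do with Larrabee's actual interconnect), here is a message being passed from core to core around a ring until it has made a few laps:

// Toy model of a ring interconnect: N "cores" pass a message to their
// right-hand neighbour. Threads stand in for cores; this is illustration only.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    const int nodes = 8;      // "cores" on the ring
    const int laps  = 3;      // how many full trips the message makes

    std::mutex m;
    std::condition_variable cv;
    int holder = 0;           // which node currently holds the message
    int hops   = 0;           // total hops taken so far

    std::vector<std::thread> ring;
    for (int id = 0; id < nodes; ++id) {
        ring.emplace_back([&, id] {
            while (true) {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return holder == id || hops >= nodes * laps; });
                if (hops >= nodes * laps) return;   // message retired, stop

                std::printf("node %d forwards hop %d\n", id, hops);
                ++hops;
                holder = (id + 1) % nodes;          // pass to the next stop on the ring
                cv.notify_all();
            }
        });
    }
    for (auto& t : ring) t.join();
    return 0;
}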

Could be interesting if Intel aims at Nvidia and ends up hitting Itanium, no?
As I said: educational...

Jason Dunn
08-05-2008, 09:15 PM
Oh, how jaded we've become that a cluster-on-a-chip no longer excites us...;)

There are just too many unknowns at this point. How much will it cost? How much power will it need, and how much heat will it generate? How much performance can we really expect to see? 2009/2010 is an eternity in the tech world - what if, two years from now, Intel never even brings this to market because it turned out to be too expensive and no one was interested in it?

This is a vapourware announcement, pure and simple. I'm excited by some of the possibilities here, but until Intel puts some meat on these bones, I'm not going to get too optimistic...