A computer components & hardware forum. HardwareBanter


Intel presentation reveals the future of the CPU-GPU war



 
 
#1
April 11th 07, 05:00 PM
Posted to: comp.sys.intel, alt.comp.periphs.videocards.ati, alt.comp.periphs.videocards.nvidia, comp.sys.ibm.pc.hardware.video, alt.comp.hardware.amd.x86-64
AirRaid (Posts: 126)

http://www.beyond3d.com/content/articles/31


Intel presentation reveals the future of the CPU-GPU war

Published on 11th Apr 2007, written by TeamB3D for Consumer Graphics -
Last updated: 11th Apr 2007

Introduction

Back in February we reported that Intel's Douglas Carmean, the new
Chief Architect of its Visual Computing Group (VCG), the group in
charge of GPU development, had been touring universities giving a
presentation called "Future CPU Architectures -- The Shift from
Traditional Models". Since then he's added a few more major
university stops, and now the feared B3D ninjas have caught up with
him. Our shadow warriors have scored a copy of Carmean's presentation,
and we've selected the juicy bits for your enjoyment and edification
regarding the showdown that Intel sees as already underway between CPU
and GPU makers.

http://www.beyond3d.com/images/artic...Image1-big.jpg
http://www.beyond3d.com/images/artic...ure/Image2.jpg
http://www.beyond3d.com/images/artic...ure/Image3.jpg


After a fairly standard review of CPU development over the last thirty
years, a serpent is detected in the CPU boys' Garden of Eden,
threatening their supremacy: "CPU profit margins are decreasing. GPU
margins are increasing." As the old saying goes, "Follow the money!"
and you'll rarely be led astray. But where has this serpent come
from?


http://www.beyond3d.com/images/artic...ure/Image4.jpg
http://www.beyond3d.com/images/artic...ure/Image5.jpg
http://www.beyond3d.com/images/artic...ure/Image6.jpg
http://www.beyond3d.com/images/artic...ure/Image7.jpg


Ah ha, NVIDIA and ATI are revealed as the wannabe usurpers, and the
GPU programmability trends that began with 2001's NV20 DX8
capabilities have now grown into enough of a threat to gain even the
attention of mighty Intel. Given that Carmean first began giving the
original form of this presentation in 2005, one might wonder how large
a part the rationale displayed here played in AMD's acquisition of
ATI, which was first proposed in December of that year.


http://www.beyond3d.com/images/artic...ure/Image8.jpg
http://www.beyond3d.com/images/artic...ure/Image9.jpg
http://www.beyond3d.com/images/artic...re/Image10.jpg
http://www.beyond3d.com/images/artic...re/Image12.jpg


Now the clues as to where Intel's VCG are going with their graphics
architecture appear. You're asked to visualise an in-order 4-thread
'throughput' core at 10mm² and consuming only 6.25W. In theory, that'd
look like a pretty weak CPU, with less than one third the single-
threaded performance of current processors. The catch, however, is
that they'd strap on a super-wide Vec16 FPU! It would likely be
programmer-controlled via new instructions, so you'd use it as you see
fit, but for graphics it would make some sense to think of it as
working on scalar operations for four quads (2x2 pixels) at a time.
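
To make that quad mapping concrete, here's a quick C-style sketch of
what a single 16-wide multiply-add might look like, one pixel per lane
and four 2x2 quads packed side by side. To be clear, the vec16 type
and the lane layout below are purely our own illustration, not
anything lifted from Carmean's slides:

/* Purely illustrative: a hypothetical 16-wide vector register,
   one float per pixel, four 2x2 quads in lanes 0-3, 4-7, 8-11, 12-15. */
typedef struct { float lane[16]; } vec16;

/* One "Vec16" fused multiply-add, r = a * b + c. On the kind of
   hardware described above this would be a single wide instruction;
   here it is spelled out as a loop over the 16 lanes. */
static vec16 vec16_fma(vec16 a, vec16 b, vec16 c)
{
    vec16 r;
    for (int i = 0; i < 16; ++i)
        r.lane[i] = a.lane[i] * b.lane[i] + c.lane[i];
    return r;
}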

Now, pack a given die area with enough of those small cores, Intel
says, and voilà: a multi-threaded, very-wide vector processor that
scales according to understood CMP and CMT ideology. But it's also
one that might require very significant compiler and software
engineering effort to run fast, given what we know of current CPU and
GPU architectures.

It should once again be noted that each throughput processor would
have ~30% the performance of a traditional CPU for single-threaded
code, according to the slides, but for only 1/5th of the area even
though it hosts a Vec16 unit. So again, the scaling opportunity of an
architecture like that seems somewhat promising, even if not all
applications it could run fully exploit the new FPU. Legacy code
obviously wouldn't benefit at all from it.
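
The back-of-the-envelope arithmetic, using only the numbers quoted in
the slides (our sums, not Intel's), goes like this:

#include <stdio.h>

/* Rough scaling arithmetic from the figures above: a throughput core
   is ~1/5 the area of a traditional core, delivers ~30% of its
   single-threaded performance, and carries a 16-lane FPU. */
int main(void)
{
    const double cores_per_big_core_area = 5.0;
    const double single_thread_ratio     = 0.30;
    const int    vec_lanes_per_core      = 16;

    /* Aggregate scalar throughput per big-core-sized patch of die,
       assuming the workload is threaded enough to keep every core busy. */
    printf("aggregate scalar throughput: %.1fx a traditional core\n",
           cores_per_big_core_area * single_thread_ratio);        /* 1.5x */

    /* Peak FP lanes available in that same patch of die. */
    printf("FP lanes per big-core area:  %.0f\n",
           cores_per_big_core_area * (double)vec_lanes_per_core); /* 80 */
    return 0;
}

So a well-threaded workload gets roughly 1.5x the scalar throughput
per unit area, and vector-friendly code gets far more than that,
which is presumably the whole bet.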

So the question is, does the core support x86 instructions at all? If
single-threaded performance is still roughly acceptable, it might make
some sense for it to do so, and then you could think of the Vec16 FPU
as an 'on-core' coprocessor that exploits VLIW extensions to the x86
instruction set. Or, the entire architecture might be VLIW with
absolutely no trace of x86 in it. Obviously, this presentation doesn't
give us a clear answer on the subject. And rumours out there might
just be speculating on Larrabee being x86, so that doesn't tell us
much either.

http://www.beyond3d.com/images/artic...re/Image13.jpg
http://www.beyond3d.com/images/artic...re/Image14.jpg
http://www.beyond3d.com/images/artic...re/Image15.jpg


Add in thread synchronisation, cross-thread communication and a
completely shared cache, and the threads have an efficient means to
communicate while processing. It does look quite different from what
Intel is researching with Polaris, aka the Terascale Initiative, but
that doesn't mean it couldn't be quite efficient indeed. That remains
to be seen, of course.
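
For readers who haven't written this sort of code, the appeal is that
ordinary loads and stores through a shared cache become the
communication mechanism. A trivial host-side analogy (ours, and
obviously not Intel's actual programming model) is one thread handing
a value to another through shared memory, using plain pthreads:

#include <pthread.h>
#include <stdio.h>

/* Two threads passing a value through ordinary shared memory, which
   on the design described above would live in the shared cache. */
static int payload;
static int ready;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    payload = 42;                       /* write the data        */
    ready = 1;                          /* then raise the flag   */
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)
        pthread_cond_wait(&cv, &lock);  /* sleep until signalled */
    printf("consumer got %d\n", payload);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

The interesting question is how much cheaper the hardware can make
that hand-off when the cache really is shared between the threads.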

The slides do mention fixed-function units, but that doesn't really
mean much, and it's hard to say how much of a focus Intel will have on
implementing those blocks efficiently - especially so since these
would be unused during non-graphics processing, and the previous
slides clearly pointed out the advance of GPGPU as one of the key
reasons behind the development of this new architecture for Intel.

It is not unthinkable that Intel would try to maximize the amount of
work done in these processors, rather than in fixed-function units.
For example, many of the operations performed in the ROPs, such as
blending, could be done there (see the sketch below). Triangle setup wouldn't be too hard
either. But what about rasterization, antialiasing, texture addressing
and anisotropic filtering, etc.? There's more to a good graphics
processor than big SIMD units and high aggregate bandwidth but it's a
big step in the right direction, obviously.
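
Blending is a good example of how thin that line is: the standard
'over' operator is just a handful of multiply-adds per pixel, exactly
the sort of work a wide FPU soaks up. A scalar sketch of it (again our
own illustration, nothing from the slides):

/* The standard "over" alpha blend, written as plain scalar code.
   On the architecture described above the same arithmetic would be
   issued across the Vec16 lanes, one pixel per lane, instead of
   living in a fixed-function ROP. */
typedef struct { float r, g, b, a; } rgba;

static rgba blend_over(rgba src, rgba dst)
{
    rgba out;
    const float inv = 1.0f - src.a;
    out.r = src.r * src.a + dst.r * inv;
    out.g = src.g * src.a + dst.g * inv;
    out.b = src.b * src.a + dst.b * inv;
    out.a = src.a         + dst.a * inv;
    return out;
}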

And it's not like the described architecture would really be a
traditional GPU anyway, with only 4 threads per core (arguably, that
should be compared to G80's 12 warps/multiprocessor) and a huge cache!
Either way, it will be quite interesting to see what Intel comes up
with for the "fixed-function units" part of the chip. If it's good
enough, this might be a real competitor in the 3D space. Otherwise,
it'd likely only compete for GPGPU mindshare.

http://www.beyond3d.com/images/artic...re/Image16.jpg

http://www.beyond3d.com/images/artic...re/Image17.jpg
http://www.beyond3d.com/images/artic...re/Image18.jpg

http://www.beyond3d.com/images/artic...re/Image19.jpg
http://www.beyond3d.com/images/artic...re/Image20.jpg

http://www.beyond3d.com/images/artic...re/Image21.jpg
http://www.beyond3d.com/images/artic...re/Image22.jpg



Now here's where things really get interesting. Intel provides its
vision of where the fault lines and relative advantages for CPU vs GPU
lie, by application type. It is particularly interesting to note that
they place video processing firmly in the CPU camp, and yet all
current premium video solutions for high-end codecs rely on GPU power
to accelerate this function smoothly. Of course, part of that is
dedicated silicon for the decoding, but many of the video quality
enhancements on G80 are done in the shader core, presumably through
CUDA!

It should be noted that one of the points Intel brings forward is
that GPUs are weak at "communication between elements". That has
traditionally been true, but it is certainly also one of the things
that CUDA's Parallel Data Cache, aka Shared Memory, is trying to fix
(a minimal sketch follows below). NVIDIA's goal there was definitely
to increase their addressable market. It won't fix the problem
completely. GPUs are still awful at
*creating* complex data structures, for example, among many other
things. But it's a step in the right direction, and it highlights that
NVIDIA and AMD are ready to change how their GPUs work to get all
those GPGPU dollars, just like Intel is ready to change how its CPUs
work to try and make sure that doesn't happen. Finance is another area
where Intel might be underestimating GPU vendors, but then again, that
probably depends on how you define Finance...
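
For the curious, this is roughly what that Shared Memory cooperation
looks like from the programmer's side: a minimal CUDA sketch (our own,
written against the G80-era programming model, not NVIDIA sample code)
in which the 256 threads of a block stage their inputs in the Parallel
Data Cache and cooperate on a single sum:

// Launch with 256 threads per block. Each block reduces 256 inputs
// to one partial sum, communicating through __shared__ memory, i.e.
// exactly the "communication between elements" being discussed.
__global__ void block_sum(const float *in, float *out)
{
    __shared__ float buf[256];
    const int tid = threadIdx.x;

    buf[tid] = in[blockIdx.x * 256 + tid]; // one element per thread
    __syncthreads();                       // make it visible block-wide

    // Tree reduction: each step halves the number of active threads.
    for (int stride = 128; stride > 0; stride >>= 1) {
        if (tid < stride)
            buf[tid] += buf[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = buf[0];          // one partial sum per block
}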


There's also a fairly strong implication in these slides that there
could be a serious struggle for the hearts and minds of ISVs brewing
dead ahead, and their willingness (or not) to be convinced to "drag
applications to the left" could be a major factor in how events play
out. Certainly the GPU boys and their devrel teams are very familiar
with that kind of battleground. Interestingly, one of our ninjas
reports having taken a few of Intel's acolytes aside and found a
nearly staggering lack of appreciation for just how important and
resource-intensive the software development infrastructure and
support side can be. One hopes for their sake that the senior people
have a finer appreciation of this element. But when one remembers how
many games patched for dual-core CPUs only last year noted that
Intel's HT technology (introduced in 2002!) also saw significant
benefits... well, let's just say that confidence on that point is
hard to come by.


http://www.beyond3d.com/images/artic...re/Image23.jpg
http://www.beyond3d.com/images/artic...re/Image24.jpg

Heaven forbid anyone should forget the real point of the exercise. . .
But then one must recall that Carmean is giving this presentation to
eager young engineers at universities. In other words, he's looking
for recruits in this war, and reminding them there will be booty
galore for those on the winning side is a smart strategy.


http://www.beyond3d.com/images/artic...re/Image25.jpg


A fairly typical summing up, but one that leaves no doubt that Intel
both perceives a serious threat from the GPU makers and recognizes
that this is not a battle it can afford to lose.
Interesting stuff from the man who is in charge of building an
architecture for Intel to fight and win this war.

Care to comment on this article? You can do so here:
http://forum.beyond3d.com/showthread.php?p=966262

 



