
GPU Computing: Intel's Larrabee - AMD's Fusion - NVIDIA's Tesla + CUDA



 
 
#1  October 31st 07, 12:21 AM
Posted to comp.sys.intel, alt.comp.hardware.amd.x86-64, alt.comp.periphs.videocards.ati, alt.comp.periphs.videocards.nvidia, comp.sys.ibm.pc.hardware.video
NV55 (Posts: 149)

GPU Computing Gets Ready for Act II

The idea of general-purpose computing on graphics processing units
(GPGPU) continues to capture the imagination of the HPC community. But
the three big players -- Intel, NVIDIA and AMD -- all have their ideas
on how this new technology should play out.

When Intel rejected the whole notion of general-purpose computing on
graphics processing units (GPGPU) at the spring 2007 IDF meeting by
announcing its upcoming Larrabee product line, the digerati
began to buzz about what the future might hold for the GPU. For those
who might not have heard about it, Larrabee is Intel's answer to the
programmable GPU, the technology that is bringing GPGPU to the masses.

The Larrabee architecture could be characterized as the anti-GPU
entry. The overall approach is an attempt to evolve the CPU into a
terascale data parallel engine. According to Intel, Larrabee will be a
manycore (i.e., more than 8 cores) device and will be based on a
subset of the IA instruction set with some extra GPU-like instructions
thrown in. Intel has not elaborated on how it intends to do this, but
one could imagine super-sized SSE units with just enough x86 CPU
silicon to enable general-purpose flow control and data access. The
first product release will probably come in 2009, but Intel says it
may have something to demo as early as next year.

The idea behind Larrabee is to bring both traditional graphics
processing and data parallel computing under the IA umbrella. I'm not
going to talk about the traditional graphics side of the story here
(I'll let the game weenies argue about the advantages of ray-tracing
over rasterization.) What's interesting about Larrabee and its GPU
brethren is the extent to which a graphics engine can become a general-
purpose computing engine without compromising its performance.

The combination of a data parallel engine with more of the general-
purpose flexibility of a traditional CPU could offer a powerful model
for scientific computing applications, which usually consist of an
irregular mix of matrix math and other logic. One of the drawbacks of
traditional GPUs is that they depend upon an accompanying CPU for
virtually all of the non-vector logic. That's fine if the application
divides neatly between a vector computing kernel and the rest of the
application logic in such a way as to keep both types of processing
engines busy. But if it doesn't, the software developer has to find a
way to tease out enough parallelism for the GPU to make sending the
vector data on a round trip from the CPU worthwhile. This will only
get worse in the future, since chip-to-chip bus performance is not
expected to keep pace with either CPU or GPU performance.
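
To make that round trip concrete, here's a minimal CUDA sketch (my
own illustration, with arbitrary names and sizes, not vendor code).
The kernel does almost no work per element, so the two trips across
the PCIe bus, not the computation, dominate the total time:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// A deliberately tiny amount of work per element -- the kind of
// kernel where the PCIe round trip, not the computation, dominates.
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}

int main(void)
{
    const int n = 1 << 20;              // 1M floats, ~4 MB
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; i++)
        h[i] = 1.0f;

    float *d;
    cudaMalloc((void **)&d, bytes);

    // The round trip: host -> device, one cheap kernel, device -> host.
    // For work this light, the two bus transfers typically cost far
    // more than the kernel launch itself.
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);

    printf("h[0] = %f\n", h[0]);        // expect 2.0
    cudaFree(d);
    free(h);
    return 0;
}

If the kernel were heavier, the copies would amortize; the trouble
starts when the parallel section is too small to pay for its own
transport.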

The division of labor problem is at the heart of the GPGPU critique
elaborated by Anwar Ghuloum, an engineer at Intel's Microprocessor
Technology Lab. In a blog entry last week, "The Problem(s) with GPGPU,"
he writes about some of the ramifications of the current CPU-GPU
dichotomy:

    [b]ecause of the underlying constraints of GPU architecture,
    oftentimes the program relies heavily on the CPU to manage the
    difficult parts of the control and data flow, as well as all the other
    (necessary) stuff like I/O, etc. Here's the problem with this, the CPU-
    GPU link is relatively lower performance, engendering relatively high
    latencies for CPU-GPU interactions (like using a CPU to handle an
    outer level loop that the GPU can't handle). This can have a
    devastating effect on performance.
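
A hypothetical CUDA fragment shows the pattern Ghuloum is describing.
The "solver" below is invented for illustration; what matters is the
shape of the control flow, where the outer loop lives on the CPU and
every pass around it pays a kernel launch plus a synchronizing copy
back across the bus:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// One data-parallel sweep of a made-up iterative "solver".
__global__ void relax(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= 0.5f;   // stand-in for real relaxation work
}

int main(void)
{
    const int n = 1 << 16;
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; i++)
        h[i] = 1.0f;

    float *d_x;
    cudaMalloc((void **)&d_x, bytes);
    cudaMemcpy(d_x, h, bytes, cudaMemcpyHostToDevice);

    // The outer-level loop the GPU can't handle: after each sweep the
    // CPU copies one value back (an implicit synchronization) just to
    // decide whether to go around again. Launch latency plus bus
    // latency is paid on every iteration.
    float probe = 1.0f;
    int passes = 0;
    while (probe > 1e-3f) {
        relax<<<(n + 255) / 256, 256>>>(d_x, n);
        cudaMemcpy(&probe, d_x, sizeof(float), cudaMemcpyDeviceToHost);
        passes++;
    }
    printf("converged after %d passes\n", passes);

    cudaFree(d_x);
    free(h);
    return 0;
}

Move the convergence test onto the device, or tighten the CPU-GPU
link, and the per-pass penalty shrinks; that, in a sentence, is the
case for integration that Larrabee and Fusion are making.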

Ghuloum is not explicitly making a pitch for Larrabee here. He's
really questioning the validity of the GPGPU programming approach,
which he believes is too narrowly defined to exploit all avenues of
data parallelism. In a previous blog post, Ghuloum makes a case for
Ct, a language Intel is developing that supports a more general-
purpose, deterministic parallel programming model. While Ct assumes no
specific architecture, the underlying model he's describing seems to
point to a more generalized parallel processing architecture, like
Larrabee.

NVIDIA offers a more traditional approach to GPGPU. Its Tesla product
line and CUDA C-programming environment were specifically developed to
deliver GPU computing to the HPC market. The current Tesla products,
released in June 2007, are based on the G80 architecture but packaged
in form factors that are geared toward high performance computing
setups, both workstations and servers. Host communication is done via
PCI Express (PCIe).
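
For readers who haven't seen CUDA, the flavor of the model is easy to
show. Below is the canonical SAXPY (y = a*x + y) as a generic,
textbook-style sketch rather than anything out of NVIDIA's own
materials; the loop over elements dissolves into a grid of threads,
while the host explicitly stages data over PCIe:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// SAXPY: y = a*x + y. One GPU thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);  // across PCIe
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);  // and back
    printf("y[0] = %f (expect 4.0)\n", y[0]);

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}

Compile with nvcc and run on any CUDA-capable part; the kernel is
trivially parallel, which is exactly the kind of work Tesla is built
for.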

There's plenty of low-hanging fruit to be had with Tesla. Seismic
analysis, medical diagnostics, molecular modeling and other such
applications can realize performance increases of one or two orders of
magnitude from this type of GPU acceleration. The next generation of
Tesla offerings is expected to support double precision floating
point. This will expand the GPGPU application domain even more, since
64-bit floating point is the de facto standard for scientific
computing.
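
At the source level, the upgrade is almost invisible; the hardware is
the missing piece. Here's a sketch of the double precision variant of
the kernel above, which assumes a future part with native 64-bit
floating point units and a compiler target to match (both
hypothetical at this writing):

// DAXPY: the 64-bit twin of the single precision kernel above.
// Without native double support, today's CUDA compiler quietly
// demotes double to float, so the target hardware is what matters.
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

The host side is unchanged except that float becomes double; the
payoff is roughly 15-16 significant decimal digits instead of float's
7, which is why 64-bit is the standard for scientific codes.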

NVIDIA may eventually move its high performance computing Tesla line,
or its descendants, in the same direction as Larrabee. But unlike
Intel, NVIDIA's starting point is the GPU, and it has no in-house CPU to
draw from, so the path is bound to be different. For now, NVIDIA is
content to exploit its lead in the GPGPU arena, especially since its
nearest competitor, AMD, is still in the process of putting its GPU
computing strategy together.

At one time, AMD seemed to be ready to take advantage of the renewed
interest in GPGPU. Soon after the company acquired ATI in July 2006,
it launched its "Stream Computing" strategy, with the idea of
leveraging ATI's GPUs and AMD's HyperTransport interconnect
technology. The company's first GPGPU platform consisted of a PCIe-
connected ATI R580 GPU bundled with its "Close to the Metal"
software development kit. But it's not clear how many of these
platforms have been sold, and AMD hasn't talked much about stream
computing since 2006.

Over the past year, the company has struggled against Intel's
onslaught of new x86 technology and aggressive chip pricing. If that
wasn't enough of a distraction, NVIDIA's foray into the GPGPU arena
seemed to catch AMD off-guard. Even if the company's initial GPU plans
have slipped, AMD's long-term commitment to marry its two
architectures remains. But with Intel and NVIDIA forging ahead, time
is no longer on AMD's side.

The first instance of AMD's upcoming Fusion processor, which
integrates a CPU and GPU on the same die, is at least a year away and
is intended for the consumer market (notebooks). If successful, later
generations of Fusion will almost certainly target HPC, and are likely
to resemble a Cell processor architecture, with multiple CPU and GPU
cores. Chip level CPU-GPU integration offers a number of advantages
over discrete components, namely increased energy efficiency and
better communication bandwidth and latency (HyperTransport versus
PCIe). It's not the Larrabee model, but it offers the same advantage
of using an x86 base to create a platform with much greater
capabilities for data parallelism. AMD is also likely to offer
discrete GPU products for high-end computing, but no roadmap has been
publicized.

Like Intel, AMD has hinted at adding GPU-type instructions to the x86
ISA to allow software to work seamlessly with the graphics engines via
a standard compiler/runtime. If AMD and Intel were on speaking terms,
they could forge a common GPU ISA, which would be much appreciated by
the GPGPU ecosystem. It could also serve to blunt NVIDIA's lead, and
probably force the company to adopt what would be an industry-standard
GPU interface. In the short term, standards are unlikely. Everyone
involved has their own vision of how the GPU should evolve into its
new role.

This is one reason why high-level software environments for parallel
programming are needed. While the Ct language looks promising, it's
still in the research stage. (I'm guessing we'll soon be hearing more
about this from Intel.) Today, RapidMind offers a high-level software
platform that allows developers to exploit data parallelism on a
variety of hardware architectures, including NVIDIA and AMD GPUs, the
Cell BE, and soon, x86 CPUs. The RapidMind platform has been generally
available for less than a year, but has already managed to attract
over 1,000 developers.

Given the asymmetric capabilities of the different chip vendors and
the immaturity of the GPGPU software ecosystem, it's too early to make
predictions on the future of GPUs for general-purpose computing. What
seems more certain is that proprietary vector processor-based
supercomputers, like the one just announced by NEC this week, will
soon be edged out by commodity-based systems that contain the
equivalent vector smarts. Whether these machines turn out to be based
on double precision GPUs, GPU-CPU hybrids, SIMD-enhanced CPUs, Cell BE
processors, FPGAs, or SIMD ASICs remains to be seen.

http://www.hpcwire.com/hpc/1856011.html

 



