A computer components & hardware forum. HardwareBanter


Nvidia's 'NV60' (GT300) apparently uses MIMD architecture to take on Intel's Larrabee and any other challenger



 
 
  #1
April 25th 09, 12:34 AM, posted to alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video,alt.comp.hardware.amd.x86-64,comp.sys.intel
NV55 (Posts: 149)
Subject: Nvidia's 'NV60' (GT300) apparently uses MIMD architecture to take on Intel's Larrabee and any other challenger


Rumour: Nvidia GT300 architecture revealed
Author: Ben Hardwidge
Published: 23rd April 2009


How do you follow a GPU architecture such as Nvidia's original G80?
Possibly by moving to a completely new MIMD GPU architecture.
Although Nvidia hasn’t done much to the design of its GPU architecture
recently - other than adding some more stream processors and renaming
some of its older GPUs - there’s little doubt that the original
GeForce 8-series architecture was groundbreaking stuff. How do you
follow up something like that? Well, according to the rumour mill,
Nvidia has similarly radical ideas in store for its upcoming GT300
architecture.

Bright Side of News claims to have harvested “information confirmed
from multiple sources” about the part, which looks as though it could
be set to take on any threat posed by Intel’s forthcoming Larrabee
graphics processor. Unlike today’s traditional GPUs, which are based
on a SIMD (single instruction, multiple data) architecture, the site
reports that GT300 will rely on “MIMD-similar functions” where “all
the units work in MPMD mode”.

MIMD stands for multiple instruction, multiple data, and it's an
execution model often found in SMP systems and clusters. Meanwhile,
MPMD stands for multiple program, multiple data. An MIMD system such as this would
enable you to run an independent program on each of the GPU’s parallel
processors, rather than having the whole lot running the same program.
Put simply, this could open up the possibilities of parallel computing
on GPUs even further, particularly when it comes to GPGPU apps.
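To make the SIMD/MIMD distinction concrete, here is a minimal host-side
C++ sketch (purely illustrative, not Nvidia's hardware or API; all the
names below are ours, not from the article): the SIMD-style loop runs one
program across a whole batch of data, while the MIMD-style threads each
run their own independent program.

// Illustrative SIMD vs MIMD contrast in plain host C++ (hypothetical
// example, not Nvidia hardware or any real GPU API).
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<float> data(8, 2.0f);

    // SIMD-style: every "lane" applies the same operation to its own element.
    for (float &x : data) x = x * x + 1.0f;   // one program, many data

    // MIMD-style: each worker runs a completely independent program.
    float a = 0.0f, b = 0.0f, c = 0.0f;
    std::thread t1([&] { a = std::sqrt(2.0f); });                 // program 1
    std::thread t2([&] { for (int i = 1; i <= 5; ++i) b += i; }); // program 2
    std::thread t3([&] { c = data[0] * 10.0f; });                 // program 3
    t1.join(); t2.join(); t3.join();

    std::printf("SIMD data[0]=%.1f  MIMD a=%.2f b=%.1f c=%.1f\n",
                data[0], a, b, c);
    return 0;
}

On a SIMD machine the three unrelated mini-programs above would have to
be batched or serialised; on an MIMD design they can simply run side by
side on different processors.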

Computing expert Greg Pfister, who’s worked in parallel computing for
30 years, has a good blog about the differences between MIMD and SIMD
architectures here, which is well worth a read if you want to find out
more information. Pfister makes the case that a major difference
between Intel’s Larrabee and an Nvidia GPU running CUDA is that the
former will use a MIMD architecture, while the latter uses a SIMD
architecture. “Pure graphics processing isn’t the end point of all of
this,” says Pfister. He gives the example of game physics, saying
“maybe my head just isn't built for SIMD; I don't understand how it
can possibly work well [on SIMD]. But that may just be me.”

Pfister says there are pros and cons to both approaches. “For a given
technology,” says Pfister, “SIMD always has the advantage in raw peak
operations per second. After all, it mainly consists of as many
adders, floating-point units, shaders, or what have you, as you can
pack into a given area.” However, he adds that “engineers who have
never programmed don’t understand why SIMD isn’t absolutely the cat’s
pajamas.”

He points out that SIMD also has its problems. “There’s the problem of
batching all those operations,” says Pfister. “If you really have only
one ADD to do, on just two values, and you really have to do it before
you do a batch (like, it’s testing for whether you should do the whole
batch), then you’re slowed to the speed of one single unit. This is
not good. Average speeds get really screwed up when you average with a
zero. Also not good is the basic need to batch everything. My own
experience in writing a ton of APL, a language where everything is a
vector or matrix, is that a whole lot of APL code is written that is
basically serial: One thing is done at a time.” As such, Pfister says
that “Larrabee should have a big advantage in flexibility, and also
familiarity. You can write code for it just like SMP code, in C++ or
whatever your favorite language is.”
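Pfister's "averaging with a zero" point can be put into rough numbers
(an illustration of ours, not figures from his blog). If a fraction s of
the work has to run on a single unit while the remaining 1 - s fills all
N SIMD lanes, the effective speedup over a single unit is

    S(s, N) = 1 / (s + (1 - s) / N)

With N = 512 lanes and only 1 per cent of the work forced to be serial,
S = 1 / (0.01 + 0.99/512) ≈ 84, a long way short of the 512x peak.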

Bright Side of News points out that this could potentially put the
GPU’s parallel processing units “almost on equal terms” with the “FPUs
inside latest AMD and Intel CPUs.” In terms of numbers, the site
claims that the top-end GT300 part will feature 16 groups that will
each contain 32 parallel processing units, making for a total of 512.
The site also claims that the GPU’s scratch cache will be “much more
granular” which will enable a greater degree of “interactivity between
the cores inside the cluster”.

No information on clock speeds has been revealed yet, but if this is
true, it looks as though Nvidia’s forthcoming GT300 GPU will really
offer something new to the GPU industry. Are you excited about the
prospect of an MIMD-based GPU architecture with 512 parallel
processing units, and could this help Nvidia to take on the threat
from Intel’s Larrabee graphics chip? Let us know your thoughts in the
forums.

http://www.bit-tech.net/news/hardwar...architecture/1
  #2
April 25th 09, 12:37 AM, posted to alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video,alt.comp.hardware.amd.x86-64,comp.sys.intel
NV55 (Posts: 149)
Subject: Nvidia's 'NV60' (GT300) apparently uses MIMD architecture to take on Intel's Larrabee and any other challenger

nVidia's GT300 specifications revealed - it's a cGPU!
4/22/2009 by: Theo Valich


Over the past six months, we heard different bits and pieces of
information about GT300, nVidia's next-gen part. We decided
to stay silent until we had information confirmed by multiple
sources, and now we feel confident enough to disclose what is cooking in
Santa Clara, India, China and other nVidia sites around the world.

GT300 isn't the architecture that was envisioned by nVidia's Chief
Architect, former Stanford professor Bill Dally, but this architecture
will give you a pretty good idea why Bill told Intel to take a hike
when the larger chip giant from Santa Clara offered him a job on the
Larrabee project.

Thanks to Hardware-Infos, we managed to complete the puzzle of what
nVidia plans to bring to market in a couple of months from now.
What is GT300?

Even though it shares the same first two letters with GT200
architecture [GeForce Tesla], GT300 is the first truly new
architecture since SIMD [Single-Instruction Multiple Data] units first
appeared in graphics processors.

GT300 architecture groups processing cores in sets of 32 - up from 24
in the GT200 architecture. But the difference between the two is that
GT300 parts ways with the SIMD architecture that dominates the GPU
architecture of today. GT300 cores rely on MIMD-similar functions
[Multiple-Instruction Multiple Data] - all the units work in MPMD
mode, executing simple and complex shader and computing operations on-
the-go. We're not exactly sure whether we should continue to use the term
"shader processor" or "shader core", as these units are now almost on
equal terms with the FPUs inside the latest AMD and Intel CPUs.

GT300 itself packs 16 groups with 32 cores - yes, we're talking about
512 cores for the high-end part. This number itself raises the
computing power of GT300 by more than 2x when compared to the GT200
core. Before the chip tapes out, there is no way anybody can predict
working clocks, but if the clocks remain the same as on GT200, we
would have over double the amount of computing power.
If, for instance, nVidia gets a 2 GHz clock for the 512 MIMD cores, we
are talking about no less than 3 TFLOPS of single-precision performance.
Double precision is highly dependent on how efficient the MIMD-like units
will be, but you can count on a 6-15x improvement over GT200.
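For what it's worth, that 3 TFLOPS figure only works out if each core
still issues roughly 3 floating-point operations per clock, as GT200's
peak numbers assumed (a dual-issued MAD plus MUL); that assumption is
ours, not something the article states:

    512 cores x 2.0 GHz x 3 FLOPs/clock ≈ 3.07 TFLOPS single precision

At GT200-class shader clocks of around 1.3-1.5 GHz, the same arithmetic
gives roughly 2-2.3 TFLOPS.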


This is not the only change - cluster organization is no longer
static. The Scratch Cache is much more granular and allows for greater
interactivity between the cores inside the cluster. GPGPU, i.e. GPU
computing, applications should really benefit from this architectural
choice. When it comes to gaming, the question is obviously: how good
can GT300 be? Please do bear in mind that this 32-core cluster will be
used in next-generation Tegra, Tesla, GeForce and Quadro cards.
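As a very loose host-side analogue of that "interactivity through a
shared scratch cache" idea (again purely illustrative C++; the names and
structure are ours, not GT300's actual design), the 32 cores of one
cluster exchanging partial results through a shared buffer might look
like this:

// Hypothetical host-side analogue of cores in one cluster exchanging
// data through a shared scratch buffer; purely illustrative.
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const int cores_per_cluster = 32;                      // per the rumoured GT300 grouping
    std::vector<float> scratch(cores_per_cluster, 0.0f);   // shared "scratch cache"

    std::vector<std::thread> cores;
    for (int c = 0; c < cores_per_cluster; ++c) {
        cores.emplace_back([&scratch, c] {
            scratch[c] = static_cast<float>(c) * 0.5f;     // each core writes its own slot
        });
    }
    for (auto &t : cores) t.join();                        // barrier before cores read each other

    // After the barrier any core could read its neighbours' results;
    // here we simply reduce the whole cluster's scratch contents.
    float sum = std::accumulate(scratch.begin(), scratch.end(), 0.0f);
    std::printf("cluster partial sum from scratch = %.1f\n", sum);
    return 0;
}

The real mechanism is of course on-chip memory plus whatever
synchronisation the cluster provides, but the pattern - each core
deposits a result, then reads its neighbours' slots after a barrier -
is the same one CUDA programmers already use with shared memory today.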

This architectural change should result in a dramatic increase in double-
precision performance, and if GT300 packs enough registers, its
single-precision and double-precision performance alike might
surprise all the players in the industry. Given the timeline of when
nVidia began work on GT300, it looks to us like the GT200 architecture
was a test run for the real thing coming in 2009.

Just like a CPU, GT300 gives direct hardware access [HAL] for CUDA
3.0, DirectX 11, OpenGL 3.1 and OpenCL. You can also do direct
programming on the GPU, but we're not exactly sure whether developing
such a solution would be financially feasible. The point, though, is
that now you can do it. It looks like Tim Sweeney's prophecy is
slowly but surely coming to life.


http://www.brightsideofnews.com/news...s-a-cgpu!.aspx
  #3
April 25th 09, 05:11 PM, posted to alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video,alt.comp.hardware.amd.x86-64,comp.sys.intel
parallax-scroll (Posts: 59)
Subject: GT300 will supposedly reach 3 TFLOPS single precision performance


And much better DP performance compared to GT200.
 



