HardwareBanter


NV55 August 12th 08 01:22 AM

TheInquirer: 'Why AMD should merge with Nvidia'
 
Why AMD should merge with Nvidia

Comment: To save us from Intel

By Axonn Echysttas

TO QUOTE A famous expression: "there can be only one". Looking at how
things shape up for AMD and Nvidia, it's rather obvious that in the
future there can be only one.

To battle Intel and its upcoming technologies, it's going to take a
lot more than what AMD or Nvidia can offer on their own. They need to
cooperate very closely, or even fuse into one single, across-the-board
company.

Daamit is in big trouble with its CPU line, and Nvidia is in big
trouble with... pretty much everything. AMD also drags a big debt
along with it, and Graphzilla is far from being the richest boy in the
classroom. However, they're both green, so check one box on the
compatibility list.

Technology match
Colour is not the only respect in which the two companies are a good
match. Now that Ruiz is gone, maybe a bit of common ground can be
found, if only Huang can forget about his ego for a few moments, just
long enough to realize that a common future is a much better path than
separate ones. Daamit has considerable engineering potential as well
as several fabs in its pocket. It has the most efficient GPU to date
and a CPU line to go with it, embattled as that may be. Nvidia, on the
other hand, can fill in the high-performance GPU gaps and offer
additional chipset know-how. Together, they have what it takes to
deliver strong single- and multi-GPU technologies, very complete
physics support for gaming purposes and a great GPGPU range. AMD has
already delivered Puma, a unified mobile platform; together with
Nvidia, it can put pressure on Intel, big time. AMD has Fusion coming
up, Nvidia has PhysX, and if they work together that can mean a world
of pain for Chipzilla.

A blue future
Arguably, Intel holds all the aces right now. It's going to be a blue
future for the IT world, but also a blue mood for Envydia and Daamit
if Intel can make things work in reality the way they currently work
on paper. And, unfortunately for the greenies, in the past few years
Intel has actually made things work even better in reality than it
promised in slide shows. Intel's upcoming discrete GPU, Larrabee, as
well as the coming Nehalem assault, should give Damvidia a lot to
think about.

For now, there is no reason to think that Intel's new aspirations in
the GPU market will be a fluke. Of course, there is no reason to think
otherwise either; Larrabee is an unknown factor right now. But Intel
has proven many a time that even though it might be a fresh entrant
into a given arena, it can make up for the lost experience extremely
fast. This is a very agile and versatile competitor, definitely not to
be underestimated. Huang knows that, and that's the reason he's been
so passionate about Intel lately.

Under the belt
What's even more worrying is that Intel will undoubtedly hit both
green camps below the belt with Larrabee. For now, it appears to
accept that rasterization is still hip in the gaming world, but
Larrabee has the potential to do much more, and Intel made sure we all
know that. First of all, there's raytracing. This may be the ultimate
in providing super-realistic graphics and fantastic special effects,
and Intel is banging the war drum on the subject. Nvidia and AMD have
nothing in this area right now, and they will pretty much wake up not
only beaten at their own game but also a few years behind from a
technological point of view.

To this we must add Intel's know-how in the field of chip design.
Larrabee is already supposed to be multi-core, and very powerful too.
Think about the raw power the beast will offer, as well as the
efficiency with which that power will be used. These things alone
should be reason enough for the greenies to worry.

Conclusion
AMD has had its own GPU line for a while now. The latest exponent of
that line, the 4800 series, is one of the best pieces of graphics-
acceleration engineering to grace the IT world in the past few years.
And one cannot deny that Nvidia would be a very strong ally to have on
board. Even with its lack of charisma, Intel's Santa Clara neighbour
is what AMD needs to weather the storm ahead.

It goes the other way around too. With mounting debt and a host of
other problems, Chimpzilla can't afford any more months filled with
bad news and the migration of key personnel. A fusion, or at least a
very close partnership, would undo some frowns and give investors
reasons to cheer. It would boost motivation in both camps and bring
numerous marketing, logistical and technological assets under one
hopefully happy roof. Nvidia, with its monstrous-GPU tactic, is safe,
because DAAMIT's targets in that arena are different anyway. So why
not put two and two together and give Intel some real competition in
the future, rather than two small companies for it to wipe the floor
with? µ


http://www.theinquirer.net/gb/inquir...d-merge-nvidia

Jure Sah[_2_] August 13th 08 01:43 AM

TheInquirer: 'Why AMD should merge with Nvidia'
 
NV55 wrote:
Under the belt
What's even more worrying is that Intel will undoubtedly hit both
green camps below the belt with Larrabee. For now, it appears to
accept that rasterization is still hip in the gaming world, but
Larrabee has the potential to do much more, and Intel made sure we all
know that. First of all, there's raytracing. This may be the ultimate
in providing super-realistic graphics and fantastic special effects,
and Intel is banging the war drum on the subject. Nvidia and AMD have
nothing in this area right now


This is not true. AMD, as you know, has its HyperTransport capability.
There are FPGAs on the market *now* (and there have been for some
time) that work with HyperTransport (or even fit an AM2 socket) and
are perfectly capable of exactly what the Larrabee core is promising
to be at some distant point in the future.




Miles Bader[_2_] August 13th 08 03:50 AM

TheInquirer: 'Why AMD should merge with Nvidia'
 
Larrabee is an interesting experiment, but the hype surrounding it is
just pathetic. The Inquirer, though, seems to have swallowed it hook,
line, and sinker...

-Miles

--
In New York, most people don't have cars, so if you want to kill a person, you
have to take the subway to their house. And sometimes on the way, the train
is delayed and you get impatient, so you have to kill someone on the subway.
[George Carlin]

Yousuf Khan August 13th 08 11:26 PM

TheInquirer: 'Why AMD should merge with Nvidia'
 
NV55 wrote:
Why AMD should merge with Nvidia

Comment: To save us from Intel

By Axonn Echysttas

TO QUOTE A famous expression: "there can be only one". Looking at how
things shape up for AMD and Nvidia, it's rather obvious that in the
future there can be only one.



This seems to ignore the fact that right now AMD's GPUs have just taken
the lead from Nvidia on both a price/performance and a
performance/power-consumption basis. AMD also has more experience in
design for manufacturing, so it looks like it will be able to get to
new process nodes sooner than Nvidia. There's even talk that, together
with TSMC, AMD will get to the 40nm half-node sooner than Intel,
putting it slightly ahead of Intel too (until Intel gets to 32nm).

Beyond that, AMD's Fusion CPU/GPU looks like it will be a better bet
for floating-point domination than the Larrabee/Nehalem combination.

Robert Myers August 14th 08 06:19 PM

TheInquirer: 'Why AMD should merge with Nvidia'
 
On Aug 13, 6:26 pm, Yousuf Khan wrote:


Beyond that, AMD's Fusion CPU/GPU looks like it will be a better bet
for floating-point domination than the Larrabee/Nehalem combination.


"Floating point domination" could mean any number of things. If, by
that, you mean domination of HPC (including applications like
animation), we'll just have to see. Cost, power consumption, and the
usability and stability of the API will be more important than raw
flops.

Marketing will play a big role, and both IBM and Intel, which are the
real competition, are much better situated to get their products
placed. Both have a much better track record than AMD, which did make
some dents with Opteron and HyperTransport. I think that blip is
past, but, as I said, we'll just have to see.

Robert.

[email protected] August 14th 08 10:19 PM

TheInquirer: 'Why AMD should merge with Nvidia'
 
On Aug 14, 1:19 pm, Robert Myers wrote:
On Aug 13, 6:26 pm, Yousuf Khan wrote:
Beyond that, AMD's Fusion CPU/GPU looks like it will be a better bet
for floating-point domination than the Larrabee/Nehalem combination.


"Floating point domination" could mean any number of things. If, by
that, you mean domination of HPC (including applications like
animation), we'll just have to see. Cost, power consumption, and the
usability and stability of the API will be more important than raw
flops.


Specifically, AMD and Intel are both attempting to extend their SIMD
instruction sets for the next generation of floating point. Intel's
extension is called AVX (Advanced Vector eXtensions), while AMD's is
called SSE5. The two instruction sets aren't compatible with each
other, but they do basically the same thing, so it amounts to a
forking of the SSE standards. Both will be introducing 3-operand SSE
instructions, leaving behind the current 2-operand variety.
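
To make the 2- versus 3-operand distinction concrete, here is a minimal
C sketch (the function names are mine, for illustration only). The SSE
intrinsic compiles to the destructive 2-operand form, where one source
register gets overwritten; the AVX intrinsic maps to the non-destructive
3-operand form. The AVX half assumes a toolchain with AVX support, which
is still on its way:

    #include <immintrin.h>  /* x86 SIMD intrinsics */

    /* 2-operand style: the underlying "addps xmm_a, xmm_b" overwrites one
       source, so keeping 'a' alive afterwards costs an extra register copy. */
    __m128 add_sse(__m128 a, __m128 b)
    {
        return _mm_add_ps(a, b);
    }

    #ifdef __AVX__
    /* 3-operand style: "vaddps ymm_dst, ymm_a, ymm_b" writes a separate
       destination register, leaving both sources intact -- no copy needed. */
    __m256 add_avx(__m256 a, __m256 b)
    {
        return _mm256_add_ps(a, b);
    }
    #endif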

However, it's how they implement the SIMD engine in the background
that makes the difference. Intel will be using multiple little
Pentium I cores, which is the basis of its Larrabee project, while AMD
will be using its latest ATI graphics cores. AVX will be a frontend
for Larrabee, while SSE5 will be a frontend for the ATI GPU. Both
instruction sets are supersets of the existing x86 instruction set, so
in certain ways they will be easy to program for. Intel's Larrabee
will depend on a certain amount of super-sized multi-threading of the
software to get the most out of its cores. AMD won't need that much
multi-threading of the software, since GPUs are already highly
parallelized by definition. Will it be easier for compilers to create
the level of multithreading that Intel requires, or will they be more
comfortable just throwing the data at the GPU and letting the GPU sort
it out for them? We'll have to see how that plays out.
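
As a rough sketch of that burden (the thread count and names are
illustrative, not from any real product), on a Larrabee-style design the
software itself has to slice the work into threads to feed the cores,
whereas a GPU in effect runs the loop body once per element and lets the
hardware do the scheduling:

    #include <pthread.h>
    #include <stddef.h>

    #define NTHREADS 32           /* e.g. one thread per Larrabee-style core */
    #define N (1 << 20)

    static float a[N], b[N], c[N];

    struct slice { size_t lo, hi; };

    /* Each software thread computes its own chunk of the array. */
    static void *worker(void *arg)
    {
        struct slice *s = arg;
        for (size_t i = s->lo; i < s->hi; i++)
            c[i] = a[i] + b[i];
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct slice sl[NTHREADS];

        /* This explicit carving-up is exactly the multi-threading work a
           GPU's hardware scheduler would otherwise do for us. */
        for (int t = 0; t < NTHREADS; t++) {
            sl[t].lo = (size_t)t * N / NTHREADS;
            sl[t].hi = (size_t)(t + 1) * N / NTHREADS;
            pthread_create(&tid[t], NULL, worker, &sl[t]);
        }
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
        return 0;
    }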

Marketing will play a big role, and both IBM and Intel, which are the
real competition, are much better situated to get their products
placed. Both have a much better track record than AMD, which did make
some dents with Opteron and HyperTransport. I think that blip is
past, but, as I said, we'll just have to see.


Intel has finally caught up with all of the technology that AMD
introduced about five years ago. But this CPU/GPU hybrid with a
friendly CPU frontend is a new direction that Intel can't take yet.
Intel is attempting to emulate a GPU with Larrabee, but how good that
will be is questionable. Intel is excitedly talking about starting
Larrabee out with 32 cores and then expanding that to 64. By
comparison, Nvidia's latest GPU (GTX 280) already has 240 cores, while
AMD's latest single-chip GPU (HD 4870) has 800.

Legit Reviews - NVIDIA GeForce GTX 280 Graphics Cards by EVGA and PNY
- NVIDIA Brings The Muscle - GeForce GTX 280
http://www.legitreviews.com/article/726/1/

Radeon HD 4000 series specifications surfaces | NordicHardware
http://www.nordichardware.com/news,7356.html

Also, AMD has been concentrating on optimizing double-precision FP
performance, a clear sign that it is looking for a bigger market for
its GPUs than just graphics. Nvidia suffers a huge performance hit
when going from single to double precision, something like a 4x
decrease; AMD so far suffers only a bit more than a 2x decrease. Even
with these double-precision performance hits, they are still faster
than Cell at double precision. The following comparisons, showing
older versions of Cell and an AMD GPU, have Cell coming out at 14.6
GFlops against 200 GFlops for the ATI FireStream 9170 (basically a
Radeon HD 3870), both in double precision.

Berkeley Lab Researchers Analyze Performance, Potential of Cell
Processor
http://www.lbl.gov/CS/Archive/news053006a.html

AMD FireStream - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/AMD_FireStream

Yousuf Khan

Robert Myers August 15th 08 04:44 PM

TheInquirer: 'Why AMD should merge with Nvidia'
 
On Aug 14, 5:19 pm, wrote:
On Aug 14, 1:19 pm, Robert Myers wrote:


AMD won't need that much multi-threading of the software, since GPUs
are already highly parallelized by definition. Will it be easier for
compilers to create the level of multithreading that Intel requires,


To be a little bit tart about it, that's like looking at a horseless
carriage and saying, "But where do I put the saddle?" There are lots
of ways to hand work off to multiple processors aside from the clumsy
methods now in use. People have been talking about them for years.
Maybe now we'll see some action. In fact, I think it's inevitable.

or will they be more comfortable just throwing the data at the GPU and
letting the GPU sort it out for them? We'll have to see how that
plays out.

Eugene Miya, who moderates comp.parallel, has advised ignoring the
graphics manufacturers because, according to him, they don't know what
they're doing. That could mean nothing more than that they have
thought that single-precision floating point would cut it, a mistake
that IBM has repaired slowly. Or he could be expressing an opinion
that's opposite to what you imply. He can't and won't say. In any
case, be careful of new wine in old wineskins.


Intel has finally caught up with all of the technology that AMD
introduced about five years ago. But this CPU/GPU hybrid with a
friendly CPU frontend is a new direction that Intel can't take yet.
Intel is attempting to emulate a GPU with Larrabee, but how good that
will be is questionable.


Intel is pursuing a suggestion that David Patterson made in the mid-
nineties. One of the many things that the platform could do (and
apparently will do) is compete in the same spaces as GPUs, just as
GPUs are trying to compete in the same space as CPUs.

Robert.

Yousuf Khan August 15th 08 07:27 PM

TheInquirer: 'Why AMD should merge with Nvidia'
 
Robert Myers wrote:
Eugene Miya, who moderates comp.parallel, has advised ignoring the
graphics manufacturers because, according to him, they don't know what
they're doing. That could mean nothing more than that they have
thought that single-precision floating point would cut it, a mistake
that IBM has repaired slowly. Or he could be expressing an opinion
that's opposite to what you imply. He can't and won't say. In any
case, be careful of new wine in old wineskins.


Probably because he isn't talking about the same thing. Current GPUs
are separate devices that require special API software to program.
What AMD is coming out with is a direct interface to the GPU through
an x86 instruction-set frontend. There are too many solutions out
there if you have to support everything; with this solution, you only
have to worry about x86.
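
A minimal sketch of the difference, taking a simple vector add as the
example (the comment stands in for the vendor-specific call sequence,
since each runtime of the day -- CUDA, Brook+ and so on -- has its own):

    #include <stddef.h>

    /* Under the separate-API model, this work means allocating device
       buffers, copying a and b across, launching a kernel, and copying c
       back, all through special runtime calls. With an x86 instruction-set
       frontend, the plain loop below is all the programmer writes, and the
       hardware (or compiler) is free to dispatch it to the GPU underneath. */
    void vec_add(const float *a, const float *b, float *c, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }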


Yousuf Khan

Miles Bader[_2_] August 16th 08 07:08 AM

TheInquirer: 'Why AMD should merge with Nvidia'
 
Robert Myers writes:
Eugene Miya, who moderates comp.parallel, has advised ignoring the
graphics manufacturers because, according to him, they don't know what
they're doing.


What is it with Eugene Miya, anyway? For as long as I can remember
(since the late 80s or early 90s at least), everything I've seen him
post has been rather nutty (like the right terminology seems to be
there, but used in ways that simply don't make much sense). However,
as far as I can tell, he seems to be respected. He seems like a nice
enough guy, but I'd be kind of dubious of his advice...

-Miles

--
Innards, n. pl. The stomach, heart, soul, and other bowels.

Robert Myers August 16th 08 04:53 PM

TheInquirer: 'Why AMD should merge with Nvidia'
 
On Aug 16, 2:08 am, Miles Bader wrote:

What is it with Eugene Miya, anyway? For as long as I can remember
(since the late 80s or early 90s at least), everything I've seen him
post has been rather nutty (like the right terminology seems to be
there, but used in ways that simply don't make much sense). However,
as far as I can tell, he seems to be respected. He seems like a nice
enough guy, but I'd be kind of dubious of his advice...

By his own admission, he is "not a hardware guy per se." He's
associated with NASA Ames, and I'd characterize him as an unusually
well-informed buyer. He knows everyone and talks to everyone. That's
the important thing. I'd take his advice seriously, as I do not take
the advice of most articles cited here seriously. That is not to say
that I've always agreed with him.

Robert.

