AIB Companies To Adopt XGI Volari GPUs?
  #11 - September 28th 03, 07:21 PM - Andy Cunningham

The real key is in getting decent drivers. This is why nVidia took
over the graphics world, not by their hardware. nVidia managed to get
fast and *stable* drivers out for all of their products while 3dfx and
ATI were floundering with buggy drivers that were missing features and
having either very poor performance or, at best, uneven performance.
ATI has since learned from their mistakes and really improved the
quality of their drivers, but they are about the only one.


3DFX drivers were excellent - I never had to fiddle with anything to get my
Voodoo 3 working with any game. The reason nVidia took over was that the
GeForce performance was so far above V3 performance, although there was
nothing wrong with the drivers either. 3DFX's attempts to get back on par for
performance never went anywhere, while nVidia raised the bar again by a huge
amount with the GF2.


  #12 - September 28th 03, 10:25 PM - Tony Hill

On 28 Sep 2003 06:36:57 -0700, (Radeon350) wrote:
Tony Hill wrote in message t.com...
I don't see why it is such a stretch. First of all, there are not many
companies that make consumer GPUs to begin with. They can be counted
on one hand, I believe.


There are 6 of them. nVidia, Intel and ATI are far and away the
leaders, with Matrox, S3/VIA and SiS following. XGI is a combo team
of SiS and the old Trident crew.

And as far as I am aware, none have released a
card with more than one GPU for consumer use. Yeah, there are dozens
of cards that use 2 or more GPUs, from a number of companies, for all
kinds of high-end, non-consumer applications. Many of them predate
Nvidia's NV10/GeForce256, which was the first working consumer GPU
(that is, a chip with T&L on-chip), but *certainly* not the first-ever
GPU.


There is a lot of tricky wording going around with just what makes a
graphics chipset a "GPU" and what makes it just a video chipset.

Actually I don't just use the term 'GPU' as Nvidia uses it. To myself
and to many who use graphics processors, something that takes the
geometry processing load OFF the CPU, putting it on the graphics chip,
that's a 'graphics processor' or graphics processing unit / GPU as
Nvidia coined it. The 3Dfx Voodoo chips, including the VSA-100s used in
the Voodoo5 5500 and 6000, did NOT do that at all. Neither did any of the
pre-Radeon ATI chips, including the dual Rage Fury chips in the MAXX
card. And basically any consumer 3D PC chip before the GeForce256. Any


All of these chips, starting way back with the Matrox Millennium,
offloaded some of the graphics processing work to the video card. It
wasn't nearly as cut-and-dried as people (or, more to the point,
nVidia's marketing department) like to make it out to be. Even with
geometry processing it was by no means an all-in-one sort of deal;
different chips took over different stages of the geometry
processing.

graphics chip that lacks what used to be called 'geometry processing'
or what was commonly called T&L in late 1999 when the GeForce came out,
and is now called Vertex Shading, if it lacks that, it's usually
considered a 3D accelerator or rasterizer, rather than a complete
'graphics processor' or GPU. At least that is the way I have
understood things for a long time.


You're understanding the marketing terms just fine, though marketing
paints a much more black and white picture of things than the real
world.

In short, it all comes down to where you choose to draw the line on
what defines a GPU. Personally, I really don't care one way or the
other. I'm an engineer and a computer user. I'm interested in the
technical details (for curiosity's sake) and the price/performance that
it offers (from a "will I buy it" point of view). Everything else is
all just marketing, and doesn't interest me much.

-------------
Tony Hill
hilla underscore 20 at yahoo dot ca
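
Tony's point above that "different chips took over different stages of the
geometry processing" is easiest to picture as a table of pipeline stages. A
rough C sketch follows; it is not from any post in this thread, and the
per-stage split shown for the two classes of card is a caricature rather than
an exact spec of any product.

#include <stdio.h>

enum stage { TRANSFORM, LIGHTING, CLIPPING, TRIANGLE_SETUP,
             RASTERIZE, TEXTURE, BLEND, NSTAGES };

static const char *stage_name[NSTAGES] = {
    "transform", "lighting", "clipping", "triangle setup",
    "rasterize", "texture", "blend"
};

/* 1 = handled on the graphics chip, 0 = left to the host CPU.
 * Rough caricature of a Voodoo-class rasterizer versus a
 * GeForce256-class T&L chip, for illustration only. */
static const int rasterizer_card[NSTAGES] = { 0, 0, 0, 1, 1, 1, 1 };
static const int tnl_card[NSTAGES]        = { 1, 1, 1, 1, 1, 1, 1 };

int main(void)
{
    for (int i = 0; i < NSTAGES; i++)
        printf("%-14s  rasterizer: %s   T&L chip: %s\n", stage_name[i],
               rasterizer_card[i] ? "chip" : "CPU",
               tnl_card[i] ? "chip" : "CPU");
    return 0;
}

Seen that way, the "is it a GPU?" argument in this thread is really about how
many of those rows have to say "chip" before the label applies.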
  #13 - September 28th 03, 10:25 PM - Tony Hill

On Sun, 28 Sep 2003 19:21:19 +0100, "Andy Cunningham"
wrote:
The real key is in getting decent drivers. This is why nVidia took
over the graphics world, not by their hardware. nVidia managed to get
fast and *stable* drivers out for all of their products while 3dfx and
ATI were floundering with buggy drivers that were missing features and
having either very poor performance or, at best, uneven performance.
ATI has since learned from their mistakes and really improved the
quality of their drivers, but they are about the only one.


3DFX drivers were excellent - I never had to fiddle with anything to get my
Voodoo 3 working with any game.


By the time that the Voodoo 3 came out it was rapidly becoming too
late for 3DFX. They really blew it with crappy drivers on their
Voodoo Rush chipset and then the Voodoo Banshee after that. 3DFX's
drivers also always tended to offer quite poor performance unless you
happened to be playing a Glide game or one that would work with their
"Mini-GL" driver (ie their Quake driver).

FWIW the "buggy driver" syndrome was more a problem for ATI. 3DFX had
more problems with poor driver performance in OpenGL and DirectX as
well as some missing features (though the latter was one part
hardware, one part software).

The reason nVidia took over was that the
GeForce performance was so far above V3 performance, although there was
nothing wrong with the drivers either. 3DFX's attempts to get back on par for
performance never went anywhere, while nVidia raised the bar again by a huge
amount with the GF2.


By that point their drivers might not have been buggy, but they often
offered rather poor performance compared to what the hardware was
theoretically capable of.

-------------
Tony Hill
hilla underscore 20 at yahoo dot ca
  #14 - September 29th 03, 05:22 AM - Radeon350

wrote in message ...
In comp.sys.ibm.pc.hardware.video, graphics processing unit writes:
Personally, I am most excited about the Volari V8 Duo - the first *consumer*
graphics card configuration to sport twin Graphics Processing Units.



The Voodoo5 5500 in my machine has got 2 VSA-100 units... If that isn't a GPU,
then what is it? The drivers work properly under any Windows OS (right now I'm
using Windows 2000 Pro)...

There is one thing that nobody will beat soon... the Voodoo5 6000... Or, to
put it another way - 4 chips on one board...

But, shhhhhh... I screwed up one chip, so it isn't working properly...

And the ATI Rage Fury MAXX had 2 Rage 128 Pro chips (IIRC)... But problematic
drivers...



Ok this post is sort of for you, and for Tony, or anyone who doesn't
really draw the line between a rasterizer / 3D accelerator like the 3Dfx
Voodoo 1, 2, 3, Banshee, VSA-100, PowerVR Series 1, 2, 3, Riva 128,
TNT1/2, Rage128, Rage Fury etc., and a full-on 'graphics processor' or
GPU or polygon processor chipset (GeForce 1-4, GFFX, all the Radeons,
the Lockheed Real3D series, 3DLabs GLINT+Delta, Evans & Sutherland
RealIMAGE, 3DLabs Wildcat, etc.)

What I am posting below is a very good (IMHO) post from 1996 from a
guy who explained the differences (and made a distinction) between
Voodoo Graphics or similar consumer 3D accelerators/rasterizers of the
time, and full 3D polygon processors (the equivalent of today's GPUs) with
geometry engines/processors, like Lockheed's non-consumer Real3D/100,
which was a true 'graphics processor' chipset (not the horrible
consumer Intel/R3D i740 used in Starfighter cards, which had not yet been
revealed in 1996). At that time, there were NO consumer PC 3D chips
with geometry processing / T&L. In other words, there were no consumer
GPUs in 1996, and none until 1999's GeForce256.

This post really points out the differences quite well. Alright
without further rambling on my part, here is the post:

http://groups.google.com/groups?selm...&output=gplain

[quote]

"First, let me start off by saying I am going to be buying a Voodoo
card. For low end consumer grade flight sims and such, the Voodoo
looks like about the best thing available. Second, I am not
necessarily responding to just you, because there seems to be a hell
of a lot of confusion about Lockheed Martin's graphics accelerators. I
have been seeing posts all over the place confusing the R3D/100 with
the AGP/INTEL project that L.M. is working on. The R3D/100 is *NOT*
the chipset that is being developed for the AGP/INTEL partnership.

However, since your inference is that the Voodoo is faster than the
R3D/100, I have to say that you are totally dead wrong. While the
specs say that the Voodoo is *capable* of rendering a higher number of
pixels per second, or the same number of polygons per second as the
R3D/100, the specs fail to mention that these are not real world
performance figures and you probably will not ever see the kind of
performance that 3Dfx claims to be able to achieve. This does *not*
mean that the Voodoo is not a good (it's great, actually) card, just
that the game based 3D accelerator companies (all of them) don't tell
you the whole story.

The Voodoo uses a polygon raster processor. This accelerates line and
polygon drawing, rendering, and texture mapping, but does not
accelerate geometry processing (i.e. vertex transformation like rotate
and scale). Geometry processing on the Voodoo, as well as on every other
consumer (read: game) grade 3D accelerator, is left to the host CPU.
Because the CPU must handle the geometry transforms and such, you will
never see anything near what 3Dfx, Rendition, or any of the other
manufacturers claim until CPUs get significantly faster (by at least an
order of magnitude). The 3D accelerator actually has to wait for the CPU
to finish processing before it can do its thing.

I have yet to see any of the manufacturers post what CPU was plugged
into their accelerator, and what percentage of CPU bandwidth was being
used to produce the numbers that they claim. You can bet that if it
was done on a Pentium 200, that the only task the CPU was handling was
rendering the 3D model that they were benchmarking. For a game,
rendering is only part of the CPU load. The CPU has to handle flight
modelling, enemy AI, environmental variables, weapons modelling,
damage modelling, sound, etc, etc.

The R3D includes both the raster accelerator (see above) and a 100
MFLOP geometry processing engine. Read that last line again. All
geometry processing data is offloaded from the system CPU and onto the
R3D floating point processor, allowing the CPU to handle more
important tasks. The Voodoo does not have this, and if it were to add
a geometry processor, you would have to more than double the price of
the card.

The R3D also allows for up to 8M of texture memory (handled by a
separate texture processor) which allows not only 24 bit texture maps
(RGB), but also 32 bit maps (RGBA), the additional 8 bits being used for
256 level transparency (Alpha). An additional 10M can be used for
frame buffer memory, and 5M more for depth buffering.

There are pages and pages of specs on the R3D/100 that show that, in
the end, it is a better card than the Voodoo and other consumer
accelerator cards, but I guess the correct question is, for what? If
the models that are in your scene are fairly low detailed (as almost
all games are - even the real CPU pigs like Back to Baghdad), then the
R3D would be of little added benefit over something like the Voodoo.
However, when you are doing scenes where the polys are 2x+ more
numerous than in your typical 3D game, the R3D really shines. The R3D is
and always was designed for mid to high end professional-type applications,
where the R3D/1000 (much much faster than the 100) would be too
expensive, or just plain overkill. I've seen the 1000 and I have to
say that it rocks! I had to wipe the drool from my chin after seeing
it at Siggraph (We're talking military grade simulation equipment
there boys, both in performance and price!)

Now then, as I mentioned before, I'm going to be buying the Voodoo for my
home system, where I would be mostly playing games. But, I am looking
at the R3D for use in professional 3D applications. More comparable 3D
accelerators would not be the Voodoo or Rendition based genre, but more
along the lines of high end GLINT based boards containing Delta
geometry accelerator chips (and I don't mean the low end game based
GLINT chips, or even the Permedia for that matter), or possibly the
next line from Symmetric (Glyder series), or Intergraph's new
professional accelerator series."

[unquote]


Ahem, I apologize for making a really huge deal out of this. I am not
trying to be anal or trying to flame anyone, just pointing something
out that is quite significant IMHO, and significant to most people
that work with 3D graphics (I don't myself). I feel that person's post
is right in line with my thinking as far as making a distinction
between rasterizers / 3D accelerators, which only tackle part of the
rendering pipeline (leaving the rest for the CPU), and full polygon
processors with geometry & lighting onboard, aka 'GPUs'.
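
To make the quoted post's point about per-vertex work concrete, here is a
minimal C sketch, not taken from any post in this thread, of the kind of
"rotate and scale" transform that a rasterizer-only card leaves the host CPU
to run for every vertex of every frame, and that a geometry engine such as
the Real3D/100's (or, later, the GeForce256's T&L unit) takes over. The
structure and numbers are purely illustrative.

#include <math.h>
#include <stdio.h>

struct vec3 { float x, y, z; };

/* Rotate a vertex about the Y axis and scale it uniformly -- the
 * "rotate and scale" geometry work the 1996 post describes. */
static struct vec3 transform(struct vec3 v, float angle, float scale)
{
    float c = cosf(angle), s = sinf(angle);
    struct vec3 out;
    out.x = scale * ( c * v.x + s * v.z);
    out.y = scale * v.y;
    out.z = scale * (-s * v.x + c * v.z);
    return out;
}

int main(void)
{
    struct vec3 v = { 1.0f, 0.0f, 0.0f };
    /* 90 degree rotation plus a 2x scale */
    struct vec3 t = transform(v, 3.14159265f / 2.0f, 2.0f);
    printf("(%.2f, %.2f, %.2f)\n", t.x, t.y, t.z);  /* about (0.00, 0.00, -2.00) */
    return 0;
}

Multiply that little function by tens of thousands of vertices per frame, on
a mid-90s CPU, and it is easy to see why the 1996 poster says the accelerator
"has to wait for the CPU to finish processing before it can do its thing".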
  #15 - September 29th 03, 07:00 AM - Larry Roberts


Actually I don't just use the term 'GPU' as Nvidia uses it. To myself
and to many who use graphics processors, something that takes the
geometry processing load OFF the CPU, putting it on the graphics chip,
that's a 'graphics processor' or graphics processing unit / GPU as
Nvidia coined it. The 3Dfx Voodoo chips, including the VSA-100s used in
the Voodoo5 5500 and 6000, did NOT do that at all. Neither did any of the
pre-Radeon ATI chips, including the dual Rage Fury chips in the MAXX
card. And basically any consumer 3D PC chip before the GeForce256. Any
graphics chip that lacks what used to be called 'geometry processing'
or what was commonly called T&L in late 1999 when the GeForce came out,
and is now called Vertex Shading, if it lacks that, it's usually
considered a 3D accelerator or rasterizer, rather than a complete
'graphics processor' or GPU. At least that is the way I have
understood things for a long time.


Well. By your explanation, a GPU does all processing itself.
My old GF2 card didn't support the DX 8 hardware shaders, so I guess
it stopped being a GPU. Now I have an actual GPU card (GF3), but I
guess since it doesn't support DX 9 hardware shaders, I can't call it
a GPU either.
  #16 - September 29th 03, 07:46 AM - Larry Roberts


By the time that the Voodoo 3 came out it was rapidly becoming too
late for 3DFX. They really blew it with crappy drivers on their
Voodoo Rush chipset and then the Voodoo Banshee after that.


I don't think the Rush and Banshee helped kill 3DFX. The
Banshee was understood to be entry-level performance when compared to
the Voodoo 2. I bought a Banshee myself, and found it to be a nice,
cheap upgrade from my previous 8MB Verite 2200 + 4MB Voodoo 1. I could
only get about 28fps in Q2 at 640x480 using the Verite 2200, about 34fps
with the Voodoo 1, and a whopping 50fps with the Banshee. I was so happy
then. Now we complain if we can't get 100fps.
  #17 - September 29th 03, 01:55 PM - Mark Leuck


"Larry Roberts" wrote in message
...

By the time that the Voodoo 3 came out it was rapidly becoming too
late for 3DFX. They really blew it with crappy drivers on their
Voodoo Rush chipset and then the Voodoo Banshee after that.


I don't think the Rush and Banshee helped kill 3DFX. The
Banshee was understood to be entry-level performance when compared to
the Voodoo 2. I bought a Banshee myself, and found it to be a nice,
cheap upgrade from my previous 8MB Verite 2200 + 4MB Voodoo 1. I could
only get about 28fps in Q2 at 640x480 using the Verite 2200, about 34fps
with the Voodoo 1, and a whopping 50fps with the Banshee. I was so happy
then. Now we complain if we can't get 100fps.


The Banshee did help kill 3dfx, because 3dfx was in such a hurry to release
the all-in-one 3D card that they took people away from the Rampage project to
work on it.


  #18 - October 1st 03, 10:57 PM

Look... GPU stands for Graphics Processing Unit, right?

The acronym says it's a unit that processes graphics... So, looking at it that
way, all the 2D GPUs back in the time of Hercules, CGA, EGA, VGA, blah blah, up
to the newest GPUs are the same thing... Units that have only one thing to do
- process graphics...

You can now talk about high-perf SGI GPUs, all the stuff you mentioned, and
yes, all of these are GPUs, just like all the stuff I mentioned...

But, if you say 3D GPU only, then it's another thing to discuss... Looking at
it that way, the Voodoo 1 and 2 weren't true GPUs, but 3D only (which they were
in fact)...


EOD...

--
Clinton's charming colour played to the biscuit on Info the day before yesterday?
By runf

Damir Lukic,
a member of hr.comp.hardver FAQ-team
  #19 - October 5th 03, 12:46 AM - Thomas

Radeon350 wrote:
Ok this post is sort of for you, and for Tony, or anyone who doesn't
really draw the line between a rasterizer / 3D accelerator like the 3Dfx
Voodoo 1, 2, 3, Banshee, VSA-100, PowerVR Series 1, 2, 3, Riva 128,
TNT1/2, Rage128, Rage Fury etc., and a full-on 'graphics processor' or
GPU or polygon processor chipset (GeForce 1-4, GFFX, all the Radeons,
the Lockheed Real3D series, 3DLabs GLINT+Delta, Evans & Sutherland
RealIMAGE, 3DLabs Wildcat, etc.)

What I am posting below is a very good (IMHO) post from 1996 from a
guy who explained the differences (and made a distinction) between
Voodoo Graphics or similar consumer 3D accelerators/rasterizers of the
time, and full 3D polygon processors (the equivalent of today's GPUs) with
geometry engines/processors, like Lockheed's non-consumer Real3D/100,
which was a true 'graphics processor' chipset (not the horrible
consumer Intel/R3D i740 used in Starfighter cards, which had not yet been
revealed in 1996). At that time, there were NO consumer PC 3D chips
with geometry processing / T&L. In other words, there were no consumer
GPUs in 1996, and none until 1999's GeForce256.


Yes, sure, the name GPU was invented back then. It was a 'revolution' in 3D
cards. The 'GPU' has more capabilities and hardware support than the
previous generations of vid cards.

*BUT*, there have been many many more revolutions, like for instance the
pixel shader. The DirectX 8 compliant cards are the first ones capable of
doing this. Great. But they didn't come up with a new name, like PSGPU, or
whatever. It's just that NVidia chose to change the name of the graphics
chip to GPU. For me, it's nonsense to claim that it's a special thing that
the Volari is the first dual-GPU video card, since you're referring to
a dual video-chip card. It's the card with the latest generation of video
chips to have been launched. But, well, since it's the most recent card,
there's nothing special in that.
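
As a concrete aside on the pixel-shader comparison: the DX8-era addition was
essentially a small program run per pixel in place of the fixed texture/blend
stage. Below is a tiny software stand-in in C; it is purely illustrative, not
any real shader API, and the function name and inputs are made up for the
example.

#include <stdio.h>

struct color { float r, g, b; };

/* Stand-in for a per-pixel program: modulate the sampled texture
 * colour by a per-pixel diffuse lighting term (clamped dot product
 * of normal and light direction).  Real DX8 shaders are small GPU
 * programs; this just shows the kind of math they run per pixel. */
static struct color shade_pixel(struct color tex, float n_dot_l)
{
    if (n_dot_l < 0.0f) n_dot_l = 0.0f;   /* light behind the surface */
    struct color out = { tex.r * n_dot_l, tex.g * n_dot_l, tex.b * n_dot_l };
    return out;
}

int main(void)
{
    struct color tex = { 0.8f, 0.2f, 0.2f };      /* sampled texel */
    struct color c = shade_pixel(tex, 0.5f);      /* grazing light */
    printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);    /* 0.40 0.10 0.10 */
    return 0;
}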

Well, this doesn't really lead anywhere ;-) I think we all know what we all
mean, so there's no point in arguing about names ;-)

Thomas


 



