HardwareBanter - a computer components & hardware forum
HardwareBanter forum » Video Cards » Ati Videocards

NV40 ~ GeForce 6800 specs



 
 
  #41 - posted April 15th 04, 01:25 PM (anonymous usenet poster)

On Thu, 15 Apr 2004 20:55:42 +1000, Ricardo Delazy wrote:

On Wed, 14 Apr 2004 19:13:30 GMT, "teqguy" wrote:


Most MPEG encoding is processor dependent... I wish developers would
start making applications that let the graphics card do video encoding,
instead of dumping the work on the processor.


I'm pretty sure I read somewhere that the (new & improved) Prescotty
processor has been given a special hard-wired instruction set dedicated
to encoding video, so that should speed things up somewhat.

I remember reading an article over a year ago in which Intel gave a demo
of a future-release CPU that was apparently running three full-screen HD
videos simultaneously, rotating on a 3D cube. The prototype processor was
not specified, but it may have been a Tejas, as it was rated at 5 GHz.



SSE3 won't make Intel CPUs as fast as dedicated DSPs for video encoding. It's an improvement over
SSE and SSE2, but it's still not fast enough. They should have embedded a full DSP (or more than
one) inside the CPU to achieve the same performance. The SSE extensions are still tied too closely
to the general-purpose x86 architecture, and their efficiency is poor compared to dedicated DSPs.
A $40-50 floating-point DSP can be three times faster than any SSE3-capable CPU at MPEG-2/MPEG-4
encoding.

If it's true that Nvidia has designed the NV40 as a full DSP, then it's just a matter of time and
SDK availability before programmers can access the NV40 DSP through DirectX or other dedicated
APIs, and well-known codecs such as DivX can take advantage of GPU power. The only problem is that
Nvidia needs a mainstream set of GPUs derived from this one, with MPEG encoding/decoding, on the
market ASAP to set a standard before ATI releases its own DSP-style GPUs with MPEG
encoding/decoding capability.

If the MPEG encoding/decoding in the NV40 were fixed in hardware, hard-wired, it would probably be
a fairly low-quality implementation, so I really hope the claims that the GPU is a full DSP are
true. Then programmers with DSP experience could upload their own filter code onto the GPU and
perform their own MPEG video encoding. I also hope the SDK for accessing the DSP features and
reprogramming MPEG encoding will be free, so that non-commercial, freeware encoders can appear in
the future and further exploit the GPU's capabilities.
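
[Editor's note: to make the hoped-for programming model concrete, here is a minimal sketch in Python. It is purely illustrative: no public NV40 DSP SDK existed, so GpuDspEncoder, CpuEncoder and make_encoder are invented stand-ins for whatever API a vendor might expose, and the "encoding" is just a placeholder.]

class GpuDspEncoder:
    """Stand-in for a GPU/DSP encoder exposed by a hypothetical vendor SDK."""
    def __init__(self, filter_kernel):
        self.filter_kernel = filter_kernel      # user-supplied "DSP filter" code

    def encode_frame(self, frame):
        # A real SDK would upload the frame and run the filter on the GPU;
        # here the filter simply runs on the CPU to keep the sketch runnable.
        return bytes(self.filter_kernel(frame))


class CpuEncoder:
    """Stand-in for a conventional software (SSE2/SSE3) codec path."""
    def encode_frame(self, frame):
        return bytes(frame)                     # placeholder "encode"


def make_encoder(gpu_dsp_available, filter_kernel):
    # Use the GPU DSP path when the hardware and SDK are present,
    # otherwise fall back to the ordinary CPU codec.
    return GpuDspEncoder(filter_kernel) if gpu_dsp_available else CpuEncoder()


if __name__ == "__main__":
    identity_filter = lambda frame: frame       # trivial stand-in for a real MPEG filter
    encoder = make_encoder(gpu_dsp_available=False, filter_kernel=identity_filter)
    print(len(encoder.encode_frame(bytearray(720 * 480))))  # one dummy 720x480 frame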


  #42 - posted April 15th 04, 04:14 PM by chrisv

"Mark Leuck" wrote:

"chrisv" wrote in message
.. .
"teqguy" wrote:

The FX series is 28 to 23, ranging from the 5950 to the 5200.


Better late than never, I guess.

If you have nothing to contribute, shut up.


If you're just going to post drivel, shut up.

What's drivel is your obsessive need to critique everything anyone ever says.


Wrong again.


If I recall, you are the same chrisv who stated over and over in
alt.computer.storage that IBM never had a problem with its last batch of
Deathstar hard drives.


Too stupid to figure out that I was parodying the "great" Ron Reaugh
with those posts. The Google record proves that I was in fact well
aware of the "Deathstar's" problems.

Ignore the troll, folks; he knows not what he says.


It's true, you don't have a clue.

  #43 - posted April 15th 04, 05:20 PM by Eric Witte

SCSI only operates at 320Mb/s.

320 MB/s, actually. But you need a lot of drives to saturate that. Any single
IDE drive could easily do 320 Mb/s; that's only 40 MB/s.

Eric
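
[Editor's note: a quick unit check, assuming only that 1 byte = 8 bits and that Ultra320 SCSI is rated at 320 megabytes per second for the shared bus.]

# Megabits (Mb) vs. megabytes (MB): the distinction the two posts above hinge on.
BITS_PER_BYTE = 8

def mbit_to_mbyte(mbit_per_s):
    """Convert a rate in megabits/s to megabytes/s."""
    return mbit_per_s / BITS_PER_BYTE

print(mbit_to_mbyte(320))   # 40.0  -> 320 Mb/s is only 40 MB/s
print(320 * BITS_PER_BYTE)  # 2560  -> Ultra320's 320 MB/s expressed in Mb/s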
  #45 - posted April 15th 04, 06:37 PM by teqguy

DaveL wrote:

I think Nvidia learned their lesson about that from the 5800U
debacle. It was ATI that stayed with the old standard and took the
lead in performance. Meanwhile, Nvidia was struggling with fab
problems.

DaveL


"Ar Q" wrote in message
link.net...

Isn't it time for NVidia to use a 0.09 µm process? How could they put
so many features in if they're still using a 0.13 µm process?







Heat generation is still too much of a risk for moving to 90-nm.



If AMD moved to 0.09 µm... I'd have a new toaster. Say goodbye to
overclocking at that point.




The "features" can be expandable as much as they like.... right now
they aren't even using the entire wafer for such optimizations, only a
small section.



A lot of those optimizations are software-based too... the GPU just has
to support the relative ballpark of them.
  #46 - posted April 15th 04, 11:25 PM by JBM


"teqguy" wrote in message
...
DaveL wrote:

I think Nvidia learned their lesson about that from the 5800U
debacle. It was ATI that stayed with the old standard and took the
lead in performance. Meanwhile, Nvidia was struggling with fab
problems.

DaveL


"Ar Q" wrote in message
link.net...

Isn't it time for NVidia to use a 0.09 µm process? How could they put
so many features in if they're still using a 0.13 µm process?







Heat generation is still too much of a risk for moving to 90-nm.



If AMD moved to 0.09 µm... I'd have a new toaster. Say goodbye to
overclocking at that point.




The "features" can be expandable as much as they like.... right now
they aren't even using the entire wafer for such optimizations, only a
small section.


That's going to be one big honking chip when they use the whole
wafer.

Jim M



A lot of those optimizations are software-based too... the GPU just has
to support the relative ballpark of them.



  #47 - posted April 16th 04, 01:58 AM by G

(Eric Witte) wrote:
(G) wrote:
K wrote:

I have a gut feeling that PCI Express will do very little for performance,
just like AGP before it. Nothing can substitute for lots of fast RAM on the
video card, which keeps textures from being shipped across to the much
slower system RAM. You could have the fastest interface imaginable for
your video card; it would do little to make up for the bottleneck that
is your main memory.




But what about for things that don't have textures at all?

PCI Express is not only bi-directional but full duplex as well. The
NV40 might even use this to great effect, with its built-in hardware-accelerated
MPEG encoding/decoding plus "HDTV support" (which I assume
means it natively supports 1920x1080 and 1280x720 without having to
use PowerStrip). The lower-cost version should be sweet for Shuttle-sized
media PCs that will finally be able to "TiVo" HDTV.

I can also see the 16X slot being used in servers for other things
besides graphics. Maybe in a server you'd want your $20k SCSI RAID
Controller in it. Or in a cluster box a 10 gigabit NIC.


Why even mess with a 16x PCI-E slot? A 10 Gbit NIC could be handled by
a 3-4x PCIe link. All you need is a 1x slot for most of what is out there
today. I would like to see something that could handle 8 GB/s of
bandwidth.

Eric



Absolutely. The 16X comment was just an example. It's way more likely
that a server would have 1@16x, 3@4x, and 4@1x (or something like
that). In fact I don't see the number and/or speed of expansion slots
being a big "server vs desktop" differentiator after PCIe catches on.
I've even heard that external bus expansion housings are possible.

Anyway, PCI Express looks like it has tons of flexibility. Not that
PCI-X couldn't have lasted for a while longer. But one bus to get rid
of the four we have now *AND* increase headroom for the future *AND*
add new features that AGP lacks *AND* reduce the wire/pin count at the
same time is a Good Thing.
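
[Editor's note: for rough orientation, a back-of-the-envelope comparison, assuming first-generation PCI Express at about 250 MB/s per lane per direction and AGP 8x at roughly 2.1 GB/s peak. These are theoretical figures, not sustained rates.]

# Approximate peak bandwidth per direction for first-generation PCI Express,
# compared with AGP 8x. Theoretical maxima only.
PCIE1_MB_PER_LANE = 250     # ~250 MB/s per lane, per direction (PCIe 1.x)
AGP8X_MB = 2133             # AGP 8x peak, roughly 2.1 GB/s, one direction at a time

for lanes in (1, 4, 8, 16):
    print(f"PCIe x{lanes:<2}: ~{PCIE1_MB_PER_LANE * lanes} MB/s each way")
print(f"AGP 8x  : ~{AGP8X_MB} MB/s (not full duplex)")

# A x16 slot (~4000 MB/s each way) roughly doubles AGP 8x, yet both are still far
# below the local memory bandwidth of a card like the 6800 Ultra (256-bit GDDR3
# at ~1.1 GHz effective is roughly 35 GB/s), which is the bottleneck K describes.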
  #48 - posted April 16th 04, 05:55 AM by John Lewis

On Wed, 14 Apr 2004 17:17:07 GMT, "Ar Q" wrote:


"NV55" wrote in message
om...
the following is ALL quote:


http://frankenstein.evilgeniuslabs.c...nv40/news.html


Tuesday, April 13, 2004

NVIDIA GeForce 6800 GPU family officially announced - Cormac @ 17:00
It's time to officially introduce the new GPU generation from NVIDIA
and shed light on its architecture and features.

So, the GeForce 6800 GPU family, codenamed NV40, today officially
entered the distribution stage. Initially it will include two chips,
GeForce 6800 Ultra and GeForce 6800, with the same architecture.


These are the key innovations in NVIDIA's new chips:

* 16-pipeline superscalar architecture with 6 vertex units, GDDR3 support, and real 32-bit pipelines
* PCI Express x16 and AGP 8x support
* 222 million transistors
* 400 MHz core clock
* Chips made by IBM
* 0.13 µm process
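
[Editor's note: for scale, those headline numbers imply a theoretical fill rate of about 6.4 gigapixels per second; a minimal sketch of the arithmetic, peak figures only.]

# Theoretical peak pixel fill rate implied by the announced NV40 specs.
pixel_pipelines = 16
core_clock_mhz = 400

fill_rate_mpixels = pixel_pipelines * core_clock_mhz   # 6400 Mpixels/s
print(f"Peak fill rate: {fill_rate_mpixels} Mpixels/s "
      f"(~{fill_rate_mpixels / 1000:.1f} Gpixels/s)")
# Sustained rates are lower: memory bandwidth and shader work per pixel usually
# limit throughput well before the raw pipeline count does.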


Isn't it time for NVidia to use a 0.09 µm process? How could they put
so many features in if they're still using a 0.13 µm process?


The NV40 die is about 0.75 inches on a side, and all the features are in
there. The part will have been stress-tested by a vector-test program
that completely exercises all of its functions before it is ever supplied
to a third party for incorporation into a 6800 video card.

Future generations of this GPU will be on a smaller process. The
current NV40 chip is made by IBM. IBM is working on a 65 nm (0.065 µm)
process that AMD will use when it is sufficiently mature, and no doubt
nVidia will be among the first users of that process as well. It would
shrink the existing die area by a factor of four and drop the power by
roughly a factor of six. It will probably take a couple of years to get
there; nVidia will not make the mistake of using an immature process
again.

John Lewis
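
[Editor's note: the "factor of four" area figure above follows from simple geometric scaling. A minimal sketch using the 0.75-inch die size quoted above, assuming ideal scaling and ignoring structures such as pads that don't shrink linearly.]

# Ideal die-area scaling from a 130 nm process to a 65 nm process.
# Linear dimensions scale with the process node, area with its square.
MM_PER_INCH = 25.4
die_side_mm = 0.75 * MM_PER_INCH            # the 0.75-inch figure quoted above
die_area_mm2 = die_side_mm ** 2             # ~363 mm^2

linear_shrink = 130 / 65                    # 2.0
area_shrink = linear_shrink ** 2            # 4.0 -> the "factor of four" above

print(f"NV40 die area      : ~{die_area_mm2:.0f} mm^2")
print(f"Ideal 65 nm shrink : ~{die_area_mm2 / area_shrink:.0f} mm^2")
# The ~6x power-reduction claim does not follow from geometry alone; it depends
# on voltage scaling and leakage, so treat it as the poster's estimate.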





  #49 - posted April 19th 04, 06:19 AM by Derek Baker


"joe smith" wrote in message
...
pfft. You don't even know what the ATI offering is as yet, much less are
you able to buy a 6800 until well into next month.


No, I do not. I wrote that the rumor is that ATI wouldn't have 3.0-level
shaders. I was commenting on a rumor; if it isn't true, then the situation
is naturally entirely different. The confidentiality/NDA ends on the 19th
of this month, so soon after that we should begin to see cards trickling
onto the shelves like always (I've just noticed a trend over the past 5-7
years; I could be wrong, but I wouldn't die if I had to wait even 2
months.. or 7.. or 3 years.. the stuff will get here sooner or later..
unless the world explodes before that).

[Snipped]

19th? Where did you get that date from?

--
Derek


  #50 - posted April 19th 04, 08:32 AM by joe smith

19th? Where did you get that date from?

"Confidential until April 19th 2004" stamped over slides, etc. material you
find from here and there.


 







