A computer components & hardware forum. HardwareBanter


possible NV40 specs plus NV50 tidbits



 
 
#1 - December 2nd 03, 03:30 AM - NV55

While surfing the Beyond3D and Rage3D forums, I found:

NV40
---
1) 600 MHz core on IBM's 0.13u technology, 48 GB/s memory bandwidth with
256-bit GDDR2 ( see the quick bandwidth check after this list ).
2) 8x2 ( possibly 16x0 or 16x1 mode, although I'd find that rather
stupid personally due to the focus on AA ).
3) FP32/FP16/FX16; this means PS1.4 is done in FX16 100% legally,
while it would seem logical for PS2.0 partial precision to be done in
FP16 unless MS decides to expose the HW better in an upcoming DX9
revision.
4) ( unsure ) HUGE die, NVIDIA is most likely artificially increasing
die size to make cooling more efficient.
5) Slightly beyond PS3.0 / VS3.0 specifications ( not anywhere near as
much as PS2.0+ and VS2.0+ were compared to the PS/VS2.0 standard
though, I assume ).
6) Support of a Programmable Primitive Processor
7) The only units being shared between the VS and the PS are the
texture lookup units ( NOT addressing units; addressing is still done
on a standard FP32 unit ).
8) Most likely no 512MB version; that's still overkill IMO.
9) PCI-Express support, most likely ( but not certainly ) through a
compatibility bridge between AGP and PCI-Express.
10) Completely new AA algorithm, most likely a stochastic approach
( a generic illustration of stochastic sampling follows at the end of this post ).
---
Release: February-March 2004
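
A quick back-of-the-envelope check on the 48 GB/s figure in item 1 (my own arithmetic, not part of the original rumour; it assumes "GB" means 10^9 bytes): a 256-bit bus needs an effective data rate of roughly 1.5 GT/s, i.e. 750 MHz DDR memory, to reach that number.

bus_width_bits = 256                      # rumoured memory bus width
target_bandwidth = 48e9                   # 48 GB/s, taken as 48 * 10^9 bytes/s

bytes_per_transfer = bus_width_bits / 8                  # 32 bytes move per transfer
required_rate = target_bandwidth / bytes_per_transfer    # transfers per second

print("effective data rate: %.2f GT/s" % (required_rate / 1e9))          # -> 1.50
print("implied DDR memory clock: %.0f MHz" % (required_rate / 2 / 1e6))  # -> 750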

And when it comes to the NV50...
---
1) Full ILDP; sharing of VS/PS units
2) 0.09u most likely
3) Not a TBDR!
---
Release: Mid 2005, most likely ( SIGGRAPH? )
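
On item 10, purely as a generic illustration of what "stochastic" sampling means (nothing here is confirmed NV40 behaviour, and scene_coverage is a made-up stand-in for whatever the hardware would actually evaluate): anti-alias by averaging randomly jittered sample positions inside each pixel, trading regular aliasing patterns for noise.

import random

def scene_coverage(x, y):
    # Hypothetical stand-in: 1.0 on one side of a diagonal edge, 0.0 on the other.
    return 1.0 if y < x else 0.0

def stochastic_aa(px, py, samples=16):
    # Average several randomly jittered samples inside pixel (px, py).
    total = 0.0
    for _ in range(samples):
        sx = px + random.random()
        sy = py + random.random()
        total += scene_coverage(sx, sy)
    return total / samples

# A pixel straddling the edge gets a fractional coverage value (about 0.5):
print(stochastic_aa(10, 10))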
#2 - December 2nd 03, 04:34 AM - Phrederik

They don't mention how many PCI slots the cooling system blocks... or how
many hundred watts the fan will require.

"NV55" wrote in message
om...
[quoted NV40/NV50 spec list snipped]



#3 - December 2nd 03, 05:50 PM - () |\/| 3 G /-\

maybe a noobie question, but any ideas on card length?

tim
"NV55" wrote in message
om...
[quoted NV40/NV50 spec list snipped]



#4 - December 2nd 03, 07:33 PM - John Lewis

On Tue, 02 Dec 2003 11:02:10 GMT, "Lenny" wrote:


>> They don't mention how many PCI slots the cooling system blocks... or how
>> many hundred watts the fan will require.

> It'll probably be a one PCI slot blocked affair again, that seems to be the
> trend with Nvidia these days - sadly - and then they'll leave it up to their
> partners to develop a 1-slot solution.


Er, the PCI slot next to the AGP is pretty useless anyway, since it
shares interrupts with the AGP slot. This PCI slot is only safely
useful for something that requires no interrupts.

John Lewis

> As for the fan, I guess it'll consume maybe half a watt or so, but the card
> as a whole might well need upwards of a hundred.



#5 - December 2nd 03, 10:26 PM - J.Clarke

On Tue, 02 Dec 2003 19:59:19 GMT
"Lenny" wrote:


>> Er, the PCI slot next to the AGP is pretty useless anyway, since it
>> shares interrupts with the AGP slot.

> This is not true, both because they don't share on ALL mobos, and also
> because sharing is NO PROBLEM in modern systems.


Lemme guess, you believe in the Tooth Fairy and the Easter Bunny too.

When he says it's shared, he means that it's hard wired to the same
interrupt--in other words if Windows dynamically reassigns the interrupt
for one it necessarily reassigns the interrupt for the other, which
_does_ cause problems if the devices in the two slots are both
high-traffic devices.

> If you have sharing issues, the hard/software in your system is faulty.

> I shared the IRQ on my vidcard with two other PCI devices (none in the
> 2nd slot) when I ran WinME, no issues whatsoever. On XP, I have FOUR
> devices sharing with vid-card. Again, no issues. Check your own box,
> chances are very good you'll have things sharing too. It's not an
> issue, that's the way the system is supposed to work.


You will find under XP that all devices nominally share the same
interrupt. That does not mean that they all use the same
interrupt--windows will reassign interrupts as required by the workload
on the system.

>> This PCI slot is only safely useful for something that requires no interrupts.

> There's almost no PCI devices that don't. PCI devices are almost
> exclusively I/O devices, and those almost exclusively use
> busmastering, which requires an interrupt.

> Besides, your statement is flat-out WRONG.


Actually, yours is not exactly "flat-out WRONG" but demonstrative of a
lack of understanding of how Windows assigns interrupts and of a lack of
experience that would show you that sometimes automatic interrupt
reassignment and interrupt sharing do not work as well as advertised.

--
--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
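
A side note on the "Check your own box" advice quoted above: on a Linux machine the kernel exposes this directly in /proc/interrupts, so a few lines of Python are enough to flag interrupt lines that have more than one device on them. A loose sketch (the column layout varies somewhat by kernel, so don't treat the parsing as authoritative):

# List interrupt lines with more than one handler attached, by parsing
# /proc/interrupts (Linux). The exact column layout varies by kernel,
# so the parsing here is deliberately loose.

with open("/proc/interrupts") as f:
    cpu_count = len(f.readline().split())       # header row: one column per CPU
    for line in f:
        fields = line.split()
        if not fields or not fields[0].rstrip(":").isdigit():
            continue                             # skip NMI/LOC/ERR summary rows
        irq = fields[0].rstrip(":")
        # After the per-CPU counters come the chip name, trigger type and a
        # comma-separated device list; keep the last token of each chunk.
        tail = " ".join(fields[1 + cpu_count:])
        devices = [chunk.split()[-1] for chunk in tail.split(",") if chunk.strip()]
        if len(devices) > 1:
            print("IRQ %s is shared by: %s" % (irq, ", ".join(devices)))

On Windows the same information is visible in Device Manager under View -> Resources by type -> Interrupt request (IRQ).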
#6 - December 4th 03, 08:17 AM - phobos

() |\/| 3 G /-\ wrote:

> maybe a noobie question, but any ideas on card length?
>
> tim
"NV55" wrote in message
om...

while surfing Beyond3D and Rage3D forums, I found:

NV40
---
1) 600Mhz core on IBM's 0.13u technology, 48GB/s memory bandwidth with
256-bit GDDR2
2) 8x2 ( possibly 16x0 or 16x1 mode, although I'd find that rather
stupid personally due to the focus on AA ).
3) FP32/FP16/FX16, this means PS1.4. is done in FX16 100% legally,
while it would seem logical for PS2.0. partial precision to be done in
FP16 unless MS decides to expose the HW better in an upcoming DX9
revision.
4) ( unsure ) HUGE die, NVIDIA is most likely artificially increasing
die size to make cooling more efficient.
5) Slightly beyond PS3.0. / VS3.0. specificiations ( not anywhere as
much as PS2.0.+ and VS2.0.+ were compared to the PS/VS2.0. standard
though, I assume ).
6) Support of a Programmable Primitive Processor
7) The only units being shared between the VS and the PS are the
texture lookup units ( NOT addressing units; addressing is still done
on a standard FP32 unit ).
8 ) Most likely no 512MB version, that's still overkill IMO.
9) PCI-Express support, most likely ( but not certainly ) through a
compatibility bridge between AGP and PCI-Express.
10) Completely new AA algorithm, most likely a stochaistic(sp?)
approach.
---
Release: February-March 2004

And when it comes to the NV50...
---
1) Full ILDP; sharing of VS/PS units
2) 0.09u most likely
3) Not a TBDR!
---
Release: Mid 2005, most likely ( SIGGRAPH? )





The closer we get to a generalized programmable GPU, the less space a
card will eventually take up. That, together with the feature-set
indications Beyond3D has written up about DirectX Next (and therefore
reasonable expectations of future hardware support), leads me to believe
generation-after-next cards (like NV45? or NV50?) will actually use fewer
chips, since they won't rely on VRAM as much as on a fair-sized
primary cache (like the L1 on a normal CPU).

So a 512MB card might be totally unnecessary. Virtual memory addressing
will help a GREAT deal, especially with hitches and stuttering: it
eliminates the need to keep absolutely every texture object in RAM at
once and lets the GPU pull in only what it needs when it needs it (a
single mip map, or a single portion of a texture, sort of like
tile-rendering for everything), so the AGP bus isn't constantly flooded
with large textures, only small transfers in real time without a
performance hit. (A rough sketch of the idea follows below.)
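
To put a rough number on that idea (a toy model of my own, not anything taken from the DirectX Next write-ups or NVIDIA documentation): if the hardware keeps resident only the mip level a surface actually needs, a distant object touches a tiny fraction of its full mip chain.

import math

def mip_level_needed(texels_per_pixel):
    # Pick the mip level whose texel density roughly matches the screen
    # footprint; a minified surface (texels_per_pixel > 1) needs a coarser level.
    return max(0, int(math.log2(max(texels_per_pixel, 1.0))))

def mip_size_bytes(base_size, level, bytes_per_texel=4):
    side = max(1, base_size >> level)           # each mip level halves the resolution
    return side * side * bytes_per_texel

base = 2048                                     # a 2048x2048 RGBA8 texture
levels = int(math.log2(base)) + 1
full_chain = sum(mip_size_bytes(base, l) for l in range(levels))

level = mip_level_needed(texels_per_pixel=16.0) # a distant, heavily minified surface
resident = mip_size_bytes(base, level)

print("full mip chain  : %.1f MiB" % (full_chain / 2.0**20))          # ~21.3 MiB
print("resident level %d: %.2f MiB" % (level, resident / 2.0**20))    # ~0.06 MiB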

#7 - December 4th 03, 11:05 AM - J.Clarke

On Fri, 05 Dec 2003 02:10:42 GMT
"Lenny" wrote:


>> Lemme guess, you believe in the Tooth Fairy and the Easter Bunny
>> too.

> Right. Nothing beats starting out a post discrediting the one you're
> replying to. Smart move, I must remember that one if I find facts
> aren't on my side for once.

>> When he says it's shared, he means that it's hard wired to the same
>> interrupt--in other words if Windows dynamically reassigns the
>> interrupt for one it necessarily reassigns the interrupt for the
>> other, which _does_ cause problems if the devices in the two slots
>> are both high-traffic devices.

> Which part of "this is the way it's supposed to work?" don't you
> understand?


Which part of "hard wired to the same interrupt" are you having trouble
with? It's _supposed_ to work by assigning interrupts independently as
needed. If two slots are hard-wired to the same interrupt then that
capability is defeated.

> There's no problems with sharing interrupts. For chrissakes, they're
> MEANT BY DESIGN to be shared!


And that design is itself a kluge intended to make up for the fact that
there are not enough interrupts available in the original PC
architecture to accommodate the number of devices that can be installed.

> Do you happen to own one of those nifty
> combined firewire/usb2 expansion cards by any chance? Believe it or
> not, but those things ACTUALLY WORK, despite being as you say,
> "high-traffic devices", and necessarily sharing the same interrupt.


Those boards are single devices intended to work from the same
interrupt, they are not independent devices designed by independent
teams for independent purposes. And how much traffic they can actually
handle is debatable.

> I happen to own one of them, and I can attest that indeed, there are no
> issues.


So let's see, now, you can perform a sustained transfer from the USB
side to the Firewire side and vice versa at the maximum speed allowed by
the standards with no trouble? So what devices do you have attached
that can provide data at those rates?

> Sorry, but you simply fail to produce a convincing argument for your
> case. Facts and reality speak against you.


Believe what you want to. But don't come crying to me when reality
bites you in the ass.

> IRQ sharing isn't a problem in the PCI world. Not saying it works
> flawlessly 100% of the time because virtually nothing about the PC
> does, but that's not the same as it being a significant source of
> trouble.


That's the point, it _doesn't_ work flawlessly 100% of the time even
when devices are not inserted in slots that force them to use the same
interrupt.

> Mostly people who spout this 'sharing is evil' nonsense are
> still perpetuating stuff that was relevant back in the old ISA days.
> Not so anymore.


Depends.

> You think the chipset and mobo makers are stupid or something, that
> they put in support for six or even seven PCI busmasters/slots if it
> wasn't possible to actually use them all without running into trouble?


It takes two to tango--even if the chipset design is perfect, that does
not mean that the six or seven PCI boards plugged into those slots are
all also perfect.

The designers of the chips and the designers of the motherboards make
the assumption that the people who are using them will be aware of their
limitations and will act accordingly rather than rashly assuming as you
seem determined to do that they can just plug things in willy-nilly and
have them work.

--
--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
#8 - December 4th 03, 11:47 PM - John Lewis

On Tue, 2 Dec 2003 17:26:34 -0500, "J.Clarke"
wrote:

> On Tue, 02 Dec 2003 19:59:19 GMT
> "Lenny" wrote:

>>> Er, the PCI slot next to the AGP is pretty useless anyway, since it
>>> shares interrupts with the AGP slot.

>> This is not true, both because they don't share on ALL mobos, and also
>> because sharing is NO PROBLEM in modern systems.

> Lemme guess, you believe in the Tooth Fairy and the Easter Bunny too.

> When he says it's shared, he means that it's hard wired to the same
> interrupt--in other words if Windows dynamically reassigns the interrupt
> for one it necessarily reassigns the interrupt for the other, which
> _does_ cause problems if the devices in the two slots are both
> high-traffic devices.


Thanks to another John, I did not immediately have to come back
and state the obvious... Lenny must be a software type for whom
all PC hardware-contention problems are magically resolved either
by a click of the keyboard or by the M$$ OS taking care of it
automatically...

For the properly-architected Amiga, probably yes. For the
legacy-riddled PC, frequently no. Other examples are on-board
disk controllers sharing interrupts with slot #3, and PCI slot #5
( if present ) sharing with another slot. Mostly OK if the traffic
volume can be shared AND the contending plug-ins/devices
all comply with the PCI 2.1 spec; the latter compliance is still a
minefield...

Try putting a SBLive! (non 5.1) in PCI slot #1, with a video card
already in the adjacent AGP slot.

John Lewis


[remainder of J.Clarke's post, quoted above in #5, snipped]


#9 - December 5th 03, 02:10 AM - Lenny


> Lemme guess, you believe in the Tooth Fairy and the Easter Bunny too.


Right. Nothing beats starting out a post discrediting the one you're
replying to. Smart move, I must remember that one if I find facts aren't on
my side for once.

> When he says it's shared, he means that it's hard wired to the same
> interrupt--in other words if Windows dynamically reassigns the interrupt
> for one it necessarily reassigns the interrupt for the other, which
> _does_ cause problems if the devices in the two slots are both
> high-traffic devices.


Which part of "this is the way it's supposed to work?" don't you understand?

There's no problems with sharing interrupts. For chrissakes, they're MEANT
BY DESIGN to be shared! Do you happen to own one of those nifty combined
firewire/usb2 expansion cards by any chance? Believe it or not, but those
things ACTUALLY WORK, despite being as you say, "high-traffic devices", and
necessarily sharing the same interrupt. I happen to own one of them, and I
can attest that indeed, there are no issues.

Sorry, but you simply fail to produce a convincing argument for your case.
Facts and reality speak against you.

IRQ sharing isn't a problem in the PCI world. Not saying it works flawlessly
100% of the time because virtually nothing about the PC does, but that's not
the same as it being a significant source of trouble. Mostly people who spout
this 'sharing is evil' nonsense are still perpetuating stuff that was
relevant back in the old ISA days. Not so anymore.

You think the chipset and mobo makers are stupid or something, that they put
in support for six or even seven PCI busmasters/slots if it wasn't possible
to actually use them all without running into trouble?


#10 - January 24th 04, 03:51 PM - Michael Clark

"Lenny" wrote in message:

>> Lemme guess, you believe in the Tooth Fairy and the Easter Bunny
>> too.

> Right. Nothing beats starting out a post discrediting the one
> you're replying to. Smart move, I must remember that one if I find
> facts aren't on my side for once.

>> When he says it's shared, he means that it's hard wired to the
>> same interrupt--in other words if Windows dynamically reassigns
>> the interrupt for one it necessarily reassigns the interrupt for
>> the other, which _does_ cause problems if the devices in the two
>> slots are both high-traffic devices.

> Which part of "this is the way it's supposed to work?" don't you
> understand?

> There's no problems with sharing interrupts. For chrissakes,
> they're MEANT BY DESIGN to be shared!


I've had serious performance issues with networked 3D gaming while
Windows ACPI had my video card assigned to the same interrupt as my
network card. When I moved the network card to a different PCI
slot, Windows gave it a different IRQ and my FPS increased by about
30 in certain situations.

J.Clarke's reply on DEC 04, 2003 4:05 AM is absolutely correct IMO.

Michael


 



