A computer components & hardware forum. HardwareBanter


Intel's FB-DIMM, any kind of RAM will work for your controller?



 
 
  #1
April 18th 04, 11:48 AM
Yousuf Khan

Intel is introducing a new type of memory module called the FB-DIMM (fully
buffered DIMM). Apparently the idea is to be able to put any kind of DRAM
technology (e.g. DDR1 vs. DDR2) behind a buffer without having to redesign
your memory controller. Of course, this intermediate step will add some
latency to DRAM accesses.

It is assumed that this is Intel's way of finally acknowledging that it has
to start integrating DRAM controllers into its CPUs, as AMD already does.
Of course, adding latency to the interface works against the main advantage
of integrating the DRAM controller in the first place.
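To make the latency cost concrete, here is a back-of-envelope sketch in
Python. The base DRAM latency and per-buffer hop delay are assumed numbers
for illustration only, not published figures:

```python
# Back-of-envelope model of the latency cost of a buffered, daisy-chained
# memory channel like FB-DIMM. All numbers are illustrative assumptions,
# not figures from the article.

BASE_DRAM_LATENCY_NS = 45.0   # assumed latency of the DRAM access itself
HOP_DELAY_NS = 4.0            # assumed pass-through delay per buffer hop

def fbdimm_read_latency(dimm_index: int) -> float:
    """Read latency for the Nth DIMM in the chain (0-based).

    A request passes through every buffer up to and including the target
    DIMM's buffer, and the reply passes back through the same buffers.
    """
    hops = dimm_index + 1
    return BASE_DRAM_LATENCY_NS + 2 * hops * HOP_DELAY_NS

for i in range(4):
    print(f"DIMM {i}: {fbdimm_read_latency(i):.0f} ns")
```

In this model the penalty also grows with the DIMM's position in the chain,
since each buffer forwards traffic for the DIMMs behind it.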

http://arstechnica.com/news/posts/1082164553.html

Yousuf Khan

--
Humans: contact me at ykhan at rogers dot com
Spambots: just reply to this email address ;-)


  #2
April 18th 04, 03:28 PM

A buffer is meant to reduce overall latency, not to increase it AFAIK.


On Sun, 18 Apr 2004 10:48:44 GMT, "Yousuf Khan" wrote:

> Intel is introducing a new type of memory module called the FB-DIMM (fully
> buffered DIMM). Apparently the idea is to be able to put any kind of DRAM
> technology (e.g. DDR1 vs. DDR2) behind a buffer without having to redesign
> your memory controller. Of course, this intermediate step will add some
> latency to DRAM accesses.
>
> It is assumed that this is Intel's way of finally acknowledging that it has
> to start integrating DRAM controllers into its CPUs, as AMD already does.
> Of course, adding latency to the interface works against the main advantage
> of integrating the DRAM controller in the first place.
>
> http://arstechnica.com/news/posts/1082164553.html
>
> Yousuf Khan


  #3
April 18th 04, 06:37 PM
Yousuf Khan

 wrote in message ...

> A buffer is meant to reduce overall latency, not to increase it AFAIK.


Not necessarily; a buffer can also be meant to increase overall bandwidth,
which may come at the expense of latency.
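A toy model of that trade-off, with all constants assumed for illustration:
a pipelined buffer makes each individual access slower but lets accesses
overlap in flight, raising total throughput.

```python
# Toy model of the bandwidth-vs-latency trade a buffer introduces.
# All constants are illustrative assumptions.

SERVICE_NS = 50.0   # assumed time the DRAM needs per request
BUFFER_NS = 5.0     # assumed extra delay the buffer adds per request
OVERLAP = 2         # assumed number of requests the buffer keeps in flight

def unbuffered(n_requests: int):
    """One request at a time: lower latency, lower throughput."""
    latency = SERVICE_NS
    total = n_requests * SERVICE_NS
    return latency, total

def buffered(n_requests: int):
    """Pipelined: each request is slower, but requests overlap."""
    latency = SERVICE_NS + BUFFER_NS
    # after the first request, one completes every SERVICE_NS / OVERLAP
    total = latency + (n_requests - 1) * SERVICE_NS / OVERLAP
    return latency, total

lat_u, tot_u = unbuffered(100)
lat_b, tot_b = buffered(100)
print(f"unbuffered: {lat_u} ns/request, {tot_u} ns for 100 requests")
print(f"buffered:   {lat_b} ns/request, {tot_b} ns for 100 requests")
```

Per-request latency goes up, yet the buffered run finishes the batch sooner:
bandwidth bought with latency.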

Yousuf Khan


  #4
April 18th 04, 10:43 PM

On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan" wrote:

> wrote in message ...
>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>
> Not necessarily; a buffer can also be meant to increase overall bandwidth,
> which may come at the expense of latency.


A cache on the CPU is not meant to increase bandwidth but to decrease the
overall latency of retrieving data from slower RAM. More cache-like buffers
in the path through the memory controller can only improve latency, unless
there are some serious design flaws. I've never seen a CPU that gets slower
at accessing data when it can cache and has a good hit/miss ratio.

  #6
April 19th 04, 01:38 AM

On Sun, 18 Apr 2004 22:32:32 GMT, daytripper wrote:

> On Sun, 18 Apr 2004 21:43:19 GMT, wrote:
>
>> [snip]
>>
>> I've never seen a CPU that gets slower at accessing data when it can
>> cache and has a good hit/miss ratio.
>
> You're using "buffer" interchangeably with "cache" - a mistake our Yousuf
> would never, ever make. Caches and their effects aren't pertinent to a
> discussion of the buffering technique found on Fully Buffered DIMMs and
> their effects on latency and bandwidth...


FB-DIMMs are supposed to work with an added cheap CPU or DSP plus some fast
RAM. I doubt embedded DRAM on-chip, simply due to higher costs, but you never
know how cheap they could make a product if they really wanted to, and no
expensive DSP or CPU is needed there anyway for the FB-DIMM to work. I know
how both caches and buffers work (circular buffering, FIFO buffering, and so
on), and because they're sometimes used to achieve similar results (as on DSP
architectures, where buffering is key to performance with proper assembly
code), it's not that wrong to refer to a cache as a buffer: even if the
mechanism is quite different, the goal is almost the same. The truth is that
both ways of making data faster to retrieve are useful, and a proper
combination of these techniques can achieve higher performance at both the
bandwidth and latency levels.

  #7
April 19th 04, 02:53 AM
David Schwartz


 wrote in message ...

> On Sun, 18 Apr 2004 17:37:36 GMT, "Yousuf Khan" wrote:
>
>>> A buffer is meant to reduce overall latency, not to increase it AFAIK.
>>
>> Not necessarily; a buffer can also be meant to increase overall bandwidth,
>> which may come at the expense of latency.
>
> A cache on the CPU is not meant to increase bandwidth but to decrease the
> overall latency of retrieving data from slower RAM.


Yes, but not by making the RAM any faster: by avoiding RAM accesses
altogether. We add cache to the CPU because we admit our RAM is slow.

> More cache-like buffers in the path through the memory controller can only
> improve latency, unless there are some serious design flaws.


That makes no sense. Everything between the CPU and the memory will
increase latency. Even caches increase worst-case latency, because some time
is spent searching the cache before we start the memory access. I think
you're confused.
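That point can be put in numbers with the standard average-access-time
formula; the cache lookup and DRAM times below are assumed for illustration:

```python
# Average vs. worst-case access time with a cache in the path. The lookup
# time is always paid, so the worst case (a miss) is strictly slower than
# going to DRAM directly. Numbers are illustrative assumptions.

CACHE_LOOKUP_NS = 2.0   # assumed time to search the cache
DRAM_NS = 60.0          # assumed DRAM access time

def average_access_ns(hit_rate: float) -> float:
    """Average access time: the lookup always happens; misses then pay DRAM."""
    return CACHE_LOOKUP_NS + (1.0 - hit_rate) * DRAM_NS

print(f"no cache, every access: {DRAM_NS:.1f} ns")
print(f"average with 95% hits:  {average_access_ns(0.95):.1f} ns")
print(f"worst case (a miss):    {CACHE_LOOKUP_NS + DRAM_NS:.1f} ns")
```

With a good hit rate the average improves dramatically, but the worst case
(lookup plus full DRAM access) is always slower than DRAM alone, which is the
distinction being made here.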

> I've never seen a CPU that gets slower at accessing data when it can cache
> and has a good hit/miss ratio.


Except that we're talking about memory latency due to buffers. And by
memory latency we mean the maximum time between when we ask the CPU to read
a byte of memory and when we get that byte.

DS


  #8
April 19th 04, 04:46 AM
daytripper

On Mon, 19 Apr 2004 00:38:16 GMT, wrote:

> On Sun, 18 Apr 2004 22:32:32 GMT, daytripper wrote:
>
>> [snip]
>>
>> You're using "buffer" interchangeably with "cache" - a mistake our Yousuf
>> would never, ever make. Caches and their effects aren't pertinent to a
>> discussion of the buffering technique found on Fully Buffered DIMMs and
>> their effects on latency and bandwidth...
>
> FB-DIMMs are supposed to work with an added cheap CPU or DSP plus some fast
> RAM. I doubt embedded DRAM on-chip, simply due to higher costs, but you
> never know how cheap they could make a product if they really wanted to,
> and no expensive DSP or CPU is needed there anyway for the FB-DIMM to work.
> I know how both caches and buffers work (circular buffering, FIFO
> buffering, and so on), and because they're sometimes used to achieve
> similar results (as on DSP architectures, where buffering is key to
> performance with proper assembly code), it's not that wrong to refer to a
> cache as a buffer: even if the mechanism is quite different, the goal is
> almost the same. The truth is that both ways of making data faster to
> retrieve are useful, and a proper combination of these techniques can
> achieve higher performance at both the bandwidth and latency levels.


Ummm.....no. You're still missing the gist of the discussion, and confusing
various forms of caching with the up- and down-sides of using buffers in a
point-to-point interconnect.

Maybe going back and starting over might help...

/daytripper
  #9
April 19th 04, 11:28 AM
Felger Carbon

"Yousuf Khan" wrote in message
t.cable.rogers.com..
..
wrote in message
...
A buffer is meant to reduce overall latency, not to increase it

AFAIK.

Not necessarily, a buffer is also meant to increase overall

bandwidth, which
may be done at the expense of latency.


This particular buffer reduces the DRAM interface pinout by a factor
of 3 for CPU chips having the memory interface on-chip (such as
Opteron, the late and unlamented Timna, and future Intel CPUs). This
reduces the cost of the CPU chip while increasing the cost of the DIMM
(because of the added buffer chip).

And yes, the presence of the buffer does increase the latency.

There are other tradeoffs, the main one being the ability to add lots
more DRAM into a server. Not important for desktops. YMMV.
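A sketch of the pin-count arithmetic behind the "factor of 3" point. The
per-channel pin counts below are rough assumptions for illustration, not
exact spec figures:

```python
# Rough arithmetic for the pinout argument: a narrow buffered channel
# needs far fewer controller pins than a wide parallel DRAM channel.
# Per-channel counts are assumed approximations, not spec figures.

DDR2_PINS_PER_CHANNEL = 240   # assumed, order of a parallel DDR2 channel
FBD_PINS_PER_CHANNEL = 69     # assumed, order of an FB-DIMM channel

channels = 4
print(f"DDR2:    {channels * DDR2_PINS_PER_CHANNEL} controller pins")
print(f"FB-DIMM: {channels * FBD_PINS_PER_CHANNEL} controller pins")
print(f"ratio:   {DDR2_PINS_PER_CHANNEL / FBD_PINS_PER_CHANNEL:.1f}x")
```

Under these assumed counts the saving is roughly 3x per channel, which is
also why more channels (and thus more DIMMs) become affordable on a server
part.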


  #10
April 19th 04, 01:33 PM
chrisv

 wrote:

> FB-DIMMs are supposed to work...


Do you ever get it right, Geno? I don't think I've seen it...

 






Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.