A computer components & hardware forum. HardwareBanter


max transfer update



 
 
  #1  
Old June 18th 11, 05:49 PM posted to alt.comp.hardware
Geoff[_9_]
external usenet poster
 
Posts: 45
Default max transfer update

Hello

I have just installed 2 TP-Link 1000Mbps TG-3269 NICs - replacing the 2
ADDON NIC1000Rv2 NICs.

From the Windows 7 PC to the XP Pro PC I now get 20MB/sec, and
from the XP Pro PC to the Windows 7 PC (both using Windows Explorer on
the Windows 7 PC) I get 36.9MB/sec.

Both figures are much better than before (using auto-negotiate) but not
earth shattering!

Cheers

Geoff
  #2  
Old June 18th 11, 11:39 PM posted to alt.comp.hardware
Paul
external usenet poster
 
Posts: 13,364
Default max transfer update

Geoff wrote:
> I have just installed 2 TP-Link 1000Mbps TG-3269 NICs - replacing the 2
> ADDON NIC1000Rv2 NICs.
>
> From the Windows 7 PC to the XP Pro PC I now get 20MB/sec, and
> from the XP Pro PC to the Windows 7 PC I get 36.9MB/sec.

That's more like it.

Now, you need to run some benchmarks, to check for PCI bus issues.
The machine where I only get 70MB/sec best case uses a VIA chipset.
The other machines, with the better numbers, have less dodgy
combinations of hardware. And the VIA chipset machine is using
the same TG-3269 you're using. (They're the cheapest
cards I could buy here.) If you have bad PCI bus performance,
you might get a number like the 70MB/sec I saw.

Looking at the RCP protocol and doing some simple minded hand
calculations, I feel you could get 119MB/sec out of the link
theoretical max of 125MB/sec, using RCP. I managed to get 117MB/sec
with the hardware that works the best. So I'm reasonably happy
with the result. But when using other protocols, the rate drops.
I was surprised when my FTP tests, didn't give me good results.
My past experience was, I could transfer faster with FTP,
than with Windows file sharing. (Using Windows XP Pro, I can
install IIS web server, and there is also an FTP server
hiding in the installation options. That's how I can set up
an FTP test case. I don't leave IIS running longer than
necessary, and it's been removed again.)

The rate can drop a lot, if there is "packet fragmentation", where
the network path is required to figure out what size of packet
will fit. We had some problems with "work at home" from stuff
like that, where the link was encrypted, and the MTU from
the encrypted path was smaller than normal. File sharing
performance was "close to zero", but the files were very secure :-(
If you can't download the files, I guess that makes them
secure.
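As a back-of-the-envelope check on the 119MB/sec hand calculation, here is a
sketch of the per-frame overhead arithmetic. The header and gap sizes are the
standard Ethernet/IP/TCP ones; the exact result depends on TCP options, so
treat the figure as approximate:

```python
# Per-frame overhead on gigabit Ethernet, in bytes (standard values).
PREAMBLE, ETH_HDR, FCS, IFG = 8, 14, 4, 12
IP_HDR, TCP_HDR = 20, 20  # no TCP options assumed

def payload_rate_mb_s(mtu, line_rate_mb_s=125.0):
    """TCP payload throughput if every frame carries a full MTU."""
    payload = mtu - IP_HDR - TCP_HDR          # bytes of file data per frame
    wire = mtu + PREAMBLE + ETH_HDR + FCS + IFG  # bytes on the wire per frame
    return line_rate_mb_s * payload / wire

print(round(payload_rate_mb_s(1500), 1))  # ~118.7, near the hand-calculated 119
print(round(payload_rate_mb_s(1400), 1))  # ~117.2: a smaller tunnel MTU barely
                                          # hurts raw efficiency - the "close to
                                          # zero" case comes from fragmentation
                                          # and retransmits, not this overhead
```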

Paul
  #3  
Old June 19th 11, 11:30 AM posted to alt.comp.hardware
Geoff[_9_]
external usenet poster
 
Posts: 45
Default max transfer update

On Sat, 18 Jun 2011 18:39:21 -0400, Paul wrote:

> Now, you need to run some benchmarks, to check for PCI bus issues.
> If you have bad PCI bus performance, you might get a number like
> the 70MB/sec I saw.


Paul,

I have installed FileZilla server on the XP Pro PC and the client on
the Windows 7 PC.

from Windows 7 to XP Pro I get 31MB/sec
from XP Pro to Windows 7 I get 45MB/sec

Using iperf I get similar results.
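For anyone without iperf to hand, a minimal throughput probe can be improvised
with Python's socket module: run the script on one machine with the server
address, point the client side at it, and compare the reported rate with
iperf's. (This is a sketch, not a substitute for iperf; shown here running
over loopback so it is self-contained.)

```python
import socket
import threading
import time

CHUNK = 64 * 1024          # bytes per send/recv call
TOTAL = 16 * 1024 * 1024   # bytes to transfer in this demo

def serve(sock, counter):
    """Accept one connection and drain it, counting bytes received."""
    conn, _ = sock.accept()
    with conn:
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            counter[0] += len(data)

def measure_throughput(host="127.0.0.1"):
    srv = socket.socket()
    srv.bind((host, 0))            # pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    received = [0]
    t = threading.Thread(target=serve, args=(srv, received))
    t.start()

    payload = b"\x00" * CHUNK
    start = time.perf_counter()
    with socket.create_connection((host, port)) as cli:
        sent = 0
        while sent < TOTAL:
            cli.sendall(payload)
            sent += len(payload)
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return received[0], received[0] / elapsed / 1e6  # (bytes, MB/sec)

if __name__ == "__main__":
    nbytes, rate = measure_throughput()
    print(f"moved {nbytes} bytes at {rate:.0f} MB/sec over loopback")
```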

Any idea why quicker from XP Pro to Windows 7 than the reverse?

I have selected 1Gbps for each NIC at the moment.

My figures are a long way from even 400MB/sec !?

Cheers

Geoff
  #4  
Old June 19th 11, 12:41 PM posted to alt.comp.hardware
Geoff[_9_]
external usenet poster
 
Posts: 45
Default max transfer update

On Sun, 19 Jun 2011 11:30:21 +0100, Geoff wrote:

> from Windows 7 to XP Pro I get 31MB/sec
> from XP Pro to Windows 7 I get 45MB/sec
>
> My figures are a long way from even 400MB/sec !?

oops! I should have written 400Mbps. 45MB/sec is 360Mbps, which is
getting close, so perhaps I ought to be happy?!
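The unit mix-up above is easy to make; the conversion is just a factor of
eight (ignoring the decimal/binary-prefix distinction, which is good enough
for rough figures like these):

```python
def mb_per_s_to_mbit(mb_s):
    """Convert megabytes/sec to megabits/sec (8 bits per byte)."""
    return mb_s * 8

print(mb_per_s_to_mbit(45))   # 360 Mbps - Geoff's best FTP rate
print(mb_per_s_to_mbit(125))  # 1000 Mbps - the gigabit line rate
```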

Geoff



  #5  
Old June 19th 11, 12:47 PM posted to alt.comp.hardware
Paul
external usenet poster
 
Posts: 13,364
Default max transfer update

Geoff wrote:
> I have installed FileZilla server on the XP Pro PC and the client on
> the Windows 7 PC.
>
> from Windows 7 to XP Pro I get 31MB/sec
> from XP Pro to Windows 7 I get 45MB/sec
>
> Using iperf I get similar results.
>
> Any idea why quicker from XP Pro to Windows 7 than the reverse?


I've seen the same kind of thing here. Namely, a different transfer
rate in one direction than in the other. The "nice" thing about these
test cases is that no two of them give the same results.

*******

About all I can suggest at this point, is examining the Device Manager
options for the NIC entry.

IPv4 Checksum Offload (I presume that's done in hardware)

Large Send Offload (IPv4)
Large Send Offload V2 (IPv4)
Large Send Offload V2 (IPv6)

You might try disabling the last three. Apparently, the features
are a function of the NDIS revision, so Microsoft plays a part
in defining those things. One web page I could find, claimed
enabling those could result in "chunking" of data transfers.
And perhaps more ACKs and smaller transmission windows are the
result.

It probably isn't your PCI bus. Even my crappy VIA situation managed
70MB/sec. There is one ancient AMD chipset, where the 32 bit PCI bus
was crippled at 25MB/sec instead of the more normal 110-120MB/sec,
but I doubt you're using that :-)

You can slow down a PCI bus by changing the burst size. It
was termed the "latency timer", but the setting has been
removed from modern BIOS. At one time, the default might have been a
setting of 32. People wishing to inflate a benchmark result would
crank it to 64 or larger, the idea being that higher values promote
PCI unfairness: a large value allows a longer burst, and gets you
closer to 120MB/sec or so.
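The effect of burst length can be sketched with a toy model: each burst pays
a fixed setup cost (arbitration plus the address phase), and longer bursts
amortize it better. The setup figure below is an assumption for illustration,
not a measured value, but the shape of the curve matches the 110-120MB/sec
numbers seen on healthy 33MHz/32-bit PCI:

```python
PCI_CLOCK_MHZ = 33.0   # conventional 32-bit / 33 MHz PCI
BUS_WIDTH_B = 4        # bytes transferred per clock during a burst
SETUP_CLOCKS = 6       # assumed arbitration/address overhead per burst

def pci_throughput_mb_s(burst_clocks):
    """Sustained MB/sec for back-to-back bursts of the given length."""
    useful_bytes = burst_clocks * BUS_WIDTH_B
    total_clocks = burst_clocks + SETUP_CLOCKS
    return PCI_CLOCK_MHZ * useful_bytes / total_clocks

# Theoretical ceiling is 132 MB/sec (33 MHz * 4 bytes); short bursts
# give away a big slice of it to per-burst overhead.
for burst in (8, 32, 64, 128):
    print(burst, round(pci_throughput_mb_s(burst), 1))
```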

I had one motherboard years ago, where you had to tune that one
*very* carefully to get good system operation. I spent hours
playing with that one. If you set it too low, the PC just
*crawled*. That wasn't exactly a pleasant motherboard to play with,
because it barely worked. It probably had a Pentium 3 processor or
the like. I think there was only one latency setting that made
the sound work properly while I could still use the disk. (Back
then, everything ran off PCI.)

Back when I was testing Win2K, it was the Win2K protocol stack that
limited performance to around 40MB/sec. Both of your OSes should be
able to do better than that.

So either it's a PCI bus issue, or it's one of those Device
Manager NIC options. Apparently, the offload settings can cause
really low transfer rates, and your transfer rates aren't that
bad.

Paul
  #6  
Old June 19th 11, 01:02 PM posted to alt.comp.hardware
Paul
external usenet poster
 
Posts: 13,364
Default max transfer update

Geoff wrote:
> oops! I should have written 400Mbps. 45MB/sec is 360Mbps, which is
> getting close, so perhaps I ought to be happy?!

You should be able to do better than the 70MB/sec I got on the
VIA chipset motherboard.

If the OSes you were testing were both Win2K, I'd tell you to stop. But
there is still hope...

Paul
  #7  
Old June 19th 11, 01:47 PM posted to alt.comp.hardware
John McGaw
external usenet poster
 
Posts: 732
Default max transfer update

On 6/19/2011 7:47 AM, Paul wrote:

> So either it's a PCI bus issue, or it's one of those Device
> Manager NIC options. Apparently, the offload settings can cause
> really low transfer rates, and your transfer rates aren't that
> bad.


I've noticed just recently that throughput on file copying is dependent
on more than the network. I have gigabit NICs on all of my machines and
noticed last week that I can get bursts of copy speed (standard Windows
file sharing) pushing toward the theoretical limit, but only when I'm
copying to two different machines at once.

Example: I had just finished editing a video sized about 900MB and, as
usual, offloaded it from the SSD on my work machine to my HTPC, where I
could view it on the big flat screen, and onto the server in the
basement for backup purposes. I started the copy to one machine and
noticed that the speed was jumping around 20-40MB/s, and then without
thinking I started the second copy before the first had completed. At
that point I saw that the downstream speed was spiking up around 90MB/s.

Neither destination machine would accept data as quickly as my i7+SSD
machine could spit it out, presumably because they have relatively
slower processors and 2TB 'green' spinning drives for storage, but
together they managed to bring the output of the i7 machine up to
levels I've never seen before. That means that, at least for dumping
data down the pipe, this machine is certainly up to the task, and to me
it looks as if the restriction is on the receiving/storing side.
  #8  
Old June 19th 11, 04:05 PM posted to alt.comp.hardware
Geoff[_9_]
external usenet poster
 
Posts: 45
Default max transfer update

On Sun, 19 Jun 2011 07:47:06 -0400, Paul wrote:

> About all I can suggest at this point, is examining the Device Manager
> options for the NIC entry.


Paul,

I have been playing around with these settings but no speed
improvement so far ...

Cheers

Geoff
  #9  
Old June 19th 11, 04:07 PM posted to alt.comp.hardware
Geoff[_9_]
external usenet poster
 
Posts: 45
Default max transfer update

On Sun, 19 Jun 2011 08:47:49 -0400, John McGaw wrote:

> I've noticed just recently that throughput on file copying is dependent
> on more than the network.

John,

I have seen the speed start at 40MB/sec and then quickly fall to
20MB/sec.

> Started the copy to one machine and noticed that the speed was jumping
> around 20-40MB/s, and then without thinking I started the second copy
> before the first had completed.


I only have 2 PCs, so I cannot try the above!

Cheers

Geoff
  #10  
Old June 19th 11, 04:30 PM posted to alt.comp.hardware
Paul
external usenet poster
 
Posts: 13,364
Default max transfer update

John McGaw wrote:

> Neither destination machine would accept data as quickly as my i7+SSD
> machine could spit it out ... to me it looks as if the restriction is
> on the receiving/storing side.


If you've got enough RAM on the machine, you can test with a RAMDisk
as a storage target. I've been using 1GB RAMdisks for my testing,
because my machines have 2GB, 3GB, and 4GB installed RAM (I tested
with four different computers, and two of them only have 2GB).

http://memory.dataram.com/products-a...ftware/ramdisk

When I was doing testing with Linux as the OS, I used LiveCDs,
and they happen to mount /tmp on RAM, which effectively results
in the same thing (a 1GB sized RAMDisk).

I was expecting my test results to be all over the place, and
so far, I haven't been disappointed.
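A quick way to see whether the storage target (rather than the network) is
the bottleneck is to time a raw sequential write to it, once against the
ordinary disk and once against the RAMDisk or /tmp. A minimal sketch, which
uses incompressible data so caching drivers can't flatter the number:

```python
import os
import tempfile
import time

def write_throughput_mb_s(target_dir, size_mb=64):
    """Time a sequential write of size_mb megabytes into target_dir."""
    block = os.urandom(1024 * 1024)  # 1 MB of incompressible data
    fd, path = tempfile.mkstemp(dir=target_dir)
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force it out of the OS write cache
        elapsed = time.perf_counter() - start
    finally:
        os.remove(path)
    return size_mb / elapsed

# Run this once per target: a directory on the spinning disk, then one on
# the RAMDisk (a drive letter on Windows, /tmp on a LiveCD).
print(round(write_throughput_mb_s(tempfile.gettempdir()), 1))
```

If the disk-backed figure comes in well under the network rates being
measured, the copy speed was never going to exceed it.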

Paul
 



