A computer components & hardware forum. HardwareBanter


Pathetic Performance of RA4x000 RAID Arrays



 
 
  #1  
Old March 22nd 07, 05:36 AM posted to alt.sys.pc-clone.compaq.servers
Will

I've been using the RA4x000 as a cheap disk source for applications that do not
require very high disk performance. I had assumed the RA4x000 RAID
systems would bottleneck on the 1 Gbit fibre channel interface, which is far
more throughput than I could get out of even six drives in a RAID 0 array.
Well, I was wrong. I measured performance today on a ProLiant 6400R with
4 GB of memory, copying from a RAID 5 array of five 15K rpm SCSI drives to a
RAID 5 array of six 15K rpm SCSI drives. Each array is located on a
separate RA4x000, locally attached by a separate Compaq FC 64-bit PCI card.
I am getting an absolutely miserable 2.5 MB per second average write
performance at the destination.

How is the above result possible? Each drive should pull data at a minimum of
3 MB per second, and five 15K drives in a RAID 5 should give me at minimum 10
MB/second read and write performance. I don't have bottlenecks in system
memory, on the PCI bus, or on the fibre channel bus. I just don't see where
the 2.5 MB/sec could be coming from unless it's hard coded into the array
itself. I was measuring performance on the "physicaldrive" counters in
Windows Performance Monitor, read and write, on each drive defined by the
separate RA4x000 arrays.
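As a sanity check, the expectation above can be sketched in a few lines. The per-drive rate is an assumed, deliberately conservative placeholder, not a measurement from the RA4x000:

```python
# Back-of-envelope RAID 5 sequential-throughput estimate.
# The per-drive MB/s figure is an assumption for illustration only.

def raid5_sequential_mbps(drives: int, per_drive_mbps: float) -> float:
    """For large sequential transfers, RAID 5 can stream from all
    data spindles in parallel; roughly one drive's worth of capacity
    holds parity, so about (n - 1) spindles contribute useful data."""
    return (drives - 1) * per_drive_mbps

# Five 15K drives at a very conservative 3 MB/s each:
print(raid5_sequential_mbps(5, 3.0))  # 12.0 MB/s expected, vs 2.5 observed
```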

What current-generation product is going to give me at least 20 MB/second
read and write performance with a RAID 5 array of at least six drives?
Heck, with my commodity home-user SATA arrays I am getting 30 MB/second read
performance, over gigabit Ethernet at that. Does Compaq make anything on
the low end that can sustain a consistent 20 MB/second or faster average
read and write speed?

--
Will


  #2  
Old March 22nd 07, 06:26 AM posted to alt.sys.pc-clone.compaq.servers
Will

The 2.5 MB/sec average write speed was seen while reading and
writing a single 250 MB file, and the file was entirely contiguous on a 72 GB
logical drive that had about 40 GB unused. I examined the file's
allocation units on the drive with a defragmentation tool to verify that
everything was contiguous.

--
Will


  #3  
Old March 24th 07, 03:54 PM posted to alt.sys.pc-clone.compaq.servers
Nut Cracker

Hello Will,

Can you provide a little more information about how you are connecting your
server to the RA units? Which HBAs are you using, with what drivers and
settings (if applicable)? Is there an FC-AL hub or switch involved? What is
the acceleration policy on the RA (RAID volume) configured for? What stripe
size, cluster size (the allocation unit used when formatting your array
volumes), and what type of application uses these disks (file server,
Exchange, SQL, etc.)?

I have actually seen performance with the RA4000s that exceeded my
expectations. The 4000 and 4100 use the same array controller module in the
chassis; the only difference between the two is that the 4000 supports
WUS3 disks (12x1" or 8x1.6" drives) in the old tongue-style trays, while
the 4100 supports the new-style U2/U3 trays (and SCA backplane) with 12x1"
disks.

To achieve optimal storage performance, there are A LOT of factors that must
be taken into account when setting it up.

I've been working with NT-based systems for probably 13 years, and with new
implementations there is still an allocated period of setup and testing to
ensure that the storage is performing up to its potential.

- LC



  #4  
Old March 24th 07, 06:23 PM posted to alt.sys.pc-clone.compaq.servers
Will

"Nut Cracker" wrote in message
...
> Can you provide a little more information about how you are connecting your
> server to the RA units? Which HBA's are you using, drivers and settings (if
> applicable), is there an FCAL hub or switch involved,

On the host I installed two identical Compaq 64-bit fibre channel adapters.
These are the old style that are proprietary 64-bit PCI rather than true
PCI-X. Each adapter is directly cabled by multimode fibre to a separate
RA4100 array. The arrays are running the new-style trays that run at U160.
Drives are 73.4 GB 15K U320. Under Windows 2000 there are no driver
settings (at least none that I'm aware of).

> what is your acceleration policy at the RA (raid volume) configured for?

The array accelerator is enabled, and I seem to remember that we have 25%
read cache and 75% write cache. We experimented with 0% read and 75% write
as well.

> Stripe size, cluster size (allocation unit used when formatting your array
> volumes), and what type of application uses these disks (file server,
> Exchange, SQL, etc).

I recently converted the stripe size to 8K to try to spread out data for
small reads. Cluster size is the default Windows would use on an NTFS
volume of about 60 GB. The application is SQL Server.

> To achieve optimal storage performance, there are A LOT of factors that
> must be taken into account when setting it up.

I'm all ears.

> I've been working with NT based systems for probably 13 years, and with new
> implementations there is still an allocated period of setup and testing to
> ensure that the storage is performing up to its potential.

What's the best performance you have seen reading a large contiguous file on
a five- to six-drive RAID 5 array in an RA4x00 system?

--
Will


  #5  
Old March 26th 07, 07:28 AM posted to alt.sys.pc-clone.compaq.servers
NuT CrAcKeR

Ahhhh, SQL

Ok, here is what I would do:

Reformat your disks using a 64K cluster (allocation unit) size. Why? Because
SQL Server does much of its I/O in 64K extents of eight 8K pages (FYI,
Exchange does it in 4K pages). Why generate more read/write I/O operations
than needed to get data to, and from, your disks? One SQL I/O, one disk
chunk, one read/write from the OS to the controller.

Crank up the stripe size on the logical volumes (via the ACU to the RA) to
64K. Same reason as above.
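A rough model of the point being made here, with the caveat that it is idealized: NTFS does coalesce contiguous clusters into larger transfers, so this is closer to the worst case for fragmented allocations than a guaranteed cost:

```python
# Illustration of why matching the allocation unit to the I/O size
# matters: a 64K transfer split over 4K clusters can become many
# small operations, while 64K clusters allow a single one.
# (Idealized worst case; contiguous runs are coalesced in practice.)

def ops_for_transfer(transfer_bytes: int, cluster_bytes: int) -> int:
    """Worst-case number of cluster-sized operations for one transfer."""
    return -(-transfer_bytes // cluster_bytes)  # ceiling division

print(ops_for_transfer(64 * 1024, 4 * 1024))   # 16 ops with 4K clusters
print(ops_for_transfer(64 * 1024, 64 * 1024))  # 1 op with 64K clusters
```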

Windows' default cluster size when formatting NTFS disks is 4K. Windows
defaults to 4K because NTFS compression (and the built-in defragmenter on
older Windows versions) cannot be used if the cluster size is greater than
4K.

In Device Manager, you can check the Properties for each physical drive;
there are a couple of Performance options there. Check them.

Logical array acceleration policy:
75% / 25% read/write, or preferably 100% / 0%. Databases are normally read
intensive in nature, so max out the cache where it will do the most good.

Those StorageWorks controllers are not PCI-X. Yes, they are somewhat
proprietary: 66 MHz/64-bit. Not the most robust controllers in the world,
but I have found little else that works with the RA units. The RA4100
supports Ultra2 channel speeds, and believe it or not, the backplane IS
split into two channels. Separate your disks evenly across both channels if
you can. The order in which you add physical disks counts.

BTW: RAID 5 is a compromise between capacity and cost. It is far from a
high-performance config. Best performance is 0+1, but it's relatively
expensive, as half of your disks are consumed by mirroring.
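The RAID 5 write penalty behind this advice can be sketched numerically. The per-drive IOPS figure below is an assumed, era-typical number for 15K drives, not a measured one:

```python
# Why RAID 5 is slow for small writes: each small write costs four
# disk operations (read old data, read old parity, write new data,
# write new parity). Mirrored RAID (1, 0+1, 1+0) costs two.

RAID5_WRITE_PENALTY = 4
MIRROR_WRITE_PENALTY = 2

def raid5_write_iops(drives: int, iops_per_drive: int) -> float:
    return drives * iops_per_drive / RAID5_WRITE_PENALTY

def raid10_write_iops(drives: int, iops_per_drive: int) -> float:
    return drives * iops_per_drive / MIRROR_WRITE_PENALTY

# Six 15K drives at an assumed ~170 random IOPS each:
print(raid5_write_iops(6, 170))   # 255.0 effective write IOPS
print(raid10_write_iops(6, 170))  # 510.0 effective write IOPS
```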

With two RA4100s, you could have a very capable storage configuration that
will meet your capacity needs.

If you really want good performance, present RAID volumes from each
chassis to your host, and use Disk Management to create striped sets. This
will leverage more spindles, both adapters, and both RA controller channels
for better speeds. This is somewhat higher risk, though: the more parts
there are, the greater the chance of a failure. There is enough
inherent redundancy to mitigate most risk. The only exposure is that you
have a single path to your disks, but that too can be solved.

What is the host with the HBAs? Are they on the same PCI bus? Do you know
if there are multiple peer PCI buses in the server?

That's just the disks ... there are a lot of things you can do to
performance tune SQL. I am NOT a DBA, so I can't really give you any
guidance on that one.

Think about all that for a while and then let me know if you have any
questions.

- LC

  #6  
Old March 27th 07, 06:41 AM posted to alt.sys.pc-clone.compaq.servers
Will

Let's put aside your very good advice about SQL for a moment. I can try
most of those ideas and measure later. What I need to explain first,
before I waste more time, is why I cannot copy a simple 100 MB file
from one RA4100 to either a local SCSI disk or the other RA4100 at faster
than about 2.5 MB/second for contiguous data. That performance is so
incredibly hideous that I have to find a cause for it before I go optimizing
around SQL access.
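A minimal timed-copy measurement along these lines might look like the sketch below. Paths and the file size are placeholders, and note that the OS file cache can inflate results for files much smaller than RAM, so a large file (or repeated cold runs) gives more honest numbers:

```python
# Time a single large file copy and report MB/s.
import os
import shutil
import tempfile
import time

def copy_mbps(src: str, dst: str) -> float:
    """Copy src to dst and return the average throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / elapsed

# Self-contained demo with a temporary 4 MB file; point src/dst at
# volumes on the two arrays for a real measurement.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "test.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(4 * 1024 * 1024))
    print(f"{copy_mbps(src, os.path.join(d, 'copy.bin')):.1f} MB/s")
```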

--
Will



  #7  
Old March 27th 07, 09:14 PM posted to alt.sys.pc-clone.compaq.servers
Nut Cracker



If you have any extra adapters or controllers, you might try swapping
them out and testing again. It could be that you have a failing controller,
an ailing GBIC, or a cable that has a kink or is rolled too sharply (too
tight a curve someplace).

I did some testing with my RAs several years ago when I started collecting
and using them, and also noticed that RA-to-RA transfers were poor. However,
when I copied data between an RA and a SCSI enclosure, I was getting almost
2 GB/min, or about 33 MB/s, which was better than I expected with WUS3
(40 MB/s) drives.

To an extent, the question you are asking is "why don't I get more than 12%
network utilization between two servers with GigE adapters?"

Also, make sure your adapters are in the correctly supported slots, and that
the slots are not running at slower speeds due to incompatibility.

- LC


  #8  
Old March 29th 07, 01:03 AM posted to alt.sys.pc-clone.compaq.servers
Will

"Nut Cracker" wrote in message
...
> I did some testing with my RAs several years ago when I started collecting
> and using them, and also noticed that RA-to-RA transfers were poor.
> However, when I copied data between an RA and a SCSI enclosure, I was
> getting almost 2 GB/min, or about 33 MB/s, which was better than I expected
> with WUS3 (40 MB/s) drives.
>
> To an extent, the question you are asking is "why don't I get more than 12%
> network utilization between 2 servers with GigE adapters?"

The two RA4100s are locally attached to ONE computer. There is no gigabit
Ethernet here. I copy from one RA4100 directly to local SCSI or to the
other RA4100, and I get 2.5 MB/second.

> Also, make sure your adapters are in the correctly supported slots, and
> that the slots are not running at slower speeds due to incompatibility.

I'm thinking something along these lines as well. I must have some weird
issue here with hardware interrupts, incorrect bus speeds, whatever. But
how do I get visibility on this once the OS is up and running?

--
Will


  #9  
Old March 29th 07, 04:13 PM posted to alt.sys.pc-clone.compaq.servers
NuT CrAcKeR


What kind of server are you using?

If your slots are clocking down, you will see a message during POST that
the configuration is less than optimal, along with suggestions for how to
correct it.

I'm thinking there is a flaky piece of equipment in there someplace.

  #10  
Old March 30th 07, 05:58 AM posted to alt.sys.pc-clone.compaq.servers
Will


"NuT CrAcKeR" wrote in message
t...
> What kind of server are you using?
>
> If your slots are clocking down, you will see a message during POST that
> the configuration is less than optimal, along with suggestions for how to
> correct it.
>
> I'm thinking there is a flaky piece of equipment in there someplace.


I'm using a ProLiant 6400R, and have RA4100s on DL580 servers as well. I
agree it's probably hardware. I tried a copy of the same test file I have
been using on a DL580 and was getting well over 30 MB/sec, quite surprising
to me considering my target was a single spindle.

My problem is that I have no visibility into the issue. At boot these
64-bit controllers like to go into stealth mode. They have no BIOS display
at all that I can see, and certainly no explicit BIOS control or setup in
the preboot environment. They have no logs per se. The drivers under
Windows are not reporting errors.

I have the two 64-bit cards in slots 4 and 5. Probably I should get them
on different buses? Would there be any reason to hardcode either of them
with a specific interrupt in the system BIOS setup?

--
Will


 



