A computer components & hardware forum. HardwareBanter


15K rpm SCSI-disk



 
 
#11 - November 6th 04, 06:28 AM - Curious George

On Fri, 5 Nov 2004 17:00:40 +0100, "Ronny Mandal"
wrote:

The 15K SCSI drive will be of more benefit than a pair of
typical ATA in RAID0. A good cost-effective compromise
(particularly if you don't have a decent SCSI controller
already) would be an SATA Western Digital Raptor 74GB, or a
pair of them... ideally the OS, applications, and the data
files would be on different drives.


15K FDB Cheetahs are much more reliable than the Raptors and have a proven
track record. If you are not connecting a lot of devices, a basic card
like this LSI Logic should be fine at around 35 USD:
http://www.newegg.com/app/ViewProduc...118-009&depa=1

You can often find good internal cabling for under 20USD on eBay if
retail stores fail you. Controller & cabling costs are not necessarily
prohibitive.
#12 - November 19th 04, 06:20 PM - kony

On Thu, 04 Nov 2004 09:32:16 GMT, Curious George
wrote:

On Wed, 3 Nov 2004 21:22:20 +0100, "Joris Dobbelsteen"
wrote:

Besides this, these disks are way too expensive and you get much better
performance and several times the storage space by spending that money on a
RAID array.

Why do you need a Cheetah 15k disk?


Compared to low-end RAID, 1 or 2 of these drives would still bring
incredible responsiveness but with much higher reliability, simplicity
of installation, maintenance, & potential troubleshooting down the
line, as well as less power consumption, heat, or potential PSU
issues.


More of your complete and utter nonsense.
Not more reliable, not "simplicity" relative to anything
else, not lower maintenance, no easier troubleshooting down
the line, and not less power consumption, heat or PSU
issues.

You truly are CLUELESS.

Oh yeah, SCSI for 2 drives on a 33MHz, 32bit PCI PC
interface is significantly slower than a pair of Raptors on
southbridge-integral SATA. It'll have marginally lower
latency, which is trivial compared to the cost.
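
For reference, the arithmetic behind that 32-bit/33MHz PCI ceiling looks
roughly like this; the per-drive transfer rate is an assumed ballpark for
15K drives of that era, not a measurement:

    pci_clock_hz = 33.3e6            # 33 MHz PCI clock
    bus_width_bytes = 4              # 32-bit bus = 4 bytes per cycle
    peak_mb_s = pci_clock_hz * bus_width_bytes / 1e6   # ~133 MB/s theoretical burst
    usable_mb_s = peak_mb_s * 0.8    # assume ~20% lost to arbitration/overhead
    per_drive_mb_s = 70              # assumed sequential rate of one 15K drive
    print(peak_mb_s, usable_mb_s, 2 * per_drive_mb_s)
    # two drives streaming at once (~140 MB/s) already exceed what the shared bus delivers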


You simply cannot compare the overall user productivity and computing
experience with 1 or 2 good enterprise quality drives to a personal
storage caliber 'array'.


You MEAN, YOU PERSONALLY can't compare them because you are
clueless.



modern enterprise drives should be fine power cycling a couple times
per day for several years. While personal storage devices are more
geared to this use both have a limit before affecting reliability - so
it's not ideal in either case.

They're more suitable to be left on, in e.g. a server, and that it is
hazardous to power on/off frequently.

Is this correct?


Sort of. You might also not want to go too long without powering off
these drives, for reliability reasons.


WRONG.
Drives do not need to be power-cycled for reliability reasons.
The dumbest thing someone can do is power off drives on a
lark; the vast majority of failures occur after a drive
spins down and then tries to come back up. Pay more attention
and you might notice it.



#13 - November 23rd 04, 06:23 AM - Curious George

On Fri, 19 Nov 2004 18:20:40 GMT, kony wrote:

On Thu, 04 Nov 2004 09:32:16 GMT, Curious George
wrote:

On Wed, 3 Nov 2004 21:22:20 +0100, "Joris Dobbelsteen"
wrote:

Besides this, these disks are way too expensive and you get much better
performance and several times the storage space by spending that money on a
RAID array.

Why do you need a Cheetah 15k disk?


Compared to low-end RAID, 1 or 2 of these drives would still bring
incredible responsiveness but with much higher reliability, simplicity
of installation, maintenance, & potential troubleshooting down the
line, as well as less power consumption, heat, or potential PSU
issues.


More of your complete and utter nonsense.
Not more reliable


Wrong.

Array MTBF calculation necessarily yields a much lower value than a
single drive installation. For RAID 0 (which is what I think he is
implying) the array life is limited by the shortest lasting drive
(which is totally unpredictable) and when it does go it takes all the
data on all the other disks with it.
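
As a rough sketch of that MTBF point (assuming independent drives with
exponentially distributed failures, which is a simplification):

    def raid0_mtbf(drive_mtbf_hours, n_drives):
        # RAID 0 loses everything when ANY member fails, so the failure rates add:
        # MTBF_array = MTBF_drive / N
        return drive_mtbf_hours / n_drives

    print(raid0_mtbf(1_000_000, 1))   # single drive: 1,000,000 h
    print(raid0_mtbf(1_000_000, 2))   # 2-drive stripe: 500,000 h
    print(raid0_mtbf(1_000_000, 4))   # 4-drive stripe: 250,000 h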

Also, in ATA drive manufacturing the component rejection criteria are
generally around 5x less rigorous than for scsi drives. Since ATA
drives ship at a rate of around 6 to 1 over scsi, that amounts to a
huge difference in total questionable units you may have the chance to
buy. Your likelihood of getting one such lemon is only offset by the
much larger number of consumers and stores that deal with ATA & the
fact that most ppl tend not to buy large lots of ATA drives.

Also enterprise drives & systems tend to implement new features more
conservatively which can affect reliability and they tend to employ
more data protection features like background defect scanning and
arguably better ECC checking incl of transmissions & additional parity
checking, etc. Also performance characteristics can be tweaked and
low level issues can be better observed using a few tools.

, not "simplicity" relative to anything else,


Wrong.

We're talking specifically about a 15K cheetah compared to ata raid
not "anything else."

RAID has more parts and tools to learn & use. There is a learning
curve if it is your first time and esp. if you care about getting all
the benefits you are expecting. Installing a simple disk or two is so
simple it's totally mindless. With scsi you never have to think about
DMA mode or some corollary to get optimum performance...

not lower maintenance,


Wrong.

With a simple disk there is no drive synchronization, no time
consuming parity level initialization, no management software updates
or configuration, there is no backup of controller config that needs
to be performed, adding drives never implies much in the way of low
level configuration & never the adjustment of existing storage...

no easier troubleshooting down the line,


Wrong.

Power failure or crash can really screw up a lot of raids. A faulty
disk will take a crap all over the entire filesystem with raid 0.
Defunct disks due to power cable or backplane issues is a PITA- with a
single drive you just push in the plug better and press the power
button. You almost never have to worry about drive firmware issues or
conflicts. You almost never have to think about getting bare metal
recovery software to work or play nice with a storage driver.
Transient disk error passed on in RAID 5 for example is a nightmare to
troubleshoot...

and not less power consumption, heat or PSU
issues.

Totally absurd with raid recommendations for the low end desktop.
Difference in power consumption of current scsi and ata drives is no
longer significant. Using several disks at the same time is -
especially during power up.
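
To put rough numbers on the power-up point (the per-drive figures below are
assumptions typical of drives of that era, not datasheet values):

    spinup_amps = 2.0    # assumed peak 12 V draw per drive during spin-up
    idle_amps = 0.6      # assumed 12 V draw per drive once spinning
    for n in (1, 2, 4):
        print(n, "drive(s):", n * spinup_amps, "A at spin-up,", n * idle_amps, "A running")
    # a 4-drive array can briefly ask the 12 V rail for ~8 A just to spin up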

Of course I'm not advocating any low end desktop

You truly are CLUELESS.

You truly are hilarious

Oh yeah, SCSI for 2 drives on a 33MHz, 32bit PCI PC
interface is significantly slower than a pair of Raptors on
southbridge-integral SATA. It'll have marginally lower
latency, which is trivial compared to the cost.


Oh yeah, More absurd trash.

-Not at all with write back cache disabled so the SATA RAID doesn't
bite you.
-Not at all for an individual SCSI disk
-Not at all if SCSI disks are mainly used 1 at a time
-Not for read/writes through most of the platters of 2 scsi drives
used 'simultaneously' esp if the PCI bus isn't handling much else.
-Latency is far from marginal esp for multiuser & multitasking
-Not nearly as expensive as you wish to imply

I'd also be careful if you are thinking all southbridge devices are
always, & always have been, off the PCI bus.

You simply cannot compare the overall user productivity and computing
experience with 1 or 2 good enterprise quality drives to a personal
storage caliber 'array'.


You MEAN, YOU PERSONALLY can't compare them because you are
clueless.


Just plain dumb.

modern enterprise drives should be fine power cycling a couple times
per day for several years. While personal storage devices are more
geared to this use both have a limit before affecting reliability - so
it's not ideal in either case.

They're more suitable to be left on, in e.g. a server, and that it is
hazardous to power on/off frequently.

Is this correct?


Sort of. You might also not want to go too long without powering off
these drives, for reliability reasons.


WRONG.
Drives do not need to be power-cycled for reliability reasons.
The dumbest thing someone can do is power off drives on a
lark; the vast majority of failures occur after a drive
spins down and then tries to come back up. Pay more attention
and you might notice it.


At least we're talking about the same kind of failure.

If you spin down every few months there are only small amounts/smaller
particles which you allow to settle in the drive. If you wait too
long there are larger amounts /larger particles which are being
churned around & when they settle can cause stiction when re-powered.
Planning powering down can extend somewhat the useable life before
stiction- or it at least allows you to control the failure event
during maintenance as opposed to when you need it most (The Monday
Morning Blues).
#14 - November 23rd 04, 11:15 PM - kony

On Tue, 23 Nov 2004 06:23:47 GMT, Curious George
wrote:


Compared to low-end RAID, 1 or 2 of these drives would still bring
incredible responsiveness but with much higher reliability, simplicity
of installation, maintenance, & potential troubleshooting down the
line, as well as less power consumption, heat, or potential PSU
issues.


More of your complete and utter nonsense.
Not more reliable


Wrong.

Array MTBF calculation necessarily yields a much lower value than a
single drive installation. For RAID 0 (which is what I think he is
implying) the array life is limited by the shortest lasting drive
(which is totally unpredictable) and when it does go it takes all the
data on all the other disks with it.


OK then, but there was no mention of RAID0. Why would we
bother to contrast anything with RAID0?



Also, in ATA drive manufacturing the component rejection criteria are
generally around 5x less rigorous than for scsi drives.


But that means very little without insider info about the
cause... it could simply be that the SCSI line is producing
a lot of defective drives.


Since ATA
drives ship at a rate of around 6 to 1 over scsi, that amounts to a
huge difference in total questionable units you may have the chance to
buy.


.... and a huge difference in total good units you may have
the chance to buy, too.

Your likelihood of getting one such lemon is only offset by the
much larger number of consumers and stores that deal with ATA & the
fact that most ppl tend not to buy large lots of ATA drives.


Most ppl tend to buy large lots of SCSI drives?
I suggest that any significant data store is tested before
being deployed, with the actual parts to be used. Further
that NO data store on a RAID controller be kept without an
alternate backup method.



Also enterprise drives & systems tend to implement new features more
conservatively which can affect reliability and they tend to employ
more data protection features like background defect scanning and
arguably better ECC checking incl of transmissions & additional parity
checking, etc. Also performance characteristics can be tweaked and
low level issues can be better observed using a few tools.


I disagree that they "tend to implement new features more
conservatively", a couple days ago you listed many features
added less conservatively.


, not "simplicity" relative to anything else,


Wrong.

We're talking specifically about a 15K cheetah compared to ata raid
not "anything else."





RAID has more parts and tools to learn & use. There is a learning
curve if it is your first time and esp. if you care about getting all
the benefits you are expecting. Installing a simple disk or two is so
simple it's totally mindless. With scsi you never have to think about
DMA mode or some corollary to get optimum performance...


I disagree with that assessment. In one sentence you write
"more parts and tools to learn and use" but then come back
with "never have to think about DMA mode". You can't have
it both ways, it most certainly is more to think about.

I suggest that anyone who can't understand DMA mode on ATA
should not be making any kind of data storage decisions,
instead buying a pre-configured system and not touching
whichever storage solution it might contain.



not lower maintenance,


Wrong.

With a simple disk there is no drive synchronization, no time
consuming parity level initialization, no management software updates
or configuration, there is no backup of controller config that needs
to be performed, adding drives never implies much in the way of low
level configuration & never the adjustment of existing storage...


So you're trying to compare a single non-RAID drive to a
RAIDed config now? SCSI, including the Cheetah, does not
eliminate management software updates or config.
What backup of the controller config is needed on ATA beyond
SCSI?


no easier troubleshooting down the line,


Wrong.

Power failure or crash can really screw up a lot of raids. A faulty
disk will take a crap all over the entire filesystem with raid 0.


yes but again, this is not an argument FOR SCSI Cheetahs,
simply to avoid RAID0. Granted that was part of the context
of the reply, but it didn't end there, you tried to extend
the argument further.

Defunct disks due to power cable or backplane issues is a PITA- with a
single drive you just push in the plug better and press the power
button. You almost never have to worry about drive firmware issues or
conflicts. You almost never have to think about getting bare metal
recovery software to work or play nice with a storage driver.
Transient disk error passed on in RAID 5 for example is a nightmare to
troubleshoot...

and not less power consumption, heat or PSU
issues.

Totally absurd with raid recommendations for the low end desktop.
Difference in power consumption of current scsi and ata drives is no
longer significant. Using several disks at the same time is -
especially during power up.


Except that you're ignoring a large issue... the drive IS
storage. You can avoid RAID0, which I agree with, but can't
just claim the Cheetah uses less power without considering
that it a) has lower capacity b) costs a lot more per GB.
c) its performance advantage drops the further it's filled
relative to one much larger ATA drive at same or lower
price-point, perhaps even at less than 50% of the cost.


Of course I'm not advocating any low end desktop

You truly are CLUELESS.

You truly are hilarious


Thank you, laughing is good for us.



Oh yeah, SCSI for 2 drives on a 33MHz, 32bit PCI PC
interface is significantly slower than a pair of Raptors on
southbridge-integral SATA. It'll have marginally lower
latency, which is trivial compared to the cost.


Oh yeah, More absurd trash.


Do you not even understand the aforementioned PCI
bottleneck? Southbridge integral (or dedicated bus) is
essential for utmost performance on the now-aged PC 33/32
bus. Do you assume people won't even use the PCI bus for
anything but their SCSI array? Seems unlikely; the array
can't even begin to be competitive unless it's consuming
most of the bus throughput, making anything from sound to NIC
to modem malfunction in use, or else performance drops.



-Not at all with write back cache disabled so the SATA RAID doesn't
bite you.
-Not at all for an individual SCSI disk
-Not at all if SCSI disks are mainly used 1 at a time
-Not for read/writes through most of the platters of 2 scsi drives
used 'simultaneously' esp if the PCI bus isn't handling much else.
-Latency is far from marginal esp for multiuser & multitasking


This I agree with, latency reduction is a very desirable
thing for many uses... but not very useful for others.

-Not nearly as expensive as you wish to imply

I'd also be careful if you are thinking all southbridge devices are
always, & always have been, off the PCI bus.



Never wrote "always been", we're talking about choices
today. What modern chipset puts integrated ATA on PCI bus?
What 2 year old chipset does?



You simply cannot compare the overall user productivity and computing
experience with 1 or 2 good enterprise quality drives to a personal
storage caliber 'array'.


You MEAN, YOU PERSONALLY can't compare them because you are
clueless.


Just plain dumb.


No, if you could compare them you'd see that a SCSI PCI card
will never exceed around 128MB/s, while southbridge ATA
RAIDs may easily exceed that... throw a couple WD Raptors
in a box and presto, it's faster and cheaper. Keep in mind
that either way I would only recommend a further backup
strategy, data should not be only on (any) drives used
regularly in the system.



Sort of. You might also not want to go too long without powering off
these drives, for reliability reasons.


WRONG.
Drives do not need to be power-cycled for reliability reasons.
The dumbest thing someone can do is power off drives on a
lark; the vast majority of failures occur after a drive
spins down and then tries to come back up. Pay more attention
and you might notice it.


At least we're talking about the same kind of failure.

If you spin down every few months there are only small amounts/smaller
particles which you allow to settle in the drive. If you wait too
long there are larger amounts /larger particles which are being
churned around & when they settle can cause stiction when re-powered.
Planning powering down can extend somewhat the useable life before
stiction- or it at least allows you to control the failure event
during maintenance as opposed to when you need it most (The Monday
Morning Blues).


I don't believe there is enough evidence to conclude
anything near this, seems more like an urban legend.
I suggest not powering down the drives at all, until their
scheduled replacement.
#15 - November 25th 04, 06:39 AM - Curious George

On Fri, 05 Nov 2004 11:44:21 GMT, kony wrote:

The 15K SCSI drive will be of more benefit than a pair of
typical ATA in RAID0.


Come on.

So you agree with a 15k scsi drive recommendation but disagree with a
15k scsi drive recommendation?

Quite amusing really


On Tue, 23 Nov 2004 23:15:52 GMT, kony wrote:

On Tue, 23 Nov 2004 06:23:47 GMT, Curious George
wrote:


Compared to low-end RAID, 1 or 2 of these drives would still bring
incredible responsiveness but with much higher reliability, simplicity
of installation, maintenance, & potential troubleshooting down the
line, as well as less power consumption, heat, or potential PSU
issues.

More of your complete and utter nonsense.
Not more reliable


Wrong.

Array MTBF calculation necessarily yields a much lower value than a
single drive installation. For RAID 0 (which is what I think he is
implying) the array life is limited by the shortest lasting drive
(which is totally unpredictable) and when it does go it takes all the
data on all the other disks with it.


OK then, but there was no mention of RAID0. Why would we
bother to contrast anything with RAID0?


Come on.

Ronny Mandal said
"So you are saying that two IDE in e.g. RAID 0 will outperform
the SCSI disk in speed, besides storage etc?"

and you mentioned RAID0 (see above) in your answer to him. It's a
valid and real part of the previous discussion thread (from 2 weeks ago).

Certainly RAID0 is part of the category of inexpensive raid initially
brought up in the initial post by Joris Dobbelsteen, so it SHOULD be
discussed by ALL sub-branches.

Also, in ATA drive manufacturing the component rejection criteria are
generally around 5x less rigorous than for scsi drives.


But that means very little without insider info about the
cause... it could simply be that the SCSI line is producing
a lot of defective drives.


Quality control is usually relaxed because of the relative tradeoff in
profitability / defect rates.

It's good to know you have been reading most of my postings. It makes
me feel good to see use of phrases like "insider info" with regard to
this subject. Only that doesn't really make it YOUR argument or mean
a manipulation of words is an argument.

Since ATA
drives ship at a rate of around 6 to 1 over scsi, that amounts to a
huge difference in total questionable units you may have the chance to
buy.


... and a huge difference in total good units you may have
the chance to buy, too.


yes and that is offset by the huge numbers of customers and units'
population spread across many more resellers...

Your likelihood of getting one such lemon is only offset by the
much larger number of consumers and stores that deal with ATA & the
fact that most ppl tend not to buy large lots of ATA drives.


Most ppl tend to buy large lots of SCSI drives?


scsi is very often bought for multi-drive servers- the average according
to Adaptec is usually around 4 per server or 4 per channel. scsi has also
been used in disk arrays for some time. Many companies/enterprises
buy many servers with multiple arrays and often many workstations with
scsi drives also. It's usually uncommon for consumers or small
business (who tend to buy small amounts of storage) to even consider
scsi.

That's not the whole poop though. Even when buying a single disk your
statistical relationship to the entire population of either is
different.

I admit the complexity of this comparison makes it somewhat fuzzy.
Even if you reject this and say scsi drives are of identical build
quality or you have equal chances of getting a good scsi or ATA drive-
it doesn't alter OUR suggestion which endorses the scsi drive. It
also doesn't successfully indict my reliability point as it has
already been satisfied with relative MTBF.

I suggest that any significant data store is tested before
being deployed, with the actual parts to be used. Further
that NO data store on a RAID controller be kept without an
alternate backup method.


Come on.

Of course. That has never been in contest. It's also not exactly
news.

But taking further this comment pulled out of thin air - backup
applies to multiple data categories on EVERY kind of storage volume.
It's not much of a raid suggestion anyhow.

so you're going to make a point of telling someone to back up his
raid0 volume if it only holds /tmp/ or paging data?

You back up data, not storage or a "data store".

Also enterprise drives & systems tend to implement new features more
conservatively which can affect reliability and they tend to employ
more data protection features like background defect scanning and
arguably better ECC checking incl of transmissions & additional parity
checking, etc. Also performance characteristics can be tweaked and
low level issues can be better observed using a few tools.


I disagree that they "tend to implement new features more
conservatively", a couple days ago you listed many features
added less conservatively.


Come on.

I didn't say that. Implementing more advanced features (which is what
I assume you are referring to) is different than implementing features
more or less conservatively; they are implementing advanced features
in a more conservative fashion.

There is no logical conflict because certain advanced features aren't
put in ata drives ONLY because they want to differentiate the product
lines/product classes.

, not "simplicity" relative to anything else,


Wrong.

We're talking specifically about a 15K cheetah compared to ata raid
not "anything else."





RAID has more parts and tools to learn & use. There is a learning
curve if it is your first time and esp. if you care about getting all
the benefits you are expecting. Installing a simple disk or two is so
simple it's totally mindless. With scsi you never have to think about
DMA mode or some corollary to get optimum performance...


I disagree with that assessment. In one sentence you write
"more parts and tools to learn and use" but then come back
with "never have to think about DMA mode". You can't have
it both ways, it most certainly is more to think about.


Come on.

Read it again. Remember it is a 1 or 2 scsi drive vs ata raid
comparison. (as it always has been)

I suggest that anyone who can't understand DMA mode on ATA
should not be making any kind of data storage decisions,
instead buying a pre-configured system and not touching
whichever storage solution it might contain.


There isn't very much to understand about DMA (for the end-user) it's
a matter of familiarity/learning. If they never touch it then how are
they supposed to learn? How are they supposed to get problems fixed
when/if they arise and they have only phone support? Is this all some
kind of secret club?

Come on.

That has nothing to do with it. The point is there are more things to
look at & think of with ATA raid over a single scsi drive and that
makes it less simple. I'm not claiming any of these by themselves are
overwhelming. Put them together, though, and there is a _difference_
in overall simplicity of the different systems. Furthermore this
simplicity point is one of many items used to substantiate and
elaborate on a recommendation you agree with. It's unreasonable to
now claim one aspect of one of the many points makes or breaks the
overall recommendation & argument.

not lower maintenance,


Wrong.

With a simple disk there is no drive synchronization, no time
consuming parity level initialization, no management software updates
or configuration, there is no backup of controller config that needs
to be performed, adding drives never implies much in the way of low
level configuration & never the adjustment of existing storage...


So you're trying to compare a single non-RAID drive to a
RAIDed config now?


Come on.

That always was the case. We both recommended the same thing and BOTH
compared it to ATA RAID earlier.

This smear attempt of yours is becoming very transparent. If the
thread confuses you so, why bother posting?

SCSI, including the Cheetah, does not
eliminate management software updates or config.


Come on

What "management software" does a single Cheetah use on a vanilla hba?

What backup of the controller config is needed on ATA beyond
SCSI?


Come on.

It's smart to backup a raid controller's config (if you can - or
perhaps even if you have to take it off the drives). There's no
reason or ability to do that with a vanilla scsi hba.

no easier troubleshooting down the line,


Wrong.

Power failure or crash can really screw up a lot of raids. A faulty
disk will take a crap all over the entire filesystem with raid 0.


yes but again, this is not an argument FOR SCSI Cheetahs,
simply to avoid RAID0. Granted that was part of the context
of the reply, but it didn't end there, you tried to extend
the argument further.


Come on.

All this is in response to Joris' post:

"Besides this these disks are way to expensive and you get much
better performance and several times the storage space by
spending that money on a RAID array.

Why do you need a Cheetah 15k disk?"

So we both later made an identical recommendation (the 15k cheetah) in
comparison to ATA raid. In fact YOU recommended a single cheetah
when compared to ATA RAID0!

Did I really _extend_ the argument _further_, or simply elaborate
/provide an explanation/details on the benefits which affect not only
performance but also user/operator productivity (which is WHY ppl are
concerned with performance in the first place).

So when you said:
"The 15K SCSI drive will be of more benefit than a pair of
typical ATA in RAID0."

That was more worthwhile because you made no attempt to elaborate on
the attributes that would be helpful to the OP and WHY it is a better
fit for him?

Come on.

Defunct disks due to power cable or backplane issues is a PITA- with a
single drive you just push in the plug better and press the power
button. You almost never have to worry about drive firmware issues or
conflicts. You almost never have to think about getting bare metal
recovery software to work or play nice with a storage driver.
Transient disk error passed on in RAID 5 for example is a nightmare to
troubleshoot...

and not less power consumption, heat or PSU
issues.

Totally absurd with raid recommendations for the low end desktop.
Difference in power consumption of current scsi and ata drives is no
longer significant. Using several disks at the same time is -
especially during power up.


Except that you're ignoring a large issue... the drive IS
storage.


Come on.

That doesn't even make any sense.

You can avoid RAID0, which I agree with, but can't
just claim the Cheetah uses less power without considering
that it a) has lower capacity


Come on.

The OP was considering a single 15K cheetah NOT say 250gigs of storage
for example.

b) costs a lot more per GB.


Come on.

Has nothing to do with electrical power.

Also $/GB is overly simplistic - it is not the only variable in TCO or
ROI for storage.

c) its performance advantage drops the further it's filled
relative to one much larger ATA drive at same or lower
price-point, perhaps even at less than 50% of the cost.


Come on.

Has nothing to do with electrical power

Also not true. These drops are case by case and not by interface.
Look at the Seagate Cheetah 36ES for example, which drops extremely
little across the entire disk.

To compare raw thruput on similar price point you are talking about
antiquated scsi with less dense platters vs modern ata with very dense
platters. That's too unfair to even be serious. It also isn't very
serious because the comparison is based on an overly simplistic view
of both performance and valuation.

Of course I'm not advocating any low end desktop

You truly are CLUELESS.

You truly are hilarious


Thank you, laughing is good for us.


Yeah. I'm still laughing.

sigh

OK, getting less funny...

Oh yeah, SCSI for 2 drives on a 33MHz, 32bit PCI PC
interface is significantly slower than a pair of Raptors on
southbridge-integral SATA. It'll have marginally lower
latency, which is trivial compared to the cost.


Oh yeah, More absurd trash.


Do you not even understand the aforementioned PCI
bottleneck? Southbridge integral (or dedicated bus) is
essential for utmost performance on the now-aged PC 33/32
bus. Do you assume people won't even use the PCI bus for
anything but their SCSI array? Seems unlikely; the array
can't even begin to be competitive unless it's consuming
most of the bus throughput, making anything from sound to NIC
to modem malfunction in use, or else performance drops.


If you look at REAL STR numbers, REAL bus numbers, REAL overhead
numbers, and REAL usage patterns you will understand my point.

Remember the comparison is for 1 or 2 plain scsi 15k on a vanilla hba
vs. some kind of ATA RAID. Stop creating your own comparisons NOW
which are different from what the thread has been about ALL ALONG -
including the time you also recommended the Cheetah over ata RAID0 and
everyone put this to bed 2 weeks ago.

If the comparison you are making NOW is germane or there is such a
HUGE difference you should have put the SATA as YOUR primary
recommendation instead of the SCSI and challenged my recommendation
honestly.

How transparent your "argument" is...

-Not at all with write back cache disabled so the SATA RAID doesn't
bite you.
-Not at all for an individual SCSI disk
-Not at all if SCSI disks are mainly used 1 at a time
-Not for read/writes through most of the platters of 2 scsi drives
used 'simultaneously' esp if the PCI bus isn't handling much else.
-Latency is far from marginal esp for multiuser & multitasking


This I agree with, latency reduction is a very desirable
thing for many uses... but not very useful for others.


For general purpose "workstation" performance from reduced latency
(15K) and load balancing (2x 15K) it is _extremely_ important.
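
The rotational-latency arithmetic behind that is easy to sketch; the seek
figures below are assumed typical values for illustration, not measurements:

    def avg_rotational_latency_ms(rpm):
        # on average the target sector is half a revolution away
        return 60_000 / rpm / 2

    for rpm, seek_ms in ((15000, 3.6), (10000, 4.7), (7200, 8.5)):
        total = avg_rotational_latency_ms(rpm) + seek_ms
        print(rpm, "rpm:", round(avg_rotational_latency_ms(rpm), 1), "ms rotational +",
              seek_ms, "ms seek =", round(total, 1), "ms per random access")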

The bandwidth associated with RAID0 on a dedicated bus is only
necessary for a handful of special tasks. That's not what the OP is
looking for/needs.

The OP primarily wants "Fast access to files, short response times,
fast copying - just some luxury issues."

That's why you endorsed the 15K scsi like I did as the primary/best
recommendation. Furthermore you called 2x SATA Raptors a
"cost-effective compromise" not best. not necessary.

pathetic

-Not nearly as expensive as you wish to imply

I'd also be careful if you are thinking all southbridge devices are
always, & always have been, off the PCI bus.



Never wrote "always been", we're talking about choices
today.


Why I said "I'd also be careful if"

We're not really talking about chipset choices today - at least that's
only something you pulled out of thin air and threw into the thread 2
weeks after the fact to attempt to confuse the discussion.

I'm clarifying how your assumptions are wrong or exaggerations and how
you are interjecting irrelevant comparisons.

What modern chipset puts integrated ATA on PCI bus?
What 2 year old chipset does?


No you don't have to look far back.

That's not the point though; since you were overstating the advantage
of "southbridge-integral SATA" I warned you against other
similar/related false notions.

You simply cannot compare the overall user productivity and computing
experience with 1 or 2 good enterprise quality drives to a personal
storage caliber 'array'.

You MEAN, YOU PERSONALLY can't compare them because you are
clueless.


Just plain dumb.


No, if you could compare them you'd see that a SCSI PCI card
will never exceed around 128MB/s, while southbridge ATA
RAIDs may easily exceed that... throw a couple WD Raptors
in a box and presto, it's faster and cheaper.


Come on.

If you could compare them you'd see there isn't much of a difference
when you look at overhead.

So you are _always_ moving _files_ at max _raw_ thruput through _all_
parts of the disk with RAID0 type usage even with basic disks? What
about REAL usage patterns?

And what about the greater overhead of SATA? and inefficiencies of
some controllers (even though they are point to point) esp relative to
scsi which still has real potential with complex multi-disk access
(the 2 plain scsi drive scenario). Do you really think that some
marginal theoretical maximal bandwidth issue is going to be a huge
drawback against the multitasking responsiveness of reduced latency
esp with 2 regular 15K load balancing type storage approach? Do you
really think that a single 15K scsi or 1 15k scsi used at a time is
going to saturate the bus? I already specified "if the pci bus isn't
doing much else" which is likely if there is no pci video or other pci
storage or 'exotic' pci devices. You act like it would be crippled if
the full theoretical maximal potential isn't reached - and it just
doesn't work that way.

The OP wanted "Fast access to files, short response times, fast
copying - just some luxury issues."

The OP claims to be using a "workstation" which might very well imply
having a faster or multiple PCI busses anyway. You're only guessing a
single 32/33 pci is relevant.

Keep in mind
that either way I would only recommend a further backup
strategy, data should not be only on (any) drives used
regularly in the system.


Of course. Everybody does. So?

Sort of. You might also not want to go too long without powering off
these drives, for reliability reasons.

WRONG.
Drives do not need to be power-cycled for reliability reasons.
The dumbest thing someone can do is power off drives on a
lark; the vast majority of failures occur after a drive
spins down and then tries to come back up. Pay more attention
and you might notice it.


At least we're talking about the same kind of failure.

If you spin down every few months there are only small amounts/smaller
particles which you allow to settle in the drive. If you wait too
long there are larger amounts /larger particles which are being
churned around & when they settle can cause stiction when re-powered.
Planning powering down can extend somewhat the useable life before
stiction- or it at least allows you to control the failure event
during maintenance as opposed to when you need it most (The Monday
Morning Blues).


I don't believe there is enough evidence to conclude
anything near this, seems more like an urban legend.
I suggest not powering down the drives at all, until their
scheduled replacement.


Well that's not _necessarily_ a bad or wrong suggestion- but it
usually isn't practical, esp on a "workstation", to never spin down
for the entire disk service life (typically 3-5 years) or system life
(typically 3 years). Given the total number of times modern drives
are safe to spin up it makes no sense to be _totally_ afraid of it.
If you ARE totally afraid there may be something to be said for
bringing a latent problem to a head when it is convenient and handling
a warranty replacement when you can afford to as opposed to allowing a
random occurrence - which always follows Murphy's Law. You should
investigate this more if you don't believe this (admittedly ancient)
"best practice."


You're getting desperate. These objections are your ego talking and
not your head.

You didn't see any problem with my post for 2 weeks until you started
getting snotty in another thread. This thread has been dead for so
long it took me a while to even notice your "objections." I thought
we already settled all this silliness?

I thought your objections & snottiness were only due to my alleged
"lack of specificity" or "details" (like the other thread where we locked
horns). I first elaborated on my recommendation here (which you
agreed with) and now I have twice supported my elaborating details
with "specifics". If you disagreed with my recommendation or
reasoning it would have appeared more genuine to raise such issues
then.

Your "criticism" is just silly, arbitrary, & confused.
#16 - November 25th 04, 11:28 AM - Joris Dobbelsteen

snip

This thread is getting to be total crap.
Let's just take an economic approach.
(Prices may vary depending on whatever....)

A Cheetah 15K.3 36.7 GB costs EUR 315.
Access times are 3.6 ms, 8 MB cache. U320.
EUR 8.50 / GB. 50-75 MB/s sequential read.
Maxtor claims faster access times (3.2 ms) at lower costs (EUR 299).
Fujitsu claims faster access times than cheetah (3.3 ms) at lower costs (EUR
229)

A Cheetah 10K.6 (ST336607LC) 36.7 GB costs EUR 159.
Access times are 4.7 ms, 8 MB cache. U320.
EUR 4.33 / GB. 40-70 MB/s sequential read.

A Cheetah 10K.6 (ST336607LC) 74 GB costs EUR 315.
Access times are 4.7 ms, 4 MB cache. U320.
EUR 4.26 / GB. 40-70 MB/s sequential read.

A WD Raptor (WD740GD) 74 GB costs EUR 175.
Access times are 4.5 ms, 8 MB cache. SATA.
EUR 2.36 / GB. 40-70 MB/s sustained read (60 average).

A Hitachi Deskstar 7K250 (HDS722516VLSA80) 160 GB costs EUR 99.
Access times are 8.5 ms, 8 MB cache, SATA.
EUR 0.60 / GB
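
A quick sketch to check the EUR/GB figures above, using the prices and
capacities as listed:

    drives = [
        ("Cheetah 15K.3 36.7 GB", 315, 36.7),
        ("Cheetah 10K.6 36.7 GB", 159, 36.7),
        ("Cheetah 10K.6 74 GB",   315, 74),
        ("WD Raptor 74 GB",       175, 74),
        ("Deskstar 7K250 160 GB",  99, 160),
    ]
    for name, eur, gb in drives:
        print(name, round(eur / gb, 2), "EUR/GB")
    # comes out close to the quoted figures: roughly 8.6, 4.3, 4.3, 2.4 and 0.6 EUR/GB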

The cheapest SCSI controller I could find was Adaptec ASC-19160 at EUR 149.

The cheapest SATA RAID controller is Promise S150 TX2Plus (2xSATA + 1xIDE)
at EUR 59.
Or the SATA RAID controller Promise S150 TX4 (4xSATA) at EUR 85.
Better is a motherboard-integrated SATA controller. Today there are
4-channel controllers integrated on the motherboard which can be set to RAID0/1.

Now the why:
Fast access to files, short response times, fast copying - just some luxury
issues.


Short response times. Cheetah 15K leads, but short response times for what?
If you are just using some programs or a (decent) database, consider putting
in some more RAM. It can do magic sometimes. I had a system that had a
memory upgrade from 256 to 768 MB and it did increase performance of some
elements by several factors. Now it handles more tasks at decent
performance.
Swapping will kill your system, no matter what disks.

Fast copying. Try RAID0 (or even independent) Raptor. You get 4x the storage
capacity with better performance than the Cheetah 15K.3. Access times
are a little bit worse (~0.5 ms) though.

Because you state
just some luxury issues.

consider that you are paying a lot of money for SCSI. Remember that the 36
GB can be quite small for a home PC. I have 180 GB and it's filled nearly
to capacity. The dual raptor provides 140 GB, which gives a decent
storage capacity with good performance.

Don't try a 10K rpm SCSI disk, the raptor provides equal/better performance
and it's much cheaper.

Failures?
Make backups. You will need them anyways, no matter what you are doing.
If this is a major concern, two RAID1 raptors have equal costs to a single
Cheetah 15K.3 and a much better MTBF (theoretically).
Read throughput should be 2x a single raptor (with a decent RAID controller
of course), while writes still have the same speeds.
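
A rough sketch of why the mirrored pair looks so much better on paper,
assuming independent failures and a fixed rebuild window (both simplifications):

    def raid1_mttdl_hours(drive_mtbf_h, repair_h):
        # data is lost only if the second drive dies while the first is being replaced:
        # MTTDL ~= MTBF^2 / (2 * MTTR)
        return drive_mtbf_h ** 2 / (2 * repair_h)

    print(raid1_mttdl_hours(1_000_000, 24))   # ~2.1e10 h, vs 1e6 h for a single drive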
I also believe you should actively cool today's disks when you have 2 or more
close together (put a fan close to them). I have a Seagate Barracuda 20GB and a
Maxtor 160 GB and they stay quite cool due to a fan (an 80mm out of a defective
PSU) that is beside them.

- Joris


#17 - November 26th 04, 10:00 AM - kony

On Thu, 25 Nov 2004 06:39:30 GMT, Curious George
wrote:

On Fri, 05 Nov 2004 11:44:21 GMT, kony wrote:

The 15K SCSI drive will be of more benefit than a pair of
typical ATA in RAID0.


Come on.

So you agree with a 15k scsi drive recommendation but disagree with a
15k scsi drive recommendation?

Quite amusing really


How dense can you be?
IF the choice were one or the other... but the choice ISN'T
only one or the other.

I don't recommend either, it's stupid to buy a decent SCSI
controller and one drive, when one could just buy a Raptor
instead, not RAID0'd at all.




OK then, but there was no mention of RAID0. Why would we
bother to contrast anything with RAID0?


Come on.

Ronny Mandal said
"So you are saying that two IDE in e.g. RAID 0 will outperform
the SCSI disk in speed, besides storage etc?"


And?
I wasn't the one who brought it up.



and you mentioned RAID0 (see above) in your answer to him. It's a
valid and real part of the previous discussion thread (from 2 weeks ago).

Certainly RAID0 is part of the category of inexpensive raid initially
brought up in the initial post by Joris Dobbelsteen, so it SHOULD be
discussed by ALL sub-branches.


Discuss whatever you like, that does not bind anyone else
to address and rehash every point of (any particular
thread).


Also, in ATA drive manufacturing the component rejection criteria are
generally around 5x less rigorous than for scsi drives.


But that means very little without insider info about the
cause... it could simply be that the SCSI line is producing
a lot of defective drives.


Quality control is usually relaxed because of the relative tradeoff in
profitability / defect rates.


Seems you're speculating without any evidence again.
Either way a drive failure is a loss, both financial and
potential loss of customer. In fact the far larger sales
are ATA to OEMs, so profitability is key to ATA, not SCSI,
all the more reason ATA would need to be more reliable if we
want to make speculations.



It's good to know you have been reading most of my postings. It makes
me feel good to see use of phrases like "insider info" with regard to
this subject. Only that doesn't really make it YOUR argument or mean
a manipulation of words is an argument.



The phrase "insider info" was meant to imply that you're not
supplying any facts but rather trying to think
altruistically about SCSI and one make in particular, and
thus can't be taken for more than a zealot.


Since ATA
drives ship at a rate of around 6 to 1 over scsi, that amounts to a
huge difference in total questionable units you may have the chance to
buy.


... and a huge difference in total good units you may have
the chance to buy, too.


yes and that is offset by the huge numbers of customers and units'
population spread across many more resellers...


Again, you have no evidence. We might agree that more ATA
are sold, but that has no necessary bearing on the failure
rate. To take an opposing view, more research might be put
into their primary volume products and again a reason why
ATA ends up being higher quality.


Your likelihood of getting one such lemon is only offset by the
much larger number of consumers and stores that deal with ATA & the
fact that most ppl tend not to buy large lots of ATA drives.


Most ppl tend to buy large lots of SCSI drives?


scsi is very often bought for multi-drive servers- the average according
to Adaptec is usually around 4 per server or 4 per channel. scsi has also
been used in disk arrays for some time. Many companies/enterprises
buy many servers with multiple arrays and often many workstations with
scsi drives also. It's usually uncommon for consumers or small
business (who tend to buy small amounts of storage) to even consider
scsi.


Ever wonder why? You'd presume to be the only one to see
something in SCSI that the majority don't?
As I've mentioned previously, SCSI is superior in its bus,
the ability to access so many drives, but that has nothing
to do with your claims.



That's not the whole poop though. Even when buying a single disk your
statistical relationship to the entire population of either is
different.

I admit the complexity of this comparison makes it somewhat fuzzy.
Even if you reject this and say scsi drives are of identical build
quality or you have equal chances of getting a good scsi or ATA drive-
it doesn't alter OUR suggestion which endorses the scsi drive. It
also doesn't successfully indict my reliability point as it has
already been satisfied with relative MTBF.


We can conclude nothing about MTBF when SCSI is, as you
mentioned, primarily used in roles where drives aren't
spun-down so often and in more robustly engineered systems
from a power and cooling perspective, on average.



I suggest that any significant data store is tested before
being deployed, with the actual parts to be used. Further
that NO data store on a RAID controller be kept without an
alternate backup method.


Come on.

Of course. That has never been in contest. It's also not exactly
news.


No it's not, but if you claim higher reliability then it has
to be questioned whether a questionable (if any) benefit is
worth a price-premium when another backup means should be
employed regardless, and with there almost always being a
"total budget", a compromise of other backup means could
easily result from paying multiple times as much for SCSI
when it isn't even demonstrated to offer that much of an
advantage in single-disk uses.


But taking further this comment pulled out of thin air - backup
applies to multiple data categories on EVERY kind of storage volume.
It's not much of a raid suggestion anyhow.

so you're going to make a point of telling someone to back up his
raid0 volume if it only holds /tmp/ or paging data?


You'd suggest hundreds of $$$ for SCSI to store a paging
file instead of buying more ram? Let's be realistic.


You back up data, not storage or a "data store".

Also enterprise drives & systems tend to implement new features more
conservatively which can affect reliability and they tend to employ
more data protection features like background defect scanning and
arguably better ECC checking incl of transmissions & additional parity
checking, etc. Also performance characteristics can be tweaked and
low level issues can be better observed using a few tools.


I disagree that they "tend to implement new features more
conservatively", a couple days ago you listed many features
added less conservatively.


Come on.

I didn't say that. Implementing more advanced features (which is what
I assume you are referring to) is different than implementing features
more or less conservatively; they are implementing advanced features
in a more conservative fashion.


doubletalk


There is no logical conflict because certain advanced features aren't
put in ata drives ONLY because they want to differentiate the product
lines/product classes.


So?
In the end it only matters if the needed features are
present. I don't recall anyone posting lamentations of how
they have major problems because their ATA drive doesn't
have some SCSI feature. NCQ would be nice, but that is now
in the market for SATA.


RAID has more parts and tools to learn & use. There is a learning
curve if it is your first time and esp. if you care about getting all
the benefits you are expecting. Installing a simple disk or two is so
simple it's totally mindless. With scsi you never have to think about
DMA mode or some corollary to get optimum performance...


I disagree with that assessment. In one sentence you write
"more parts and tools to learn and use" but then come back
with "never have to think about DMA mode". You can't have
it both ways, it most certainly is more to think about.


Come on.

Read it again. Remember it is a 1 or 2 scsi drive vs ata raid
comparison. (as it always has been)


Sometimes an argument isn't worth following, for example
when someone suggested several hundred $$$ spent on a single
SCSI drive and controller to end up with less than a few
hundred GB of space. A SCSI drive that's full doesn't have
that performance edge anymore, and you still haven't
provided any solid evidence that they're more reliable, so
there's little reason left to choose SCSI... over a SINGLE
ATA drive, forget about RAID0. Just because you want to
argue RAID0 doesn't mean the world is obliged to follow.


I suggest that anyone who can't understand DMA mode on ATA
should not be making any kind of data storage decisions,
instead buying a pre-configured system and not touching
whichever storage solution it might contain.


There isn't very much to understand about DMA (for the end-user) it's
a matter of familiarity/learning. If they never touch it then how are
they supposed to learn? How are they supposed to get problems fixed
when/if they arise and they have only phone support? Is this all some
kind of secret club?

Come on.


Just how many problems do you expect to have? I keep
getting the feeling that all those features for SCSI are
because you've seen a much higher problem rate than with
ATA.

Sure the learning is important, and should be done PRIOR to
depending on that technology for data storage, not "during".
The majority of data loss occurs from either disk failure or
user error. Experience and more disks (allowed by lower
per unit cost) help to reduce rates of these common causes.


That has nothing to do with it. The point is there are more things to
look at & think of with ATA raid over a single scsi drive and that
makes it less simple.


Get beyond the idea of ATA raid0 already.
You might as well compare ATA RAID0 to SCSI RAID0 instead,
if you insist on talking about RAID.


I'm not claiming any of these by themselves are
overwhelming. Put them together, though, and there is a _difference_
in overall simplicity of the different systems. Furthermore this
simplicity point is one of many items used to substantiate and
elaborate on a recommendation you agree with. It's unreasonable to
now claim one aspect of one of the many points makes or breaks the
overall recommendation & argument.


Again doubletalk.
You claim the simplicity is a virtue and yet previously went
on about "advanced features" of SCSI... and again, fixated
on RAID0. I _never_ suggested RAID0, and am certainly not
bound to argue FOR (or against) it because someone ELSE
suggested it.


So you're trying to compare a single non-RAID drive to a
RAIDed config now?


Come on.


I think the record player is broken, "come on" keeps
repeating.


That always was the case. We both recommended the same thing and BOTH
compared it to ATA RAID earlier.


No, I chose between the only two alternatives presented,
that most certainly does NOT mean it's what I suggest.
Suppose I asked you if it'd be better to hammer nails with a
potato or a brick, would your choosing the brick mean you
recommend hammering nails with a brick?



This smear attempt of yours is becoming very transparent. If the
thread confuses you so, why bother posting?



Congratulations on stooping to insults again when you've run
out of arguments, let alone evidence.


SCSI, including the Cheetah, does not
eliminate management software updates or config.


Come on

What "management software" does a single Cheetah use on a vanilla hba?


Again you fixate on RAID0. Seems like you have no argument
else you'd not try to slant the whole conversation.



What backup of the controller config is needed on ATA beyond
SCSI?


Come on.

It's smart to backup a raid controller's config (if you can - or
perhaps even if you have to take it off the drives). There's no
reason or ability to do that with a vanilla scsi hba.


Again the question, "What backup of the controller config is
needed on ATA beyond SCSI?" We're talking apples to apples,
RAID to RAID, not your twisted argument.


Come on.

All this is in response to Joris' post:

"Besides this these disks are way to expensive and you get much
better performance and several times the storage space by
spending that money on a RAID array.

Why do you need a Cheetah 15k disk?"

So we both later made an identical recommendation (the 15k cheetah) in
comparison to ATA raid. In fact YOU recommended a single cheetah
when compared to ATA RAID0!


I recommend a single anything that's NOT on the 33/32 PCI
bus. I'd sooner recommend a pair of RAID0 Raptors on SB
controller than an expensive SCSI PCI controller and the
Cheetah 15K, but that is not what was being discussed, my
reply was to one specific question regardless of the larger
picture.


Did I really _extend_ the argument _further_, or simply elaborate
/provide an explanation/details on the benefits which affect not only
performance but also user/operator productivity (which is WHY ppl are
concerned with performance in the first place).

So when you said:
"The 15K SCSI drive will be of more benefit than a pair of
typical ATA in RAID0."

That was more worthwhile because you made no attempt to elaborate on
the attributes that would be helpful to the OP and WHY it is a better
fit for him?

Come on.


Perhaps you should stop using contexts only when it suits
you.


Except that you're ignoring a large issue... the drive IS
storage.


Come on.

That doesn't even make any sense.


Sure it does, you suggest an extremely expensive way to get
the least storage of (almost any) modern drive available...
Making additional storage of the least benefit possible.
Inner tracks of a fast SCSI drive, aren't so fast. A Maxtor
Maxline would eat your SCSI suggestion alive once the (SCSI)
drive became nearly full, except on some specific uses like
databases.



You can avoid RAID0, which I agree with, but can't
just claim the Cheetah uses less power without considering
that it a) has lower capacity


Come on.

The OP was considering a single 15K cheetah NOT say 250gigs of storage
for example.


Why not say 250GB of storage? In SATA drives it's a lot
cheaper than a Cheetah plus controller.


b) costs a lot more per GB.


Come on.

Has nothing to do with electrical power.


Nope, but having to run 3 Cheetahs to get the same capacity uses
a wee bit more power wouldn't you say?


Also $/GB is overly simplistic - it is not the only variable in TCO or
ROI for storage.


.... and ignoring $/GB in favor of a single SCSI drive is
overly foolish.


c) its performance advantage drops the further it's filled
relative to one much larger ATA drive at same or lower
price-point, perhaps even at less than 50% of the cost.


Come on.

Has nothing to do with electrical power

Also not true. These drops are case by case and not by interface.
Look at the Seagate Cheetah 36ES for example, which drops extremely
little across the entire disk.


In single drive configuration or only because it, in
multi-drive arrays, was already so bottlenecked by the PCI
bus that it was wasted $$$.



To compare raw thruput on similar price point you are talking about
antiquated scsi with less dense platters vs modern ata with very dense
platters. That's too unfair to even be serious. It also isn't very
serious because the comparison is based on an overly simplistic view
of both performance and valuation.


Nope, modern drives... or at least as modern as possible
since the SCSI drives / controller are so much more
expensive. It is very rare for a system to benefit more from a
few hundred $$$ spent on SCSI than from spending that money
elsewhere on upgrades.
  #18  
Old November 26th 04, 07:41 PM
Curious George
external usenet poster
 
Posts: n/a
Default

I see you're still hitting the pipe.

enjoy!
  #19  
Old November 26th 04, 08:50 PM
Curious George
external usenet poster
 
Posts: n/a
Default

On Thu, 25 Nov 2004 12:28:30 +0100, "Joris Dobbelsteen"
wrote:

snip

This thread is turning into total crap.
Let's just take an economic approach.
(Prices may vary depending on whatever....)

A Cheetah 15K.3 36.7 GB costs EUR 315.
Access times are 3.6 ms, 8 MB cache. U320.
EUR 8.50 / GB. 50-75 MB/s sequential read.
Maxtor claims faster access times (3.2 ms) at lower costs (EUR 299).
Fujitsu claims faster access times than cheetah (3.3 ms) at lower costs (EUR
229)

snip

Appreciate the detail. Very helpful to the OP & group.

The cheapest SCSI controller I could find was Adaptec ASC-19160 at EUR 149.


LSI cards are good low-cost alternatives. For a few disks on one
channel (64-bit/33MHz PCI or slower), 40 USD is enough.
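
For what it's worth, a quick cost-per-usable-GB sketch (Python; the
Cheetah and controller prices are the EUR figures already mentioned, the
Raptor price and the "free" onboard SATA controller are placeholder
assumptions, not quotes from any store):

def cost_per_gb(drive_price, drive_gb, drives=1, controller_price=0.0):
    # Total spend divided by total capacity, controller included.
    total = drive_price * drives + controller_price
    return total / (drive_gb * drives)

# Cheetah 15K.3 36.7 GB at EUR 315 plus the Adaptec ASC-19160 at EUR 149
print(f"Cheetah + Adaptec : {cost_per_gb(315, 36.7, controller_price=149):.2f} EUR/GB")

# Same drive with a cheap LSI card (~EUR 35, a rough conversion of 40 USD)
print(f"Cheetah + LSI     : {cost_per_gb(315, 36.7, controller_price=35):.2f} EUR/GB")

# Hypothetical: 74 GB Raptor at an ASSUMED EUR 180, on onboard SATA (no card)
print(f"Raptor (assumed)  : {cost_per_gb(180, 74):.2f} EUR/GB")

As noted above, $/GB is not the whole TCO picture, but it gives a feel
for the size of the gap being argued about.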

The cheapest SATA RAID controller is Promise S150 TX2Plus (2xSATA + 1xIDE)
at EUR 59.
Or the SATA RAID controller Promise S150 TX4 (4xSATA) at EUR 85.
Better is a motherboard-integrated SATA controller. Today there are
4-channel controllers integrated on the motherboard which can be set to RAID0/1.


Which mobo(s) would you recommend?
How well does their raid deal with management, recovery scenarios,
defect scanning, SMART? (sincerely curious)

Now the why:
Fast access to files, short response times, fast copying - just some luxury
issues.


Short response times. Cheetah 15K leads, but short response times for what?


Hit the nail on the head. The OP has to identify the kinds of tasks
which are choking his disk subsystem and use one or a combination of
suggestions already mentioned in the thread to open the bottleneck.

If you throw too much disk intensive stuff at any storage it will
choke regardless of whether it is raptor raid0 on a dedicated bus,
large 15k array, or whatever. Dividing the load can often be more
important than having the fastest disk or logical disk.

If there is no bottleneck (after all this) I question the importance
of this upgrade and wonder if the expense is warranted for either ata
raid, raptors, scsi, etc. For casual use & casual performance
requirements it's usually hard to justify the extra cost for anything
above non-raid 7200rpm ata (If we're really going to be disciplined
about talking money & ROI).

If you are just using some programs or a (decent) database, consider putting
in some more RAM. It can do magic sometimes. I had a system that had a
memory upgrade from 256 to 768 MB and it did increase performance of some
elements by several factors. Now it handles more tasks at decent
performance.
Swapping will kill your system, no matter what disks.
Fast copying. Try RAID0 (or even independent) Raptor. You get 4x the storage
capacity with better performance than the Cheetah 15K.3. Access times
are a little worse (~0.5 ms), though.

Because you state
just some luxury issues.

consider that you are paying a lot of money for SCSI. Remember that 36
GB can be quite small for a home PC. I have 180 GB and it's filled nearly
to capacity. The dual Raptor provides 140 GB, which gives a decent
storage capacity with good performance.

Don't try a 10K rpm SCSI disk; the Raptor provides equal/better performance
and it's much cheaper.


I guess that varies by location. Where I live there is little price
difference between Raptors and current 10K SCSI, and HBA and cabling
need not be highly expensive - and, well, you already know my bias.

Failures?
Make backups. You will need them anyway, no matter what you are doing.
If this is a major concern, two RAID1 Raptors have equal costs to a single
Cheetah 15K.3 and a much better MTBF (theoretically).


Please explain.

for arrays, basically
Array MTBF = Drive MTBF / N drives

(well, actually you're supposed to include the MTBF of the controller,
etc., which lowers MTBF further)

Array MTBF is significantly lower than a single disk. Raid is
supposed to make up for that by providing storage service continuity
and enhanced data integrity (in most cases) and other features.

Both the Cheetah and Raptor are rated 1,200,000-hour MTBF
(theoretical), so a RAID1 or 2-disk RAID0 array of either yields
600,000 hours (actually lower when including the other non-drive
storage components).

Of course manufacturers provide theoretical MTBF, not operational MTBF,
and MTBF never actually characterizes a particular disk, so it should be
taken with a grain of salt...
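
Just to show the arithmetic (a minimal sketch; the 1,200,000-hour
figure is the theoretical rating quoted just above, and the controller
MTBF is a made-up placeholder):

def series_mtbf(*component_mtbfs):
    # The array needs attention as soon as ANY component fails, so the
    # failure rates add: MTBF_array = 1 / sum(1/MTBF_i).
    return 1.0 / sum(1.0 / m for m in component_mtbfs)

drive_mtbf = 1_200_000   # hours, theoretical rating for the Cheetah or Raptor

print(series_mtbf(drive_mtbf, drive_mtbf))            # 2 drives -> 600000.0 h
print(series_mtbf(drive_mtbf, drive_mtbf, 500_000))   # + controller (assumed) -> ~272727 h

That is time to the first component failure, not time to data loss -
which is exactly why RAID1's continuity, not its MTBF, is the selling
point.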

Read throughput should be 2x a single Raptor (with a decent RAID controller
of course), while writes still have the same speed.
I also believe you should actively cool today's disks when you have 2 or more
close together (put a fan close to them). I have a Seagate Barracuda 20GB and a
Maxtor 160 GB and they stay quite cool due to a fan (an 80mm out of a defective
PSU) that sits beside them.

- Joris


  #20  
Old November 30th 04, 04:42 PM
Joris Dobbelsteen
external usenet poster
 
Posts: n/a
Default

"Curious George" wrote in message
...
On Thu, 25 Nov 2004 12:28:30 +0100, "Joris Dobbelsteen"
wrote:

snip

This thread is turning into total crap.
Let's just take an economic approach.
(Prices may vary depending on whatever....)

A Cheetah 15K.3 36.7 GB costs EUR 315.
Access times are 3.6 ms, 8 MB cache. U320.
EUR 8.50 / GB. 50-75 MB/s sequential read.
Maxtor claims faster access times (3.2 ms) at lower costs (EUR 299).
Fujitsu claims faster access times than cheetah (3.3 ms) at lower costs (EUR 229)

snip

Appreciate the detail. Very helpful to the OP & group.

The cheapest SCSI controller I could find was Adaptec ASC-19160 at EUR 149.

LSI cards are good low-cost alternatives. For a few disks on one
channel (64-bit/33MHz PCI or slower), 40 USD is enough.


Sorry, they didn't have these cards.
As stated above
(Prices may vary depending on whatever....)

thus...

The cheapest SATA RAID controller is Promise S150 TX2Plus (2xSATA + 1xIDE)
at EUR 59.
Or the SATA RAID controller Promise S150 TX4 (4xSATA) at EUR 85.
Better is a motherboard-integrated SATA controller. Today there are
4-channel controllers integrated on the motherboard which can be set to RAID0/1.


Which mobo(s) would you recommend?
How well does their raid deal with management, recovery scenarios,
defect scanning, SMART? (sincerely curious)


The cheapest ASUS, ABIT, MSI, or Gigabyte branded board that fits the
specification.
Intel boards are not my first choice, because they are usually expensive and
don't have the features that the other brands provide.
I had trouble with AOpen (an incompatible mainboard and a jet-engine-like
CD-ROM), so I don't use this brand any more.
Of course this is my opinion and it's probably quite biased.

What management? Just install the array and you are done. It works just like
a normal disk (except for setting up the array once).
With some controllers you might get in trouble when you use different disks,
so use the same brand AND model.

Recovery: RAID1: turn off the system, remove the defective drive and replace
it. Turn on, repair the array, wait for the disk copy (rebuild) to finish, and
you're done.
RAID0 or one-disk: replace the defective drive, grab your backups and have a
good time for the coming day(s).

Today's disks are capable of relocating damaged sectors; they all do it (the same
reason your 128MB USB drive/memory stick only has 120 MB of usable capacity).

Now the why:
Fast access to files, short response times, fast copying - just some luxury
issues.


Short response times. Cheetah 15K leads, but short response times for what?

Hit the nail on the head. The OP has to identify the kinds of tasks
which are choking his disk subsystem and use one or a combination of
suggestions already mentioned in the thread to open the bottleneck.

If you throw too much disk intensive stuff at any storage it will
choke regardless of whether it is raptor raid0 on a dedicated bus,
large 15k array, or whatever. Dividing the load can often be more
important than having the fastest disk or logical disk.


Simply call it resource contention. For a single-user system the Raptor will
handle the resource contention better than the SCSI system.
Of course this is subject to the opinion expressed by a third party, who may
reasonably be expected to have sufficient knowledge of the system to provide
such an 'opinion'.
Usually response times, throughput and storage capacity require a
trade-off.
My trade-off would favor storage capacity over throughput over response
times.
I need a lot of storage (movies & DVDs). I do work that involves a lot of
copying (DVD authoring). Programs I use frequently will be put in the memory
cache (memory response times of tens of ns are much better than disk response
times of a couple of ms). I also never found a good reason to use RAID1 for my
system.

If there is no bottleneck (after all this) I question the importance
of this upgrade and wonder if the expense is warranted for either ata
raid, raptors, scsi, etc. For casual use & casual performance
requirements it's usually hard to justify the extra cost for anything
above non-raid 7200rpm ata (If we're really going to be disciplined
about talking money & ROI).


Indeed, but if you want luxury, you are (or someone else is, if you are
lucky) going to pay for it anyway. It's just a question of how much you are
willing to spend for your luxury.
However, for the same luxury (or even the same essential product that you
simply need) there is a large variation in the prices you can pay.

If you are just using some programs or a (decent) database, consider putting
in some more RAM. It can do magic sometimes. I had a system that had a
memory upgrade from 256 to 768 MB and it did increase performance of some
elements by several factors. Now it handles more tasks at decent
performance.
Swapping will kill your system, no matter what disks.
Fast copying. Try RAID0 (or even independent) Raptor. You get 4x the storage
capacity with better performance than the Cheetah 15K.3. Access times
are a little worse (~0.5 ms), though.

Because you state
just some luxury issues.

consider that you are paying a lot of money for SCSI. Remember that 36
GB can be quite small for a home PC. I have 180 GB and it's filled nearly
to capacity. The dual Raptor provides 140 GB, which gives a decent
storage capacity with good performance.

Don't try a 10K rpm SCSI disk; the Raptor provides equal/better performance
and it's much cheaper.


I guess that varies by location. Where I live there is little price
difference between Raptors and current 10K SCSI, and HBA and cabling
need not be highly expensive - and, well, you already know my bias.


I took the store here that was quite cheap compared to many others. Of
course prices were provided "AS IS", meaning they can differ around the
world and between stores. See above (I'm beginning to repeat myself).

Failures?
Make backups. You will need them anyway, no matter what you are doing.
If this is a major concern, two RAID1 Raptors have equal costs to a single
Cheetah 15K.3 and a much better MTBF (theoretically).


Please explain.

for arrays, basically
Array MTBF = Drive MTBF / N drives

(well, actually you're supposed to include the MTBF of the controller,
etc., which lowers MTBF further)


Let's assume most chip manufacturers (NOT designers; there are only a few
manufacturers) are equally capable of making the same quality product.
Besides, the mechanical parts are more likely to fail than the electrical ones.
A very hot CPU would last for 10 years (it's designed for it anyway). I
expect the same for chipsets.
I have only seen electronics fail because of ESD, lightning storms and some
chemicals (e.g. from batteries).
I wouldn't consider the controller to be a major problem in disk
subsystems.

Array MTBF is significantly lower than a single disk. Raid is
supposed to make up for that by providing storage service continuity
and enhanced data integrity (in most cases) and other features.


When using 2-disk RAID 1 (NOT RAID 0): when one disk fails the system
continues to operate correctly, leaving you time to replace the defective
hardware with no loss of continuity.
One-disk systems will stop working when the disk fails.

Besides, recovery times for RAID1 are probably lower than for one-disk
systems.
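
A toy model of that continuity benefit (Python; exponential failure
assumption, and the mission time and repair window are illustrative
values I picked, not anything from the drive specs):

import math

drive_mtbf_h = 1_200_000        # theoretical rating discussed earlier
lam = 1.0 / drive_mtbf_h        # per-hour failure rate of one drive

mission_h = 5 * 365 * 24        # ~5 years of power-on time (assumed)
repair_h = 24                   # assumed time to notice, swap and rebuild

# Single disk: data is at risk as soon as the one drive dies.
p_single = 1 - math.exp(-lam * mission_h)

# 2-disk RAID1 (rough): data is lost only if the second drive dies
# inside the repair window that follows the first failure.
p_first = 1 - math.exp(-2 * lam * mission_h)
p_second_in_window = 1 - math.exp(-lam * repair_h)
p_raid1 = p_first * p_second_in_window

print(f"single disk : {p_single:.4%} chance of losing the data in 5 years")
print(f"2-disk RAID1: {p_raid1:.6%} chance of losing the data in 5 years")

Backups still matter either way - this says nothing about operator
error, viruses, or a controller that scribbles on both mirrors.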

Both the Cheetah and Raptor are rated 1,200,000-hour MTBF
(theoretical), so a RAID1 or 2-disk RAID0 array of either yields
600,000 hours (actually lower when including the other non-drive
storage components).

Of course manufacturers provide theoretical MTBF, not operational MTBF,
and MTBF never actually characterizes a particular disk, so it should be
taken with a grain of salt...


Basically, under normal operation the system will continue to work without
failing once. You will probably have more problems with software than you
will have with hardware. Most down-time is either human or software related,
not hardware. The issue is that when it is hardware related, recovery costs
much more time and you have a bigger risk of losing valuable data.
By the time the disk starts to fail, it will probably be obsolete anyway,
unless your system lasts for more than 6 years. Of course, if you want
that, you should rather prepare for the worst and have a 4-computer
cluster installed with fail-over capability.

Assuming it's for luxury: I have here a system that has been in operation for
5 years already and is subject to frequent transports and some very
disk-intensive work at times, and it has never let me down due to a hardware
failure (the normal minor stuff because I forgot some cables or didn't
attach them well enough aside). All the products I used were the cheapest
compared to competitors, although some trades were made between brands when
I thought that for only a very small difference I could get something I expect to
be more reliable or better.

Read throughput should be 2x a single Raptor (with a decent RAID controller
of course), while writes still have the same speed.
I also believe you should actively cool today's disks when you have 2 or more
close together (put a fan close to them). I have a Seagate Barracuda 20GB and a
Maxtor 160 GB and they stay quite cool due to a fan (an 80mm out of a defective
PSU) that sits beside them.

- Joris




 



