A computer components & hardware forum. HardwareBanter


15K rpm SCSI-disk



 
 
  #1  
Old November 2nd 04, 04:07 PM
Ronny Mandal
external usenet poster
 
Posts: n/a
Default 15K rpm SCSI-disk

Hi.

I have a question.

I am really eager to buy a Seagate Cheetah 15K rpm disk for my workstation.
The only issue is that I've heard these disks are not suited to frequent
power cycling, i.e. turning the computer off once or twice (or more) a day.
Supposedly they're meant to be left running, e.g. in a server, and frequent
power on/off is hazardous to them.

Is this correct?


Thanks,


Ronny Mandal


  #2  
Old November 3rd 04, 08:22 PM
Joris Dobbelsteen
external usenet poster
 
Posts: n/a
Default

Besides this, these disks are way too expensive, and you get much better
performance and several times the storage space by spending that money on a
RAID array.

Why do you need a Cheetah 15K disk?

- Joris

"Ronny Mandal" wrote in message
...
Hi.

I have a question.

I am really eager to buy a Seagate Cheetah 15K rpm disk for my workstation.
The only issue is that I've heard these disks are not suited to frequent
power cycling, i.e. turning the computer off once or twice (or more) a day.
Supposedly they're meant to be left running, e.g. in a server, and frequent
power on/off is hazardous to them.

Is this correct?


Thanks,


Ronny Mandal




  #3  
Old November 4th 04, 09:27 AM
Ronny Mandal
external usenet poster
 
Posts: n/a
Default

In fact I do not need it; I need performance.

So you are saying that two IDE disks in e.g. RAID 0 will outperform the
SCSI disk in speed, besides storage capacity etc.?

Thanks.


Ronny Mandal

"Joris Dobbelsteen" wrote in message
...
Besides this, these disks are way too expensive, and you get much better
performance and several times the storage space by spending that money on a
RAID array.

Why do you need a Cheetah 15K disk?

- Joris

"Ronny Mandal" wrote in message
...
Hi.

I have a question.

I am really eager to buy a Seagate Cheetah 15K rpm disk for my workstation.
The only issue is that I've heard these disks are not suited to frequent
power cycling, i.e. turning the computer off once or twice (or more) a day.
Supposedly they're meant to be left running, e.g. in a server, and frequent
power on/off is hazardous to them.

Is this correct?


Thanks,


Ronny Mandal






  #4  
Old November 4th 04, 11:09 AM
kony
external usenet poster
 
Posts: n/a
Default

On Thu, 4 Nov 2004 10:27:49 +0100, "Ronny Mandal"
wrote:

In fact I do not need it; I need performance.

So you are saying that two IDE disks in e.g. RAID 0 will outperform the
SCSI disk in speed, besides storage capacity etc.?


Speed at what, specifically? Will there be a lot of multiple simultaneous
I/O, a lot of random access (as with running an OS or large database work),
or a need for the highest sustained throughput? (Pick one.)

Will your work involve different source and destination files of fair size,
as with video editing?

I wouldn't worry about power or heat too much. Well, they are concerns, but
all that need be done is to have adequate airflow and power, as with any
other configuration.

Spin-up frequency affects all drives, not just the SCSI one you mention.
For maximum life they should be kept spinning; there is nothing unique
about the mentioned drive that would make it more (or less) problematic to
turn the system off or let it sleep. Well, perhaps slightly worse for a
higher-RPM drive, which is under higher stress spinning up to a higher
speed, but relatively speaking that stress will impact any drive.
  #5  
Old November 5th 04, 10:00 AM
Ronny Mandal
external usenet poster
 
Posts: n/a
Default

Hmm.

Fast access to files, short response times, fast copying - just some luxury
issues.

And then, I tend to power up in the morning at approx. 6 a.m. and power
down at about 22:30 or later.

Ronny Mandal


"kony" wrote in message
...
On Thu, 4 Nov 2004 10:27:49 +0100, "Ronny Mandal"
wrote:

In fact I do not need it; I need performance.

So you are saying that two IDE disks in e.g. RAID 0 will outperform the
SCSI disk in speed, besides storage capacity etc.?


Speed at what, specifically? Will there be a lot of multiple simultaneous
I/O, a lot of random access (as with running an OS or large database work),
or a need for the highest sustained throughput? (Pick one.)

Will your work involve different source and destination files of fair size,
as with video editing?

I wouldn't worry about power or heat too much. Well, they are concerns, but
all that need be done is to have adequate airflow and power, as with any
other configuration.

Spin-up frequency affects all drives, not just the SCSI one you mention.
For maximum life they should be kept spinning; there is nothing unique
about the mentioned drive that would make it more (or less) problematic to
turn the system off or let it sleep. Well, perhaps slightly worse for a
higher-RPM drive, which is under higher stress spinning up to a higher
speed, but relatively speaking that stress will impact any drive.



  #6  
Old November 5th 04, 11:44 AM
kony
external usenet poster
 
Posts: n/a
Default

On Fri, 5 Nov 2004 11:00:42 +0100, "Ronny Mandal"
wrote:

Hmm.

Fast access to files, short response times, fast copying - just some luxury
issues.

And then, I tend to power up in the morning at approx. 6 a.m. and power
down at about 22:30 or later.

Ronny Mandal


The 15K SCSI drive will be of more benefit than a pair of typical ATA
drives in RAID 0. A good cost-effective compromise (particularly if you
don't have a decent SCSI controller already) would be a SATA Western
Digital Raptor 74GB, or a pair of them... ideally the OS, applications, and
the data files would be on different drives.

Powering on once a day seems reasonable enough for any drive. Either way,
the best course of action is still to make regular backups.



  #7  
Old November 4th 04, 09:32 AM
Curious George
external usenet poster
 
Posts: n/a
Default

On Wed, 3 Nov 2004 21:22:20 +0100, "Joris Dobbelsteen"
wrote:

Besides this, these disks are way too expensive, and you get much better
performance and several times the storage space by spending that money on a
RAID array.

Why do you need a Cheetah 15K disk?


Compared to low-end RAID, one or two of these drives would still bring
incredible responsiveness, but with much higher reliability; simplicity of
installation, maintenance, and potential troubleshooting down the line; and
less power consumption, heat, and potential for PSU issues.

You simply cannot compare the overall user productivity and computing
experience of one or two good enterprise-quality drives with a
personal-storage-caliber 'array'. IMHO, RAID is not worth doing without a
decent controller and disks as reliable as Cheetahs - so doing it 'right'
wouldn't save any money. Plus, if he is planning to power cycle frequently,
RAID of any caliber is the last thing you want to recommend (for multiple
reliability-related reasons, for starters).


- Joris

"Ronny Mandal" wrote in message
...
Hi.

I have a question.

I am really eager to buy a Seagate Cheetah 15K rpm disk for my workstation.
The only issue is that I've heard these disks are not suited to frequent
power cycling, i.e. turning the computer off once or twice (or more) a day.


Modern enterprise drives should be fine power cycling a couple of times per
day for several years. While personal storage devices are more geared to
this use, both have a limit before reliability is affected - so it's not
ideal in either case.

They're more suitable to be left on, in e.g. a server, and that it is
hazardous to power on/off frequently.

Is this correct?


Sort of. You might also not want to go too long without powering off these
drives, for reliability reasons.

The fluid-bearing Cheetahs are wonderful and have an excellent track
record: highly reliable, durable, quiet, and extremely responsive. I
wouldn't worry too much; consider it a safe purchase you shouldn't regret.

Any add-on controller (SCSI, SCSI RAID, ATA RAID, SATA RAID) may affect
power features and may be more of a concern (resuming from power-saving may
be delayed, or poor drivers may prohibit certain power features). So the
simpler the disk subsystem, the more likely you are to have success using
the various convenience-related features associated with turning on the
computer a few times a day.

Today's computers in general are less temperamental and less susceptible to
problems from frequent power cycling, but it is still not ideal. If you
have a good machine, why not leave it on a good deal of the time and have
it do stuff for you, or have it available to access if you need something
while you are away?
  #8  
Old November 19th 04, 06:20 PM
kony
external usenet poster
 
Posts: n/a
Default

On Thu, 04 Nov 2004 09:32:16 GMT, Curious George
wrote:

On Wed, 3 Nov 2004 21:22:20 +0100, "Joris Dobbelsteen"
wrote:

Besides this, these disks are way too expensive, and you get much better
performance and several times the storage space by spending that money on a
RAID array.

Why do you need a Cheetah 15K disk?


Compared to low-end RAID, one or two of these drives would still bring
incredible responsiveness, but with much higher reliability; simplicity of
installation, maintenance, and potential troubleshooting down the line; and
less power consumption, heat, and potential for PSU issues.


More of your complete and utter nonsense.
Not more reliable, not "simplicity" relative to anything else, not lower
maintenance, no easier troubleshooting down the line, and not less power
consumption, heat, or PSU issues.

You truly are CLUELESS.

Oh yeah: SCSI for 2 drives on a 33MHz, 32-bit PCI interface is
significantly slower than a pair of Raptors on southbridge-integral SATA.
It'll have marginally lower latency, which is trivial compared to the cost.


You simply cannot compare the overall user productivity and computing
experience with 1 or 2 good enterprise quality drives to a personal
storage caliber 'array'.


You MEAN, YOU PERSONALLY can't compare them because you are
clueless.



Modern enterprise drives should be fine power cycling a couple of times per
day for several years. While personal storage devices are more geared to
this use, both have a limit before reliability is affected - so it's not
ideal in either case.

They're more suitable to be left on, in e.g. a server, and that it is
hazardous to power on/off frequently.

Is this correct?


Sort of. You might also not want to go too long without powering off these
drives, for reliability reasons.


WRONG.
Drives do not need to be power cycled for reliability reasons. The dumbest
thing someone can do is power off drives on a lark; the vast majority of
failures occur after a drive spins down and then tries to come up again.
Pay more attention and you might notice it.



  #9  
Old November 23rd 04, 06:23 AM
Curious George
external usenet poster
 
Posts: n/a
Default

On Fri, 19 Nov 2004 18:20:40 GMT, kony wrote:

On Thu, 04 Nov 2004 09:32:16 GMT, Curious George
wrote:

On Wed, 3 Nov 2004 21:22:20 +0100, "Joris Dobbelsteen"
wrote:

Besides this, these disks are way too expensive, and you get much better
performance and several times the storage space by spending that money on a
RAID array.

Why do you need a Cheetah 15K disk?


Compared to low-end RAID, one or two of these drives would still bring
incredible responsiveness, but with much higher reliability; simplicity of
installation, maintenance, and potential troubleshooting down the line; and
less power consumption, heat, and potential for PSU issues.


More of your complete and utter nonsense.
Not more reliable


Wrong.

An array MTBF calculation necessarily yields a much lower value than a
single-drive installation. For RAID 0 (which is what I think he is
implying), the array's life is limited by its shortest-lasting drive (which
is totally unpredictable), and when that drive goes it takes all the data
on all the other disks with it.
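
George's RAID 0 reliability point can be put in rough numbers. A minimal
sketch, assuming each drive fails independently at a constant rate (so the
array MTBF is the single-drive MTBF divided by the drive count); the
function name and the 1,200,000-hour figure are illustrative choices, not
values from the thread:

```python
def raid0_mtbf(single_drive_mtbf_hours: float, n_drives: int) -> float:
    """MTBF of a RAID 0 array of n identical, independent drives.

    RAID 0 has no redundancy: the array is lost as soon as ANY one
    drive fails, so the combined failure rate is n times higher.
    """
    return single_drive_mtbf_hours / n_drives

# 1,200,000 hours is a typical spec-sheet MTBF, used purely as an example.
print(raid0_mtbf(1_200_000, 1))  # single drive
print(raid0_mtbf(1_200_000, 2))  # two drives striped: half the MTBF
print(raid0_mtbf(1_200_000, 4))  # four drives striped: a quarter
```

This is the standard series-system approximation; real drives do not fail
at a constant rate, so treat it as a bound on intuition, not a prediction.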

Also, in ATA drive manufacturing, the component rejection rate is generally
around 5x less rigorous than for SCSI drives. Since ATA drives ship at a
rate of around 6 to 1 over SCSI, that amounts to a huge difference in the
total number of questionable units you may have the chance to buy. Your
likelihood of getting one such lemon is only offset by the much larger
number of consumers and stores that deal with ATA, and the fact that most
ppl tend not to buy large lots of ATA drives.
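
The lemon argument works the same way probabilistically: the more drives an
array needs, the more chances you have of including a marginal unit. A
hedged sketch, where the per-drive probability p is purely an example value
(no real figure from either vendor is implied):

```python
def p_at_least_one_lemon(p: float, n: int) -> float:
    """Probability that an n-drive array contains at least one 'lemon',
    assuming each drive is independently a lemon with probability p."""
    return 1 - (1 - p) ** n

# With an assumed 5% per-drive lemon rate:
print(p_at_least_one_lemon(0.05, 1))  # one drive
print(p_at_least_one_lemon(0.05, 4))  # four-drive array: roughly 18.5%
```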

Also, enterprise drives and systems tend to implement new features more
conservatively, which can affect reliability, and they tend to employ more
data-protection features, like background defect scanning, arguably better
ECC checking (including of transmissions), additional parity checking, etc.
Also, performance characteristics can be tweaked, and low-level issues can
be better observed, using a few tools.

, not "simplicity" relative to anything else,


Wrong.

We're talking specifically about a 15K Cheetah compared to ATA RAID, not
"anything else."

RAID has more parts and tools to learn and use. There is a learning curve
if it is your first time, especially if you care about getting all the
benefits you are expecting. Installing a simple disk or two is so simple
it's totally mindless. With SCSI you never have to think about DMA modes or
some corollary to get optimum performance...

not lower maintenance,


Wrong.

With a simple disk there is no drive synchronization, no time-consuming
parity initialization, no management software to update or configure, and
no controller configuration that needs to be backed up; adding drives never
implies much in the way of low-level configuration, and never the
adjustment of existing storage...

no easier troubleshooting down the line,


Wrong.

A power failure or crash can really screw up a lot of RAIDs. A faulty disk
will take a crap all over the entire filesystem with RAID 0. Defunct disks
due to power cable or backplane issues are a PITA - with a single drive you
just push the plug in better and press the power button. You almost never
have to worry about drive firmware issues or conflicts. You almost never
have to think about getting bare-metal recovery software to work or play
nice with a storage driver. A transient disk error passed on in RAID 5, for
example, is a nightmare to troubleshoot...

and not less power consumption, heat or PSU
issues.

Totally absurd with RAID recommendations for the low-end desktop. The
difference in power consumption between current SCSI and ATA drives is no
longer significant. The consumption of several disks used at the same time
is, though - especially during power-up.

Of course, I'm not advocating any low-end desktop.

You truly are CLUELESS.

You truly are hilarious.

Oh yeah, SCSI for 2 drives on a 33MHz, 32bit PCI PC
interface is significantly slower than a pair of Raptors on
southbridge-integral SATA. It'll have marginally lower
latency, which is trivial compared to the cost.


Oh yeah, more absurd trash.

- Not at all with write-back cache disabled, so the SATA RAID doesn't bite
you.
- Not at all for an individual SCSI disk.
- Not at all if SCSI disks are mainly used one at a time.
- Not for reads/writes through most of the platters of 2 SCSI drives used
'simultaneously', especially if the PCI bus isn't handling much else.
- Latency is far from marginal, especially for multiuser and multitasking
use.
- Not nearly as expensive as you wish to imply.

I'd also be careful if you are thinking all southbridge devices are always,
and always have been, off the PCI bus.

You simply cannot compare the overall user productivity and computing
experience with 1 or 2 good enterprise quality drives to a personal
storage caliber 'array'.


You MEAN, YOU PERSONALLY can't compare them because you are
clueless.


Just plain dumb.

modern enterprise drives should be fine power cycling a couple times
per day for several years. While personal storage devices are more
geared to this use both have a limit before affecting reliability - so
it's not ideal in either case.

They're more suitable to be left on, in e.g. a server, and that it is
hazardous to power on/off frequently.

Is this correct?


Sort of. You might also not want to go too long without powering off these
drives, for reliability reasons.


WRONG.
Drives do not need to be power cycled for reliability reasons. The dumbest
thing someone can do is power off drives on a lark; the vast majority of
failures occur after a drive spins down and then tries to come up again.
Pay more attention and you might notice it.


At least we're talking about the same kind of failure.

If you spin down every few months, only small amounts of smaller particles
have accumulated to settle in the drive. If you wait too long, larger
amounts of larger particles are being churned around, and when they settle
they can cause stiction on re-powering. Planned power-downs can somewhat
extend the usable life before stiction - or at least let you control the
failure event during maintenance, as opposed to when you need the drive
most (the Monday Morning Blues).
  #10  
Old November 23rd 04, 11:15 PM
kony
external usenet poster
 
Posts: n/a
Default

On Tue, 23 Nov 2004 06:23:47 GMT, Curious George
wrote:


Compared to low-end RAID, one or two of these drives would still bring
incredible responsiveness, but with much higher reliability; simplicity of
installation, maintenance, and potential troubleshooting down the line; and
less power consumption, heat, and potential for PSU issues.


More of your complete and utter nonsense.
Not more reliable


Wrong.

An array MTBF calculation necessarily yields a much lower value than a
single-drive installation. For RAID 0 (which is what I think he is
implying), the array's life is limited by its shortest-lasting drive (which
is totally unpredictable), and when that drive goes it takes all the data
on all the other disks with it.


OK then, but there was no mention of RAID0. Why would we
bother to contrast anything with RAID0?



Also, in ATA drive manufacturing, the component rejection rate is generally
around 5x less rigorous than for SCSI drives.


But that means very little without insider info about the
cause... it could simply be that the SCSI line is producing
a lot of defective drives.


Since ATA drives ship at a rate of around 6 to 1 over SCSI, that amounts to
a huge difference in the total number of questionable units you may have
the chance to buy.


... and a huge difference in the total number of good units you may have
the chance to buy, too.

Your likelihood of getting one such lemon is only offset by the much larger
number of consumers and stores that deal with ATA, and the fact that most
ppl tend not to buy large lots of ATA drives.


Most ppl tend to buy large lots of SCSI drives?
I suggest that any significant data store be tested before being deployed,
with the actual parts to be used, and further that NO data store on a RAID
controller be kept without an alternate backup method.



Also enterprise drives & systems tend to implement new features more
conservatively which can affect reliability and they tend to employ
more data protection features like background defect scanning and
arguably better ECC checking incl of transmissions & additional parity
checking, etc. Also performance characteristics can be tweaked and
low level issues can be better observed using a few tools.


I disagree that they "tend to implement new features more conservatively";
a couple of days ago you listed many features that were added less
conservatively.


, not "simplicity" relative to anything else,


Wrong.

We're talking specifically about a 15K cheetah compared to ata raid
not "anything else."





RAID has more parts and tools to learn & use. There is a learning
curve if it is your first time and esp. if you care about getting all
the benefits you are expecting. Installing a simple disk or two is so
simple it's totally mindless. With scsi you never have to think about
DMA mode or some corollary to get optimum performance...


I disagree with that assessment. In one sentence you write "more parts and
tools to learn and use", but then come back with "never have to think about
DMA mode". You can't have it both ways; there most certainly is more to
think about.

I suggest that anyone who can't understand DMA modes on ATA should not be
making any kind of data storage decisions, and should instead buy a
pre-configured system and not touch whichever storage solution it contains.



not lower maintenance,


Wrong.

With a simple disk there is no drive synchronization, no time
consuming parity level initialization, no management software updates
or configuration, there is no backup of controller config that needs
to be performed, adding drives never implies much in the way of low
level configuration & never the adjustment of existing storage...


So you're trying to compare a single non-RAID drive to a
RAIDed config now? SCSI, including the Cheetah, does not
eliminate management software updates or config.
What backup of the controller config is needed on ATA beyond
SCSI?


no easier troubleshooting down the line,


Wrong.

Power failure or crash can really screw up a lot of raids. A faulty
disk will take a crap all over the entire filesystem with raid 0.


Yes, but again, this is not an argument FOR SCSI Cheetahs, simply for
avoiding RAID 0. Granted, that was part of the context of the reply, but it
didn't end there; you tried to extend the argument further.

Defunct disks due to power cable or backplane issues are a PITA - with a
single drive you just push the plug in better and press the power button.
You almost never have to worry about drive firmware issues or conflicts.
You almost never have to think about getting bare-metal recovery software
to work or play nice with a storage driver. A transient disk error passed
on in RAID 5, for example, is a nightmare to troubleshoot...

and not less power consumption, heat or PSU
issues.

Totally absurd with raid recommendations for the low end desktop.
Difference in power consumption of current scsi and ata drives is no
longer significant. Using several disks at the same time is -
especially during power up.


Except that you're ignoring a large issue... the drive IS storage. You can
avoid RAID 0, which I agree with, but you can't just claim the Cheetah uses
less power without considering that it a) has lower capacity, b) costs a
lot more per GB, and c) its performance advantage drops the further it's
filled, relative to one much larger ATA drive at the same or a lower price
point, perhaps even at less than 50% of the cost.


Of course, I'm not advocating any low-end desktop.

You truly are CLUELESS.

You truly are hilarious.


Thank you, laughing is good for us.



Oh yeah, SCSI for 2 drives on a 33MHz, 32bit PCI PC
interface is significantly slower than a pair of Raptors on
southbridge-integral SATA. It'll have marginally lower
latency, which is trivial compared to the cost.


Oh yeah, More absurd trash.


Do you not even understand the aforementioned PCI bottleneck? A
southbridge-integral (or dedicated-bus) controller is essential for utmost
performance on the now-aged 33MHz/32-bit PCI bus. Do you assume people
won't even use the PCI bus for anything but their SCSI array? That seems
unlikely; the array can't even begin to be competitive unless it's
consuming most of the bus throughput, making anything from the sound card
to the NIC to the modem malfunction in use, or else performance drops.



-Not at all with write back cache disabled so the SATA RAID doesn't
bite you.
-Not at all for an individual SCSI disk
-Not at all if SCSI disks are mainly used 1 at a time
-Not for read/writes through most of the platters of 2 scsi drives
used 'simultaneously' esp if the PCI bus isn't handling much else.
-Latency is far from marginal esp for multiuser & multitasking


This I agree with, latency reduction is a very desirable
thing for many uses... but not very useful for others.

-Not nearly as expensive as you wish to imply

I'd also be careful if you are thinking all southbridge devices are
always, & always have been, off the PCI bus.



Never wrote "always been"; we're talking about choices today. What modern
chipset puts integrated ATA on the PCI bus? What two-year-old chipset does?



You simply cannot compare the overall user productivity and computing
experience with 1 or 2 good enterprise quality drives to a personal
storage caliber 'array'.


You MEAN, YOU PERSONALLY can't compare them because you are
clueless.


Just plain dumb.


No; if you could compare them, you'd see that a SCSI card on PCI will never
exceed around 128MB/s, while southbridge ATA RAIDs may easily exceed
that... throw a couple of WD Raptors in a box and presto, it's faster and
cheaper. Keep in mind that either way I would only recommend a further
backup strategy; data should not live only on (any) drives used regularly
in the system.
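
The ceiling kony keeps citing is just bus arithmetic. A quick illustrative
calculation, assuming nothing beyond the classic 32-bit/33MHz PCI bus:

```python
# Classic PCI: a 32-bit (4-byte) bus clocked at 33MHz, at best one
# transfer per clock. The resulting peak is shared by every device on
# the bus, so real-world usable throughput for a single card is lower,
# which is where the ~128MB/s figure in the thread comes from.

pci_clock_hz = 33_000_000   # 33MHz bus clock
pci_width_bytes = 4         # 32-bit data path

peak_mb_s = pci_clock_hz * pci_width_bytes / 1_000_000
print(peak_mb_s)  # 132.0 (theoretical peak, MB/s)
```

A southbridge-integral SATA port sidesteps this shared ceiling entirely,
which is the crux of the Raptor recommendation.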



Sort of. You might also not want to go too long without powering off these
drives, for reliability reasons.


WRONG.
Drives do not need to be power cycled for reliability reasons. The dumbest
thing someone can do is power off drives on a lark; the vast majority of
failures occur after a drive spins down and then tries to come up again.
Pay more attention and you might notice it.


At least we're talking about the same kind of failure.

If you spin down every few months, only small amounts of smaller particles
have accumulated to settle in the drive. If you wait too long, larger
amounts of larger particles are being churned around, and when they settle
they can cause stiction on re-powering. Planned power-downs can somewhat
extend the usable life before stiction - or at least let you control the
failure event during maintenance, as opposed to when you need the drive
most (the Monday Morning Blues).


I don't believe there is enough evidence to conclude anything near this; it
seems more like an urban legend. I suggest not powering down the drives at
all, until their scheduled replacement.
 






Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 HardwareBanter.
The comments are property of their posters.