New hard disk architectures



 
 
  #41  
Old December 20th 05, 01:05 PM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage

Yousuf Khan wrote:

Arno Wagner wrote:
How would you determine where the boot-sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so almost no gain from
reading the boot data faster.


You would manually choose which components go into the flash disk. Or
you would get a program to analyse the boot sequence, which would then
choose which components to send to the flash. You can even pre-determine
what devices are in the system and preload their device drivers.

I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.


You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


So what? And how much of that will you actually be recovering? Changing
the sector size doesn't recover _all_ of the space used by ECC. Further,
you've totally neglected sparing--to have the same number of spare sectors
available you'd have to devote 8 times as much space to sparing.

Yousuf Khan


--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
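
To put rough numbers on both points above, here is a quick Python sketch.
The 10% ECC figure comes from the thread; the 6.5% figure for 4096-byte
sectors and the spare-sector count are assumptions for illustration only,
not vendor data.

# Back-of-the-envelope sketch of the two effects argued above.
# The 10% ECC figure is from the thread; 6.5% and the spare count are assumptions.

def ecc_savings_gb(capacity_gb, ecc_512=0.10, ecc_4096=0.065):
    """Capacity reclaimed if ECC overhead drops from ecc_512 to ecc_4096."""
    return capacity_gb * (ecc_512 - ecc_4096)

def extra_sparing_bytes(spare_sectors, old_size=512, new_size=4096):
    """Extra space consumed if the same *number* of spare sectors is kept
    but each spare grows from old_size to new_size bytes."""
    return spare_sectors * (new_size - old_size)

for cap in (100, 200, 400):  # GB
    print(f"{cap} GB drive: ~{ecc_savings_gb(cap):.1f} GB reclaimed from ECC")

# 8x the sparing space, but still tiny in absolute terms:
print(f"{extra_sparing_bytes(10_000) / 1e6:.1f} MB extra for 10,000 spare sectors")

Under these assumptions the reclaimed ECC space grows with drive capacity,
while the 8-fold sparing cost stays in the tens of megabytes, which is
essentially the point about cheap platter space made later in the thread.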
  #42  
Old December 20th 05, 05:57 PM posted to comp.sys.ibm.pc.hardware.storage,comp.sys.ibm.pc.hardware.chips

"J. Clarke" wrote in message
Yousuf Khan wrote:

Arno Wagner wrote:
How would you determine where the boot-sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so almost no gain from
reading the boot data faster.


You would manually choose which components go into the flash disk. Or
you would get a program to analyse the boot sequence, which would then
choose which components to send to the flash. You can even pre-determine
what devices are in the system and preload their device drivers.

I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.


You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


So what? And how much of that will you actually be recovering?
Changing the sector size doesn't recover _all_ of the space used by ECC.
Further, you've totally neglected sparing--


to have the same number of spare sectors available
you'd have to devote 8 times as much space to sparing.


Nonsense. You're still using 512 byte logical sectors.


Yousuf Khan
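
Yousuf's point is that the host can keep addressing 512-byte logical
sectors while the drive formats the platter in 4096-byte physical
sectors. A minimal sketch of that mapping follows; the dictionary "disk"
and the firmware behaviour are toy assumptions, and only the 8:1 ratio
follows from the sector sizes.

LOGICAL = 512                  # bytes per logical sector (host's view)
PHYSICAL = 4096                # bytes per physical sector (on the platter)
RATIO = PHYSICAL // LOGICAL    # 8 logical sectors per physical sector

def logical_to_physical(lba):
    """Map a 512-byte LBA to (physical sector number, byte offset)."""
    return lba // RATIO, (lba % RATIO) * LOGICAL

def write_logical(disk, lba, data):
    """Emulated 512-byte write: read the whole 4 KB physical sector,
    patch 512 bytes of it, write the whole sector back."""
    assert len(data) == LOGICAL
    psn, offset = logical_to_physical(lba)
    sector = bytearray(disk.get(psn, bytes(PHYSICAL)))
    sector[offset:offset + LOGICAL] = data
    disk[psn] = bytes(sector)

disk = {}                                   # toy model: sector number -> bytes
write_logical(disk, 13, b"\xAA" * LOGICAL)  # LBA 13 -> physical sector 1, offset 2560
print(logical_to_physical(13))              # (1, 2560)

A write that covers a whole, aligned 4 KB span maps to exactly one
physical sector and needs no read-back, which ties into the cluster-size
discussion further down the thread.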

  #43  
Old December 20th 05, 06:25 PM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage

In comp.sys.ibm.pc.hardware.storage Yousuf Khan wrote:
Arno Wagner wrote:
How would you determine where the boot-sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so almost no gain from
reading the boot data faster.


You would manually choose which components go into the flash disk. Or
you would get a program to analyse the boot sequence, which would then
choose which components to send to the flash. You can even pre-determine
what devices are in the system and preload their device drivers.


O.k., so this is "experts only", i.e. again does not make sense in
a consumer product. And no, you cannot preload device drivers in
any meaningful way, since it is not loading but hardware detection
and initialisation that takes the time.

I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.


You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


10% is not significant and certainly does not justify such a change.
Seems this is corporate greed and stupidity at work with the
engineers not protesting loudly enough.

Arno
  #44  
Old December 20th 05, 06:29 PM posted to comp.sys.ibm.pc.hardware.chips

In comp.sys.ibm.pc.hardware.storage J. Clarke wrote:
Yousuf Khan wrote:


Arno Wagner wrote:
How would you determine where the boot-sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so almost no gain from
reading the boot data faster.


You would manually choose which components go into the flash disk. Or
you would get a program to analyse the boot sequence, which would then
choose which components to send to the flash. You can even pre-determine
what devices are in the system and preload their device drivers.

I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.


You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


So what? And how much of that will you actually be recovering? Changing
the sector size doesn't recover _all_ of the space used by ECC. Further,
you've totally neglected sparing--to have the same number of spare sectors
available you'd have to devote 8 times as much space to sparing.


Good point. Since the defective sector rate is low, you will
likely have only one defect per sector even with larger sectors, and
indeed 8 times as much overhead for spares.

The more I look at this, the more bogus it looks to me.

Arno
  #45  
Old December 20th 05, 07:25 PM posted to comp.sys.ibm.pc.hardware.chips,comp.sys.ibm.pc.hardware.storage

Arno Wagner wrote:

In comp.sys.ibm.pc.hardware.storage Yousuf Khan wrote:
Arno Wagner wrote:
How would you determine where the boot-sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so almost no gain from
reading the boot data faster.


You would manually choose which components go into the flash disk. Or
you would get a program to analyse the boot sequence, which would then
choose which components to send to the flash. You can even pre-determine
what devices are in the system and preload their device drivers.


O.k., so this is "experts only", i.e. again does not make sense in
a consumer product. And no, you cannot preload device drivers in
any meaningful way, since it is not loading but hardware detection
and initialisation that takes the time.

I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.


You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


10% is not significant and certainly does not justify such a change.
Seems this is corporate greed and stupidity at work with the
engineers not protesting loudly enough.


Researching this a bit I'm finding that they've had to increase the
complexity of the ECC code to cope with increased areal density--apparently
the size of typical defects in the disk doesn't change when the areal
density increases, which means that they have to be able to correct more
dead bits in a sector in order to meet their performance specifications. I
found a letter from Fujitsu that says that they've had to increase the ECC
space from 10% to 15% of total capacity, and that anticipated future
increases in areal density may drive the ECC levels as high as 30% with a
512 byte sector size. It's not clear how much they expect to save by going
to a 4096-byte sector though.

Arno


--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
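
For a concrete sense of the figures quoted from that Fujitsu letter,
here is the capacity arithmetic for an assumed 500 GB of raw platter
capacity; the percentages are the ones in the letter, the drive size is
made up for illustration.

# Raw capacity eaten by ECC at the overhead levels quoted above
# (fractions of total capacity; 500 GB raw is an assumed example).
RAW_GB = 500

for ecc_fraction in (0.10, 0.15, 0.30):
    usable = RAW_GB * (1 - ecc_fraction)
    print(f"ECC {ecc_fraction:.0%}: {usable:.0f} GB usable, "
          f"{RAW_GB - usable:.0f} GB spent on ECC")

How much of that a move to 4096-byte sectors would claw back is exactly
the open question in the last sentence above; the following posts give
the coding-theory reason to expect at least some saving.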
  #46  
Old December 20th 05, 08:14 PM posted to comp.sys.ibm.pc.hardware.chips

In comp.sys.ibm.pc.hardware.storage J. Clarke wrote:
Arno Wagner wrote:


In comp.sys.ibm.pc.hardware.storage Yousuf Khan wrote:
Arno Wagner wrote:

[...]
I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.


You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


10% is not significant and certainly does not justify such a change.
Seems this is corporate greed and stupidity at work with the
engineers not protesting loudly enough.


Researching this a bit I'm finding that they've had to increase the
complexity of the ECC code to cope with increased areal density--apparently
the size of typical defects in the disk doesn't change when the areal
density increases, which means that they have to be able to correct more
dead bits in a sector in order to meet their performance specifications. I
found a letter from Fujitsu that says that they've had to increase the ECC
space from 10% to 15% of total capacity, and that anticipated future
increases in areal density may drive the ECC levels as high as 30% with a
512 byte sector size. It's not clear how much they expect to save by going
to a 4096-byte sector though.


O.k., this does make sense. If they have a certain maximum error-burst
length, then the longer the sector it sits in, the less relative
overhead they need to correct it. It is not linear, but ECC does work
better for longer sectors. The mathematics is complicated, don't ask.
(Or if you want to know, Reed-Solomon coding is the keyword for
burst-error-correcting codes.) And if they expect 30% overhead, that
may be significant enough to make the change.

There is some precedent, namely the 2048 byte sectors on CDs. That
is also the reason why most modern OSes can deal with 2k sectors.
Another reason is MODs (magneto-optical disks), which also have 2k
sectors. I am not aware of any current random-access device with 4k
sectors.

Arno
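
Arno's argument can be made concrete with a deliberately simplified
Reed-Solomon model: protect a whole sector with one code over 8-bit
symbols against a single defect burst of an assumed 400 bits. Real
drives interleave several codewords and GF(2^8) codewords are limited
to 255 symbols, so the absolute numbers below are not realistic; only
the trend (same burst protection, smaller relative overhead for a
longer sector) is the point.

import math

SYMBOL_BITS = 8  # classic Reed-Solomon over GF(2^8)

def parity_bytes_for_burst(burst_bits):
    """Parity needed to correct one burst of burst_bits bits: the burst
    touches at most ceil(b/8) + 1 consecutive symbols, and correcting
    t symbol errors costs 2*t parity symbols."""
    t = math.ceil(burst_bits / SYMBOL_BITS) + 1
    return 2 * t

for sector_bytes in (512, 4096):
    parity = parity_bytes_for_burst(400)        # assumed defect size in bits
    overhead = parity / (sector_bytes + parity)
    print(f"{sector_bytes:4d}-byte sector: {parity} parity bytes "
          f"({overhead:.1%} relative overhead)")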

  #47  
Old December 20th 05, 09:48 PM posted to comp.sys.ibm.pc.hardware.chips

Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage J. Clarke wrote:
Yousuf Khan wrote:


Arno Wagner wrote:
How would you determine where the boot-sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so almost no gain from
reading the boot data faster.
You would manually choose which components go into the flash disk. Or
you would get a program to analyse the boot sequence, which would then
choose which components to send to the flash. You can even pre-determine
what devices are in the system and preload their device drivers.

I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.
You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


So what? And how much of that will you actually be recovering? Changing
the sector size doesn't recover _all_ of the space used by ECC. Further,
you've totally neglected sparing--to have the same number of spare sectors
available you'd have to devote 8 times as much space to sparing.


Good point. Since the defective sector rate is low, you will
likely have only one defect per sector even with larger sectors, and
indeed 8 times as much overhead for spares.


Space on the platters is so cheap that an 8-fold increase in an
already small quantity is easily dismissed. Platters are so
cheap that drive manufacturers routinely make drives that only use
60% or 80% of the platters. What matters is the logic involved
in managing the drives - and if the manufacturers think they can
make things work faster, more reliably, or both, with 4096 byte
sectors, then I'm willing to keep an open mind until they are
proven wrong.

Also being overlooked in this thread is how a drive with 4096
byte physical sectors will interact with the operating system.
With NTFS, for example, 512 byte allocation units (AKA
"clusters") are possible, but 4096 bytes is by far the most
commonly used cluster size. What kind of performance differences
might we see if there is a one-to-one correspondence between 4 KB
allocation units in the file system and 4 KB physical sectors,
instead of having to address 8 separate 512 byte sectors for each
cluster?

In other words, the effect on how the drive and the OS work
together could be far more important than the effect on the raw
drive performance. Hardware design /should/ take into account
the software that will use it - and vice versa.

As well, clusters larger than 4 KB are possible with most file
systems, but except with FAT16 they are very seldom used. If the
option of super-sizing clusters was dropped, that would allow for
leaner and meaner versions of file systems like NTFS and FAT32 -
they could drop the "allocation unit" concept and deal strictly
in terms of physical sectors. Simpler software with less CPU
overhead to manage the file system can't possibly hurt.
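
A toy sketch of the cluster-to-sector mapping discussed in the
paragraphs above (not how NTFS is actually implemented): with 512-byte
physical sectors a 4 KB cluster spans eight sector addresses, while with
4096-byte physical sectors the cluster and the sector coincide and the
translation collapses to an identity, which is the simplification
suggested just above.

CLUSTER = 4096  # the common NTFS allocation-unit size mentioned above

def sectors_for_cluster(cluster_index, physical_sector):
    """Physical sector numbers the file system must touch for one cluster."""
    per_cluster = CLUSTER // physical_sector
    first = cluster_index * per_cluster
    return list(range(first, first + per_cluster))

print(sectors_for_cluster(10, physical_sector=512))   # [80, 81, ..., 87] - eight I/O targets
print(sectors_for_cluster(10, physical_sector=4096))  # [10] - cluster and sector coincide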




  #48  
Old December 20th 05, 10:00 PM posted to comp.sys.ibm.pc.hardware.chips

In comp.sys.ibm.pc.hardware.storage J. Clarke wrote:
Arno Wagner wrote:
In comp.sys.ibm.pc.hardware.storage Yousuf Khan wrote:
Arno Wagner wrote:

[...]
I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.


You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.


10% is not significant and certainly does not justify such a change.
Seems this is corporate greed and stupidity at work with the
engineers not protesting loudly enough.


[addendum to myself]
Also note that if you have 10% overhead, you will not save all of it
by using longer sectors. More likely you will go down to 8%...5% or so.

Researching this a bit I'm finding that they've had to increase the
complexity of the ECC code to cope with increased areal density--apparently
the size of typical defects in the disk doesn't change when the areal
density increases, which means that they have to be able to correct more
dead bits in a sector in order to meet their performance specifications. I
found a letter from Fujitsu that says that they've had to increase the ECC
space from 10% to 15% of total capacity, and that anticipated future
increases in areal density may drive the ECC levels as high as 30% with a
512 byte sector size. It's not clear how much they expect to save by going
to a 4096-byte sector though.


(Seems I killed my first reply...)

This actually makes sense. If you have a certain maximum burst length
to correct, it requires less overhead per bit of data in a longer data
packet than in a smaller one. It is not linear, but the effect is
noticeable. (See the theory of Reed-Solomon coding for details.) If they
expect 30% overhead, longer sectors could yield significant savings.

Arno


  #49  
Old December 23rd 05, 06:14 AM posted to comp.sys.ibm.pc.hardware.chips

On Mon, 19 Dec 2005 17:07:35 +0100, Yousuf Khan wrote:

They're just saying they can do more efficient error correction over
4096 byte sectors rather than 512 byte sectors.


and it's not only about capacity, speed matters too

--
I really have no idea what this means. And since I can't install linux on
it, I'm gonna go back to surfing pr0n.
the penguins are psychotic / just smile and wave
  #50  
Old December 23rd 05, 04:52 PM posted to comp.sys.ibm.pc.hardware.chips

On Fri, 23 Dec 2005 06:14:16 +0100, "hackbox.info"
wrote:

On Mon, 19 Dec 2005 17:07:35 +0100, Yousuf Khan wrote:

They're just saying they can do more efficient error correction over
4096 byte sectors rather than 512 byte sectors.


and it's not only about capacity, speed matters too


Huh?

I really have no idea what this means. And since I can't install linux on
it, I'm gonna go back to surfing pr0n.


Shouldn't that be "p0rn"?
 



