#41
New hard disk architectures
Yousuf Khan wrote:
> Arno Wagner wrote:
>> How would you determine where the boot-sequence ends? What if it forks? How far would you actually get (personal guess: not far)? And does it really give you a significant speed improvement? With Linux, kernel loading is the fastest part of booting. The part that takes long is device detection and initialisation. My guess is it is the same with Windows, so there is almost no gain from reading the boot data faster.
>
> You would manually choose which components go into the flash disk. Or you would get a program to analyse the boot sequence and it will choose which components to send to the flash. You can even pre-determine what devices are in the system and preload their device drivers.
>
>> I think that is nonsense. ECC is something like 10%. It does not make sense to rewrite every driver and the whole virtual layer just to make this a bit smaller, except maybe from the POV of a salesperson. From an engineering POV there is good reason not to change complex systems for a minor gain.
>
> You've just made the perfect case for why it's needed. 10% of a 100GB drive is 10GB, 10% of 200GB is 20GB, and so on.
>
> Yousuf Khan

So what? And how much of that will you actually be recovering? Changing the sector size doesn't recover _all_ of the space used by ECC. Further, you've totally neglected sparing: to have the same number of spare sectors available, you'd have to devote 8 times as much space to sparing.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
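A back-of-the-envelope sketch of both of J. Clarke's points. The per-sector ECC byte counts below are illustrative assumptions chosen to land near the ~10% figure quoted in the thread, not figures from any drive's datasheet:

```python
# Back-of-the-envelope ECC and sparing arithmetic.
# The ECC byte counts are illustrative assumptions, not datasheet values.

def ecc_overhead(sector_bytes: int, ecc_bytes: int) -> float:
    """Fraction of raw capacity consumed by the per-sector ECC field."""
    return ecc_bytes / (sector_bytes + ecc_bytes)

ecc_512 = 50    # assumed ECC bytes per 512-byte sector (~10% overhead)
ecc_4096 = 100  # assumed ECC bytes per 4096-byte sector

print(f"512-byte sectors:  {ecc_overhead(512, ecc_512):.1%} overhead")   # ~8.9%
print(f"4096-byte sectors: {ecc_overhead(4096, ecc_4096):.1%} overhead") # ~2.4%

# Sparing: to keep the same *number* of spare sectors available,
# each spare is now eight times larger, so the pool grows eightfold.
spares = 10_000
print(f"spare pool at 512 B/sector:  {spares * 512 / 2**20:.1f} MiB")   # ~4.9 MiB
print(f"spare pool at 4096 B/sector: {spares * 4096 / 2**20:.1f} MiB")  # ~39.1 MiB
```

On these assumed numbers, the bigger sector cuts the relative ECC overhead roughly fourfold rather than eliminating it, while a spare pool holding the same count of sectors grows eightfold: exactly the trade-off the post is pointing at.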
#42
New hard disk architectures
"J. Clarke" wrote in message
> Yousuf Khan wrote:
>> Arno Wagner wrote:
>>> [snip flash boot disk discussion]
>>>
>>> I think that is nonsense. ECC is something like 10%. It does not make sense to rewrite every driver and the whole virtual layer just to make this a bit smaller, except maybe from the POV of a salesperson. From an engineering POV there is good reason not to change complex systems for a minor gain.
>>
>> You've just made the perfect case for why it's needed. 10% of a 100GB drive is 10GB, 10% of 200GB is 20GB, and so on.
>
> So what? And how much of that will you actually be recovering? Changing the sector size doesn't recover _all_ of the space used by ECC. Further, you've totally neglected sparing: to have the same number of spare sectors available, you'd have to devote 8 times as much space to sparing.

Nonsense. You're still using 512 byte logical sectors.

Yousuf Khan
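Yousuf's point is that the host-visible interface can stay at 512-byte logical sectors even when the physical sector grows to 4096 bytes; this is the scheme the industry later shipped as 512-byte emulation ("512e"). A minimal sketch of that mapping, using toy helpers and sizes that are assumptions for illustration, not anything from the thread:

```python
# Sketch of 512-byte logical sectors emulated on top of 4096-byte
# physical sectors: 8 logical sectors per physical sector.

LOGICAL = 512
PHYSICAL = 4096
RATIO = PHYSICAL // LOGICAL  # 8 logical sectors per physical sector

def locate(lba: int) -> tuple[int, int]:
    """Map a 512-byte logical block address to (physical sector, byte offset)."""
    return lba // RATIO, (lba % RATIO) * LOGICAL

def write_logical(disk: bytearray, lba: int, data: bytes) -> None:
    """Write one logical sector via read-modify-write of its physical sector."""
    assert len(data) == LOGICAL
    psec, off = locate(lba)
    start = psec * PHYSICAL
    sector = bytearray(disk[start:start + PHYSICAL])  # read the whole 4 KiB unit
    sector[off:off + LOGICAL] = data                  # modify 512 bytes of it
    disk[start:start + PHYSICAL] = sector             # write the unit back

disk = bytearray(4 * PHYSICAL)            # toy four-physical-sector "disk"
write_logical(disk, 5, b"\xab" * LOGICAL)
print(locate(5))                          # (0, 2560): sector 0, offset 2560
```

The catch is that any write smaller than the physical sector turns into a read-modify-write of the whole 4096-byte unit, which is the performance cost of keeping the old logical interface.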
#43
New hard disk architectures
In comp.sys.ibm.pc.hardware.storage Yousuf Khan wrote:
> Arno Wagner wrote:
>> How would you determine where the boot-sequence ends? What if it forks? How far would you actually get (personal guess: not far)? And does it really give you a significant speed improvement? With Linux, kernel loading is the fastest part of booting. The part that takes long is device detection and initialisation. My guess is it is the same with Windows, so there is almost no gain from reading the boot data faster.
>
> You would manually choose which components go into the flash disk. Or you would get a program to analyse the boot sequence and it will choose which components to send to the flash. You can even pre-determine what devices are in the system and preload their device drivers.

O.k., so this is "experts only", i.e. again it does not make sense in a consumer product. And no, you cannot preload device drivers in any meaningful way, since it is not loading but hardware detection and initialisation that takes the time.

>> I think that is nonsense. ECC is something like 10%. It does not make sense to rewrite every driver and the whole virtual layer just to make this a bit smaller, except maybe from the POV of a salesperson. From an engineering POV there is good reason not to change complex systems for a minor gain.
>
> You've just made the perfect case for why it's needed. 10% of a 100GB drive is 10GB, 10% of 200GB is 20GB, and so on.

10% is not significant and certainly does not justify such a change. Seems this is corporate greed and stupidity at work, with the engineers not protesting loudly enough.

Arno
#44
New hard disk architectures
Arno Wagner wrote:
> In comp.sys.ibm.pc.hardware.storage Yousuf Khan wrote:
>> [snip flash boot disk discussion]
>>
>> You've just made the perfect case for why it's needed. 10% of a 100GB drive is 10GB, 10% of 200GB is 20GB, and so on.
>
> 10% is not significant and certainly does not justify such a change. Seems this is corporate greed and stupidity at work, with the engineers not protesting loudly enough.
>
> Arno

Researching this a bit, I'm finding that they've had to increase the complexity of the ECC code to cope with increased areal density: apparently the size of typical defects in the disk doesn't change when the areal density increases, which means that they have to be able to correct more dead bits in a sector in order to meet their performance specifications. I found a letter from Fujitsu that says they've had to increase the ECC space from 10% to 15% of total capacity, and that anticipated future increases in areal density may drive the ECC levels as high as 30% with a 512-byte sector size. It's not clear how much they expect to save by going to a 4096-byte sector, though.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
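A sketch of the scaling argument in that last paragraph: a media defect of fixed physical size corrupts more bits as the bits themselves shrink, so the per-sector ECC field has to grow. All numbers below are illustrative assumptions, not measured media parameters:

```python
# Why ECC requirements grow with areal density: a defect of fixed
# physical size spans more bits as linear density rises.
# All numbers are illustrative assumptions, not media datasheet values.

defect_um = 1.0  # assumed defect length along the track, in micrometers

# Assumed linear densities (bits per micrometer of track) for
# successive drive generations.
generations = [("older", 200), ("current", 400), ("projected", 800)]

for label, bits_per_um in generations:
    bad_bits = int(defect_um * bits_per_um)
    share_of_sector = bad_bits / (512 * 8)  # fraction of a 512-byte sector
    print(f"{label:10s}: ~{bad_bits} bad bits, "
          f"{share_of_sector:.0%} of a 512-byte sector")
```

On those assumptions, a single defect eats an ever-larger fraction of a 512-byte sector, so the correcting code must strengthen accordingly, consistent with the reported climb from 10% toward 30% overhead. Amortizing one larger ECC field over a 4096-byte sector is the escape route the thread is discussing.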