#11
M3A and ATAPI
Rhino wrote:
> What's "the Performance plugin" and where can I find it?

If you look in Control Panel under Administrative Tools, there is an option there called Performance. If you right-click in the graph pane, you can select Add Counters. I think the Delete key can be used to delete the existing graphs. You'll also need to scale the graphs, because the graph tool really isn't all that smart. On my machine, setting the graph scale to 10000 gives 100MB/sec full scale on the graph. Since new disks can hit 125MB/sec, I set the graph scale to 20000 so the full scale is 200MB/sec. (The first time, I calibrated against HDTune, so I could be sure of what it was measuring!) Other graph types have different scaling requirements. That's part of the fun of using it, I guess.

If you go to Start : Run, you can type perfmon.msc and run the plugin directly; that is a second way to reach it. The Add Counters dialog has the counters in groups. The ones for the disks are under "PhysicalDisk". I use things like read bytes per second and write bytes per second and select "total", as I'm too lazy to be more specific. Usually during benchmarking, only one disk is doing the reading and one disk the writing, so a total count is good enough.

The counter only detects certain kinds of operations. Some operations "slide under the radar", so not every operation gets counted. And yes, that can be a nuisance. To give an example of things that slide under the radar: I set up a RAMDisk on a computer, the RAMDisk claimed to be offering "hard drive emulation", and yet reads and writes to that disk didn't show in the Performance plugin at all. It's possible that things like paging operations aren't counted; some software paths don't seem to be hooked. So YMMV.

> If a SATA hard drive had an issue, all you could do is try another cable. So I don't really need to worry about that then. I'll simply replace the cable if I get symptoms that the cable is pinched. What symptoms would I expect then? Writes to the hard drive taking much longer than usual? Or actual error messages popping up? Or the Blue Screen of Death itself?

If you got a "Delayed Write" error, you might suspect something. If the I/O rate had slipped into PIO mode, that might be a sign as well. If you run HDTune and the graph is a flat line at around 4MB/sec to 8MB/sec (when it should be 125MB/sec and a curve), then you'd suspect PIO (and you got there via CRC errors). I don't have a recipe, like a checklist or anything.

Another note about SATA versus IDE. When you push the reset button on the computer, the IDE disk should always be reset. There are plenty of wires on the cable, so on the IDE ribbon they had a way to signal the reset condition. On SATA, I've actually had a SATA drive stop communicating, and pressing the reset button didn't bring it back. This is a violation of a basic hardware design principle (in hardware design reviews at work, this is one of the first items the reviewers ask about: does your reset work). I had to turn off computer power to get the SATA disk to re-initialize. So if you ever find yourself in a situation where the SATA drive "stops talking", the reset button might not help, but power cycling fixed it.

> Sounds like optical burners need to start using the SATA approach ;-)

You can get SATA interface burners. I'm pretty slow at buying new hardware, and I have a grand total of one SATA burner now. All the rest I have are IDE. IDE is perfectly good, and I have no complaints.

> I'm glad it gave you some clues about why it was misbehaving. Sometimes, these computers don't give you very obvious clues at all about what's going wrong on them.

I wasn't paying attention at first. I just started a burn like usual, and then noticed Nero was claiming completion would be "some time in the next century". That's when I glanced at the drive LED and noticed the strange pattern on it. Flipping open Device Manager, I noticed the "Enhanced" USB entry was missing (USB2, 30MB/sec), so then I had a possible answer. Benchmarking with one of the Nero tools (InfoTool?) can show whether the transfer rate is "normal" or not, like doing a read test on an existing burned disc. The very fastest optical devices, like a Blu-ray burner running at top speed (with non-existent top-speed media), can approach 30MB/sec, so even USB2 is getting close to being a limiting factor. But I won't be burning Blu-ray any time soon, and in any case, given the quality of media I can buy at the store here, the chances of finding a "top speed" kind of media are pretty slim. All I get here for blanks is junk. So the chances of me owning a USB optical burner that runs flat out at 30MB/sec are pretty slim.

> I remember a work computer I had once that kept crashing, probably a couple of times a week at its worst. It was running OS/2, and when I'd call Help Desk, they'd advise me to reinstall OS/2. After doing that several times, I spoke to a technical guy I knew and described my symptoms. He said it sounded like an I/O Controller issue - this was almost 20 years back so I may be misremembering the name of the component - and he helped me arrange a replacement of that component, which completely solved the problem. I was still just transitioning to PCs at the time, so I remember being very surprised that PCs even had hardware issues; on mainframes, the issues were almost always software. But it was a handy experience. Not too much later, I started having some similar symptoms on my home computer, and it turned out to be a problem with the same component as the work computer; replacing that component fixed my home PC too. But I'm darned if I can remember any obvious symptoms on either computer that suggested the I/O Controller had a problem! -- Rhino

I watched a tech working on the university mainframe, looking for a memory problem, and it wasn't pretty. Going from cabinet to cabinet, flipping doors open, probing stuff. The hardware in there looked like Spaghettios. I think that machine may have had core memory. Practically any DIMM I have in the house would have a higher capacity than all those cabinets holding cores. Think how easy it is, snapping in a new DIMM, versus all those cabinets :-) On the plus side, if the power went off, core memory saves state, so in that respect it is better than the DRAM in personal computers.

http://en.wikipedia.org/wiki/Core_memory

I don't think they ever relied on the "saving state" property for that university mainframe, though. It's possible the machine was checkpointed on its disk drives and could roll back to the last checkpoint. I have run into one other computing device which relied on core memory: you could unplug it, and it could pick up where it left off. And that was before EEPROMs or Flash memory. Very convenient, but not very scalable to large quantities of memory. The computing device in question was like a large scientific programmable calculator.

Paul
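Counters like "Disk Read Bytes/sec" are derived by sampling a cumulative byte count at intervals and dividing the delta by the elapsed time. As a rough illustration of the idea only (the function and the fake counter below are hypothetical stand-ins, not a real perfmon API), a minimal Python sketch:

```python
import itertools
import time

def throughput_mb_s(sample_bytes, interval=0.1):
    """Sample a cumulative byte counter twice, 'interval' seconds apart,
    and convert the delta into MB/sec (the quantity perfmon graphs)."""
    first = sample_bytes()
    time.sleep(interval)
    second = sample_bytes()
    return (second - first) / interval / 1e6

# Hypothetical stand-in for a real PhysicalDisk counter query:
# each sample advances by 10 MB, so the computed rate is ~100 MB/sec.
fake_counter = itertools.count(0, 10_000_000)
rate = throughput_mb_s(lambda: next(fake_counter), interval=0.1)
print(f"{rate:.0f} MB/sec")  # 100 MB/sec
```

On a graph scaled the way Paul describes (100MB/sec at full scale), a sustained reading like this would sit right at the top of the chart.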
#12
"Paul" wrote in message ...

> If you look in Control Panels under Administrative Tools, there is an option there called Performance. [...] Usually during benchmarking, only one disk is doing the reading, one disk the writing, and a total count is good enough.

Thanks for clearing up what you meant about the Performance plugin. I thought you were talking about something I had to download; I didn't realize you meant a built-in Windows facility. I found the Performance option and set it up as close to what you described as I could, but I'm getting a couple of issues. I couldn't set the scale to 20,000; I had to choose between 10,000 and 100,000. I was going to ask how to set the scale to 20,000, but that is a moot point since the Disk Write Bytes/Sec graph is a straight horizontal line at the very top of the scale for _all_ values of scale from 10,000 on up! (Average is 1.3 million, minimum is 262,000, and maximum is 4 million.) Disk Read Bytes/Sec has a more normal graph, but its average is around 45,000 with a minimum of 0 and a maximum of 663,598. These numbers are a _lot_ higher than the ones you cited for your computer, but I can't believe my computer would be that much faster than yours. Something doesn't make sense here....

By the way, I tracked down HD Tune and ran it on my C: drive. I got a transfer rate minimum of 7.0 MB/sec, a maximum of 96.1, and an average of 67.4. Access time was 16.6 ms. Burst rate is 146.8 MB/sec. CPU usage is 7.5%. I noticed that in the Health tab, the last benchmark given, C7, is Ultra DMA CRC Error Count. I got a current value of 200, a worst of 200, a threshold of 0, data of 0, and a status of ok. 200 errors seems like a lot to me, but apparently it isn't for HD Tune. Apparently, UDMA Mode 6 (Ultra ATA/133) is both supported and active for the drive.

I also ran HD Tune on my D: drive. I got a transfer rate minimum of 52.1 MB/sec, a maximum of 108.0, and an average of 84.5. Access time was 12.3 ms. Burst rate is 157.2 MB/sec. CPU usage is 8.0%. For Ultra DMA CRC Error Count, I again got a current value of 200, a worst of 200, a threshold of 0, data of 0, and a status of ok. I was surprised that the performance on the D: was significantly better than on the C: given that they are identical drives; I expected the numbers to be much closer.

> The counter only detects certain kinds of operations. Some operations "slide under the radar", so not every operation gets counted. [...] So YMMV.

Okay, fair enough.

> If you got a "Delayed Write" error, you might suspect something. [...] So if you ever find yourself in a situation where the SATA drive "stops talking", the reset button might not help. But power cycling fixed it.

The challenge is going to be remembering that when (and if) the time comes ;-) I'll probably remember that I once read something about a simple fix for SATA drives, but I won't remember the details and I won't remember where to find them....

> The very fastest optical devices, like a Blu Ray burner running at top speed (with non-existent top speed media), can approach 30MB/sec [...] So the chances of me owning a USB optical burner that runs flat out at 30MB/sec are pretty slim.

I'm not going to rush out and buy the latest and greatest of everything either ;-)

> I watched a tech working on the university mainframe, looking for a memory problem, and it wasn't pretty. [...] I think that machine may have had core memory.

I expect that hardware issues _did_ affect mainframes from time to time. After all, the term "bug" refers to actual insects that were found in some of the earliest machines! But I don't remember ever encountering them in my job. The programs we wrote would inevitably have errors, and sometimes there would be issues in the operating system. I still remember the system dump I encountered the very first time I had to support a program on our system. I was brand new and soon had to call for help from our lead programmer. He looked it over for a while and finally called down to one of the system programmers and confirmed that they'd put some kind of patch on the night before; he persuaded the system programmer that it had messed up our system. The patch was backed out, and everything started working correctly again. He'd seen a similar problem once before, 14 years previously, and that was all he needed to get to the bottom of the problem. But we never saw memory failures or drive crashes, although I expect they happened occasionally and were handled by the operators.

> Practically any DIMM I have in the house, would have a higher capacity than all those cabinets holding cores.

Your wristwatch probably has more capacity than an old mainframe ;-)

> Think how easy it is, snapping in a new DIMM, versus all those cabinets :-) [...] The computing device in question, was like a large scientific programmable calculator.

When I was in my first job, the old-timers told us about a piece of hardware they'd had in the 50s that was so big you could actually walk through it! They'd give people tours and let them walk INSIDE that component. I can't recall what the component was - a printer maybe, or some form of storage device - but there was a maintenance tunnel through the darned thing that they'd let people walk through. Try to find any computer component made since the 60s that you can walk through....

-- Rhino
#13
Rhino wrote:
> Thanks for clearing up what you meant about the Performance plugin. [...] I couldn't set the scale to 20,000; I had to choose between 10,000 and 100,000. [...] I was surprised that the performance on the D: was significantly better than the C: given that they are identical drives; I expected the numbers to be much closer.

When I fire up the Performance plugin here, the default "Vertical Scale" is listed as 0 to 100. The scale needed may vary depending on what counter you're using.

The SMART stats, I still don't understand them. I take a display like "200 200 0 0" to mean the CRC error count was actually zero. Not all of the statistics in SMART are raw counters; some are processed numbers and not "events".

UDMA6 Ultra133 is just a placeholder. You can see from your 146.8MB/sec burst rate that Ultra133 cannot support 146. So the actual rate is faster than the bogus labeling scheme. About the only thing consistent is that Intel SATA ports wear a bogus label of UDMA5 (a tradition), while other chipsets may bear a bogus label of UDMA6. But if you had SATA III ports, they could do over 300MB/sec, well above UDMA6.

As a rough estimate, if your max was 96MB/sec (at the beginning of the disk), then without measuring it, I'd expect the end of the disk to be around 48MB/sec. About a 2:1 ratio. There is some room for variation. The disk could be "short-stroked", for example, and the heads might not approach as close to the hub of the spindle; the min value would then not have the 2:1 relationship.

When you see a spike downwards, that can be caused by a problem with the program (execution priority), but it can also be caused by spared-out sectors. To reach a spare sector potentially takes head movement, or reading a bad sector might require retries. That can lead to spiking. If you use HDTune when a disk is brand new, you might notice a difference in the amount and number of downward spikes after a year of using the disk.

I've also had the opposite happen: a pretty crappy disk chart, after which I used the bad block scan option in HDTune, forcing some reads from one end of the disk to the other. For some reason, subsequent disk benchmark runs weren't as spiky. That really shouldn't do anything, because sparing is supposed to happen on writes, not on reads. Doing a read scan shouldn't change things, as far as I know.

These are some of my results, for comparison. I bought a couple of new ones of these, and they're more "spiky" than this. When I compare to older disks (like the Maxtor I revived yesterday), the old disks showed a nice, zoned curve with very little spiking. That could be a controller issue making some of the noise evident, or the actual number of spared-out sectors is much larger on the newer disks (more garbage to ride over).

http://img829.imageshack.us/img829/8...scomposite.gif

I've had some results that looked dreadful, and it was an issue with how the internal buses in the chipset were tuned. And whatever setting was involved, I couldn't find a utility to check it. Fortunately, only one motherboard does that, and the other ones I've got are OK.

Paul
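The "200 200 0 0" display Paul decodes mixes two different kinds of numbers: a vendor-normalized health value (compared against a failure threshold, higher is better) and the raw event count. A hedged sketch of that interpretation, with a helper name and output format of my own invention rather than anything HDTune exposes:

```python
def interpret_smart(current, worst, threshold, raw):
    """Read one SMART attribute row the way HDTune displays it.

    'current' and 'worst' are vendor-normalized health values where
    higher is better; 'raw' holds the actual event count for
    counter-type attributes such as C7 (Ultra DMA CRC Error Count).
    The attribute only signals trouble when 'current' falls to or
    below 'threshold'. Exact semantics vary by vendor.
    """
    status = "ok" if current > threshold else "failing"
    return {"status": status, "events": raw}

# Rhino's C7 reading: normalized 200, worst 200, threshold 0, raw 0.
# The "200" is not an error count; the raw value of 0 means no CRC errors.
print(interpret_smart(200, 200, 0, 0))  # {'status': 'ok', 'events': 0}
```

This is why "200 errors seems like a lot" was a false alarm: the 200 is the health value at its factory-fresh starting point, and the data (raw) column holds the real count.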
#14
"Paul" wrote in message ...

> When I fire up the Performance plugin here, the default "Vertical Scale" is listed as 0 to 100. [...] Fortunately, only one motherboard does that, and the other ones I've got are OK.

I played with the Performance graph a little, and it seems that you can make the range of values on the vertical axis anything you like, but it doesn't actually change the line that's drawn, just the labelling of the vertical axis. I right-clicked on the graph, chose Properties, went to the Graph tab, and changed the Vertical Scale so that Maximum was a million (instead of 100) while Minimum stayed 0, applied, then cleared the graph and let it run for a bit. The graph was completely unchanged in terms of the height of the spikes, and the low values were still at 0; only the labels on the vertical axis were different.

If I want the height of the spikes to be different, I need to change the scale of the _counter_ by selecting a specific counter, right-clicking, choosing Properties, and then choosing a different value for scale. When I did that and lowered the scale value from 10000000 to 1, the height of the spikes was suddenly much lower. The numbers in the Last, Average, Minimum, Maximum, and Duration boxes still had the same magnitude as they had before I touched the scales of the individual counters.

Well, I'm not going to flog this to death. As far as I can tell, my hard drives are working just fine, so I'm going to follow the First Rule of Tech Support: "If it works, don't fix it." I appreciate all the help and information, Paul. I hope I run into you or someone like you when and if I have a real problem somewhere down the road!

-- Rhino
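Rhino's observation (relabeling the axis changed nothing, while the per-counter scale moved the line) matches a simple model: the plotted height is the raw counter value times the counter's scale factor, clipped to the chart's vertical range, while the Last/Average/Min/Max boxes always report raw values. The function below is purely illustrative of that model, not how perfmon is actually implemented:

```python
def plotted_height(raw_value, counter_scale, axis_max=100.0):
    """Fraction of full scale at which a perfmon-style chart draws one
    sample: the raw value times the per-counter scale factor, clipped
    at the top of the vertical axis. Relabeling the axis alone does
    not move the line; only the counter scale does."""
    return min(raw_value * counter_scale, axis_max) / axis_max

# Rhino's average write rate of about 1.3 million bytes/sec:
print(plotted_height(1_300_000, 1.0))             # pegged at the top: 1.0
print(round(plotted_height(1_300_000, 0.00001), 2))  # 13 on a 0-100 axis: 0.13
```

With a large scale factor every sample clips at the top of the chart, which is the flat line Rhino saw for Disk Write Bytes/Sec.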
#15
On 6/21/2011 9:54 PM, Rhino wrote:
> When I was in my first job, the old-timers told us about a piece of hardware they'd had in the 50s that was so big you could actually walk through it! [...] Try to find any computer component made since the 60s that you can walk through....

Sounds like the old USAF Air Defense Command's SAGE system (Semi-Automatic Ground Environment). It was all tubes, with aisles between the racks so the maintenance crews could do mass retubes. Some of them had big enough cross-aisles that you could drive a pickup thru. They were still in use up thru 1972 or so.

-- "**** this is it, all the pieces do fit. We're like that crazy old man jumping out of the alleyway with a baseball bat, saying, "Remember me mother****er?" Jim "Dandy" Mangrum
#16
"Nobody (Revisited)" wrote in message ...

> Sounds like the old USAF Air Defense Command's SAGE system (Semi-Automatic Ground Environment). It was all tubes, with aisles between racks so the maintenance crews could do mass retubes. Some of them had big enough cross-aisles that you could drive a pickup thru. They were still in use up thru 1972 or so.

You haven't said what SAGE actually did, but I doubt that was the device our old-timers were talking about; I was working for an insurance company at the time. The device they told me about might have been a printer or a storage device of some kind, but nothing in the way of a defensive or offensive weapon system ;-) But SAGE may have been made by the same people who made the device I was told about, and may have used the same style of maintenance access, for all I know....

-- Rhino