#1
GPU Supercomputer - 300 TFLOPs
(Translated from French) In France, the Grand Équipement National de Calcul Intensif (GENCI) is making a strong debut with its first supercomputer, one of the first machines of its kind to use GPUs for computation. According to our information, the 1,068 processors are eight-core Nehalems, and the 48 GPU modules are NVIDIA Tesla solutions. These will be a new generation of Tesla, very likely built around the GT200 chip, possibly with several per module (we would guess two, as in NVIDIA's current Tesla solutions). In this machine, the CPUs together are expected to deliver a theoretical 103 TFLOPS, against 192 TFLOPS for the GPUs. With 96 GPUs versus 1,068 CPUs, the dominance of the GPU over the CPU in intensive computing is already clear, thanks to its many parallel stream processors. http://www.google.com/translate?u=ht...&hl=en&ie=UTF8
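As a sanity check on the headline "300 TFLOPs" figure, the numbers quoted in the article can be combined in a few lines. This is only a sketch: the per-device rates are derived from the article's aggregate totals, and the two-GPUs-per-module count is the article's own guess, not a confirmed spec.

```python
# Sanity-check the article's peak-performance figures.
cpu_count = 1068         # eight-core Nehalem processors (from the article)
gpu_modules = 48         # Tesla modules (from the article)
gpus_per_module = 2      # the article's guess, as in then-current Tesla solutions
cpu_tflops = 103.0       # theoretical CPU peak, TFLOPS (from the article)
gpu_tflops = 192.0       # theoretical GPU peak, TFLOPS (from the article)

gpu_count = gpu_modules * gpus_per_module       # 96 GPUs, matching the article
total_tflops = cpu_tflops + gpu_tflops          # 295 TFLOPS, i.e. roughly "300"

# Derived per-device peaks (not stated in the article):
per_cpu_gflops = cpu_tflops * 1000 / cpu_count  # ~96 GFLOPS per eight-core CPU
per_gpu_gflops = gpu_tflops * 1000 / gpu_count  # 2000 GFLOPS per GPU module slot

print(f"GPUs: {gpu_count}, combined peak: {total_tflops:.0f} TFLOPS")
print(f"per CPU: {per_cpu_gflops:.0f} GFLOPS, per GPU: {per_gpu_gflops:.0f} GFLOPS")
```

So the article's own numbers are internally consistent: 48 modules with two GPUs each gives the 96 GPUs it cites, and 103 + 192 TFLOPS lands just under the 300 TFLOPS of the thread title.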
#2
AirRaid wrote:
> [quoted article snipped]

1) I was indeed invited to the press conference and can clarify a few details. Oh, and I have taken a lot of interesting pictures :-) [to be published in an upcoming French Linux Magazine]
2) The translation is awful.
3) What a dirty crosspost.
4) See my post on comp.arch dated 19/04/2008 titled "The next french "supercomputer" will have both CPUs and GPUs".
5) Shall we continue this discussion on comp.arch?

regards,
yg from f-cpu.org or ygdes.com
#3
'AirRaid' wrote, in part:
> [quoted article snipped]
_____
This and similar posts are a good argument for filtering out Google Groups postings. Why do you continue to post this kind of article without proper attribution? And crosspost, to boot? And never have anything of your own to contribute? And, for this particular post, use a perfectly terrible translation? Considering you had to use a Google translation, how did you ever decide whether the original article was WORTH posting? If you can't add value, don't post. Especially don't crosspost.

Phil Weldon
#4
'whygee' wrote, in part:
> 1) I was indeed invited to the press conference and can clarify a few details. [snip]
_____
Hey, it could be worse - it could have been a post from 'Skybuck' (if you do not recognize the sig 'Skybuck', consider yourself fortunate).

Phil Weldon
#5
On Tue, 22 Apr 2008 21:09:15 -0400, "Phil Weldon"
wrote in comp.arch.embedded:

[snip spam]

> This and similar posts are a good argument for filtering out Google Groups postings.

Yes, and it's an even better argument for NOT RESPONDING to Usenet spam, and ESPECIALLY NOT QUOTING THE ENTIRE SPAM PAYLOAD in all of the cross-posted groups. Spammers rarely read the groups they abuse, and even if this one does, he/she/it will certainly ignore your rant. In the meantime you have DOUBLED his exposure. In fact, this idiot was caught by my filters, and I would never have seen his drivel at all had you not replied and quoted it.
#6
'Jack Klein' wrote, in part:
> Yes, and it's an even better argument for NOT RESPONDING to Usenet spam, and ESPECIALLY NOT QUOTING THE ENTIRE SPAM PAYLOAD in all of the cross-posted groups.
_____
'AirRaid' is not a spammer, and the post is not spam. The transgression is NOT that he posted the URL of the poorly translated webpage (see my original post). I quoted it because it illustrates the problem 'AirRaid' presents. And how is THAT different from your post, which also lengthens the thread? Sometimes you just have to describe a problem to get it fixed.

Phil Weldon