HardwareBanter

HardwareBanter (http://www.hardwarebanter.com/index.php)
-   Nvidia Videocards (http://www.hardwarebanter.com/forumdisplay.php?f=16)
-   -   How to execute code on graphics processor? (http://www.hardwarebanter.com/showthread.php?t=55173)

O.B. September 14th 04 10:50 PM

How to execute code on graphics processor?
 
Newbie question

I have an nVidia GeForce 6800 Ultra running with a dual Xeon 3.0 GHz and Linux
Knoppix v3.4. I am curious to see if it would be possible to offload the CPU by
moving some math operations in my program to the graphics processor on the video
card.

For example, assume that I have two matrices, A and B, and I want to multiply
them together, element by element, and store the result in a third matrix, C:
for (row = 0; row < maxRows; ++row) {
    for (column = 0; column < maxColumns; ++column) {
        C[row][column] = A[row][column] * B[row][column];
    }
}
Instead of executing this code on the CPU, I want the main program to send this
to the graphics processor for calculation and then return the result back to the
main program.

I have a lot of experience with C/C++ programming, but none in interacting with
a video card. Are there any libraries, published APIs, etc. available to help
with this endeavor? Tips, comments, links, book recommendations, etc. are
greatly welcomed.

Thanks.


rms September 14th 04 11:03 PM

This has been done more than once already. There is an open-source library
for certain math operations that was announced some months ago on the major
news sites, as well as a music application that does mathematical convolution
on the video card, announced on HardOCP et al. a couple of weeks ago. Time
for some googling.

rms



Chingy September 15th 04 07:33 AM



Are you some sort of electronic circuit designer or something? Hmm, I just
can't figure out why you would want a 6800 Ultra with dual Xeon CPUs...

"O.B." wrote in message
...
Newbie question

I have an nVidia GeForce 6800 Ultra running with a dual Xeon 3.0 GHz and
Linux Knoppix v3.4. I am curious to see if it would be possible to
offload the CPU by moving some math operations in my program to the
graphics processor on the video card.

For example, assume that I have two matrices, A and B, and I want to
multiply them together, element by element, and store the result in a
third matrix, C:
for (row = 0; row < maxRows; ++row) {
    for (column = 0; column < maxColumns; ++column) {
        C[row][column] = A[row][column] * B[row][column];
    }
}
Instead of executing this code on the CPU, I want the main program to send
this to the graphics processor for calculation and then return the result
back to the main program.

I have a lot of experience with C/C++ programming, but none in interacting
with a video card. Are there any libraries, published APIs, etc. available
to help with this endeavor? Tips, comments, links, book recommendations,
etc. are greatly welcomed.

Thanks.




Spajky September 15th 04 07:35 PM

On Tue, 14 Sep 2004 16:50:16 -0500, "O.B."
wrote:

Newbie question

I have an nVidia GeForce 6800 Ultra running with a dual Xeon 3.0 GHz and Linux
Knoppix v3.4. I am curious to see if it would be possible to offload the CPU by
moving some math operations in my program to the graphics processor on the video
card.


Have you been thinking about something like this?
http://www.bionicfx.com/
--
Regards, SPAJKY ®
& visit my site @ http://www.spajky.vze.com
"Tualatin OC-ed / BX-Slot1 / inaudible setup!"
E-mail AntiSpam: remove ##

jafar September 15th 04 09:42 PM

On Wed, 15 Sep 2004 20:35:01 +0200, Spajky wrote:

Have you been thinking about something like this?
http://www.bionicfx.com/


An absolutely freaking marvellous link. Thanks :)

--
Jafar Calley
-----BEGIN GEEK CODE BLOCK-----
d+ s-:+ a C++++ L++ E--- W++ N++ w-- PE- t* 5++ R+ !tv D+ G e* h---- x?
------END GEEK CODE BLOCK------
Registered Linux User #359623
http://fatcatftp.homelinux.org


Eric Witte September 16th 04 08:51 PM

"Chingy" wrote in message ...
Are you some sort of electronic circuit designer or something? Hmm, I just
can't figure out why you would want a 6800 Ultra with dual Xeon CPUs...


It could just be a home PC :) I'd like a dual Xeon for home if I had
the $. If the GPU could be utilized for math operations, it would
completely kill any CPU out there right now. If I remember correctly
from that music application that makes use of the GPU, you're talking
40 million floating-point operations per second versus 5. The current
generation of 3D cards is, AFAIK, the only one that is programmable in
any way, and the new Nvidias are the most programmable.

Eric

John G. Shaw September 16th 04 11:01 PM

Look at http://www.gpgpu.org/
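
The recipes collected there mostly boil down to the same three steps:
upload A and B to the card as textures, draw a quad so that a fragment
program runs once per output element, and read the result back. If a
concrete sketch helps, here is that element-wise product written against
NVIDIA's later CUDA toolkit instead of fragment programs; CUDA postdates
this thread and does not run on a GeForce 6800, and the kernel and
variable names below are purely illustrative.

// Sketch only: element-wise matrix product C = A .* B offloaded to the GPU.
// Names (elementwiseMultiply, maxRows, maxColumns) are illustrative.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void elementwiseMultiply(const float* a, const float* b,
                                    float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        c[i] = a[i] * b[i];
}

int main()
{
    const int maxRows = 512, maxColumns = 512;
    const int n = maxRows * maxColumns;
    const size_t bytes = n * sizeof(float);

    // Host-side matrices, stored row-major in flat arrays.
    float *A = new float[n], *B = new float[n], *C = new float[n];
    for (int i = 0; i < n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    // Copy A and B to the card, run the per-element multiply, copy C back.
    float *dA, *dB, *dC;
    cudaMalloc((void**)&dA, bytes);
    cudaMalloc((void**)&dB, bytes);
    cudaMalloc((void**)&dC, bytes);
    cudaMemcpy(dA, A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    elementwiseMultiply<<<blocks, threads>>>(dA, dB, dC, n);
    cudaMemcpy(C, dC, bytes, cudaMemcpyDeviceToHost);

    printf("C[0] = %f\n", C[0]);  // expect 2.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    delete[] A; delete[] B; delete[] C;
    return 0;
}

The fragment-program route is the same copy-in / compute-per-element /
copy-out pattern, just spelled with textures and glReadPixels instead of
cudaMemcpy.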



--
___________________________________________
John G. Shaw (from home)

Notes:
1. Attachments greater than 100K will be deleted!
___________________________________________

"O.B." wrote in message
...
Newbie question

I have an nVidia GeForce 6800 Ultra running with a dual Xeon 3.0 GHz and
Linux Knoppix v3.4. I am curious to see if it would be possible to
offload the CPU by moving some math operations in my program to the
graphics processor on the video card.

For example, assume that I have two matrices, A and B, and I want to
multiply them together, element by element, and store the result in a
third matrix, C:
for (row = 0; row < maxRows; ++row) {
    for (column = 0; column < maxColumns; ++column) {
        C[row][column] = A[row][column] * B[row][column];
    }
}
Instead of executing this code on the CPU, I want the main program to send
this to the graphics processor for calculation and then return the result
back to the main program.

I have a lot of experience with C/C++ programming, but none in interacting
with a video card. Are there any libraries, published APIs, etc. available
to help with this endeavor? Tips, comments, links, book recommendations,
etc. are greatly welcomed.

Thanks.




