HardwareBanter: a computer components & hardware forum



How to execute code on graphics processor?



 
 
  #1
Old September 14th 04, 10:50 PM
O.B.

Newbie question

I have an nVidia GeForce 6800 Ultra running with dual Xeon 3.0 GHz CPUs and Linux
Knoppix v3.4. I am curious to see whether it would be possible to offload the CPU by
moving some of the math operations in my program to the graphics processor on the
video card.

For example, assume that I have two matrices, A and B, and I want to multiply
them together element by element and store the result in a third matrix, C:

for (row = 0; row < maxRows; ++row) {
    for (column = 0; column < maxColumns; ++column) {
        C[row][column] = A[row][column] * B[row][column];
    }
}
Instead of executing this code on the CPU, I want the main program to send this
work to the graphics processor for calculation and then get the result back.

I have a lot of experience with C/C++ programming, but none in interacting with
a video card. Are there any libraries, published APIs, etc. available to help
with this endeavor? Tips, comments, links, book recommendations, etc. are all
welcome.

Thanks.
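
For context, on this generation of hardware the usual route is through the graphics API itself: store each matrix in a floating-point texture, draw a quad so that the GPU runs a small fragment program once per output element, and read the rendered result back into system memory. A minimal sketch of such a fragment program for the element-wise product above, written as a C string the way it would be handed to the driver for compilation, might look like this (the sampler names and the surrounding OpenGL setup are illustrative, not taken from any particular library):

/* Fragment program (GLSL) for C[row][column] = A[row][column] * B[row][column].
 * Each fragment corresponds to one output element; matrixA and matrixB are the
 * input matrices bound as floating-point textures. */
static const char *elementwise_multiply_fs =
    "uniform sampler2D matrixA;\n"
    "uniform sampler2D matrixB;\n"
    "void main() {\n"
    "    float a = texture2D(matrixA, gl_TexCoord[0].st).r;\n"
    "    float b = texture2D(matrixB, gl_TexCoord[0].st).r;\n"
    "    gl_FragColor = vec4(a * b); /* one product per pixel */\n"
    "}\n";

The same structure extends to a true matrix product: instead of sampling one texel from each input, the fragment program loops across a row of A and a column of B and accumulates the sum, which is roughly how the early GPU linear-algebra demos worked.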

  #2
Old September 14th 04, 11:03 PM
rms

This has been done more than once already. There is an open source library
for certain math operations that was announced some months ago on the major
news sites, as well as a music application that does mathematical convolution
on the video card, announced on HardOCP et al. a couple of weeks ago. Time for
some googling.

rms


  #3
Old September 15th 04, 07:33 AM
Chingy



Are you some sort of electronic circuit designer or something? Hmm, I just
can't figure out why you would want a 6800 Ultra with a dual Xeon CPU...

"O.B." wrote in message
...
Newbie question

I have a nVidia GeForce 6800 Ultra running with a dual Zeon 3.0 GHz and
Linux Knoppix v3.4. I am curious to see if it would be possible to
offload the CPU by moving some math operations in my program to the
graphics processor on the video card.

For example, assume that I have two matrices, A and B and I want to
multiply them together and store the result in a third matrix, C. Where
for (row=0; rowmaxRows; ++row) {
for (column=0; columnmaxColumns; ++column) {
C[row][column] = A[row][column] * B[row][column]
}
}
Instead of executing this code on the CPU, I want the main program to send
this to the graphics processor for calculation and then return the result
back to the main program.

I have a lot of experience with C/C++ programming, but none in interacting
with a video card. Are there any libraries, published API's, etc
available to help with this endeavor? Tips, comments, links, book
recommendations, etc are greatly welcomed.

Thanks.



  #4
Old September 15th 04, 07:35 PM
Spajky

On Tue, 14 Sep 2004 16:50:16 -0500, "O.B." wrote:

Newbie question

I have an nVidia GeForce 6800 Ultra running with dual Xeon 3.0 GHz CPUs and Linux
Knoppix v3.4. I am curious to see whether it would be possible to offload the CPU by
moving some of the math operations in my program to the graphics processor on the
video card.


Have you been thinking about something like this?
http://www.bionicfx.com/
--
Regards, SPAJKY ®
& visit my site @ http://www.spajky.vze.com
"Tualatin OC-ed / BX-Slot1 / inaudible setup!"
E-mail AntiSpam: remove ##
  #5
Old September 15th 04, 09:42 PM
jafar

On Wed, 15 Sep 2004 20:35:01 +0200, Spajky wrote:

Have you been thinking about something like this?
http://www.bionicfx.com/


An absolutely freaking marvellous link. Thanks.

--
Jafar Calley
-----BEGIN GEEK CODE BLOCK-----
d+ s-:+ a C++++ L++ E--- W++ N++ w-- PE- t* 5++ R+ !tv D+ G e* h---- x?
------END GEEK CODE BLOCK------
Registered Linux User #359623
http://fatcatftp.homelinux.org

  #6
Old September 16th 04, 08:51 PM
Eric Witte

"Chingy" wrote in message ...
Are you some sort of electronic circuit designer or something? Hmm, I just
can't figure out why you would want a 6800 Ultra with a dual Xeon CPU...


It could just be a home PC. I'd like a dual Xeon for home if I had the money.
If the GPU could be utilized for math operations, it would completely kill any
CPU out there right now. If I remember correctly from that music application
that makes use of the GPU, you're talking 40 million floating-point operations
per second versus 5. The current generation of 3D cards is the only one, AFAIK,
that is programmable in any way, and the new nVidias are the most programmable.

Eric
  #7
Old September 16th 04, 11:01 PM
John G. Shaw

Look at http://www.gpgpu.org/



--
___________________________________________
John G. Shaw (from home)

Notes:
1. Attachments greater than 100K will be deleted!
___________________________________________
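
The tutorials collected on gpgpu.org follow the same recipe that the element-wise example above needs on the host side: upload the inputs as float textures, bind the fragment program, draw one quad covering one pixel per output element, and read the framebuffer back. A compressed C sketch of that host-side sequence might look like the following, assuming an OpenGL 2.0-capable driver with the ARB float-texture extension and an already compiled and linked program object; the names (upload_matrix, matrixA, program) are illustrative, and a real program would render into an off-screen floating-point buffer rather than the window to avoid clamping:

#include <GL/glew.h>   /* extension loader; any equivalent works */

/* Upload a rows x columns float array as a single-channel texture. */
static GLuint upload_matrix(const float *data, int columns, int rows)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, columns, rows, 0,
                 GL_LUMINANCE, GL_FLOAT, data);
    return tex;
}

/* Bind A and B, draw one quad (one fragment per element of C),
 * then read the rendered products back into resultC. */
static void multiply_on_gpu(GLuint program, GLuint texA, GLuint texB,
                            int columns, int rows, float *resultC)
{
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texA);
    glUniform1i(glGetUniformLocation(program, "matrixA"), 0);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texB);
    glUniform1i(glGetUniformLocation(program, "matrixB"), 1);

    glViewport(0, 0, columns, rows);       /* one pixel per output element */
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    /* Pull matrix C back to system memory from the render target. */
    glReadPixels(0, 0, columns, rows, GL_RED, GL_FLOAT, resultC);
    glUseProgram(0);
}

Getting full 32-bit float precision out of that last readback is the fiddly part: it requires an off-screen floating-point render target, and that setup is much of what the gpgpu.org tutorials walk through.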

"O.B." wrote in message
...
Newbie question

I have a nVidia GeForce 6800 Ultra running with a dual Zeon 3.0 GHz and
Linux Knoppix v3.4. I am curious to see if it would be possible to
offload the CPU by moving some math operations in my program to the
graphics processor on the video card.

For example, assume that I have two matrices, A and B and I want to
multiply them together and store the result in a third matrix, C. Where
for (row=0; rowmaxRows; ++row) {
for (column=0; columnmaxColumns; ++column) {
C[row][column] = A[row][column] * B[row][column]
}
}
Instead of executing this code on the CPU, I want the main program to send
this to the graphics processor for calculation and then return the result
back to the main program.

I have a lot of experience with C/C++ programming, but none in interacting
with a video card. Are there any libraries, published API's, etc
available to help with this endeavor? Tips, comments, links, book
recommendations, etc are greatly welcomed.

Thanks.



 





