GPGPU


    • So lately I've become interested in using the GPU for general-purpose programming
      (and I'm going to take a course on CUDA next semester :) ),
      which is where CUDA and OpenCL come to mind.
      I also know that Havok uses (used?) OpenCL.
      Since I didn't find much about it elsewhere:
      is CUDA/OpenCL also used in game programming in general?
      I could imagine that in graphically demanding games, little
      if anything besides graphics code runs on the GPU; but maybe
      graphically less intensive games can take advantage of the GPU's computational power?
      What are your experiences with CUDA/OpenCL? (Especially if you've used it in a professional game studio.)
      I'd be happy if you could share some of your stories :)
    • I've never used it myself, but I saw a great talk at GDC about using it. It's a lot slower than typical CPU operations for most things, but it helps parallelize processing. In practice, it sounds like servers and other non-graphical applications are trying to make use of it.

      -Rez
    • Thanks for the input!
      I just checked the GDC Vault and there's a lot of interesting stuff :)

      Yes, a single GPU thread is typically not as fast as the CPU, but the GPU can run many thousands of threads in parallel, which makes the overall job much faster (at least when there is enough work to process, which in Havok's case should be true). At least that's my impression.

      Thanks again! :)
    • There is some really cool physics work people do on the GPU. In particular, a research group at NVIDIA published an awesome rigid-body fracture paper. This video is a demo of what they did.

      They're doing the dynamic fracture on the GPU in real time, which is pretty awesome. They published a paper on it for SIGGRAPH 2013 here (the NVIDIA page has a different video of the fracture, involving meteors).
    • I've done some CUDA. It's good if you have a relatively simple "kernel". For instance, if you have a ga-ba-zillion coordinate transformations, or if you have a matrix M that takes 4 GB to store, you want M^5, and you're too lazy to do any math besides M*M*M*M*M.

      Anything complicated is pretty much infeasible. At least when I was doing it, you had to write the kernel in C. So, only built-in types and no STL - blah.

      For what it's worth, I was doing the lattice Boltzmann method for fluids.

      Not mine - LBM
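For a sense of what such a "simple kernel" looks like, here is a minimal sketch of the coordinate-transformation case (hypothetical names, one thread per point; not from the post above):

```cuda
#include <cuda_runtime.h>

// Apply one 2-D affine transform to a huge array of points.
// Each GPU thread handles exactly one point -- the embarrassingly
// parallel case where CUDA shines.
__global__ void transformPoints(float2 *pts, int n,
                                float m00, float m01,
                                float m10, float m11,
                                float tx, float ty)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float2 p = pts[i];
        pts[i].x = m00 * p.x + m01 * p.y + tx;
        pts[i].y = m10 * p.x + m11 * p.y + ty;
    }
}

// Host side: launch one thread per point, 256 threads per block:
//   transformPoints<<<(n + 255) / 256, 256>>>(devPts, n,
//                                             m00, m01, m10, m11, tx, ty);
```

The kernel itself is trivial; the speedup comes entirely from running it over millions of points at once.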
    • Wow. Thanks for the link, that's awesome. I'm always amazed by such demos. Watching stuff like that always helps keep me motivated :)

      I've never done more physics programming than some basic velocity/acceleration/gravity stuff, but doing these kinds of things dynamically at runtime is amazing. Thanks again for the links!

    • Thanks for the info!
      Yes, the kernels seem to permit only C; maybe OpenCL allows more liberties here, but I don't know.
      Not being able to use C++ features such as destructors and the STL of course makes life harder, but it's not impossible to do more advanced stuff in C ;)
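Plain C inside a kernel still supports structs and `__device__` helper functions, which covers a lot of ground even without the STL. A hypothetical sketch (names invented for illustration):

```cuda
#include <cuda_runtime.h>

// No STL and no destructors in device code, but C structs and
// __device__ helpers still let you organize things.
struct Particle {
    float3 pos;
    float3 vel;
};

// Small reusable helper callable from device code.
__device__ float3 addScaled(float3 a, float3 b, float s)
{
    return make_float3(a.x + s * b.x, a.y + s * b.y, a.z + s * b.z);
}

// Semi-implicit Euler step: update velocity, then position.
__global__ void integrate(Particle *p, int n, float dt, float3 gravity)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        p[i].vel = addScaled(p[i].vel, gravity, dt);
        p[i].pos = addScaled(p[i].pos, p[i].vel, dt);
    }
}
```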


    • When you download the CUDA SDK, it gives you a bunch of sample problems. Some are very technical things like the LBM, some are games, some just look cool. You should check them out and play around!

    • Thanks for the info! I'm planning on doing exactly that.
    • CUDA is NVIDIA-GPU-only, so it's not my preference - but I'm a novice hobby programmer. As far as I understand it, rendering is a highly parallel task, which is why it scales so well. So GPGPU does well when the solution to a problem is highly parallel; the closer to purely parallel it is, the better.

      Physics and AI are both very wide fields, ranging from very sequential to very parallel. It depends on the scale and the specific physics or AI feature.
      Instead of CUDA I would take OpenCL, also as a path to AMD/ATI hardware and Intel & AMD APUs.

      If you use MS Visual Studio, then C++ AMP might be an option, and if C++ is more your thing than those C variants, it's attractive.
      It might be that NVIDIA is pushing its next-gen CUDA toward a more C++-like style.

      Havok doesn't use OpenCL because, when Intel took it over, Havok had a Havok FX module that supported GPGPU on cards from every GPU vendor - but Intel killed that, as part of the CPU-vs-GPGPU war. So AMD and NVIDIA didn't get Havok FX to let hardware-accelerated physics take off on the GPU.

      In response, NVIDIA took over Ageia and with it their PhysX SDK (formerly NovodeX),
      implementing a CUDA path for their GPUs.

      So: solving problems in a more parallel way.
      I'm reading about data-oriented programming and functional programming,
      which are in some ways much more concurrency-friendly.

      The future is that CPUs also become many-core processing units, so in the future a very concurrency-friendly solution will become a requirement to get the most out of the hardware.
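One concrete piece of the data-oriented style mentioned above is the structure-of-arrays layout, which also happens to give coalesced memory access on the GPU. A hypothetical CUDA sketch (invented names):

```cuda
#include <cuda_runtime.h>

// Array-of-structs (what OO code tends to produce):
//   struct Body { float x, y, z, mass; };  Body bodies[N];
// Data-oriented struct-of-arrays: each field lives in its own
// contiguous array, so consecutive GPU threads read consecutive
// addresses (coalesced), and untouched fields cost no bandwidth.
struct Bodies {
    float *x;
    float *y;
    float *z;
    float *mass;
};

// Move every body down by g*dt this step (simplified: no velocity
// state). Only the z array is touched; x, y, and mass are never loaded.
__global__ void applyGravity(Bodies b, int n, float g, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        b.z[i] -= g * dt;
    }
}
```

The same layout pays off on many-core CPUs too, since it is friendly to both caches and SIMD.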