Parallel Programming in Native Code

Parallel programming using C++ AMP, PPL and Agents libraries.

NVIDIA Tesla’s support for C++ AMP


Some have asked whether Tesla cards from NVIDIA support C++ AMP. The short answer: all Tesla cards support C++ AMP, except for the Tesla K20 Active Accelerator for workstations.

One of the benefits of C++ AMP is its portability across GPU cards. The easiest way to check whether a card supports C++ AMP is to check for DirectX 11 support. In the case of Tesla cards, to use C++ AMP you will have to switch the GPU Operation Mode (GOM) from compute-only to ALL_ON (using nvidia-smi's --gom option) and then switch the driver model from TCC to WDDM (using nvidia-smi's --driver-model option). Refer to NVIDIA's nvidia-smi documentation for more details. Only the K20 Active Accelerator for workstation card does not allow this switching.
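As a rough sketch, the switch described above could look like the following command sequence. This is an assumption based on the nvidia-smi documentation of that era (the exact flag spellings, GPU index, and numeric mode values may differ by driver version, so verify against your installed nvidia-smi before running; administrator privileges and a reboot are typically required):

```shell
# Inspect the current GPU Operation Mode and driver model (GPU index 0 assumed)
nvidia-smi -q -i 0

# Switch the GPU Operation Mode from compute-only to ALL_ON
# (per the nvidia-smi docs: 0 = ALL_ON, 1 = COMPUTE, 2 = LOW_DP)
nvidia-smi -i 0 --gom=0

# Switch the driver model from TCC to WDDM
# (per the nvidia-smi docs: 0 = WDDM, 1 = TCC)
nvidia-smi -i 0 --driver-model=0

# Reboot for both changes to take effect, then re-run the query above to confirm
```

Once the card reports WDDM as its driver model, it is visible to DirectX 11 and therefore to the C++ AMP runtime.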

Another way to check whether C++ AMP can run on your device is to programmatically invoke the accelerator::get_all() method, as described in the book on C++ AMP. If you would like an executable instead of a code sample, download the utility from the "Can I Run C++ AMP on My Device" blog post and run it to see the output.
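A minimal sketch of that programmatic check is shown below. It enumerates every accelerator the C++ AMP runtime can see and prints a few of its properties; accelerator::get_all(), description, is_emulated, and supports_double_precision are all part of the concurrency namespace in <amp.h>. Note this compiles only with Visual C++ on Windows, since C++ AMP is built on DirectX 11:

```cpp
#include <amp.h>
#include <iostream>
#include <vector>

int main()
{
    // Ask the C++ AMP runtime for every accelerator it can see,
    // including emulated (reference/WARP) devices
    std::vector<concurrency::accelerator> accs =
        concurrency::accelerator::get_all();

    for (const concurrency::accelerator& acc : accs)
    {
        std::wcout << acc.description << L"\n"
                   << L"  is_emulated: " << acc.is_emulated << L"\n"
                   << L"  supports_double_precision: "
                   << acc.supports_double_precision << L"\n";
    }
}
```

On a Tesla card that has been switched to WDDM as described above, the card should appear in this list as a non-emulated accelerator; if only emulated devices show up, the runtime could not find a DirectX 11 capable GPU.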
