
Quite often, the CUDA library fails completely and returns error 46 ("all CUDA-capable devices are busy or unavailable"), even for simple calls such as cudaMalloc. The code runs successfully if I restart the computer, but that is far from ideal. This problem is apparently quite common.
My setup is the following:
- OSX 10.6.8
- NVIDIA CUDA drivers : CUDA Driver Version: 4.0.31 (latest)
- GPU Driver Version: 1.6.36.10 (256.00.35f11)
I tried many solutions from the Nvidia forum, but none of them worked. I don't want to reboot every time it happens. I also tried to unload and reload the driver with a procedure I assume to be correct (it may not be):

kextunload -b com.nvidia.CUDA
kextload -b com.nvidia.CUDA

But it still does not work. How can I kick the GPU (or CUDA) back into sanity?
This is the output of deviceQuery:
CUDA Device Query (Runtime API) version (CUDART static linking)

Found 1 CUDA Capable device(s)

Device 0: "GeForce 9400M"
  CUDA Driver Version / Runtime Version          4.0 / 4.0
  CUDA Capability Major/Minor version number:    1.1
  Total amount of global memory:                 254 MBytes (265945088 bytes)
  ( 2) Multiprocessors x ( 8) CUDA Cores/MP:     16 CUDA Cores
  GPU Clock Speed:                               1.10 GHz
  Memory Clock rate:                             1075.00 Mhz
  Memory Bus Width:                              128-bit
  Max Texture Dimension Size (x,y,z)             1D=(8192), 2D=(65536,32768), 3D=(2048,2048,2048)
  Max Layered Texture Size (dim) x layers        1D=(8192) x 512, 2D=(8192,8192) x 512
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 8192
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             256 bytes
  Concurrent copy and execution:                 No with 0 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   No
  Alignment requirement for Surfaces:            Yes
  Device has ECC support enabled:                No
  Device is using TCC driver mode:               No
  Device supports Unified Addressing (UVA):      No
  Device PCI Bus ID / PCI location ID:           2 / 0
  Compute Mode:
    < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce 9400M
[deviceQuery] test results... PASSED
This is an example of code that may fail (although under normal conditions it does not):
#include <stdio.h>

__global__ void add(int a, int b, int *c)
{
    *c = a + b;
}

int main(void)
{
    int c;
    int *dev_c;

    cudaMalloc((void **) &dev_c, sizeof(int));   // fails here, returning 46

    add<<<1,1>>>(2, 7, dev_c);

    cudaMemcpy(&c, dev_c, sizeof(int), cudaMemcpyDeviceToHost);
    printf("hello world, %d\n", c);

    cudaFree(dev_c);
    return 0;
}
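For reference, here is a minimal error-checking sketch that makes a failure like error 46 visible at the exact call that caused it. The CUDA_CHECK macro is just a hypothetical helper (not part of CUDA), and the whole thing is meant to be compiled with nvcc:

#include <stdio.h>
#include <stdlib.h>

// Hypothetical helper macro: if a runtime API call fails, print the error
// code and its description, then abort.
#define CUDA_CHECK(call)                                                    \
    do {                                                                    \
        cudaError_t err = (call);                                           \
        if (err != cudaSuccess) {                                           \
            fprintf(stderr, "CUDA error %d (%s) at %s:%d\n",                \
                    (int)err, cudaGetErrorString(err), __FILE__, __LINE__); \
            exit(EXIT_FAILURE);                                             \
        }                                                                   \
    } while (0)

int main(void)
{
    int *dev_c = NULL;

    // With the wrapper, a failing cudaMalloc prints
    // "all CUDA-capable devices are busy or unavailable" (error 46)
    // instead of leaving dev_c invalid and failing somewhere later.
    CUDA_CHECK(cudaMalloc((void **) &dev_c, sizeof(int)));
    CUDA_CHECK(cudaFree(dev_c));
    return 0;
}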
I have also found that occasionally the system reverts to sane behavior without a reboot, but I still don't know what triggers it.
Answers
I can confirm what the commenters on my post said: the GPU may not work if other applications are taking control of it. In my case, the Flash player in Firefox was apparently occupying all the available resources on the card. I killed the Firefox Flash plugin process and the card immediately started working again.
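A quick way to see whether something else is eating the card's memory is to query it from the runtime API; this is my own sketch (compiled with nvcc), not part of the original answer:

#include <stdio.h>

// Report free vs. total device memory. A very small "free" value suggests
// another process (e.g. a browser plugin) is holding most of the card.
int main(void)
{
    size_t free_bytes = 0, total_bytes = 0;
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Device memory: %zu MB free of %zu MB total\n",
           free_bytes / (1024 * 1024), total_bytes / (1024 * 1024));
    return 0;
}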
Restarting my computer did the trick for me.
You can do this to save your day (set CUDA_VISIBLE_DEVICES before the CUDA runtime initializes), for example from Python:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

It worked for me.
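The same idea can be sketched from C, assuming a POSIX environment where setenv is available; the variable must be set before the first CUDA runtime call, because that is when the visible-device list is read:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Restrict CUDA to device 0; must happen before any CUDA runtime call.
    setenv("CUDA_VISIBLE_DEVICES", "0", 1);

    int *dev_p = NULL;
    cudaError_t err = cudaMalloc((void **) &dev_p, sizeof(int));
    printf("cudaMalloc returned %d (%s)\n", (int)err, cudaGetErrorString(err));
    if (err == cudaSuccess) cudaFree(dev_p);
    return 0;
}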
Changing the Nvidia driver version on Ubuntu (to Nvidia 450) worked for me.