To verify the cuDNN (CUDA Deep Neural Network library) installation on your system, you can follow these steps:
Step 1: Check the NVIDIA GPU and CUDA version
Ensure you have an NVIDIA GPU and a compatible version of the CUDA Toolkit installed on your system. You can check the CUDA Toolkit version by running the following command in a terminal or command prompt:
nvcc --version
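Note that nvcc reports the version of the CUDA compiler from the toolkit, and it may be missing from your PATH even when CUDA is installed. As a cross-check, nvidia-smi (installed with the NVIDIA driver) reports the driver version and the highest CUDA version that driver supports:
nvidia-smi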
Step 2: Locate the cuDNN files
After installing cuDNN, the library files should be inside your CUDA Toolkit installation directory. Locate the cuDNN header and library files in the following directories (assuming the default installation paths); a quick way to read the installed version from the header is shown after the list.
On Linux:
- Header file (cudnn.h): /usr/local/cuda/include
- Library files (libcudnn.*): /usr/local/cuda/lib64
On Windows:
- Header file (cudnn.h): C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include
- Library files (cudnn.lib, cudnn64_X.dll): C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64
(replace v11.0 with your installed CUDA version)
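You can confirm which cuDNN version is installed by reading the version macros from the header. In cuDNN 8 and later the macros live in cudnn_version.h; in cuDNN 7 and earlier they are in cudnn.h. On Linux, for example (assuming the tarball install path above; package-manager installs may place the header under /usr/include instead):
grep -E "CUDNN_MAJOR|CUDNN_MINOR|CUDNN_PATCHLEVEL" /usr/local/cuda/include/cudnn_version.h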
Step 3: Verify cuDNN with a deep-learning framework
To ensure that cuDNN is working correctly with a deep-learning framework such as TensorFlow or PyTorch, run a simple test script.
Here is an example using TensorFlow 1.x:
import tensorflow as tf

# Check whether TensorFlow can see a GPU (TensorFlow 1.x API)
if tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None):
    print("GPU is available")
    # TensorFlow has no is_built_with_cudnn(); a CUDA build implies cuDNN support
    print("Built with CUDA (and therefore cuDNN):", tf.test.is_built_with_cuda())
else:
    print("GPU is not available")
If the script prints that the GPU is available and that TensorFlow was built with CUDA, cuDNN is installed correctly and working with TensorFlow.
If it does not, re-check your installation: make sure the necessary files are in the correct directories and that the environment variables (e.g., PATH on Windows, LD_LIBRARY_PATH on Linux) are set correctly.
Note: The snippet above uses the TensorFlow 1.x API. If you use TensorFlow 2.x, replace tf.test.is_gpu_available() with tf.config.list_physical_devices('GPU') and check whether the returned list is non-empty, as in the sketch below.
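A minimal TensorFlow 2.x version of the check might look like this (tf.sysconfig.get_build_info() is available in recent TensorFlow 2.x releases and, on GPU builds, includes the cuDNN version the binary was compiled against):

import tensorflow as tf

# An empty list means TensorFlow cannot see a usable GPU
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print("GPU is available:", gpus)
    # On GPU builds, the build info includes the compiled-against cuDNN version
    build_info = tf.sysconfig.get_build_info()
    print("cuDNN version:", build_info.get("cudnn_version", "unknown"))
else:
    print("GPU is not available")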
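If you use PyTorch instead, the check is even more direct, since PyTorch exposes its cuDNN binding: torch.backends.cudnn.version() returns the version as an integer (e.g. 8902 for cuDNN 8.9.2), or None if cuDNN is unavailable.

import torch

# Both checks must pass for cuDNN-accelerated operations to work
if torch.cuda.is_available() and torch.backends.cudnn.is_available():
    print("cuDNN version:", torch.backends.cudnn.version())
else:
    print("CUDA/cuDNN is not available to PyTorch")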
That's it. If the checks above pass, cuDNN is installed and ready to use.