GPU device checking tips for Deep Learning (Be careful!)
[Tensorflow]
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
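If you are on TensorFlow 2.x, a gentler sketch (using the TF 2 `tf.config` API) lists GPUs without allocating memory on them:

```python
# Sketch for TensorFlow 2.x: list GPUs without grabbing all of their memory.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print(gpus)  # [] on a CPU-only machine, one PhysicalDevice per GPU otherwise

# Optionally ask TensorFlow to allocate GPU memory on demand later,
# instead of reserving nearly everything up front:
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```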
[Keras]
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
[Pytorch]
import torch
torch.cuda.is_available()      # True if a GPU is visible
torch.cuda.device_count()      # number of visible GPUs
torch.cuda.get_device_name(0)  # name of the first GPU
Be careful with the snippets above!
The device-checking code takes your GPU memory hostage.
In particular, the TensorFlow and Keras snippets grab almost 100% of your GPU memory while they run. If you use Jupyter Notebook, you need to shut down the notebook after checking the devices, or the memory stays allocated.
The PyTorch snippet takes only about 500 MB of your GPU memory.
On Linux, you can use the command below to watch GPU usage in real time (more convenient than nvidia-smi -l):
watch nvidia-smi
If you want output that is easier to read than nvidia-smi, use the following:
pip install gpustat
(after installation)
gpustat
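gpustat can also refresh on its own; the flags below are a sketch of its CLI (check gpustat --help on your version, as options may differ):

```shell
# Refresh every second, similar to watch
gpustat -i 1
# Also show the command name and user of each process using the GPU
gpustat -cu
```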