GPU devices checking tips for Deep Learning (Be careful!)

Shakeratto 2019. 1. 16. 17:44

[Tensorflow]

from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())
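Since device_lib.list_local_devices() initializes TensorFlow's GPU runtime, one common mitigation (in the TF 1.x API that was current when this post was written) is to enable allow_growth, so the session claims GPU memory incrementally instead of almost all of it up front — a sketch, assuming TF 1.x:

```python
import tensorflow as tf

# TF 1.x-style session config: allow_growth makes TensorFlow allocate
# GPU memory as needed rather than reserving nearly 100% at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```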


[Keras]

from keras import backend as K

K.tensorflow_backend._get_available_gpus()


[Pytorch]

import torch

torch.cuda.get_device_name(0) # name of GPU 0 (device 'cuda:0')
torch.cuda.get_device_name(1) # name of GPU 1 (device 'cuda:1')
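The two lines above assume at least two GPUs and will raise an error on machines with fewer. A safer sketch (the helper name list_gpus is mine, not part of PyTorch) enumerates whatever devices actually exist, returning an empty list on CPU-only machines:

```python
import torch

def list_gpus():
    # Returns (index, name) for each visible CUDA device;
    # empty list when no GPU is available, so it is safe anywhere.
    if not torch.cuda.is_available():
        return []
    return [(i, torch.cuda.get_device_name(i))
            for i in range(torch.cuda.device_count())]

print(list_gpus())
```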



Be careful with the above code!

The code for checking devices (GPUs) takes your GPU memory hostage.

In particular, the TensorFlow and Keras snippets allocate almost 100% of your GPU memory while they run. If you use Jupyter Notebook, you need to shut down the running notebook after you check the devices.

The PyTorch snippet takes only about 500 MB of GPU memory.
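If you only want to keep a framework from grabbing memory on every GPU, you can also restrict which devices it sees by setting CUDA_VISIBLE_DEVICES before importing it (standard library only; the value "0" here is just an example index):

```python
import os

# Must be set BEFORE importing tensorflow/keras/torch: frameworks read
# this once at CUDA initialization. "0" exposes only GPU 0 to the
# process; "" would hide all GPUs. This limits WHICH GPUs can be
# claimed, not how much memory is taken on each one.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```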



On Linux, you can use the command below to watch GPU usage in real time (better than nvidia-smi -l):

watch nvidia-smi


If you want a view of GPU usage that is easier to read than nvidia-smi, use gpustat:

pip install gpustat

(after installation)

gpustat



