GPU devices checking tips for Deep Learning (Be careful!)
[TensorFlow]
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
[Keras]
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
[PyTorch]
import torch
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
Be careful with the code above!
The code for checking GPU devices takes your GPU memory hostage.
In particular, the TensorFlow and Keras snippets grab almost 100% of your GPU memory while they run. If you use Jupyter Notebook, you need to shut down the running notebook after you check the devices (one way to avoid this with TensorFlow is sketched below).
The PyTorch snippet takes only about 500 MB of your GPU memory.
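If you only want to list the devices with TensorFlow without it reserving the whole card, you can ask it to allocate memory on demand. This is a minimal sketch, assuming the TensorFlow 1.x API (tf.ConfigProto and the session_config argument of device_lib.list_local_devices):
import tensorflow as tf
from tensorflow.python.client import device_lib
# allow_growth = True: allocate GPU memory on demand instead of reserving almost all of it
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
print(device_lib.list_local_devices(session_config=config))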
On Linux, you can use the command below to watch GPU usage in real time (better than nvidia-smi -l):
watch nvidia-smi
If you want a more readable view of GPU usage than nvidia-smi, use gpustat:
pip install gpustat
(after installation)
gpustat
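To get a continuously refreshing view, you can combine the two tools; this assumes gpustat is installed and on your PATH:
watch -n 1 gpustat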