How to implement next_batch for mini batch gradient descent in deep learning
Shakeratto · 2018. 4. 2. 15:29

Full Code: https://www.kaggle.com/pedrolcn/deep-tensorflow-ccn-cross-validation

In mini-batch gradient descent, each call to next_batch should return the next batch_size training examples and reshuffle the data once a full epoch has been consumed. The TrainBatcher class below implements exactly that:
```python
import numpy as np

class TrainBatcher(object):
    """Serves shuffled mini-batches of (examples, labels) for training."""

    def __init__(self, examples, labels):
        self.labels = labels
        self.examples = examples
        self.index_in_epoch = 0
        self.num_examples = examples.shape[0]

    def next_batch(self, batch_size):
        """Return the next batch_size examples; reshuffle when an epoch ends."""
        start = self.index_in_epoch
        self.index_in_epoch += batch_size
        # When all the training data has been used, reshuffle it
        if self.index_in_epoch > self.num_examples:
            perm = np.arange(self.num_examples)
            np.random.shuffle(perm)
            self.examples = self.examples[perm]
            self.labels = self.labels[perm]
            # Start the next epoch
            start = 0
            self.index_in_epoch = batch_size
            assert batch_size <= self.num_examples
        end = self.index_in_epoch
        return self.examples[start:end], self.labels[start:end]
```
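Two details of this implementation are worth keeping in mind. If num_examples is not a multiple of batch_size, the partial batch left at the end of an epoch is silently discarded: the method reshuffles and restarts from index 0 rather than returning the remainder. Also, the data is served in its original order during the first epoch, because the first shuffle only happens after the dataset has been exhausted once.

Usage mirrors the familiar MNIST-style next_batch call: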
```python
mnist = TrainBatcher(train_images, train_labels)
batch_xs, batch_ys = mnist.next_batch(BATCH_SIZE)
```
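For context, here is a minimal, self-contained sketch of how the batcher drives a training loop. The random stand-in data, the BATCH_SIZE value, and the shape assertions are illustrative assumptions, not part of the original Kaggle kernel; in the real code the optimizer step would run inside the loop.

```python
import numpy as np

# Hypothetical stand-in data; in the original kernel these would come from
# the actual training set (e.g. 28x28 images flattened to 784 features).
train_images = np.random.rand(1000, 784).astype(np.float32)
train_labels = np.eye(10)[np.random.randint(0, 10, size=1000)]

BATCH_SIZE = 100      # assumed value for illustration
NUM_ITERATIONS = 50   # 50 * 100 = 5000 examples, i.e. 5 passes over the data

batcher = TrainBatcher(train_images, train_labels)

for step in range(NUM_ITERATIONS):
    batch_xs, batch_ys = batcher.next_batch(BATCH_SIZE)
    # In the original TensorFlow code, the optimizer step would run here,
    # e.g. sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys}).
    assert batch_xs.shape == (BATCH_SIZE, 784)
    assert batch_ys.shape == (BATCH_SIZE, 10)
```

Because next_batch reshuffles in place whenever the epoch boundary is crossed, the loop above cycles through the full dataset repeatedly without any bookkeeping on the caller's side.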