pytorch "RuntimeError: ~ inplace operation" for CUDA
Shakeratto
2019. 1. 5. 17:58
    def __init__(self, input_size, hidden_size, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)

    def forward(self, features, init_hidden=None):
        self.lstm.flatten_parameters()
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Fix: move the self.lstm.flatten_parameters() call from forward() into __init__().
Calling flatten_parameters() inside forward() has no effect on CPU, but on CUDA it modifies the LSTM's parameters partway through the computation, which breaks backward() and leads to the exception above. (reference: https://github.com/pytorch/pytorch/issues/7243)
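A minimal sketch of the corrected module, assuming a simple LSTM wrapper (the Encoder class name, layer sizes, and usage code are hypothetical, not from the original post):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        # Compact the weights once at construction time instead of in
        # forward(), so parameters are not mutated mid-computation on CUDA.
        self.lstm.flatten_parameters()

    def forward(self, features, init_hidden=None):
        output, hidden = self.lstm(features, init_hidden)
        return output, hidden

# Usage: backward() now runs without the inplace-operation error.
enc = Encoder(input_size=8, hidden_size=16)
x = torch.randn(5, 3, 8)  # (seq_len, batch, input_size)
out, (h, c) = enc(x)
out.sum().backward()
```

With the call moved out of forward(), autograd no longer sees the parameter buffers change between the forward and backward passes.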