
So I want to make a 3-layer bidirectional LSTM, and I'm using TensorFlow's tf.contrib.rnn.stack_bidirectional_dynamic_rnn.

It's a small piece of code; I just want to know where I'm going wrong.

import tensorflow as tf  # TF 1.x, where tf.contrib is still available

learning_rate=0.01
batch_size=3187
annealing_rate=0.85
beta1=0.9
beta2=0.99
epsilon=10**(-8)
dropout=0.95
layer_size=3
lstm_size=1024
time_steps=34
num_hidden=1024

place_x=tf.placeholder("float",[85409,34,41])
place_y=tf.placeholder("float",[None,32,70])


cell_forw=[tf.nn.rnn_cell.BasicLSTMCell(num_hidden)]
cell_back=[tf.nn.rnn_cell.BasicLSTMCell(num_hidden)]
for i in range(1,3):
    cell_forw.append(tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.BasicLSTMCell(num_hidden,state_is_tuple=True,reuse=tf.AUTO_REUSE)]*i))
    cell_back.append(tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.BasicLSTMCell(num_hidden,state_is_tuple=True,reuse=tf.AUTO_REUSE)]*i))



# xf=tf.unstack(place_x,34,1)


def lstm_cell():
    return tf.contrib.rnn.BasicLSTMCell(lstm_size)
cell_f= tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(3)])
cell_b= tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(3)])



outputs,_,_=tf.contrib.rnn.stack_bidirectional_dynamic_rnn(cell_forw,cell_back,
                                           place_x,
                                           dtype=tf.float32)

Here's the error I'm getting:

ValueError: Dimensions must be equal, but are 2048 and 3072 for 'stack_bidirectional_rnn/cell_2/bidirectional_rnn/fw/fw/while/fw/multi_rnn_cell/cell_0/basic_lstm_cell/MatMul_1' (op: 'MatMul') with input shapes: [85409,2048], [3072,4096].
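For what it's worth, the two numbers in the error line up with BasicLSTMCell's kernel shape, which is [input_size + num_hidden, 4 * num_hidden]. A quick sanity check of that arithmetic (this is one plausible reading of the shapes, not something stated in the traceback):

```python
num_hidden = 1024

# BasicLSTMCell multiplies concat([inputs, h]) by a kernel of shape
# [input_size + num_hidden, 4 * num_hidden].

# A kernel built when the cell first sees the 2048-wide concatenated
# fw+bw output of a bidirectional layer:
bidir_input = 2 * num_hidden            # 2048
kernel_rows = bidir_input + num_hidden  # 3072
kernel_cols = 4 * num_hidden            # 4096
print(kernel_rows, kernel_cols)         # 3072 4096 -> the [3072, 4096] in the error

# Concat width at a call site where the input is only num_hidden wide:
narrow_concat = num_hidden + num_hidden # 2048 -> the 2048 in the error
print(narrow_concat)
```

So the same kernel variable appears to be hit from two call sites with different input widths, which points at cell reuse across layers.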

I tried techniques from other Stack Overflow questions, like:

def lstm_cell():
    return tf.contrib.rnn.BasicLSTMCell(lstm_size)
cell_f= tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(3)])
cell_b= tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(3)])

Nothing seems to work. Thank you.

  • Can you show a full stack trace? I suspect that because 3072 = 1024 * 3 some place in your code is trying to concatenate the output of 3 cells instead of stacking them Commented May 15, 2018 at 19:37

1 Answer


It was a simple error: `[cell]*i` repeats the same BasicLSTMCell instance (and therefore the same variables) i times, so a kernel created for one input size gets reused where the input size is different. Building a fresh single cell per layer fixes it.

change

for i in range(1,3):
    cell_forw.append(tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.BasicLSTMCell(num_hidden,state_is_tuple=True,reuse=tf.AUTO_REUSE)]*i))
    cell_back.append(tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.BasicLSTMCell(num_hidden,state_is_tuple=True,reuse=tf.AUTO_REUSE)]*i))

to

for i in range(1,3):
    cell_forw.append(tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.BasicLSTMCell(num_hidden,state_is_tuple=True)]))
    cell_back.append(tf.nn.rnn_cell.MultiRNNCell([tf.nn.rnn_cell.BasicLSTMCell(num_hidden,state_is_tuple=True)]))
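The root cause is Python list multiplication: `[cell] * i` repeats one object i times, so every layer shares one cell and one set of weights, while a fresh construction per entry (as in the fix above) yields distinct cells. A minimal sketch of the difference, using a plain stand-in class instead of an LSTM cell:

```python
class FakeCell:
    """Stand-in for an RNN cell; each instance owns its own 'weights'."""
    pass

# [obj] * 3 repeats one instance: all three entries are the same object.
shared = [FakeCell()] * 3
print(shared[0] is shared[1] is shared[2])   # True

# Calling the constructor in a comprehension gives three distinct cells.
distinct = [FakeCell() for _ in range(3)]
print(distinct[0] is distinct[1])            # False
```

The same distinction is why `MultiRNNCell([lstm_cell() for _ in range(3)])` works while `MultiRNNCell([some_cell]*3)` triggers variable-sharing surprises.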
