
I have been writing a simple autoencoder using tflearn:

net = tflearn.input_data(shape=[None, train.shape[1]])
net = tflearn.fully_connected(net, 500, activation='tanh', regularizer=None, name='fc_en_1')

# hidden state
net = tflearn.fully_connected(net, 100, activation='tanh', regularizer='L1', name='fc_en_2', weight_decay=0.0001)

net = tflearn.fully_connected(net, 500, activation='tanh', regularizer=None, name='fc_de_1')
net = tflearn.fully_connected(net, train.shape[1], activation='linear', name='fc_de_2')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.01, loss='mean_square', metric='default')

model = tflearn.DNN(net)
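
I train it with the input as its own target; the call looks roughly like this (the exact n_epoch and batch_size are not important here, they are just example values):

# fit the autoencoder to reconstruct its own input
# (n_epoch and batch_size are just example values)
model.fit(train, train, n_epoch=20, batch_size=128, show_metric=True)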

The model trains well, but after training I want to use the encoder and decoder separately.

How can I do that? Right now I can only reconstruct the input; I want to be able to convert an input to its hidden representation and to reconstruct an input from an arbitrary hidden representation.

2 Answers


You can just keep references to the encoder and decoder input/output tensors; their names are then used in the feed_dict.

Namely (note the added INPUT, HIDDEN_STATE and OUTPUT variables):

net = tflearn.input_data(shape=[None, train.shape[1]])
INPUT = net
net = tflearn.fully_connected(net, 500, activation='tanh', regularizer=None, name='fc_en_1')

# hidden state
net = tflearn.fully_connected(net, 100, activation='tanh', regularizer='L1', name='fc_en_2', weight_decay=0.0001)
HIDDEN_STATE = net

net = tflearn.fully_connected(net, 500, activation='tanh', regularizer=None, name='fc_de_1')
net = tflearn.fully_connected(net, train.shape[1], activation='linear', name='fc_de_2')
OUTPUT = net
net = tflearn.regression(net, optimizer='adam', learning_rate=0.01, loss='mean_square', metric='default')

model = tflearn.DNN(net)

Then use functions like these to encode/decode:

import numpy as np

def encode(X):
    # accept a single vector as well as a batch
    if len(X.shape) < 2:
        X = X.reshape(1, -1)

    # run only the encoder: feed the input and fetch the hidden state
    tflearn.is_training(False, model.session)
    res = model.session.run(HIDDEN_STATE, feed_dict={INPUT.name: X})
    return res

def decode(X):
    # accept a single vector as well as a batch
    if len(X.shape) < 2:
        X = X.reshape(1, -1)

    # dummy values for the input placeholder; the fed HIDDEN_STATE overrides
    # whatever the encoder would have computed from them
    zeros = np.zeros((X.shape[0], train.shape[1]))

    tflearn.is_training(False, model.session)
    res = model.session.run(OUTPUT, feed_dict={INPUT.name: zeros, HIDDEN_STATE.name: X})
    return res
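
For example (shapes are only illustrative, using the same train array as above):

# compress a few vectors into their 100-dimensional hidden codes
codes = encode(train[:10])        # shape (10, 100)

# reconstruct vectors from the hidden codes alone
reconstructed = decode(codes)     # shape (10, train.shape[1])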

Thanks for your answer @discharged-spider. I just encoded/decoded 2,000 vectors of size 1,000 and reduced their dimension with the autoencoder above. However, when I try to map the decoder output back to the actual input, it succeeds for only 1 vector. I'm not sure how I can increase the accuracy here. I use the Euclidean distance to find the closest vector to the decoder output.
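
The lookup itself is roughly this (a sketch; here decoded is a single decoder output and train holds the 2,000 original vectors):

import numpy as np

# Euclidean distance from one decoded vector to every original vector
dists = np.linalg.norm(train - decoded, axis=1)
closest = np.argmin(dists)   # index of the best match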

  • Training the network is not actually the point of this question. In your case, I suppose, this autoencoder is too big; try decreasing the number of layers or adding more regularization. Also try training it longer (more epochs), roughly along the lines of the sketch below. Commented Sep 28, 2016 at 9:02
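
A sketch of that suggestion (the layer sizes, weight_decay and n_epoch below are only illustrative assumptions, not values from the comment):

# smaller, more strongly regularized autoencoder, trained for more epochs
net = tflearn.input_data(shape=[None, train.shape[1]])
net = tflearn.fully_connected(net, 256, activation='tanh', regularizer='L2', weight_decay=0.001, name='fc_en_1')
net = tflearn.fully_connected(net, 64, activation='tanh', regularizer='L1', weight_decay=0.001, name='fc_en_2')
net = tflearn.fully_connected(net, 256, activation='tanh', regularizer='L2', weight_decay=0.001, name='fc_de_1')
net = tflearn.fully_connected(net, train.shape[1], activation='linear', name='fc_de_2')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001, loss='mean_square')

model = tflearn.DNN(net)
model.fit(train, train, n_epoch=50, batch_size=128, show_metric=True)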
