
I'm trying to use transfer learning (fine-tuning) with InceptionV3: removing the last layer, turning training off for all the layers, and adding a single dense layer. When I look at the summary again, I do not see my added layer, and I get an exception.

RuntimeError: You tried to call count_params on dense_7, but the layer isn't built. You can build it manually via: dense_7.build(batch_input_shape).
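For reference, this error means the appended layer was never connected to an input, so it has no weights for summary() to count. A minimal sketch of what "built" means (the input size here is chosen arbitrarily):

```python
from keras.layers import Dense

layer = Dense(2, activation='sigmoid')
# Calling layer.count_params() at this point raises the "isn't built" error:
# the layer has no weights until it knows its input shape.
layer.build((None, 8))  # build manually with a hypothetical input shape
print(layer.count_params())  # 8 inputs * 2 units + 2 biases = 18
```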

from keras import applications
pretrained_model = applications.inception_v3.InceptionV3(weights = "imagenet", include_top=False, input_shape = (299, 299, 3))

from keras.layers import Dense
for layer in pretrained_model.layers:
  layer.trainable = False

pretrained_model.layers.pop()

layer = Dense(2, activation='sigmoid')
pretrained_model.layers.append(layer)

Looking at the summary again raises the exception above.

pretrained_model.summary()

I wanted to compile and fit the model, but

pretrained_model.compile(optimizer=RMSprop(lr=0.0001), 
              loss = 'sparse_categorical_crossentropy', metrics = ['acc'])

the line above gives this error:

Could not interpret optimizer identifier:

• You can't use pop() on the layers attribute to modify the architecture. This or this might be helpful. – Commented May 19, 2019 at 14:51

1 Answer


You are using pop() to remove the fully connected (Dense) layer at the end of the network, but that is already accomplished by the argument include_top=False. So you just need to initialize InceptionV3 with include_top=False and add the final Dense layer yourself. In addition, since it's InceptionV3, I suggest adding GlobalAveragePooling2D() after the output of InceptionV3 to reduce overfitting. Here is the code:

from keras import applications
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

pretrained_model = applications.inception_v3.InceptionV3(weights = "imagenet", include_top=False, input_shape = (299, 299, 3))


x = pretrained_model.output
x = GlobalAveragePooling2D()(x)  # highly recommended to reduce overfitting

predictions = Dense(2, activation='sigmoid')(x)

model = Model(inputs=pretrained_model.input, outputs=predictions)

# Freeze the pretrained layers so only the new Dense head is trained
for layer in pretrained_model.layers:
  layer.trainable = False

model.summary()

This should give you the desired model to fine-tune.
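As for the compile error in the question: "Could not interpret optimizer identifier" typically means the optimizer was imported from a different package (e.g. tf.keras.optimizers) than the one the model was built with, so make sure the import matches. A minimal sketch using a small stand-in for the frozen base (the input shape here is arbitrary, for illustration only):

```python
from keras.models import Model
from keras.layers import Input, Dense, GlobalAveragePooling2D
from keras.optimizers import RMSprop  # import from the same package as the model

# Small stand-in for the frozen base model (hypothetical shape)
inputs = Input(shape=(8, 8, 3))
x = GlobalAveragePooling2D()(inputs)
outputs = Dense(2, activation='sigmoid')(x)
model = Model(inputs=inputs, outputs=outputs)

# Newer Keras versions use learning_rate rather than lr
model.compile(optimizer=RMSprop(learning_rate=0.0001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```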
