
I have the following trained time-series classification TensorFlow model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dense

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(90, 8)))  # ignore zero-padded time steps
model.add(LSTM(100, return_sequences=True))
model.add(LSTM(70, return_sequences=True))
model.add(LSTM(70, return_sequences=False))
model.add(Dense(20, activation='relu'))
model.add(Dense(22, activation='softmax'))  # 22 output classes


  • The input to the model has shape (batch_size, 90, 8)
  • 90 - the time-series length
  • 8 - the number of features
  • The output of the model has 22 classes

I want to get the importance of the features, so I'm using the following code:

import numpy as np

first_layer_weights = model.layers[1].get_weights()[0]  # input kernel of the first LSTM, shape (8, 400)
feature_importance  = np.abs(first_layer_weights).sum(axis=1)  # shape (8,), one value per feature
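
For context, a minimal sanity check of the shapes involved (assuming the model above; the Keras LSTM kernel stacks the input, forget, cell and output gate kernels, so its shape is (input_dim, 4 * units)):

# The first LSTM exposes three arrays: input kernel, recurrent kernel and bias.
kernel, recurrent_kernel, bias = model.layers[1].get_weights()
print(kernel.shape)              # (8, 400): 8 features x (4 gates * 100 units)
print(feature_importance.shape)  # (8,): one aggregated weight magnitude per input feature

# Normalising makes the values easier to compare as relative importances.
relative_importance = feature_importance / feature_importance.sum()
for i, value in enumerate(relative_importance):
    print(f"feature {i}: {value:.3f}")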
  1. Am I right that higher values in feature_importance mean the corresponding features are more important than the others?

  2. I want to check whether the model gives more attention (higher probability) to some classes. I'm using this code to get the weights of the last layer:

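(The original screenshot is not reproduced here; a minimal sketch of the idea, mirroring the feature-importance code above and assuming model.layers[-1] is the final softmax Dense layer:)

# Kernel of the last Dense layer: shape (20, 22), i.e. 20 ReLU units x 22 classes.
last_layer_weights = model.layers[-1].get_weights()[0]
class_importance   = np.abs(last_layer_weights).sum(axis=0)  # shape (22,), one value per class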

  • Does the same reasoning I suggested for the features hold for the last layer: do higher summed absolute weights for a class mean that class is more important, and thus more likely to be given a higher probability by the model?
