Similar to this question about MLPClassifier, I suspect the answer is 'no', but I will ask anyway.
Is it possible to change the activation function of the output layer in an MLPRegressor neural network in scikit-learn?
I would like to use it for function approximation, i.e.
y = f(x)
where x is a vector of no more than 10 variables and y is a single continuous variable.
So I would like to change the output activation to linear or tanh; right now it looks like a sigmoid.
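For what it's worth, here is a small sketch of how I am probing this (the data and network sizes are just placeholders): if the output layer really were a sigmoid, the network could never predict a target below 0, so I fit on a function that goes negative and check the predictions.

```python
# Probe MLPRegressor's output activation by fitting a target that takes
# negative values: a sigmoid output is bounded to (0, 1), while a linear
# (identity) output can reach any real value.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 3))   # x: a vector of a few variables
y = X[:, 0] - 2.0 * X[:, 1]             # y: a single continuous variable

model = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict(X).min())  # goes well below 0 if the output is linear
```

(The hidden-layer size and `tanh` hidden activation here are arbitrary choices for the sketch, not something the docs prescribe.)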
If not, I fail to see how you can use scikit-learn for anything other than classification which would be a shame.
Yes, I realise I could use TensorFlow or PyTorch, but my application is so basic I think scikit-learn would be a perfect fit (pardon the pun there).
Is it possible to build a more customized network with MultiLayerPerceptron, or perhaps from individual layers (sknn.mlp)?
UPDATE:
In the documentation for MultiLayerPerceptron it does say:
For output layers, you can use the following layer types: Linear or Softmax.
But then further down it says:
When using the multi-layer perceptron, you should initialize a Regressor or a Classifier directly.
And there is no example of how to instantiate a MultiLayerPerceptron object.