I have a dataset consisting of multivariate time series: a batch of my data has the shape (batch_size, timesteps, number_input_features), and I want to train a neural network on it to predict another set of multivariate time series with shape (batch_size, timesteps, number_output_features).

The timesteps are equidistant, but the overall length varies from one sample to another. It cannot vary within a sample, meaning all features of a given sample were measured for the same amount of time.

The values of the features lie in very different ranges, so I normalize with respect to the function values (the y-axis) of the time series. I trained a small model to test whether this normalization is useful, and it is. I also truncated every time series after a certain number of time steps so that all of them have the same length, because for now I want to avoid using a recurrent neural network. With this truncation I certainly lose some information, and furthermore I can only predict time series of this fixed length.
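To make this concrete, here is a minimal NumPy sketch of the preprocessing I mean (the function names and the per-sample, per-feature min-max scaling are just one possible choice for illustration, not necessarily the best one):

```python
import numpy as np

def normalize_y(batch):
    """Scale each feature of each sample to [0, 1] along the time axis.
    batch: array of shape (batch_size, timesteps, n_features)."""
    lo = batch.min(axis=1, keepdims=True)  # per-sample, per-feature minimum
    hi = batch.max(axis=1, keepdims=True)  # per-sample, per-feature maximum
    return (batch - lo) / (hi - lo + 1e-8), lo, hi  # keep lo/hi to invert later

def truncate(series_list, max_steps):
    """Cut every series after max_steps so all samples have equal length.
    series_list: list of arrays of shape (timesteps_i, n_features);
    samples shorter than max_steps are assumed to be dropped beforehand."""
    return np.stack([s[:max_steps] for s in series_list])
```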

So I wondered: is it legitimate to additionally scale all time series with respect to the x-axis, so that they lie in the [0,1] interval along both the y- and the x-axis? Then I could still use a non-recurrent network, and after prediction I could scale them back.
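By scaling with respect to the x-axis I mean something like the following sketch: resample every series onto a fixed number of points on a normalized [0,1] time grid via linear interpolation (the grid size n_points is an arbitrary choice of mine):

```python
import numpy as np

def rescale_x(series, n_points=128):
    """Resample one series of shape (timesteps, n_features) onto a common
    [0, 1] time grid with n_points samples, using linear interpolation."""
    t_old = np.linspace(0.0, 1.0, num=series.shape[0])  # original time, mapped to [0, 1]
    t_new = np.linspace(0.0, 1.0, num=n_points)         # fixed target grid
    # interpolate each feature column independently
    return np.stack([np.interp(t_new, t_old, series[:, j])
                     for j in range(series.shape[1])], axis=1)
```

To invert a prediction I would keep the original length (i.e. duration) of each sample and interpolate back onto its original grid the same way.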

Eventually I would try to transform the [0,1] time series into images and apply a convolutional neural network. I do not know whether this has any advantages, but I think it would be exciting to try. Nevertheless, it would already be very useful to know whether scaling with respect to the x-axis is legitimate, or how else I could avoid recurrent networks even though my time series have varying length.
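The simplest image-like representation I have in mind is to treat each resampled (n_points, n_features) array as a one-channel image. A hypothetical Keras sketch of that idea (the framework choice, layer sizes, and kernel shapes are all arbitrary assumptions on my part):

```python
import tensorflow as tf

n_points, n_in, n_out = 128, 6, 4  # arbitrary example sizes, not my real data

# each resampled sample becomes a one-channel "image" of shape (n_points, n_in, 1)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_points, n_in, 1)),
    tf.keras.layers.Conv2D(16, kernel_size=(5, 3), padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, kernel_size=(5, 3), padding="same"),  # collapse channels
    tf.keras.layers.Reshape((n_points, n_in)),
    tf.keras.layers.Dense(n_out),  # map input features to output features per time step
])
model.compile(optimizer="adam", loss="mse")
```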
