I use two different sources of information as input to my neural model. The model takes a word as input and produces a binary (1/0) output. I represent each word by its ELMo word embedding (a 1024-dimensional vector) and its Valence, Arousal, and Dominance (VAD) lexicon scores (a 3-dimensional vector). I concatenate the two and obtain a 1027-dimensional vector for each word.
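To make the setup concrete, here is a minimal sketch of the feature construction I described (the vectors below are random stand-ins; in practice the 1024-dimensional part comes from an ELMo encoder and the 3-dimensional part from the VAD lexicon):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a 1024-dimensional contextual ELMo embedding of one word.
elmo_vec = rng.standard_normal(1024)

# Stand-in for Valence/Arousal/Dominance lexicon scores, each in [0, 1].
vad_vec = np.array([0.7, 0.4, 0.5])

# Concatenate into the single 1027-dimensional input vector fed to the model.
word_vec = np.concatenate([elmo_vec, vad_vec])
assert word_vec.shape == (1027,)
```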
My questions are:
1. Should I normalize or otherwise preprocess these vectors before concatenating them? My lexicon values lie in [0, 1], but my embeddings are contextual ELMo embeddings, so I am not sure about their range.
2. If so, should I preprocess only one of them before merging, and what exactly should I do?
3. Is there a good resource that addresses this question?
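To make the preprocessing question concrete, this is the kind of option I am asking about: per-dimension z-score standardization of the embedding part only, leaving the [0, 1] lexicon values untouched. The data below are random stand-ins, and whether this is the right choice is exactly my question:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in batch: 100 words, each with a 1024-dim embedding and 3 VAD scores.
X_embed = rng.normal(loc=2.0, scale=5.0, size=(100, 1024))  # unknown range
X_vad = rng.uniform(0.0, 1.0, size=(100, 3))                # already in [0, 1]

# Per-dimension z-score standardization, computed on the training set only
# (the same mu/sigma would be reused for validation and test data).
mu = X_embed.mean(axis=0)
sigma = X_embed.std(axis=0)
X_embed_std = (X_embed - mu) / (sigma + 1e-8)

# Merge: standardized embeddings + raw lexicon scores.
X = np.concatenate([X_embed_std, X_vad], axis=1)
assert X.shape == (100, 1027)
```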