Hello,
I am using a Keras LSTM to predict future target values (a regression problem, not classification). My current dataset has 270 rows, from t_0 to t_269; each row contains the current target value (the value I want to predict) and 6 other features at that time step.
My goal is to predict how the target value will evolve over the next time step.
First of all, I normalized the data using the Normalizer node (min-max normalization between 0 and 1).
I created lags for the 7 columns (the target and the other 6 features), 14 lags for each with a lag interval of 1. I then used the Column Aggregator node to create a list containing the 98 values (14 lags x 7 columns), partitioned the data into training and test sets, and fed it to my Keras network.
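To make the preprocessing concrete, here is a small NumPy sketch of what I am doing in KNIME (random stand-in data, not my real dataset; the normalization and windowing mirror the Normalizer and lag/aggregation steps described above):

```python
import numpy as np

# Toy stand-in for my real data: 270 time steps, 7 columns
# (column 0 is the target, columns 1-6 are the other features).
rng = np.random.default_rng(0)
data = rng.random((270, 7))

# Min-max normalization to [0, 1], per column.
mins, maxs = data.min(axis=0), data.max(axis=0)
data = (data - mins) / (maxs - mins)

n_lags = 14  # 14 lags per column, lag interval 1

# For each time step t, the input is the previous 14 rows of all
# 7 columns flattened into one 98-long vector (14 x 7 = 98),
# and the label is the target (column 0) at t.
X, y = [], []
for t in range(n_lags, len(data)):
    X.append(data[t - n_lags:t].flatten())
    y.append(data[t, 0])
X = np.array(X).reshape(-1, 1, 98)  # matches the 1,98 input shape
y = np.array(y)
print(X.shape, y.shape)  # (256, 1, 98) (256,)
```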
So far my network consists of:
- Keras Input Layer with input shape 1,98
- Keras LSTM Layer, Hard Sigmoid Activation, 20 units
- Keras Dense Layer, Linear activation function
Then, in the Keras Network Learner, I am using 50 epochs and a batch size of 32 with the Adam optimizer. I am not shuffling the data before each epoch because I would like the LSTM to find dependencies between the sequences.
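In plain Keras (Python) the setup above would look roughly like this; this is only my understanding of what the KNIME nodes build, with dummy data standing in for my real lagged inputs:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(1, 98)),                  # Keras Input Layer, shape 1,98
    layers.LSTM(20, activation="hard_sigmoid"),  # LSTM, 20 units, hard sigmoid
    layers.Dense(1, activation="linear"),        # Dense output, linear activation
])
model.compile(optimizer="adam", loss="mse")

# Dummy data just to check the shapes; my real X has shape (n_samples, 1, 98).
X = np.random.random((256, 1, 98)).astype("float32")
y = np.random.random((256,)).astype("float32")

# 50 epochs, batch size 32, shuffle=False so rows keep their temporal order.
model.fit(X, y, epochs=50, batch_size=32, shuffle=False, verbose=0)
print(model.predict(X[:1], verbose=0).shape)  # (1, 1)
```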
I am still trying to tune the network, perhaps with different optimizers and activation functions, and I am considering different numbers of units for the LSTM layer.
Right now I am using only one of many available datasets for the same experiment, conducted in different locations. Basically, I have other datasets with 270 rows and 7 columns each (a target column and 6 features). What I would like to do is use the other datasets, let's say 5 more, to help train my network.
I still cannot figure out how to implement this, and how it would affect the input shape of the Keras Input Layer. Do I just append all the datasets into one big dataset and work on that? But wouldn't that make the LSTM lose the sequential structure? Or is it enough to set the batch size of the Keras Network Learner to the number of rows provided by each dataset?
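To make the shapes concrete, here is how I currently picture the "append the datasets" option (windowing each dataset separately before concatenating, so that no sample mixes rows from two locations). This is just an assumption I would like feedback on, not something I have verified in KNIME:

```python
import numpy as np

n_lags, n_cols = 14, 7

def make_windows(series):
    """Turn one (270, 7) dataset into (n, 1, 98) inputs and (n,) targets."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t].flatten())
        y.append(series[t, 0])
    return np.array(X).reshape(-1, 1, n_lags * n_cols), np.array(y)

# Six hypothetical datasets (mine plus the other five locations).
datasets = [np.random.random((270, n_cols)) for _ in range(6)]

# Windowing each dataset on its own means no window ever spans the
# boundary between two locations; only then are the samples stacked.
parts = [make_windows(d) for d in datasets]
X = np.concatenate([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])
print(X.shape, y.shape)  # (1536, 1, 98) (1536,)
```

The input shape of the network (1, 98) would stay the same this way; only the number of training samples grows.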
PS: Ultimately I will be forecasting the next 10 values for each dataset, because I know the next 10 values of each of the 6 features.
I hope I explained my problem clearly enough.