Not sure if you’re still stuck on this. I asked one of our data scientists about it. His response:
I’m fairly sure the change would just be to the output shape of the network. In our examples, that’s the dense layer that follows the LSTM layer, changed from (1) to (3). The training data would need to be reshaped accordingly to provide a target of that shape, which could be done with a lag column node.
If that’s not sufficient, let me know and I can do a bit more digging.
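To make the target-reshaping step concrete, here is a minimal numpy sketch of what the lag-column step amounts to: turning a single series into LSTM input windows plus an N-column target, where N matches the number of units in the final dense layer. All names and sizes here are illustrative, not taken from the actual workflow.

```python
import numpy as np

def make_windows(series, timesteps, n_out):
    """Slide a window over a 1-D series to build multistep training pairs.

    X has shape (samples, timesteps, 1) -> LSTM input
    y has shape (samples, n_out)        -> target for a dense layer
                                           with n_out units
    (This mirrors what n_out lag columns would provide as the target.)
    """
    X, y = [], []
    for i in range(len(series) - timesteps - n_out + 1):
        X.append(series[i : i + timesteps])
        y.append(series[i + timesteps : i + timesteps + n_out])
    X = np.array(X, dtype=float).reshape(-1, timesteps, 1)
    y = np.array(y, dtype=float)
    return X, y

series = np.arange(10)  # toy series 0..9
X, y = make_windows(series, timesteps=4, n_out=3)
print(X.shape)  # (4, 4, 1)
print(y.shape)  # (4, 3)
print(y[0])     # [4. 5. 6.] -> the 3 steps after window [0, 1, 2, 3]
```

With targets shaped this way, changing the dense layer's output from (1) to (3) is the only change needed on the network side.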
Thank you for your feedback! I really enjoyed your YouTube short. When can we expect a new one?
I definitely need your help digging a little deeper. I found some Python websites that confirm what you mentioned about the dense layer: they explain that each step in a multistep problem corresponds to one neuron in the dense layer. I tried to implement this in a workflow, but I only had partial success. I struggled with generating the N future steps/values of interest: my implementation produced N separate columns as output rather than a single column of N future values.
It would be amazing if you could create a template workflow for a multistep problem using an LSTM network. The whole community would greatly appreciate it.