Generate Text Using a Many-To-One LSTM Network (Training)

The workflow builds, trains, and saves an RNN with an LSTM layer to generate new, fictional fairy tales. The brown nodes define the network structure. The "Pre-Processing" metanode reads the fairy tales, index-encodes them, and splits them into semi-overlapping sequences. The Keras Network Learner node trains the network on these index-encoded sequences. Finally, the trained network is converted into a TensorFlow model and saved to a file.
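
For readers who want to see the same idea outside KNIME, here is a minimal sketch of the pipeline in plain Keras. It is an illustrative assumption, not the workflow's actual configuration: the file name `fairy_tales.txt`, the character-level encoding, and all hyperparameters (window length, stride, layer sizes, epochs) are placeholders.

```python
# Minimal sketch of the workflow's pipeline: index-encode text,
# build semi-overlapping sequences, train a many-to-one LSTM, save it.
import numpy as np
from tensorflow import keras

# Hypothetical input file containing the fairy-tale corpus.
text = open("fairy_tales.txt", encoding="utf-8").read()

# Index-encode: map each character to an integer index.
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
encoded = np.array([char_to_idx[c] for c in text])

# Semi-overlapping sequences: fixed-length windows that advance by a
# stride smaller than the window length, so consecutive windows overlap.
seq_len, stride = 100, 5  # illustrative values
X, y = [], []
for start in range(0, len(encoded) - seq_len, stride):
    X.append(encoded[start:start + seq_len])  # input sequence
    y.append(encoded[start + seq_len])        # next character: the many-to-one target
X = np.array(X)
y = keras.utils.to_categorical(y, num_classes=len(chars))

# Many-to-one LSTM: the LSTM emits only its final state (return_sequences
# defaults to False), and a softmax layer turns that into a distribution
# over the next character.
model = keras.Sequential([
    keras.layers.Embedding(input_dim=len(chars), output_dim=32),
    keras.layers.LSTM(256),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=10)

# Save the trained network; the workflow exports a TensorFlow model,
# whereas this saves the Keras model directly (with older TF versions,
# model.save("fairy_tale_lstm") produces a TensorFlow SavedModel).
model.save("fairy_tale_lstm.keras")
```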

This is a companion discussion topic for the original entry at https://kni.me/w/XN7TlpNwSMYZ4LnA