Train RNN to generate piano music – KNIME Community Hub

This workflow uses preprocessed MIDI files to train a many-to-many RNN to generate music. The brown nodes in the upper part define the network architecture. The chosen architecture has five inputs:

- the notes
- the duration
- the offset difference to the previous note
- the initial hidden states of the LSTM (hidden state and cell state)

After an LSTM layer, the network splits into three parallel feedforward subnetworks with different activation functions:

- one for the notes
- one for the duration
- one for the offset difference

The outputs of the three subnetworks are then collected. In the Keras Network Learner node, the loss function is defined by selecting a loss for each feedforward subnetwork:

- categorical cross entropy for the notes
- MSE for the duration and the offset difference
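For reference, here is a minimal Keras sketch of that architecture. The layer sizes, the concatenation of the three feature streams before the LSTM, and the ReLU activations on the duration and offset heads are assumptions for illustration; only the overall structure (five inputs, one LSTM, three parallel heads) and the per-head losses come from the workflow description.

```python
# Minimal sketch of the described network (hypothetical names and sizes).
from tensorflow.keras import layers, Model

SEQ_LEN, N_NOTES, UNITS = 100, 128, 256  # assumed sizes, not from the workflow

# Five inputs: note sequence, duration sequence, offset-difference sequence,
# and the two initial LSTM states (hidden state h and cell state c).
notes_in    = layers.Input(shape=(SEQ_LEN, N_NOTES), name="notes")
duration_in = layers.Input(shape=(SEQ_LEN, 1), name="duration")
offset_in   = layers.Input(shape=(SEQ_LEN, 1), name="offset_diff")
state_h_in  = layers.Input(shape=(UNITS,), name="state_h")
state_c_in  = layers.Input(shape=(UNITS,), name="state_c")

# Concatenate the three feature streams and feed them to the LSTM;
# return_sequences=True makes it a many-to-many model.
x = layers.Concatenate()([notes_in, duration_in, offset_in])
x = layers.LSTM(UNITS, return_sequences=True)(
    x, initial_state=[state_h_in, state_c_in])

# Three parallel feedforward heads; softmax for the notes is implied by the
# categorical cross-entropy loss, ReLU on the other two heads is an assumption.
notes_out    = layers.Dense(N_NOTES, activation="softmax", name="notes_out")(x)
duration_out = layers.Dense(1, activation="relu", name="duration_out")(x)
offset_out   = layers.Dense(1, activation="relu", name="offset_out")(x)

model = Model(
    inputs=[notes_in, duration_in, offset_in, state_h_in, state_c_in],
    outputs=[notes_out, duration_out, offset_out])

# One loss per head, mirroring the Keras Network Learner configuration.
model.compile(
    optimizer="adam",
    loss={"notes_out": "categorical_crossentropy",
          "duration_out": "mse",
          "offset_out": "mse"})
```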


This is a companion discussion topic for the original entry at https://hub.knime.com/-/spaces/-/latest/~x5z6pcxPT6g3GhfF/

Hi,
Is the preprocessed MIDI file “training_notes_duration_offset_100.table” referenced in the Table Reader node available for download?

Best regards
Wilfried