The first workflow sets up the local big data environment and loads the CustomerData into Hive. The second workflow trains a churn prediction model in the big data environment using the Sparkling Water integration. The last workflow can be used as a deployment workflow on a KNIME Server and can be executed via a REST call.
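For reference, triggering the deployment workflow over the KNIME Server REST API might look like the sketch below. This is only an illustration, not part of the workflows themselves: the server URL, repository path, and credentials are placeholders, and the exact endpoint depends on your KNIME Server version and configuration.

```python
import requests

# Placeholder values -- adjust to your own KNIME Server installation
SERVER = "https://your-knime-server.example.com/knime"          # assumed base URL
WORKFLOW_PATH = "/Deployment/Churn_Prediction_Deployment"       # hypothetical repository path
AUTH = ("username", "password")                                 # use your server credentials

# Ask the server to execute the deployment workflow via its REST interface
response = requests.post(
    f"{SERVER}/rest/v4/repository{WORKFLOW_PATH}:execution",
    auth=AUTH,
    params={"reset": "true"},
)
response.raise_for_status()
print(response.json())
```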
This is a companion discussion topic for the original entry at https://kni.me/w/O_CJyabl8HkcRZG_