Active Learning - Uncertainty Sampling

This workflow shows examples of Active Learning with Uncertainty Sampling. Uncertainty sampling uses the model that is re-trained at each iteration. This sampling technique selects those predictions, made on still unlabeled rows, that are most uncertain. The intuition behind this technique is that the model improves faster if the labels provided by the user correspond to uncertain predictions. Uncertain predictions are related to column values that define the feature space around the class boundary.

Legacy Example - before KNIME Analytics Platform 4.1 (Dec. 2019): This workflow shows an example of Active Learning. In this example we use scoring based on a previous prediction and use an "Auto Active Learn Loop End" to choose the class for the best-scoring sample. One can easily replace the "Auto Active Learn Loop End" with a default "Active Learn Loop End" to label the data manually. The "Auto Active Learn Loop End" should only be used for demonstration or benchmark purposes.

Current Example - since KNIME AP 4.1 (Dec. 2019): With KNIME Analytics Platform 4.1 the Active Learning extension was updated to support interactive JavaScript views for KNIME WebPortal. In this example you can interactively label the instances using KNIME WebPortal. More info on KNIME WebPortal can be found in the link below. The new Active Learning Loop is quite similar to the Recursive Loop, but enhanced with ports and instructions for active learning.
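The KNIME workflow itself implements the loop with dedicated nodes, but the underlying idea can be sketched in a few lines of Python. The following is a minimal illustration (not the workflow's implementation) of pool-based active learning with least-confidence uncertainty sampling; the dataset, model, seed-set size, and iteration count are all arbitrary choices for the example:

```python
# Minimal sketch of active learning with least-confidence uncertainty
# sampling. All specifics (data, model, sizes) are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = list(range(10))                      # small initial labeled seed set
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):                             # 5 active-learning iterations
    # Re-train the model on the currently labeled rows.
    model.fit(X[labeled], y[labeled])
    # Score the still-unlabeled rows: the lower the top class
    # probability, the closer the row is to the class boundary.
    proba = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)
    pick = unlabeled[int(np.argmax(uncertainty))]
    # In practice a human would now label row `pick` (e.g. via the
    # WebPortal view); here we just take the known label.
    labeled.append(pick)
    unlabeled.remove(pick)

print(len(labeled))                            # 10 seed rows + 5 queried rows
```

Each pass mirrors one iteration of the KNIME loop: re-train, score the unlabeled pool, and hand the most uncertain row to the labeler.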

This is a companion discussion topic for the original entry at