KNIME and Spark Job Server standalone in Docker

Is it possible to use KNIME with Spark standalone, without Hadoop?
If so, how would I set it up? An example would be appreciated.

Yes, in general it should work, but we do not support such a setup. You would need to install the Spark Job Server and configure it to work with Spark standalone. Another drawback is that most of the KNIME nodes that create Spark DataFrames rely on HDFS as input.
If you just want to try the nodes without setting up a cluster, you can also have a look at the nightly build, which contains a new feature called “KNIME Extension for Local Big Data Environments”. This feature comes with a new node, “Create Local Big Data Environment”, which creates a Spark context, provides HDFS-style access to your local file system, and mimics a Hive database. Please note that this node is still in development and might change.
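
To illustrate the Spark Job Server part: the Job Server is configured through a HOCON file, and pointing it at a standalone cluster is mostly a matter of setting the Spark master URL. The snippet below is a minimal sketch, not a supported configuration; the `spark-master` hostname and port values are assumptions you would replace with your own setup, and exact keys can differ between Spark Job Server versions.

```
# local.conf (sketch) for Spark Job Server with a Spark standalone master
spark {
  # Point the Job Server at the standalone master instead of YARN.
  # "spark-master" is a placeholder hostname; 7077 is Spark's default master port.
  master = "spark://spark-master:7077"

  jobserver {
    # Default REST port the Job Server listens on
    port = 8090
  }
}
```

With such a file in place you would start the Job Server with this configuration and enter its URL (e.g. `http://<jobserver-host>:8090`) in the KNIME Spark context settings.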