Yes, I had Maven put all dependencies into a single jar for spark-solr 3.5.1 and 2.2.3.
Then I declare the classpaths for the driver and executor in the Spark context configuration and also add the jar to the context. The jars are loaded from HDFS onto the job server.
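For reference, the kind of setup I mean looks roughly like this (a minimal sketch; the HDFS path and jar name are placeholders, not my actual values):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkSolrContextSetup {
    public static void main(String[] args) {
        // hypothetical HDFS location of the fat jar built by Maven
        String fatJar = "hdfs:///jars/spark-solr-3.5.1-shaded.jar";

        SparkConf conf = new SparkConf()
                .setAppName("spark-solr-example")
                // make the fat jar visible on the driver and executor classpaths
                .set("spark.driver.extraClassPath", fatJar)
                .set("spark.executor.extraClassPath", fatJar);

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ship the jar to the executors as well
        sc.addJar(fatJar);
    }
}
```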
This procedure works for another library in a different Spark job case, and spark-solr itself works locally in a plain Java snippet. However, that snippet processes results line by line rather than "query-to-table". I also wrote a KNIME node using spark-solr, which works in principle, but the regular KNIME output table it produces cannot be connected to a Spark RDD node, because it is incompatible with the Spark nodes.
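For comparison, the "query-to-table" style read I am aiming for would look roughly like the sketch below, based on the spark-solr DataFrame API; the ZooKeeper host, collection name, and query are placeholder assumptions:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SolrQueryToTable {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("solr-query-to-table")
                .getOrCreate();

        // read a Solr collection directly into a Spark DataFrame ("query-to-table")
        Dataset<Row> df = spark.read()
                .format("solr")
                .option("zkhost", "zk1:2181/solr")   // placeholder ZooKeeper host
                .option("collection", "my_collection") // placeholder collection
                .option("query", "*:*")
                .load();

        df.show(10);
    }
}
```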
Best regards,
Sven