I am learning Spark and am currently trying to implement a basic program using the Spark nodes in KNIME. I have a test license for these nodes.
I created a sample table and read it with the "File Reader" node. That table is then passed as input to the "Table to Spark" node.
The node configures fine, but on execution it fails with the following error:
"Execute failed: com.sun.jersey.spi.HeaderDelegateProvider: The class com.sun.jersey.core.impl.provider.header.LocaleProvider implementing provider interface com.sun.jersey.spi.HeaderDelegateProvider could not be instantiated: Cannot cast com.sun.jersey.core.impl.provider.header.LocaleProvider to com.sun.jersey.spi.HeaderDelegateProvider"
Please help me resolve this problem. I have also attached the sample CSV file I used as the File Reader input.
The CSV file looks good, so the problem is probably related to your setup. Could you share some more information? For example: what is your Hadoop distribution (Cloudera, Hortonworks, self-installed), which Spark version are you using, what is the operating system, have you installed the Spark Job Server, and where is the error thrown (KNIME, Spark, Spark Job Server, ...)?
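For what it's worth, a "Cannot cast X to X"-style error like the one above usually means two copies of the same classes (here, the jersey classes) ended up on the classpath and were loaded by different classloaders; classes with the same name but different loaders are not cast-compatible on the JVM. A minimal standalone sketch of that effect (hypothetical demo code, not KNIME or jersey code):

```java
import java.io.InputStream;

// Demonstrates why "Cannot cast com.sun.jersey...LocaleProvider to
// com.sun.jersey...HeaderDelegateProvider" can happen: the same class,
// defined twice by two different classloaders, yields two distinct
// Class objects, and a cast between them fails.
public class DuplicateClassDemo {
    public static void main(String[] args) throws Exception {
        // Read our own bytecode so a second loader can define it again,
        // simulating a duplicate jar on the classpath.
        InputStream in = DuplicateClassDemo.class
                .getResourceAsStream("DuplicateClassDemo.class");
        final byte[] bytes = in.readAllBytes();

        // A loader with no parent delegation: it defines its own copy
        // instead of reusing the already-loaded class.
        ClassLoader dup = new ClassLoader(null) {
            @Override
            protected Class<?> findClass(String name) {
                return defineClass(name, bytes, 0, bytes.length);
            }
        };

        Class<?> a = DuplicateClassDemo.class;
        Class<?> b = dup.loadClass("DuplicateClassDemo");

        // Same name, but NOT the same class.
        System.out.println("same class name: " + a.getName().equals(b.getName()));
        System.out.println("same Class object: " + (a == b));
    }
}
```

If that is the cause here, the usual fix is to find and remove the duplicate jersey jar from the classpath rather than to change anything in the workflow itself.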
I have installed all KNIME extensions related to big data.
I am running CDH 5.5 with Spark 1.5.0 (installed on another machine). Do I need to install the Spark Job Server on this distribution, and if so, does the available Spark Job Server support Spark 1.5.0 on CDH?
Please help me clarify this.
Currently we only support Spark 1.2 and 1.3. We will support Spark 1.5 and 1.6 with the next version of the KNIME Spark Executor.
Yes, the current version works well with Spark 1.3. Thanks for your reply.