"Create Spark Context" ERR on Spark1.6

Hi knimers,

How can I use the Spark connector to work with Hadoop 2.4 + Spark 1.6? Are there any manuals to help us with the adaptation? Thanks.

2016-08-04 11:27:36,795 : DEBUG : main : ExecuteAction :  :  : Creating execution job for 1 node(s)...
2016-08-04 11:27:36,795 : DEBUG : main : NodeContainer :  :  : Create Spark Context 2:201 has new state: CONFIGURED_MARKEDFOREXEC
2016-08-04 11:27:36,795 : DEBUG : main : NodeContainer :  :  : Create Spark Context 2:201 has new state: CONFIGURED_QUEUED
2016-08-04 11:27:36,796 : DEBUG : main : NodeContainer :  :  : 01_Spark_MLlib_Decision_Tree 2 has new state: EXECUTING
2016-08-04 11:27:36,796 : DEBUG : KNIME-WFM-Parent-Notifier : NodeContainer :  :  : ROOT  has new state: EXECUTING
2016-08-04 11:27:36,796 : DEBUG : KNIME-Worker-6 : WorkflowManager : Create Spark Context : 2:201 : Create Spark Context 2:201 doBeforePreExecution
2016-08-04 11:27:36,796 : DEBUG : KNIME-Worker-6 : NodeContainer : Create Spark Context : 2:201 : Create Spark Context 2:201 has new state: PREEXECUTE
2016-08-04 11:27:36,796 : DEBUG : KNIME-Worker-6 : WorkflowManager : Create Spark Context : 2:201 : Create Spark Context 2:201 doBeforeExecution
2016-08-04 11:27:36,796 : DEBUG : KNIME-Worker-6 : NodeContainer : Create Spark Context : 2:201 : Create Spark Context 2:201 has new state: EXECUTING
2016-08-04 11:27:36,796 : DEBUG : KNIME-Worker-6 : WorkflowFileStoreHandlerRepository : Create Spark Context : 2:201 : Adding handler 37053755-b3eb-4c18-8b63-d15386fb2ddf (Create Spark Context 2:201: <no directory>) - 1 in total
2016-08-04 11:27:36,796 : DEBUG : KNIME-Worker-6 : LocalNodeExecutionJob : Create Spark Context : 2:201 : Create Spark Context 2:201 Start execute
2016-08-04 11:27:36,797 : INFO  : KNIME-Worker-6 : JobserverSparkContext : Create Spark Context : 2:201 : Spark context jobserver://10.8.0.1:8090/knimeSparkContext changed status from CONFIGURED to CONFIGURED
2016-08-04 11:27:36,798 : DEBUG : KNIME-Worker-6 : JobserverSparkContext : Create Spark Context : 2:201 : Checking if remote context exists. Name: knimeSparkContext
2016-08-04 11:27:37,259 : DEBUG : KNIME-Worker-6 : JobserverSparkContext : Create Spark Context : 2:201 : Remote context does not exist. Name: knimeSparkContext
2016-08-04 11:27:37,259 : DEBUG : KNIME-Worker-6 : JobserverSparkContext : Create Spark Context : 2:201 : Creating new remote Spark context. Name: knimeSparkContext
2016-08-04 11:27:37,930 : INFO  : KNIME-Worker-6 : JobserverSparkContext : Create Spark Context : 2:201 : Spark context jobserver://10.8.0.1:8090/knimeSparkContext changed status from CONFIGURED to CONFIGURED
2016-08-04 11:27:37,930 : DEBUG : KNIME-Worker-6 : Create Spark Context : Create Spark Context : 2:201 : reset
2016-08-04 11:27:37,930 : DEBUG : KNIME-Worker-6 : SparkNodeModel : Create Spark Context : 2:201 : In reset() of SparkNodeModel. Calling deleteRDDs.
2016-08-04 11:27:37,930 : ERROR : KNIME-Worker-6 : Create Spark Context : Create Spark Context : 2:201 : Execute failed: Failed to initialize Spark context (for details see View > Open KNIME log)
2016-08-04 11:27:37,932 : DEBUG : KNIME-Worker-6 : Create Spark Context : Create Spark Context : 2:201 : Execute failed: Failed to initialize Spark context (for details see View > Open KNIME log)
com.knime.bigdata.spark.core.exception.KNIMESparkException: Failed to initialize Spark context (for details see View > Open KNIME log)
    at com.knime.bigdata.spark.core.context.jobserver.request.CreateContextRequest.handleRequestSpecificFailures(CreateContextRequest.java:118)
    at com.knime.bigdata.spark.core.context.jobserver.request.CreateContextRequest.sendInternal(CreateContextRequest.java:77)
    at com.knime.bigdata.spark.core.context.jobserver.request.CreateContextRequest.sendInternal(CreateContextRequest.java:1)
    at com.knime.bigdata.spark.core.context.jobserver.request.AbstractJobserverRequest.send(AbstractJobserverRequest.java:73)
    at com.knime.bigdata.spark.core.context.jobserver.JobserverSparkContext.createRemoteSparkContext(JobserverSparkContext.java:455)
    at com.knime.bigdata.spark.core.context.jobserver.JobserverSparkContext.access$4(JobserverSparkContext.java:449)
    at com.knime.bigdata.spark.core.context.jobserver.JobserverSparkContext$1.run(JobserverSparkContext.java:238)
    at com.knime.bigdata.spark.core.context.jobserver.JobserverSparkContext.runWithResetOnFailure(JobserverSparkContext.java:331)
    at com.knime.bigdata.spark.core.context.jobserver.JobserverSparkContext.open(JobserverSparkContext.java:226)
    at com.knime.bigdata.spark.core.context.SparkContext.ensureOpened(SparkContext.java:57)
    at com.knime.bigdata.spark.node.util.context.create.SparkContextCreatorNodeModel.executeInternal(SparkContextCreatorNodeModel.java:155)
    at com.knime.bigdata.spark.core.node.SparkNodeModel.execute(SparkNodeModel.java:226)
    at org.knime.core.node.NodeModel.executeModel(NodeModel.java:566)
    at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1146)
    at org.knime.core.node.Node.execute(Node.java:933)
    at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:556)
    at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
    at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:179)
    at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:110)
    at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:328)
    at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:204)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
    at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
Caused by: java.lang.Throwable: org.apache.hadoop.security.AccessControlException: Permission denied: user=spark-job-server, access=WRITE, inode="/user":hdfs:hdfs:drwxrwxr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6497)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6449)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4251)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4221)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4194)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at <parseError>.<parseError>(<parseError>:0)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at <parseError>.<parseError>(<parseError>:0)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at <parseError>.<parseError>(<parseError>:0)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1817)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:597)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:356)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:722)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:142)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
    at spark.jobserver.context.DefaultSparkContextFactory$$anon$1.<init>(SparkContextFactory.scala:53)
    at spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:53)
    at spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:48)
    at spark.jobserver.context.SparkContextFactory$class.makeContext(SparkContextFactory.scala:37)
    at spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:48)
    at spark.jobserver.JobManagerActor.createContextFromConfig(JobManagerActor.scala:378)
    at spark.jobserver.JobManagerActor$$anonfun$wrappedReceive$1.applyOrElse(JobManagerActor.scala:122)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at ooyala.common.akka.ActorStack$$anonfun$receive$1.applyOrElse(ActorStack.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at ooyala.common.akka.Slf4jLogging$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Slf4jLogging.scala:26)
    at ooyala.common.akka.Slf4jLogging$class.ooyala$common$akka$Slf4jLogging$$withAkkaSourceLogging(Slf4jLogging.scala:35)
    at ooyala.common.akka.Slf4jLogging$$anonfun$receive$1.applyOrElse(Slf4jLogging.scala:25)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at ooyala.common.akka.ActorMetrics$$anonfun$receive$1.applyOrElse(ActorMetrics.scala:24)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
    at ooyala.common.akka.InstrumentedActor.aroundReceive(InstrumentedActor.scala:8)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2016-08-04 11:27:37,932 : DEBUG : KNIME-Worker-6 : WorkflowManager : Create Spark Context : 2:201 : Create Spark Context 2:201 doBeforePostExecution
2016-08-04 11:27:37,932 : DEBUG : KNIME-Worker-6 : NodeContainer : Create Spark Context : 2:201 : Create Spark Context 2:201 has new state: POSTEXECUTE
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : WorkflowManager : Create Spark Context : 2:201 : Create Spark Context 2:201 doAfterExecute - failure
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : Create Spark Context : Create Spark Context : 2:201 : reset
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : SparkNodeModel : Create Spark Context : 2:201 : In reset() of SparkNodeModel. Calling deleteRDDs.
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : Create Spark Context : Create Spark Context : 2:201 : clean output ports.
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : WorkflowFileStoreHandlerRepository : Create Spark Context : 2:201 : Removing handler 37053755-b3eb-4c18-8b63-d15386fb2ddf (Create Spark Context 2:201: <no directory>) - 0 remaining
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : NodeContainer : Create Spark Context : 2:201 : Create Spark Context 2:201 has new state: IDLE
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : SparkContextCreatorNodeModel : Create Spark Context : 2:201 : Reconfiguring old context with same ID.
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : Create Spark Context : Create Spark Context : 2:201 : Configure succeeded. (Create Spark Context)
2016-08-04 11:27:37,933 : DEBUG : KNIME-Worker-6 : NodeContainer : Create Spark Context : 2:201 : Create Spark Context 2:201 has new state: CONFIGURED
2016-08-04 11:27:37,934 : DEBUG : KNIME-Worker-6 : Table to Spark : Table to Spark : 2:171 : Configure succeeded. (Table to Spark)
2016-08-04 11:27:37,934 : DEBUG : KNIME-Worker-6 : Spark Category To Number : Spark Category To Number : 2:196 : Configure succeeded. (Spark Category To Number)
2016-08-04 11:27:37,934 : DEBUG : KNIME-Worker-6 : Spark Decision Tree Learner : Spark Decision Tree Learner : 2:197 : Configure succeeded. (Spark Decision Tree Learner)
2016-08-04 11:27:37,935 : DEBUG : KNIME-Worker-6 : Spark Predictor : Spark Predictor : 2:198 : Configure succeeded. (Spark Predictor)
2016-08-04 11:27:37,935 : DEBUG : KNIME-Worker-6 : Spark Number To Category (Apply) : Spark Number To Category (Apply) : 2:199 : Configure succeeded. (Spark Number To Category (Apply))
2016-08-04 11:27:37,935 : DEBUG : KNIME-Worker-6 : Spark Scorer : Spark Scorer : 2:200 : Configure succeeded. (Spark Scorer)
2016-08-04 11:27:37,935 : DEBUG : KNIME-Worker-6 : NodeContainer : Create Spark Context : 2:201 : 01_Spark_MLlib_Decision_Tree 2 has new state: CONFIGURED
2016-08-04 11:27:37,935 : DEBUG : KNIME-WFM-Parent-Notifier : NodeContainer :  :  : ROOT  has new state: IDLE

Hello,

the problem seems to be a permissions issue in your secured Hadoop cluster, according to the error message:

org.apache.hadoop.security.AccessControlException: Permission denied: user=spark-job-server, access=WRITE, inode="/user":hdfs:hdfs:drwxrwxr-x

Please have a look at the Kerberos section of the KNIME Spark Executor installation guide for details. The guide is also available in the installation section of the product page.
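
For anyone who runs into the same error: the Spark Jobserver submits the context to YARN, and the YARN client then tries to create its staging directory under /user/spark-job-server, which fails because /user is owned by hdfs:hdfs and is not writable for the spark-job-server user. A common fix is to create an HDFS home directory for the Jobserver user as the HDFS superuser. The following is only a minimal sketch using the Hadoop FileSystem API; the directory name, group and permission bits are assumptions and have to match your cluster setup (on most clusters you would simply run the two hdfs dfs commands shown in the comments instead).

    // Minimal sketch (assumption: run as the HDFS superuser, with core-site.xml
    // and hdfs-site.xml on the classpath). Shell equivalent:
    //   hdfs dfs -mkdir -p /user/spark-job-server
    //   hdfs dfs -chown spark-job-server:spark-job-server /user/spark-job-server
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class CreateJobserverHomeDir {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();           // reads the cluster config files
            try (FileSystem fs = FileSystem.get(conf)) {
                Path home = new Path("/user/spark-job-server"); // user the Jobserver runs as
                if (!fs.exists(home)) {
                    fs.mkdirs(home, new FsPermission((short) 0755));
                }
                // The owner must be the Jobserver user, otherwise the YARN client
                // still cannot write its .sparkStaging directory below this path.
                fs.setOwner(home, "spark-job-server", "spark-job-server"); // group name is an assumption
            }
        }
    }

Once the directory exists and is owned by the Jobserver user, resetting and re-executing the Create Spark Context node should get past the AccessControlException.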

Bye

Tobias

Thanks.