Hi,
I have used a couple of Hive to Spark nodes followed by a Join node, and now I want to store the joined result in HDFS in Parquet format. But when I execute the Spark to Parquet node I get the following error:
ERROR Spark to Parquet 4:74 org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 11.0 failed 4 times; aborting job
ERROR Spark to Parquet 4:74 org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation: Aborting job.
ERROR Spark to Parquet 4:74 org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Job job_201707111010_0000 aborted.
ERROR Spark to Parquet 4:74 Execute failed: Failed to create output path with name 'hdfs://cluster-01.example.com:8020/user/rghadge/temp/spark_test_parquet'. Reason: Job aborted.
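In case it helps, my understanding is that the node is essentially doing a DataFrame write like the sketch below (Spark 2.x API shown; the session setup, input table, and variable names are my assumptions, only the output path is taken from the error message):

    import org.apache.spark.sql.SparkSession

    object ParquetWriteSketch {
      def main(args: Array[String]): Unit = {
        // Stand-in for the Spark context that the KNIME Spark nodes already provide
        val spark = SparkSession.builder().appName("SparkToParquetTest").getOrCreate()

        // Placeholder for the joined result coming out of the Join node
        val joined = spark.read.table("my_hive_table")

        // The write that appears to abort; path copied from the error message
        joined.write.parquet(
          "hdfs://cluster-01.example.com:8020/user/rghadge/temp/spark_test_parquet")

        spark.stop()
      }
    }

Running an equivalent write directly (e.g. from spark-shell) might help show whether the problem is in the workflow itself or in HDFS permissions/quota on that path, but I am not sure how to dig deeper from the KNIME side.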
Please advise.
Thanks,
Rahul G.