I used a coworker’s workflow to study. The first two nodes are: Hive Connector, DB Query Reader. My coworker’s workflow runs successfully. I then changed the Hive Connector’s credentials and sslTrustStore to mine, and it ran successfully too, but the DB Query Reader failed with:
ERROR DB Query Reader 0:6804 Execute failed: Error while compiling statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
@Annie510 the questions are not silly at all. In general, big data systems can be quite complicated, and to be honest the whole ecosystem, as well as Cloudera, is still, let’s say, evolving.
You might also want to check the additional settings (under JDBC) in the node and see if you find settings like a queue assignment that might be helpful.
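To illustrate what such a setting can look like: Tez jobs are typically routed to a YARN queue via the `tez.queue.name` Hive property, which can be passed as a JDBC parameter. This is only a sketch — the property name is standard, but the queue name below is a placeholder you would get from your admin:

```
# Hypothetical JDBC parameter (KNIME: Hive Connector → JDBC Parameters tab)
# Routes the Tez job to a specific YARN queue; "my_team_queue" is an assumed name.
tez.queue.name = my_team_queue
```

If the cluster restricts which queues your user may submit to, a missing or wrong queue setting can surface as exactly this kind of TezTask failure.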
We work with Hive and Impala nodes encapsulated in a metanode to make sure everyone uses the correct settings, and if the settings change, they are automatically updated for all users. You would then have to handle authentication via Kerberos.
Maybe you can ask your coworker and your admin to set something up.
Other than that, you might post screenshots of your settings if they do not contain any sensitive information. @MichaelRespondek might also be able to weigh in. You could also provide a full log file to get more clues.
Most settings are made on the side of the big data cluster, so you have to make sure that KNIME and the cluster work together. Sometimes in large enterprise environments, special rights have to be set per user — for example, to be allowed to impersonate a user via KNIME Server automation, or to access the HDFS file system.
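For reference, impersonation on the Hadoop side is usually granted through proxy-user entries in `core-site.xml`. The snippet below is only a sketch of what an admin might configure — the service user, host, and group names are assumptions, not values from this thread:

```
<!-- Hypothetical core-site.xml fragment: allows the service user "knimeserver"
     to impersonate members of the "analysts" group from one host.
     All values here are placeholders your admin would replace. -->
<property>
  <name>hadoop.proxyuser.knimeserver.hosts</name>
  <value>knime-server-host.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.knimeserver.groups</name>
  <value>analysts</value>
</property>
```

If impersonation is not set up for the user running the workflow, queries that worked for your coworker can fail for you even with otherwise identical settings.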