Hive to Spark Node not working - [resolved]

Hi

Whenever I run the Hive to Spark node from the example server it works fine, but when I try to run it with new database tables created in Hive, it fails with the following error message:

Execute failed: Failed to execute Spark job: Table not found: <table name>

Please advise.

 

Thanks,

Rahul Ghadge

Hi guys,

I tried a solution: in every node prior to the Hive to Spark node, wherever a table name is mentioned, we need to prefix it with the schema name and a '.', e.g. select a,b,c from <schema_name>.<table_name> limit 20;

This approach resolves the issue because (as read in another post) Hive to Spark is designed to read from the default Hive database alone, hence the "table not found" error. If we mention both the schema name and the table name in the query, the error gets resolved.
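
For illustration, a minimal before/after version of such a query (my_schema and my_table here are placeholder names, not from the original workflow):

-- Fails: Spark resolves the unqualified name against the default database only
select a, b, c from my_table limit 20;

-- Works: the table is fully qualified with its schema/database name
select a, b, c from my_schema.my_table limit 20;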

Thank you.

-Rahul G

Hi,

yes, that is a known technical limitation of the way Spark itself works with Hive; unfortunately, there is no way we can tell Spark to use a certain database. We would have to generate special SQL for Hive to Spark to work around this, which is currently also not possible.

- Björn
