I can’t read the table using the Database Reader (legacy) node and it says “Node created an empty data table”, even though there is data in the table. I’m using impala databases.
can anyone explain to me?
Hi @alvoholic, there might be data in the table, but does it satisfy your WHERE conditions?
As a test, can you run the query without the WHERE clause, and perhaps add a LIMIT so that it does not retrieve the whole table?
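As a rough sketch of what I mean (table and schema names here are placeholders; substitute your own):

```sql
-- Sanity check: no WHERE clause, just confirm rows come back at all.
-- my_schema.my_table is a placeholder for your actual table.
SELECT *
FROM my_schema.my_table
LIMIT 10;
```

If this returns rows but your original query does not, the problem is in the WHERE condition rather than in the connection or the table itself.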
Yes, I tried that, and it still didn’t work with the WHERE, LIMIT and CAST clauses.
What’s strange is that I only duplicated this workflow from an existing one that worked, but this time for some reason it doesn’t.
Any ideas on how to make it run, or do I have to add another node?
Hi @alvoholic, I’m not sure what you mean by “make it run”. It did run. You just don’t have any data that satisfies the condition, and you can’t force the data to qualify. It all depends on what you are trying to achieve and what your condition should be.
It also doesn’t help that you aren’t sharing any data, or even the query as text that I can copy and base a suggested query on. I’m not about to read from the image and manually type what I see.
One tip you can try: still remove the WHERE clause, but add those two CAST expressions to the SELECT list to see what values they produce, and check whether they match what you expected (if I had been able to copy your original query, I could have shown you exactly what I mean).
The system will never lie to you; it returns exactly what you ask for. Most probably you are expecting different values from what those two CAST expressions are actually producing.
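Since the original query isn’t available as text, here is a hedged sketch of the idea, with placeholder table and column names:

```sql
-- Move the CAST expressions into the SELECT list (no WHERE clause)
-- so you can inspect what they actually produce.
-- my_date_col and my_schema.my_table are placeholders.
SELECT
    my_date_col,
    CAST(my_date_col AS TIMESTAMP) AS col_as_timestamp,
    CAST(my_date_col AS STRING)    AS col_as_string
FROM my_schema.my_table
LIMIT 10;
```

If `col_as_timestamp` comes back NULL, the cast is silently failing (e.g. the stored string doesn’t match the format the cast expects), which would explain why the WHERE condition matches nothing.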
@alvoholic these things come to mind. First Impala has its own set of rules when it comes to date and time variables (Impala Date and Time Functions). You might want to check your syntax accordingly.
Impala can work directly with date values that are stored as strings. `now()` produces a TIMESTAMP (which is separate from a Unix timestamp):
SELECT
now() AS timestamp_now
, from_timestamp(now(), 'yyyy-MM-dd') AS today_as_date
, from_timestamp(DATE_SUB(now(), INTERVAL 3 days), 'yyyy-MM-dd') AS three_days_ago
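Building on the snippet above, a filter on a string-stored date column might look like this (column and table names are placeholders, and this assumes your dates are stored as `'yyyy-MM-dd'` strings):

```sql
-- Assuming my_date_col holds dates as 'yyyy-MM-dd' strings,
-- a plain string comparison against a formatted timestamp works,
-- because 'yyyy-MM-dd' sorts lexicographically in date order.
SELECT *
FROM my_schema.my_table
WHERE my_date_col >= from_timestamp(date_sub(now(), INTERVAL 3 DAYS), 'yyyy-MM-dd')
LIMIT 10;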
To get an idea about the workings of big data you can check out this workflow. Please note: there are differences in date and time syntax between Impala and Hive.
Then you might want to make sure the statistics on your tables are up to date (Table and Column Statistics). If they are not the big data system might not be able to recognize the data/partitions even if they are physically there.
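To refresh the metadata and statistics, something along these lines should work (the table name is again a placeholder):

```sql
-- Make Impala pick up files/partitions added outside of Impala,
-- then recompute table and column statistics.
REFRESH my_schema.my_table;
COMPUTE STATS my_schema.my_table;

-- Verify that row counts and partitions are now recognized.
SHOW TABLE STATS my_schema.my_table;
```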
KNIME is great at handling big data systems, but you have to know the details of the inner workings of big data.
Finally, I would recommend switching to the newer DB nodes (KNIME Big Data Extensions User Guide).
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.