Google BigQuery DB Writer Error - Unparseable query parameter `` in type 'TYPE_FLOAT64', Bad double value: null value: 'null'

Hi all, hope you all are doing well and staying safe and healthy.

I’m trying to upload data into Google BigQuery using the DB Writer node and I’m getting this error.

Execute failed: Error while adding row #(Row0), reason: [Simba]BigQueryJDBCDriver Error executing query job. Message: Unparseable query parameter `` in type 'TYPE_FLOAT64', Bad double value: null value: 'null'

I’m now testing with a small sample set that I QA’d manually and visually before uploading, but I’m still getting errors. The connection is fine because I can query the database/tables; I just cannot upload.

I then used the DB Reader node to read the same table and the DB Writer node to write that same data straight back, and I still get this error.


Has anyone had the same issue? I’ve been googling and searching knime.com but no luck. Any help or kind guidance is greatly appreciated.

Also, BigQuery’s schema auto-detection is extremely hard to work with. It only looks at the first 100 records at most, so it often gets the types wrong. How do you work around that? I manually created the schema when creating the table, but then I get the error above. I found this and will give it a try too: https://pypi.org/project/bigquery-schema-generator/
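In case it helps to show what I mean by creating the schema manually, here is a rough sketch of doing it outside KNIME with the google-cloud-bigquery Python client, so auto-detect never runs (the project, dataset, table, and column names are just placeholders for my real ones):

```python
# Sketch only: create the target table with an explicit schema using the
# google-cloud-bigquery client. Project, dataset, table and column names
# below are made up - adjust to your own setup and credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

schema = [
    bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("amount", "FLOAT", mode="NULLABLE"),   # FLOAT64 in standard SQL
    bigquery.SchemaField("order_date", "DATE", mode="NULLABLE"),
]

table = bigquery.Table("my-project.my_dataset.orders", schema=schema)
client.create_table(table)   # table now has a fixed schema; no auto-detect involved
```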

Thank you kindly in advance.

Hello,
unfortunately this is a long-known bug in the Simba JDBC driver provided by Google: it does not support the use of NULL values. For more details please see this post.
If you only want to insert data into BigQuery, I suggest you use the DB Loader node, which uses a different API and is much faster.
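If you have to stay with the DB Writer node for some reason, a possible workaround is to get rid of the missing values before the node, for example in a Python scripting node. A rough sketch of the idea in plain pandas (the column name and the fallback value are placeholders, and how you wire it into the node depends on your KNIME Python integration):

```python
# Sketch: remove or replace missing float values before they reach the
# DB Writer node, since the Simba driver cannot bind NULL query parameters.
# "amount" and the fallback 0.0 are placeholders - adjust to your data.
import pandas as pd

def prepare_for_db_writer(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["amount"] = out["amount"].fillna(0.0)      # replace missing values...
    # out = out.dropna(subset=["amount"])          # ...or drop those rows instead
    return out

# tiny usage example
df = pd.DataFrame({"order_id": ["a", "b"], "amount": [1.5, None]})
print(prepare_for_db_writer(df))
```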
Bye
Tobias


Hi @tobias.koetter,
Thank you for your reply and the info. Got it. I started with DB Loader and had so many issues with data types because of BQ’s auto schema detection. I manually uploaded the schema several times and I think I finally got it. Thank you so much again, @tobias.koetter.

Is there anything else I should be aware of when interacting with BigQuery using KNIME?

BQ doesn’t have primary keys, and when I run the workflow the DB Loader node turns green, but if I run it again it inserts the same data set a second time. What’s the best practice around that? Is there anything on the KNIME side to keep track of what has already been loaded?

BQ doesn’t have an insert timestamp either… maybe I should create a process that keeps track of each batch upload and appends that column in BQ? Can I create a permanent table in KNIME, or should I use Excel or something as a local tracking table?

Thank you so much again for your support!! Stay safe.

Hi alabamian2,
sorry for the late reply. If you use only a single workflow that is executed locally, you can write a CSV file in the workflow data area (or, if running on the server, use the Relative to -> Current Workflow file system) to which you always append the latest run id/timestamp, or whatever else is available to tell you which rows have already been processed. You can then use the CSV Reader node to read that file, sort it in descending order to get the latest entry, and convert it to a variable with the Table Row to Variable node, which in turn can be used to filter the data before writing to BigQuery.
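In plain Python the bookkeeping would look roughly like this; it is just a sketch of the same idea (the file location and column names are placeholders), and in KNIME you would build it from the nodes mentioned above:

```python
# Rough sketch of the batch-tracking idea: keep a small log file with the
# newest timestamp already loaded, and only write rows newer than that.
# LOG_FILE and the "order_date"/"loaded_up_to" columns are placeholders.
import os
import pandas as pd

LOG_FILE = "processed_batches.csv"   # e.g. kept in the workflow data area

def last_processed_timestamp() -> pd.Timestamp:
    """Return the newest timestamp already loaded, or a very old default."""
    if not os.path.exists(LOG_FILE):
        return pd.Timestamp.min
    log = pd.read_csv(LOG_FILE, parse_dates=["loaded_up_to"])
    return log["loaded_up_to"].sort_values(ascending=False).iloc[0]

def filter_new_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only rows newer than what the log says was already written."""
    return df[df["order_date"] > last_processed_timestamp()]

def record_batch(df: pd.DataFrame) -> None:
    """Append the newest timestamp of this batch to the log."""
    entry = pd.DataFrame({"loaded_up_to": [df["order_date"].max()]})
    entry.to_csv(LOG_FILE, mode="a",
                 header=not os.path.exists(LOG_FILE), index=False)
```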
Bye
Tobias


Thank you, @tobias.koetter. I’ll try building out and incorporating your suggestion.
Thank you again.
