Merging two .db SQLite databases, one table isn't coming over

I’m having trouble understanding why one dataset isn’t concatenating with another. I have two workflows pulling from the same Teradata database:

  1. The first pulls the query for 2019–2024 (a 3 GB dataset), split out to appease the DB admins.
  2. The second pulls >= 2025 (just a change to the date reference).
  3. Both sequences are written to a .db file using the SQLite Connector and DB Writer nodes.
  4. Then a Concatenate node merges them using Union, and the result is written as a final Master.db file.

I’m keeping their databases separate up front because this is local to my laptop, and if I mess something up then, for lack of understanding of KNIME and SQL, I can work backwards to where I might have gone wrong.

So Historical.db (table name StaticIP) + Monthly.db (table name MonthlyIP) = Master.db.

The only difference is the table names, which I’m not sure how they are assigned; otherwise I’d make them the same… Hoping someone can help my understanding of what I’m missing or doing wrong.
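For reference, the merge described above can be reproduced outside KNIME with Python’s built-in `sqlite3` module, which makes it easy to verify the result independently. This is only a sketch: the table names follow the post (StaticIP in Historical.db, MonthlyIP in Monthly.db), but the `report_month`/`ip` columns and sample rows are made-up stand-ins for the real schema.

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
hist_path = os.path.join(tmp, "Historical.db")
mon_path = os.path.join(tmp, "Monthly.db")

# Tiny stand-ins for the two workflow outputs (real schema unknown;
# the report_month/ip columns are assumptions for illustration).
for path, table, rows in [
    (hist_path, "StaticIP", [("2019-01-01", "10.0.0.1"), ("2024-12-01", "10.0.0.2")]),
    (mon_path, "MonthlyIP", [("2025-01-01", "10.0.0.3")]),
]:
    with sqlite3.connect(path) as c:
        c.execute(f"CREATE TABLE {table} (report_month TEXT, ip TEXT)")
        c.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)

# Build Master.db by attaching both source files and copying the rows across.
con = sqlite3.connect(os.path.join(tmp, "Master.db"))
con.execute(f"ATTACH DATABASE '{hist_path}' AS hist")
con.execute(f"ATTACH DATABASE '{mon_path}' AS mon")
# Clone the column layout (WHERE 0 copies structure, no rows), then
# append both sources into the one Master table.
con.execute("CREATE TABLE Master AS SELECT * FROM hist.StaticIP WHERE 0")
con.execute("INSERT INTO Master SELECT * FROM hist.StaticIP")
con.execute("INSERT INTO Master SELECT * FROM mon.MonthlyIP")
con.commit()
total = con.execute("SELECT COUNT(*) FROM Master").fetchone()[0]
print(total)
```

If a script like this produces the expected row count but the KNIME output does not, the problem is in the node configuration rather than the data.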

@MichaelB what is happening at the concatenation? There is a warning there. Are there any data rows present? Are they OK?

What is the setting on the SQLite node? BTW: date values are tricky with SQLite.
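To illustrate why dates are tricky: SQLite has no native DATE type, so date values typically land as TEXT, and range filters on them are plain string comparisons. If the two workflows wrote dates in different text formats, a date filter can silently drop rows. A minimal sketch (the `report_month` column name is an assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (report_month TEXT)")
# The same time period written in two different text formats:
# ISO (2025-01-01) and US-style (01/02/2025).
con.executemany("INSERT INTO t VALUES (?)", [("2025-01-01",), ("01/02/2025",)])

# The range filter is a string comparison: only the ISO-formatted row
# matches; the US-formatted one is silently excluded.
hits = con.execute(
    "SELECT COUNT(*) FROM t WHERE report_month >= '2025-01-01'"
).fetchone()[0]
print(hits)
```

Storing all dates as ISO-8601 text (`YYYY-MM-DD`) keeps string order and date order in agreement, which is why it is the safe choice in SQLite.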

It says it filtered 500k duplicates at the Concatenate node, which I believe is correct. My row filter in that workflow is before writing the database (which I run weekly to pull updated information), so the row filter, in my mind, needs to be moved to after the SQL is run and written to the DB.

I fixed all table names to be the same now and still get the same error. I checked Monthly, the rows are fine, but after the concatenate they are just not present in Master.db.

What is the warning on the Concatenate node? Hold your mouse pointer over the “!” in the traffic light.

Are you sure about the column names and types? You can check the table specification in the “spec tab” of the output of both DB reader nodes.

For testing, I would subsample both tables entering the Concatenate node down to 5 rows (use a simple Row Filter) and check what happens then.

@MichaelB deleting duplicates by concatenating seems messy, to be honest. Have you checked the SQLite database itself? In the screenshot, the table view only shows the data before the write, not the SQLite file itself.
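One way to inspect the SQLite file directly, independent of any KNIME table view, is to read its `sqlite_master` catalog. A minimal sketch; in practice you would point `connect()` at Master.db instead of the in-memory stand-in used here so the snippet runs on its own:

```python
import sqlite3

# Stand-in for sqlite3.connect("Master.db").
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE StaticIP (report_month TEXT, ip TEXT)")

# Every table and its exact CREATE statement, straight from the file.
tables = con.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
).fetchall()
for name, sql in tables:
    print(name, "->", sql)
```

This makes it easy to spot mismatched table names or column definitions between the two source files.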

Maybe you can provide a screenshot of the SQLite settings.

The table view after the DB Writer for StaticIP2.db, for example, was just me validating that I could see 2025 report-month data. Below is a screenshot of the SQLite node settings. Since I only run the historical pull once, I’ll never run that again.

The same SQL is used in both queries; only the date ranges they pull are different. Table names are the same in both DB Writers (StaticIP).

Since I’m running the Monthly query (the lower one) weekly, I had it drop duplicates via the Concatenate node’s option so that it’s not stacking weekly run data.

Am I doing that wrong? I set it up this way because, when trying to get two .db files to merge, this was the only option I could find in the forums or online in general.

@MichaelB you might want to check from the results from within SQLite at the end to see if the data is there. If the data is not being pulled from SQLite in the first place you will have to check the syntax and maybe do some calculation with SQL code (counts) to see if SQLite returns the data.

It is difficult to understand where there might be a problem. Next option would be to provide a full example that follows the structure of your data without spelling g any secrets and show where data is missing.

Hi.

I dont know, but it seams there is an issue with SQLlite connector for “Create Static IP.db”
Have you confirmed the settings?

Regards

1 Like

The individual DB’s pull the right information. Each sql flow is writing the right data.

Historical.db = 2019-2024
Current.db = >=2025

It’s at the concatenate stage where it finishes and the 2025 data is not in the new DB I’m creating called Master.db

I’m keeping the files separate so that if something goes wrong or I need to update the current year I don’t upset the database administrators with crushing them with compute resources. The historical alone is 3GB, 40+m rows each month for 5 years.


I updated my work flow with node name comments to help with the visual.

Bottom is 2025 and greater report month data, present up to the concatenate. after that, checking the table view after it there is no 2025 data.

@MichaelB maybe check at the database node?

Also have you just scrolled or did you try a proper filter? Also using a domain calculator might help.

Also the critical point seems to be the master.db where you could check the settings.

Also you could try a local H2 database that has better handling of date types.

1 Like