I have recently started to use the KNIME database nodes. However, my biggest regret is that we cannot embed an SQL statement when exporting data to the DB with the "Database Writer" node.
Currently, I want to create different tables with duplicate management on data entry: if the data already exists in my table, no entry; if it does not exist, entry plus an auto-incremented ID.
I already know how to do this in SQL, but I haven't found the same possibility in KNIME.
All assistance is welcome.
Have a nice day !
Could it possibly be done with the Database Update node?
I tried and this solution works with a simple example but not on my data.
In SQL, I used a statement built from:
- INSERT INTO
- SELECT DISTINCT ... FROM
- WHERE NOT EXISTS
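For reference, the pattern described above can be sketched roughly as follows. The table and column names (`my_table`, `staging`, `name`, `value`) are assumptions for illustration, and `id` is assumed to be an auto-increment primary key:

```sql
-- Insert only the distinct staging rows that are not already present;
-- the auto-increment id is assigned by the database on insert.
INSERT INTO my_table (name, value)
SELECT DISTINCT s.name, s.value
FROM staging s
WHERE NOT EXISTS (
    SELECT 1
    FROM my_table t
    WHERE t.name = s.name
      AND t.value = s.value
);
```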
When I launch my job, the result should look like this:
Table IN:
Name | Value
toto | 3
toto | 1
toto | 1
tata | 2

Table already in the DB:
Name | Value | ID
titi | 1     | 1
tutu | 2     | 2
toto | 3     | 3

Table DB OUT:
Name | Value | ID
titi | 1     | 1
tutu | 2     | 2
toto | 3     | 3
toto | 1     | 4
tata | 2     | 5
Hi Dr. Van Nostrand,
I’m sorry for this late reply.
I have tried your solution: with SQL WHERE or SET, I'm unable to correctly manage my duplicates and generate the auto-increment.
So, currently I cannot directly use KNIME to manage data exports to my database.
I have found a workaround: I generate the SQL commands in KNIME and execute them from the command line via bash.
You could write the data to a temp table on your server and use an upsert (insert or update) SQL command. This will depend on which database technology you're using: in SQL Server you have the MERGE command, while in MySQL you could use REPLACE or INSERT ... ON DUPLICATE KEY UPDATE.
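In MySQL those two options look roughly like this. `target_table` and its columns are made-up names, and both forms assume a UNIQUE key on (name, value) so the server can detect the duplicate:

```sql
-- Keep the existing row and overwrite its value on a key collision:
INSERT INTO target_table (name, value)
VALUES ('toto', 1)
ON DUPLICATE KEY UPDATE value = VALUES(value);

-- Or delete-and-reinsert the row when the key already exists:
REPLACE INTO target_table (name, value)
VALUES ('toto', 1);
```

Note that REPLACE deletes the old row first, so an auto-increment ID on that row will change, while ON DUPLICATE KEY UPDATE keeps it.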
Would be nice if there was a node to do this, though :)
Is there any update on this subject in regards to a node that has been developed for this purpose?
At present I'm struggling to get something working similar to the MERGE command on SQL Server 2014.
Second the request.
FYI, we do the following at the moment:
- Set IGNORE_DUP_KEY so that the write does not fail. However, for some reason the duplicate rows still increment the current identity value, so the identity field ends up with gaps.
- Read what's already in the database, use a Reference Row Filter to remove the entries already present, and then write the rest.
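For anyone else trying the first approach: IGNORE_DUP_KEY is set as an option on a unique index in SQL Server, along the lines of the sketch below (index and table names are assumptions):

```sql
-- Duplicate inserts are silently discarded instead of raising an error.
-- Each discarded attempt can still consume an identity value, which is
-- why the identity column ends up with gaps.
CREATE UNIQUE INDEX ux_target_name_value
ON target_table (name, value)
WITH (IGNORE_DUP_KEY = ON);
```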
For your info, I have just opened a topic on the issues we are having...
Although we can get this working using T-SQL, OPENQUERY, and MERGE stored procedures for the time being.
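For the insert-only-if-missing behaviour discussed in this thread, a T-SQL MERGE on SQL Server 2014 can be sketched as follows (table and column names are made up for illustration):

```sql
-- Insert distinct staging rows that do not yet exist in the target;
-- omitting a WHEN MATCHED clause leaves existing rows untouched.
MERGE target_table AS t
USING (SELECT DISTINCT name, value FROM staging) AS s
    ON t.name = s.name AND t.value = s.value
WHEN NOT MATCHED THEN
    INSERT (name, value) VALUES (s.name, s.value);
```

Note that MERGE statements must be terminated with a semicolon.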
Thank you for the new topic! Much appreciated