Hi all,
in one of my workflows the standard "DB Reader" node doesn't read all rows from the database.
The "DB Reader" reads from an MS SQL Server database and is configured as follows:
- Access via the Microsoft JDBC driver "com.microsoft.sqlserver.jdbc.SQLServerDriver"
- Server address: jdbc:sqlserver://10.50.0.25\HCN_V8
- User credentials
- SQL statement (a row-count cross-check is sketched below this list):
SELECT X.XCMS_ID As ID, X.XCMS_GLOBALID As GLOBAL_ID, A.XADR_ZIP As PLZ, L.XCOU_LONGDESC As LAND
FROM dbo.XCMS As X
Left Outer Join dbo.XADR As A On X.XCMS_ID = A.XADR_XCMS_ID And A.XADR_PRIMARY = 1
Left Outer Join dbo.XCOU As L On A.XADR_XCOU_ID = L.XCOU_ID
WHERE X.XCMS_GLOBALID IS NOT NULL AND X.XCMS_XCCA_ID IN (1,2)
- Job Manager Selection: default
- Memory Policy: Keep only small tables in memory
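For reference, since both joins are LEFT OUTER JOINs, the statement should never drop rows from XCMS, so the expected number of distinct IDs can be cross-checked directly on the server (sketch only, e.g. run in SQL Server Management Studio):

-- Number of XCMS rows that satisfy the WHERE clause;
-- the node's output should contain exactly this many distinct IDs
SELECT COUNT(*) AS EXPECTED_ROWS
FROM dbo.XCMS AS X
WHERE X.XCMS_GLOBALID IS NOT NULL
  AND X.XCMS_XCCA_ID IN (1, 2)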
The node outputs more than 300,000 rows.
Comparing the output with the original content of the main table XCMS, I realized that the query missed 131 entries from that table. When I tried again another day, it missed 3 rows.
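To illustrate what I mean by comparing: if the node's output is written back to a staging table on the server (the table dbo.KNIME_RESULT and its column ID below are only placeholders for this sketch), the IDs that never arrived in KNIME can be listed like this:

-- IDs that satisfy the WHERE clause but are missing from the node's output
SELECT X.XCMS_ID
FROM dbo.XCMS AS X
WHERE X.XCMS_GLOBALID IS NOT NULL
  AND X.XCMS_XCCA_ID IN (1, 2)
  AND NOT EXISTS (SELECT 1 FROM dbo.KNIME_RESULT AS R WHERE R.ID = X.XCMS_ID)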
This behaviour seems quite unpredictable to me, and I'm afraid I'll always lose some rows.
Any help or workaround would be very much appreciated.
Bernd Langer