ArrayIndexOutOfBoundsException - while reading in data

Hi,
I am getting the error messages below while reading in data, with both the new database nodes and the old ones. The error only seems to happen on certain tables, though, and I don't know why. I have tried uninstalling and reinstalling KNIME, to no avail.

Database Reader legacy: ERROR Database Reader (legacy) 0:1 Execute failed: java.lang.ArrayIndexOutOfBoundsException: 8

DB Query Reader: ERROR DB Query Reader 0:4 Execute failed: ("ArrayIndexOutOfBoundsException"): 8

Here are the errors from the log file:

2019-08-28 07:57:04,218 : ERROR : KNIME-Worker-0 : : Node : Database Reader (legacy) : 0:1 : Execute failed: java.lang.ArrayIndexOutOfBoundsException: 8
java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 8
at org.knime.core.node.port.database.DatabaseConnectionSettings.execute(DatabaseConnectionSettings.java:691)
at org.knime.core.node.port.database.reader.DBReaderImpl.createTable(DBReaderImpl.java:239)
at org.knime.core.node.port.database.reader.DBReader.createTable(DBReader.java:121)
at org.knime.base.node.io.database.DBReaderNodeModel.getResultTable(DBReaderNodeModel.java:147)
at org.knime.base.node.io.database.DBReaderNodeModel.execute(DBReaderNodeModel.java:124)
at org.knime.core.node.NodeModel.executeModel(NodeModel.java:567)
at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1192)
at org.knime.core.node.Node.execute(Node.java:979)
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:559)
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:179)
at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:110)
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:328)
at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:204)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 8
at oracle.jdbc.driver.T4CMAREngineNIO.buffer2Value(T4CMAREngineNIO.java:814)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalSB4(T4CMAREngineNIO.java:605)
at oracle.jdbc.driver.DynamicByteArray.unmarshalCLR(DynamicByteArray.java:219)
at oracle.jdbc.driver.T4CMarshaller$BasicMarshaller.unmarshalBytes(T4CMarshaller.java:127)
at oracle.jdbc.driver.T4CMarshaller$BasicMarshaller.unmarshalOneRow(T4CMarshaller.java:106)
at oracle.jdbc.driver.T4CVarcharAccessor.unmarshalOneRow(T4CVarcharAccessor.java:216)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:1526)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:1289)
at oracle.jdbc.driver.T4C8Oall.readRXD(T4C8Oall.java:850)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:543)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:213)
at oracle.jdbc.driver.T4CStatement.fetch(T4CStatement.java:1009)
at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3353)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:736)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:692)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:406)
at org.knime.core.node.port.database.reader.DBRowIteratorImpl.hasNext(DBRowIteratorImpl.java:168)
at org.knime.core.node.ExecutionContext.createBufferedDataTable(ExecutionContext.java:259)
at org.knime.core.node.port.database.reader.DBReaderImpl.lambda$2(DBReaderImpl.java:241)
at org.knime.core.node.port.database.DatabaseConnectionSettings.execute(DatabaseConnectionSettings.java:684)

Could you include some details, like OS, KNIME version, database, database version, database JDBC driver version, and the expected row count of the read that is producing this specific exception?

No worries,

KNIME: 4.0.0
OS: Windows 10 64-bit, Version 10.0.17134, Build 17134
Database: Oracle DB 12.2.0
URL: jdbc:oracle:thin:@OLRWRP_host:1521:OLRWRP
JDBC driver: ojdbc8
Row count: 395,835

Thanks (you might want to upgrade to 4.0.1).

After a semi-quick survey of the code in the top few frames of that stack trace, I see that DBReaderImpl emits logging at the DEBUG level with each SQL statement it is attempting to execute.
Perhaps try turning on the DEBUG level for the Console view (Preferences → KNIME → KNIME GUI) -
then you should see some logging that contains "Executing SQL statement as execute:" followed by the actual queries; maybe that will lead us somewhere helpful.

Thanks for the advice. I have turned on DEBUG and this is now what I get:

DEBUG ExecuteAction Creating execution job for 1 node(s)…
DEBUG NodeContainer Database Reader (legacy) 0:1 has new state: CONFIGURED_MARKEDFOREXEC
DEBUG NodeContainer Database Reader (legacy) 0:1 has new state: CONFIGURED_QUEUED
DEBUG NodeContainer Server_Test 0 has new state: EXECUTING
DEBUG NodeContainer ROOT has new state: EXECUTING
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 doBeforePreExecution
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 has new state: PREEXECUTE
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 doBeforeExecution
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 has new state: EXECUTING
DEBUG Database Reader (legacy) 0:1 Adding handler 77b54c6a-6384-486d-aaf7-9e62fb78574a (Database Reader (legacy) 0:1: ) - 1 in total
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 Start execute
DEBUG Database Reader (legacy) 0:1 Workflow user found: operagn
DEBUG Database Reader (legacy) 0:1 Try to lock key to obtain connection: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Connection found for key: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Invalid connection found in cache with key: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Removing closed connection from cache with key: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Create new connection for key: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Database driver retrieved from user defined drivers: oracle.jdbc.driver.OracleDriver
Driver info: Driver class name: oracle.jdbc.driver.OracleDriver major version: 12 minor version: 2 jdbc compliant: true
DEBUG Database Reader (legacy) 0:1 Opening database connection to "jdbc:oracle:thin:@olrwrp_host:1521:olrwrp"…
DEBUG Database Reader (legacy) 0:1 Add connection to map for key: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Workflow user found: operagn
DEBUG Database Reader (legacy) 0:1 Try to lock key for stmt execution: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Check connection for key: ConnectionKey [db-user=operagn, db-name=jdbc:oracle:thin:@olrwrp_host:1521:olrwrp, wf-user=operagn]
DEBUG Database Reader (legacy) 0:1 Executing SQL statement as executeQuery: SELECT * FROM cases
DEBUG Database Reader (legacy) 0:1 Reading meta data from database ResultSet…
DEBUG Database Reader (legacy) 0:1 Parsing database ResultSet…
DEBUG Database Reader (legacy) 0:1 Using table format org.knime.core.data.container.DefaultTableStoreFormat
DEBUG Buffer Deleted temporary file "C:\Users\operagn\AppData\Local\Temp\knime_Server_Test17993\knime_container_20190828_2399738293791359991.bin.snappy"
DEBUG Database Reader (legacy) 0:1 reset
ERROR Database Reader (legacy) 0:1 Execute failed: java.lang.ArrayIndexOutOfBoundsException
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 doBeforePostExecution
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 has new state: POSTEXECUTE
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 doAfterExecute - failure
DEBUG Database Reader (legacy) 0:1 reset
DEBUG Database Reader (legacy) 0:1 clean output ports.
DEBUG Database Reader (legacy) 0:1 Removing handler 77b54c6a-6384-486d-aaf7-9e62fb78574a (Database Reader (legacy) 0:1: ) - 0 remaining
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 has new state: IDLE
DEBUG Database Reader (legacy) 0:1 Configure succeeded. (Database Reader (legacy))
DEBUG Database Reader (legacy) 0:1 Database Reader (legacy) 0:1 has new state: CONFIGURED
DEBUG Database Reader (legacy) 0:1 Server_Test 0 has new state: CONFIGURED
DEBUG NodeContainer ROOT has new state: IDLE
DEBUG NodeTimer$GlobalNodeStats Successfully wrote node usage stats to file: C:\Data\knime_workspace\.metadata\knime\nodeusage_3.0.json
DEBUG Database Reader (legacy) 0:1 Buffer file (C:\Users\operagn\AppData\Local\Temp\knime_Server_Test17993\knime_container_20190828_7497062155375361126.bin.snappy) is 6.052MB in size

I have also updated to the latest version, KNIME 4.0.1, but I am still having the same problem.

Digesting the log information, it seems like the result rows are perhaps parsed to have a smaller number of columns than the metadata of the result set dictates (and so, when we iterate over the rows assuming we have N columns, asking for the Nth column of a result set row throws an array bounds exception).
You mentioned that you were having no problems with some tables - is there anything about the column types in those tables that stands out as being different from the column types of the tables for which you are having problems?

OK, I figured out the problem.
One of the columns has a max length of 3,634 characters.
It looks like KNIME doesn't like this, because when I leave this column out of the query I don't receive an error.

Interesting find - we’ll look into it - thanks!

Could you provide the CREATE statement for that column?

Thanks for being so helpful.

No problem. How do I do that?
The query that I'm running is a simple select * from table_name.

I think in Oracle you want to use DBMS_METADATA - so if you connect with SQL*Plus, you could do something like:

set long 50000;
select dbms_metadata.get_ddl( 'TABLE', 'your table name here', 'your schema name here' ) from dual;

Or if you connect with something like TOAD - I think there is some sort of context menu item for getting the DDL, iirc.

Let me look into it.
I'm using Oracle SQL Developer, but I only have access to a view of the table.
When I look at the DDL for that, it's just selecting the columns from the table, so I can't actually see how the column was created.

Great - thank you again.

Hi quaeler,

This is the response that I got from one of the DBAs here.
The problem was with the source_data column:


I think the issue could be that someone has placed a ‘line feed’ chr(10) or ‘carriage return’ chr(13) character in the field and KNIME can’t handle it. You could try using the replace command in your SQL to get rid of them.

For example:
Select replace(replace(source_data, chr(13), ' '), chr(10), ' ')
From cases;

CREATE TABLE RCIS.CASES
(
CASE_ID NUMBER(10) NOT NULL,
PRO_PROJECT_ID NUMBER(10) NOT NULL,
CASE_STATUS VARCHAR2(1 BYTE) NOT NULL,
ALLOCATION_STATUS VARCHAR2(1 BYTE) NOT NULL,
CASE_TYPE VARCHAR2(1 BYTE) NOT NULL,
LETTER_SENT VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL,
CLI_CLIENT_ID NUMBER(9) NOT NULL,
CREATED_ON DATE NOT NULL,
CREATED_BY VARCHAR2(10 BYTE) NOT NULL,
USR_USER_ID VARCHAR2(8 BYTE),
CSR_SOURCE_ID VARCHAR2(3 BYTE),
CT_CORRESPONDENCE_TYPE_ID NUMBER(10) DEFAULT NULL,
CT_CORR_SEQUENCE_NO NUMBER(10) DEFAULT NULL,
CPP_PRIORITY_CODE VARCHAR2(1 BYTE),
EXPECTED_COMPLETION_DATE DATE,
DATE_FIRST_ALLOCATED DATE,
DATE_ALLOCATED DATE,
DATE_SUSPENDED DATE,
DATE_COMPLETED DATE,
DATE_DELETED DATE,
DATE_LETTER_SENT DATE DEFAULT NULL,
DATE_REMINDER_SENT DATE DEFAULT NULL,
DATE_RESPONSE_REQUIRED_BY DATE DEFAULT NULL,
DATE_REPLY_RECEIVED DATE DEFAULT NULL,
SOURCE_VALUE NUMBER(12,2) DEFAULT NULL,
SOURCE_REFERENCE VARCHAR2(20 BYTE) DEFAULT NULL,
SOURCE_YEAR VARCHAR2(4 BYTE) DEFAULT NULL,
FOUND_LIABLE CHAR(1 BYTE),
REGN_BY_COMPLIANCE CHAR(1 BYTE),
DOCUMENTS_CHECKED NUMBER(4),
DOCUMENTS_INCORRECT NUMBER(4),
CASE_FINALISED CHAR(1 BYTE),
MODIFIED_ON DATE,
MODIFIED_BY VARCHAR2(10 BYTE),
OLD_FILE_NO VARCHAR2(15 BYTE),
SUB_PROJ_ID NUMBER(10),
COMPLETION_CODE VARCHAR2(4 BYTE),
WIP_CODE VARCHAR2(4 BYTE),
WIP_START_DATE DATE,
WIP_END_DATE DATE,
WORK_TRAY_ID VARCHAR2(8 BYTE),
NO_OF_ASSESS NUMBER(5),
SOURCE_DATA VARCHAR2(4000 BYTE),
TRIM_IND VARCHAR2(1 BYTE),
VERSION_TIMESTAMP TIMESTAMP(6) DEFAULT SYSDATE,
NOI_DUE_DATE DATE,
COMPLIANCE_INTEREST NUMBER(18,2) DEFAULT NULL,
COMPLIANCE_PENALTY NUMBER(18,2) DEFAULT NULL
)

Thank you for your investigation - we’ll find the bug and fix it.

Hi andyg,
so far I couldn't reproduce the problem. Can you please try the same with the latest Oracle 12.x driver or the 19c driver? It looks like a problem in the Oracle driver when fetching additional rows. Can you also please try the same query with the new database framework? With the new framework you can install several versions of the same driver, as described here, and then select the version you want to use in the connector node. The new framework also comes with a dedicated Oracle Connector node, which makes it easier to connect to Oracle. However, due to license restrictions from Oracle, you still have to install the driver first to use the node.
If it also fails with the newest drivers and the new database framework, please try increasing the fetch size to 400,000 rows to force the driver to read all rows in one go, which prevents the call to the fetchMoreRows() method in the driver. To alter the fetch size, open the new Oracle Connector node, go to the Advanced tab, scroll down, and change the value of the Fetch size entry.

Bye
Tobias

Hi Tobias,
Sorry about the delay in replying.

OK, so far I have tried using the Oracle Connector node and the new database nodes, and I still have the same problem.
I have also tried increasing the fetch size, and I'm still having the same issue.
The Oracle driver is 12.2.0 (I think the latest is 12.2.2).

I’m not sure what else to try.

Thanks