Oracle issue - ORA-28000: The account is locked

I’m using KNIME AP 4.7.6, and oracle.jdbc.OracleDriver version 21.8.0.
We have almost a dozen Oracle databases that sync/replicate to an Oracle ODS (Operational Data Store).
Those databases have an account (with read-only permissions) that also syncs over (of course) from the primary.
I have an Oracle connector that uses an account on those databases to log in and run queries with a DB Query Reader. The username and password are stored in the Oracle Connector node.

I’ve had issues for months with accounts locking - I run the same connector and the same query (in a sequential loop) to connect to each database and pull the same data from the same tables.

Out of all the systems, two of them keep reporting an error when trying to log in.
ORA-28000: The account is locked

I have the DBA unlock the account, and then I can use it for maybe a few hours, then it locks again. Some of the other systems used to do this but are fine nowadays.

I have tried:
Upgrading KNIME (same happened on 4.5.2)
Upgrading the Oracle drivers (formerly 19.14.0)
Changing the account entirely (the new one continues to lock)

I’m able to connect to other servers just fine using the same connector settings, and never have an issue. The DB team points at my setup as the cause, but they haven’t been able to say what to look for, so I haven’t found the root cause. They don’t have logging enabled, so I’m at a loss as to how to troubleshoot.

I could understand if all systems were having issues, since they are clones of each other. But for it to be the same 2 systems each time is odd.

We have another DB on the ODS that I’ve had zero problems with from start to finish.

Any suggestions on how to proceed?


Hi @ajackson -

Without logs from the DB side, I imagine it’s going to be tough to diagnose.
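One low-effort option for the DB side, if the DBAs are open to it: audit just the failed logins. This is a sketch assuming traditional auditing is in use (AUDIT_TRAIL set to DB); on databases using unified auditing, the equivalent information lives in UNIFIED_AUDIT_TRAIL instead.

```sql
-- Run by a DBA; records only unsuccessful login attempts.
AUDIT SESSION WHENEVER NOT SUCCESSFUL;

-- Later, review the failures. Return code 1017 is an invalid
-- username/password; 28000 is an attempt against a locked account.
SELECT username, userhost, terminal, timestamp, returncode
FROM   dba_audit_session
WHERE  returncode <> 0
ORDER  BY timestamp DESC;
```

The userhost/terminal columns are usually the quickest way to tell whether the failed attempts come from your KNIME machine or from somewhere else entirely.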

Just a random idea - could you give the DB Connection Closer a try at the end of your workflows if you’re not already? I wonder if your account is getting locked because of settings on those particular databases related to concurrent connections. Just a wild guess.


You’re right :) It has been very tough to diagnose. I’ve updated my connection component to use the DB Connection Closer – thanks for the idea! I don’t suppose it can hurt; I’m not sure whether connection limits are imposed there.
The DB team may be open to adding logging temporarily, so we’ll see. The most I can get right now is the lock_date.
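For anyone following along, the lock_date comes from DBA_USERS. A query along these lines (it needs access to the DBA views, so typically the DBA runs it) pulls it together with the account status and the profile that governs the lockout rules; the account name here is a placeholder:

```sql
-- Replace MY_READONLY_USER with the actual account name.
SELECT username, account_status, lock_date, expiry_date, profile
FROM   dba_users
WHERE  username = 'MY_READONLY_USER';
```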
Thanks again, AJ.

To add, we were having these issues frequently as well and fixed it by disabling the auto connect feature in the db connector node (advanced tab).



Thanks for that suggestion - I did turn that off at some point after this started up, but it didn’t seem to help. I’ll keep it off regardless.

Part of the difficulty is that I can run query after query, but if I leave and come back hours later, it’s locked. I’ve never had it where I’ve run a query one minute and then been locked the next.

Hi @ajackson, are you saying that if you return some time later without KNIME running it locks, or only when you leave a KNIME workflow running?

My workflows run very quickly (usually less than a minute) since I’m dealing with small row counts. When I run the Oracle queries, they start and complete while I sit watching them. I often run multiple queries simultaneously, though. I created a component that connects to several databases (one after another, using a loop; the databases have the same schema), and I re-use that component to pull data from different tables. This all uses a single account (without write access). So I kick off the queries (often at the same time), watch them complete, and then start using the data 30 seconds or so later.

@ajackson if you use a loop is the initial database connection inside or outside the loop?

It has to be inside.
I can’t really do it with the connections on the outside - the number of databases can change (I have their connection details in a table).

I have a number of workflows that use the same component, to pull data from different tables for different reasons at different times.

So one workflow may have a component configured to SELECT id,foo,bar FROM XYZ for all databases. The SQL statement is fed in via a variable to the (Oracle) DB Query Reader. (That statement stays the same during the loop)
I have a listing of the connection details for all these different databases in a MSSQL table (far left). The component goes through each DB in that list, one by one, connects, runs the query, and disconnects (thanks to another comment here). The Multiport Loop End then provides me with all results from all databases for that one query, with a little meta info.

The same or another workflow may then have the same component again with a different query/table, and it will run ‘SELECT num,this,that,theother FROM ABC’ for all databases.

This worked perfectly on a prior set of databases but they have been replaced with new ones, and this is when I ran into problems.

Here’s a sanitized version of a portion of the component:

The DBA’s profiles and settings are all over the place, so I’ve asked them to clean that up at least so it’ll be easier to troubleshoot.

@ajackson depending on the number of databases and how often the loop runs, this construct means you establish a large number of (new) connections to the Oracle server in a short time. Some setting or security measure might not like that. I would place the connector outside the loop and try again. Or are these really different servers altogether? In that case you could try inserting a Wait node to pause a few seconds between connections.

We have multiple separate primary servers that are synced over to a single Operational Data Store (ODS) using Oracle Data Guard. I connect to that ODS and specify the database as part of the connection settings. The Oracle Connector inside the loop has:
–Connection Settings–
Hostname: This is actually the same for all databases.
Database name: This changes every loop to a different database.

So I have to make multiple connections, but they happen sequentially. I can’t put the connection outside the loop because the loop supplies the database name. Otherwise I end up having no loop, and multiple connector nodes, which wouldn’t work for us.

I only connect to the ODS, but the account locking is happening at the primary. I don’t know enough about the ODS to know if the synced copy has a hand in this.

Operationally speaking - having multiple databases is a pain, but it’s to distribute the high load and to spread the operational risk.

As for the security - I would agree, except it’s affecting the same few servers over and over, not all of them. There is monitoring, and I’ve spoken with the security team; what I’m doing isn’t going to be flagged on their systems.

As for the settings - The profiles and the passwords settings are not the same across all servers. I’m working with the DB team to get that standardized.

If I query a dozen systems sequentially, it’s just a few in the middle that keep tripping.

Hi @ajackson ,

How is the password being supplied to the Oracle connection? If it’s not the same across servers, then presumably it is being fed in dynamically within the loop along with the host and database name?

The p/w is supplied in the Oracle connector’s Connection Settings / Authentication area as a “Username & password”. Nothing fancy there, no variables, etc. The username and password are identical across all servers. (It only has read permissions).

Ah, ok I misunderstood when you said “The profiles and the passwords settings are not the same across all servers”. I thought you meant the passwords themselves differed. So the passwords are the same.

I’m clutching at straws here, but is it possible that on a couple of the servers the password-change policy has been set incorrectly to a very short period? I don’t know if that is a realistic scenario.

Does your database connection component have any error/retry within it, or does it just fail if the connection attempt fails? I’m wondering what would happen if a password expired in the above (fictional) scenario when processing the loop. i.e. would it repeatedly retry the connection, and lock the account?


My apologies for the confusion. Yeah, I meant that things like PASSWORD_LOCK_TIME and FAILED_LOGIN_ATTEMPTS vary. The password can only be changed by the DBA, and it hasn’t been changed recently.
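Since those limits vary per server, it may be worth having the DBA dump them side by side. A sketch of the comparison query, to be run on each database (DBA views required):

```sql
-- Lock- and password-related limits for every profile; compare the output
-- across the servers that lock against the ones that don't.
SELECT profile, resource_name, limit
FROM   dba_profiles
WHERE  resource_name IN ('FAILED_LOGIN_ATTEMPTS',
                         'PASSWORD_LOCK_TIME',
                         'PASSWORD_LIFE_TIME')
ORDER  BY profile, resource_name;
```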

The workflow makes one connection attempt per database inside of a Try/Catch loop. If it fails, it just records which system failed and moves on to the next database. It does not retry. As it attempts to connect, I get the “ORA-28000: The account is locked” error in red in the console.

I wish it were the expiration, but the account was created across all databases and was locking within weeks, whereas the policy is set to either the default or 365 days. The ACCOUNT_STATUS was showing as ‘LOCKED’ in DBA_USERS, as opposed to ‘EXPIRED & LOCKED’. We even tried a second account, and gave the username to nobody, and had the same issue.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.