Workflow not executed at scheduled time… skipped a few workflows and jumped to the next…

Sorry for so many log files in multiple posts… I was unable to upload the files otherwise.

Hi,

I checked your log files and it seems that you have a bunch of expired licenses. Maybe your license was still valid until the 16th of August and then wasn’t valid anymore for the 17th and 18th.
Furthermore, there seems to be a license file that KNIME Server does not have the rights to read, namely /srv/knime_server/licenses/20200721_KNIMEServerSmall_SWT(EarlySalary).xml.
Please ensure that the user running KNIME Server also has read permission for this file.
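For example, a minimal sketch of how to check and fix this over SSH; the service user name “knime” is only an assumption here, so replace it with whatever user your installation actually runs under:

# Inspect the current owner and permissions of the license file
ls -l "/srv/knime_server/licenses/20200721_KNIMEServerSmall_SWT(EarlySalary).xml"
# Hand the file to the (hypothetical) service user "knime" and make it readable for that user
sudo chown knime:knime "/srv/knime_server/licenses/20200721_KNIMEServerSmall_SWT(EarlySalary).xml"
sudo chmod 640 "/srv/knime_server/licenses/20200721_KNIMEServerSmall_SWT(EarlySalary).xml"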

Other than that, I couldn’t find any error that would explain this behaviour.

Cheers,
Moritz

That’s not the case: we were using a trial license, and I replaced it the same day. The issue occurred on the 17th, 18th, and 19th of August…

Hi,

do you know when you replaced it and how you replaced it, via SSH or via the WebPortal?
Could you check the permissions of the file via SSH and verify that the user under which KNIME Server runs can also access/read the license file (see the sketch after these questions)?
Did the scheduled jobs continue on the 20th? If so, did you do anything specific, e.g. reschedule the jobs?
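Something along these lines would verify the read access directly; again, “knime” is just an assumed name for the service user:

# Try to read the start of the license file as the (hypothetical) service user "knime"
sudo -u knime head -c 100 "/srv/knime_server/licenses/20200721_KNIMEServerSmall_SWT(EarlySalary).xml" > /dev/null && echo "readable" || echo "NOT readable"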

When I checked on the 16th, it was showing a valid license file.
I replaced it by copying and pasting it into the folder.
It has worked until now…

That was a long weekend, so no one touched or logged into that AWS instance.
Maybe it was in sleep mode!

Hi,

it might have been that your trial license was still valid and used by the server. And if you copied it via copy and paste, it could be that the renewed license file is somehow read-protected, i.e. only readable by root/the user who copied it there. That would explain the error that the server couldn’t read it due to denied access.

Cheers,
Moritz

On the 16th, after the license was changed, my team was able to upload and run workflows.
Previously, when the issue occurred, the license had not been changed.

That’s why I am sure it is not a license issue.

Hi,

yes, that is rather weird. Are you currently able to execute jobs?
I’d suggest deleting all licenses on the server and uploading the most current one via the WebPortal (you have to log in as an admin). This way it is guaranteed that the server is able to read the license, as the logs you’ve sent me indicate that the license check during scheduling failed due to an access-denied error. Thus, if the error occurs another time, we can rule out that this was the cause.
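If you do this over SSH, a rough sketch could look like the following; it assumes the default /srv/knime_server layout seen in the logs and moves the files instead of deleting them, so you keep a backup:

# Move all existing license files out of the licenses folder instead of deleting them outright
sudo mkdir -p /srv/knime_server/licenses_backup
sudo mv /srv/knime_server/licenses/*.xml /srv/knime_server/licenses_backup/
# Then log in to the WebPortal as an admin and upload the current license file there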

Cheers,
Moritz

The error message in question (rather technical):

(EarlySalary).xml’: /srv/knime_server/licenses/20200721_KNIMEServerSmall_SWT(EarlySalary).xml
java.nio.file.AccessDeniedException: /srv/knime_server/licenses/20200721_KNIMEServerSmall_SWT(EarlySalary).xml
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at com.knime.licenses.LicenseStore.readDirectory(LicenseStore.java:267)
at com.knime.licenses.LicenseStore.<init>(LicenseStore.java:181)
at com.knime.enterprise.server.util.ServerLicenseHandlerImpl.reloadLicenses(ServerLicenseHandlerImpl.java:139)
at com.knime.enterprise.server.application.PeriodicLicenseChecker.checkLicenseExpiry(PeriodicLicenseChecker.java:78)
at com.knime.enterprise.server.application.PeriodicLicenseChecker.lambda$start$1(PeriodicLicenseChecker.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Hello Team

I don’t want to hold you guys up,
but as of now I am not facing the issue.

If the same error occurs, I will post an update on this topic.
Can we put this on hold for some time…

We recently made changes to the AWS instance (upgraded cores and memory) to improve performance, so everything is working smoothly.

Help is really appreciated.


Hello Team

I am facing the same issue again for a particular hour; can you guys look into this… I am posting the logs for the same.

This is happening at 4:00 AM IST. My KNIME Server is on an AWS instance, running Server 4.9.
21.txt (34.6 KB)

catalina.2019-10-21.log (302 Bytes)

localhost.2019-10-21.log (338.0 KB)

Hi,

which workflows are you talking about and when should they have been executed?

Cheers,
Moritz

If you look at the screenshot… today’s date is 2019-10-21,
and the given workflows were not run today.

I think the screenshot is self-explanatory.

Could you also provide the logs from the executor?
It seems like the workflows have been started/executed but somehow disappeared from the executor:

21-Oct-2019 05:15:59.554 WARNING [KNIME-Job-Status-Updater_1] com.knime.enterprise.server.executor.rmi.RMIJobStatusUpdaterImpl.updateJobMap Job ‘/Es_analytics_datamart_daily_wrkf_server/perfios_personal (perfios_personal 2019-10-21 04.55.00; dbad2221-a9fe-42f4-b071-8c24bb207f83)’ disappeared from the executor!!

21.txt (34.6 KB)

Hi,

I had a look at your logs and it seems like the executor may have shut down unexpectedly or the server couldn’t ping it. Did you change anything concerning the default AWS installation? (I assume that you are still running KNIME Server on AWS; please correct me if I’m wrong.)
How much memory did you assign to the executor?
Also, could you adjust the logging level for the localhost logging? You can find the properties under <tomee-folder>/conf/logging.properties, and you would have to set the following line to FINEST:

2localhost.org.apache.juli.FileHandler.level = FINEST
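One way to make that change over SSH; this is only a sketch, so replace <tomee-folder> with the actual path of your TomEE installation, and note that editing the file manually works just as well:

# Switch the localhost log handler to FINEST in logging.properties
sed -i 's/^2localhost\.org\.apache\.juli\.FileHandler\.level = .*/2localhost.org.apache.juli.FileHandler.level = FINEST/' "<tomee-folder>/conf/logging.properties"
# A restart of the server is usually needed before the new logging level takes effect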

Cheers,
Moritz

Yes, I changed it from FINE to FINEST.

I have not made any changes to the AWS settings; recently we upgraded KNIME Server from 4.8 to 4.9.
Memory allocation in knime/knime_4.0.1/knime.ini is 26 GB out of 32 GB.
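For reference, the executor’s maximum heap is normally set via the -Xmx line in knime.ini; a quick way to double-check it (sketch only, adjust the path to your installation) would be:

# Show the current maximum heap setting of the executor (expecting something like -Xmx26g)
grep -n "Xmx" knime/knime_4.0.1/knime.ini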

Hi @navinjadhav,

I’ve had a look at the logs, but there is actually not too much in there, except for some hints that another process accesses and therefore blocks some files, plus some failing DB nodes. Furthermore, there are warnings from some workflows because the executor does not have the needed extensions installed (e.g. Palladian).

Best,
Marten
