I have been really enjoying the AutoML component for comparing the performance of a good range of models. I made some changes so I could apply the exact training and test sets used by models I am also evaluating outside of KNIME. Another addition I made was SVM, alongside the nine models that ship inside AutoML.
I am finding that the runtime of the SVM Learner is much longer than any of the other models, even the neural models, which I'm running in CPU-only TensorFlow and Keras environments. For context, the other models run in 2-4 hours, while the SVM runs for days on the same 4 train/validation rounds. The input is a binary potency classification (0/1) on 40K rows of 1024-bit vectors from a Morgan fingerprint.
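For reference, a minimal scikit-learn sketch of the same kind of setup (random bit vectors standing in for the actual Morgan fingerprints, which are not shown here) illustrates why kernel SVMs struggle at this scale: training time grows roughly quadratically with the number of rows, so 40K rows is far more expensive than it looks. The column count and label encoding below mirror the post; everything else is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Small stand-in for the real data: the actual set is 40K rows x 1024 bits.
n_rows, n_bits = 1000, 1024
X = (rng.random((n_rows, n_bits)) < 0.05).astype(np.float64)  # sparse-ish bit vectors
y = rng.integers(0, 2, size=n_rows)                           # potency labels (0/1)

# RBF kernel chosen here for illustration; kernel-SVC training cost is
# roughly O(n_rows^2) to O(n_rows^3), which is why 40K rows takes days.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)

print(clf.n_support_.sum(), "support vectors out of", n_rows, "rows")
```

With unstructured labels like these, nearly every row ends up as a support vector, which is the worst case for both training and prediction time.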
Any experiences or ideas on how I can get comparable performance to the other models?
Thanks in advance!
Hi Jason @j_ochoada
you're right, that is a known issue. We've experienced it ourselves, and it has been reported to the development team. Unfortunately I can't say when it will be improved, but we're on it. Thanks for posting anyway, feedback from users is always welcome! Hopefully I'll have a more satisfying reply next time.
Forgot to mention: the proposed workaround for now is to use LIBSVM, if that helps you.
Thanks for the helpful reply. Do you know offhand what the closest analogous settings for LIBSVM would be? It seems to use different terminology.
unfortunately I have never used LIBSVM with the Python integration myself, and I'm also not familiar with the settings off the top of my head, no. I'm sorry!
@Alice_Krebs Thank you! I got it to run, but even LIBSVM is slow compared to the other methods in AutoML, which I'm guessing is why it wasn't included. I'm running on 40K rows with 1024 fingerprint columns, and LIBSVM took 68 hours to complete. The other models took between 1 and 6 hours.
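For what it's worth, a common workaround for this data shape (many rows, high-dimensional binary fingerprints) is to drop the kernel and use a linear SVM, which trains in roughly linear time in the number of rows and often performs well on sparse bit-vector features. This is a sketch assuming scikit-learn is available outside KNIME; the random sparse matrix stands in for the actual fingerprint data.

```python
import numpy as np
from scipy import sparse
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in for 40K x 1024 Morgan fingerprints, stored sparsely
# since most bits in a 1024-bit fingerprint are zero.
n_rows, n_bits = 5000, 1024
X = sparse.random(n_rows, n_bits, density=0.03, format="csr", random_state=0)
X.data[:] = 1.0                       # fingerprint bits are 0/1
y = rng.integers(0, 2, size=n_rows)   # potency labels (0/1)

# LinearSVC solves the linear-kernel SVM directly (liblinear under the
# hood), avoiding the quadratic kernel matrix entirely.
clf = LinearSVC(C=1.0, max_iter=2000)
clf.fit(X, y)

print("train accuracy:", clf.score(X, y))
```

On data of this shape a linear SVM typically finishes in seconds to minutes rather than days, at the cost of losing the non-linear decision boundary of an RBF kernel.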