In the white paper, on page 21, it is written that one should never reset the FileNameCreation nodes while debugging. However, following this directive, at some point the workflow starts producing the same timestamp over and over again. Is there any way to fix this other than restarting the workflow design from the template workflow?
After giving it some more thought, I've figured out that the formulation in the white paper really is meant for debugging purposes only. Before running the model factory, all the processes have to be reset, until you're debugging again one day.
Cool, you are using the model factory.
Yes, this is really only for debugging. Afterwards the node needs to be reset so that every run gets a new timestamp. Otherwise the tables would overwrite each other, which is not going to work.
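Outside of KNIME, the idea behind the timestamp node can be sketched in plain Python: generating a fresh, timestamped table name on every run so that results never collide. The function name here is my own illustration, not anything from the white paper.

```python
from datetime import datetime

def timestamped_name(prefix: str) -> str:
    """Return a run-unique name such as 'model_20240101_120000'.

    A new timestamp per run means each result table gets its own
    name instead of overwriting the previous one.
    """
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{prefix}_{stamp}"
```

If the node is never reset (as during debugging), it keeps reusing the cached value, which is exactly the repeated-timestamp behavior described above.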
How is it going with the factory? Anything else which you found difficult to implement?
Thank you for the feedback.
I’ve decided to give the model factory a try. I like it a lot so far due to its modular approach and scalability. A single workflow for each model is indeed too burdensome to manage.
The white paper has been very helpful, though it remains a little vague on setting up the configuration and process tables. Luckily, I've discovered the relevant workflow in the Metainfo folder.
Instead of a hold-out strategy for evaluation, I've been able to adapt the model factory to perform cross-validated evaluation. Finally, I've implemented another workflow, similar to the model factory, for applying the deployed models to fresh data. This keeps learning separate from actual prediction, while still recycling the modules shared with the model factory.
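For readers less familiar with the distinction, here is a minimal sketch (in Python with scikit-learn, purely as an illustration outside KNIME) of hold-out evaluation versus cross-validated evaluation; the dataset and classifier are arbitrary stand-ins:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# Hold-out: one train/test split, one accuracy score
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_score = clf.fit(X_tr, y_tr).score(X_te, y_te)

# Cross-validation: k scores averaged, a more stable estimate
cv_scores = cross_val_score(clf, X, y, cv=5)
mean_cv_score = cv_scores.mean()
```

Cross-validation uses every row for both training and evaluation across the folds, which is why it tends to give a less variable estimate than a single hold-out split.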
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.