XGBoost parameter optimization

Hello, I have a question about hyperparameter optimization in XGBoost. I found a workflow that seems to do it well (Parameter Optimization (Table) Component with Range Sliders on Gradient Boosted Trees – KNIME Community Hub). I performed a Bayesian optimization, then tried to reproduce the same model with the optimized hyperparameters, but the results are slightly different, even though I use the same seed and the same stratified sampling in the cross validation. How can I get the same result on the same dataset using the same model with the new parameters? When I try to reproduce it, I just use Cross Validation + XGBoost with the parameters found + Scorer, but the result is always different. Is it because the optimization workflow uses Partitioning + Cross Validation?
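To make the suspicion at the end concrete, here is a minimal sketch outside KNIME (using scikit-learn's `GradientBoostingClassifier` and `StratifiedKFold` as stand-ins for the KNIME nodes; the dataset and all seed values are made up for illustration). If the optimization workflow cross-validates on a Partitioning subset while the reproduction run cross-validates on the full table, the scores will generally differ even with identical seeds and hyperparameters, simply because the folds contain different rows:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

# Synthetic stand-in dataset (illustration only).
X, y = make_classification(n_samples=400, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
model = GradientBoostingClassifier(random_state=1)

# Optimization-style run: Partitioning first, then stratified cross validation.
X_part, _, y_part, _ = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=1
)
score_partitioned = cross_val_score(
    model, X_part, y_part, cv=cv, scoring="f1"
).mean()

# Reproduction-style run: same seed, same model, but CV on the full table.
score_full = cross_val_score(model, X, y, cv=cv, scoring="f1").mean()

# The two means generally disagree: the folds are built from different rows.
print(score_partitioned, score_full)
```

So a score measured inside the optimization loop (after Partitioning) is not directly comparable to one measured by cross-validating the whole dataset, regardless of seeds.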

Hi @Andrearossi ,
If by “results are wrong” you mean a different F-measure, I suspect it is due to randomness during the training phase. Could you please share your workflow so we can have a look?
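The point about randomness can be sketched in a few lines of Python (scikit-learn's `GradientBoostingClassifier` as a stand-in for the KNIME gradient-boosting/XGBoost learners; the seed values are arbitrary). A cross-validated score is bit-identical across runs only when both the fold splitter and the learner are given a fixed seed:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in dataset (illustration only).
X, y = make_classification(n_samples=300, random_state=0)

def mean_f1(seed):
    # Fix the seed of BOTH the stratified splitter and the model;
    # leaving either one unseeded makes runs non-reproducible.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    model = GradientBoostingClassifier(random_state=seed)
    return cross_val_score(model, X, y, cv=cv, scoring="f1").mean()

run_a = mean_f1(42)
run_b = mean_f1(42)  # same seeds, same data -> exactly the same F-measure
```

The same logic applies in KNIME: every node with a random component (the learner, the X-Partitioner, any Partitioning node upstream) must have its static seed set for two runs to match exactly.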



This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.