Logistic Regression: KNIME uses a Bayesian formulation of the problem in which you pick the prior distribution of the weights, i.e. the distribution you expect the weights to be generated by. The mean of this distribution is implicitly set to 0, and you control its variance via the variance parameter: the larger the variance, the larger the weights are allowed to be, so a larger variance corresponds to weaker regularization. The alternative formulation using Lambda works the other way around: the larger the lambda, the stronger the regularization. For more details on logistic regression and regularization, check out this blog post by Kathrin: https://www.knime.com/blog/regularization-for-logistic-regression-l1-l2-gauss-or-laplace
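To make the variance/lambda relationship concrete, here is a small sketch (not KNIME's actual implementation) of the objective being minimized. Under a common convention, a zero-mean Gaussian prior with variance sigma^2 on the weights corresponds to an L2 penalty with lambda = 1 / (2 * sigma^2), which is why a larger variance means less regularization:

```python
import numpy as np

def l2_penalized_nll(w, X, y, variance):
    """Negative log-posterior for logistic regression with a zero-mean
    Gaussian prior on the weights (illustrative sketch only).

    y is assumed to be +1/-1 encoded. Under the Gaussian prior, the L2
    penalty strength is lambda = 1 / (2 * variance), so increasing the
    variance shrinks the penalty term (weaker regularization)."""
    z = X @ w
    # negative log-likelihood (logistic loss); numerically naive for brevity
    nll = np.sum(np.log1p(np.exp(-y * z)))
    lam = 1.0 / (2.0 * variance)
    return nll + lam * np.sum(w ** 2)
```

For the same data and weights, calling this with a larger variance yields a smaller objective value, purely because the penalty term shrinks, which mirrors how the variance parameter behaves in the node.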
Random Forest: The Random Forest nodes set the mtry parameter to the square root of the number of features, as this is the default suggested in the literature. If you want more control, you can switch to the Tree Ensemble Learner node, which gives you many more options to tweak. Among these is the Attribute Sampling (Columns) option, where you can select how many columns are considered at each split: all columns, the square root, a linear fraction, or an absolute number of columns.
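The default described above can be sketched as a tiny helper (hypothetical, not part of KNIME): take the square root of the feature count, round down, and use at least one column per split:

```python
import math

def default_mtry(n_features: int) -> int:
    """Default number of columns sampled per split in a random forest:
    the floor of the square root of the feature count, clamped to >= 1.
    This mirrors the common literature default, not a KNIME API call."""
    return max(1, math.isqrt(n_features))
```

So with 100 features, 10 columns would be considered at each split; the Tree Ensemble Learner's Attribute Sampling option simply lets you replace this rule with a linear fraction or an absolute count.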