I like to demonstrate parameter optimization with decision trees, because you can easily overfit them, i.e. make them perform poorly on data points that were not part of the training partition.
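As a minimal sketch of that overfitting (assuming scikit-learn; the dataset is synthetic and just for illustration), an unconstrained tree will fit the training partition perfectly but do worse on held-out points:

```python
# Illustrative sketch: an unconstrained decision tree memorizes the
# training data but generalizes worse to held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, noisy classification data (20% of labels flipped)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# No depth limit: the tree grows until the training set is fit perfectly
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train accuracy:", tree.score(X_tr, y_tr))  # typically 1.0
print("test accuracy: ", tree.score(X_te, y_te))  # noticeably lower
```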
Instead of focusing on how much you can improve a model trained with default parameters (the defaults are chosen for a reason and are supposed to work OK in many cases), you can run a grid search and compare the best and worst parameter combinations to show the possible impact of hyperparameter optimization.
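A sketch of that best-vs-worst comparison (again assuming scikit-learn and a synthetic dataset; the grid values are arbitrary choices for illustration): `cv_results_` holds the mean cross-validated score of every combination, so you can read off the spread directly.

```python
# Illustrative sketch: grid search over decision tree hyperparameters,
# then compare the best and worst combinations instead of just reporting
# the improvement over the defaults.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)

# Arbitrary example grid
grid = {"max_depth": [1, 3, 5, None],
        "min_samples_leaf": [1, 5, 20]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), grid, cv=5)
search.fit(X, y)

# Mean cross-validated score of every parameter combination
scores = search.cv_results_["mean_test_score"]
best, worst = scores.max(), scores.min()
print(f"best {best:.3f} vs worst {worst:.3f} (spread {best - worst:.3f})")
```

The spread between `best` and `worst` shows how much damage a badly chosen combination can do, which is often a more honest picture of the stakes than the (frequently small) gain over the defaults.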
I just stumbled over this: there is a nice example of how to optimize a regression here, which points at some data where I'd assume that optimization actually allows you to improve the regression: