Hyper Parameters Search

Search for the best hyper parameters for your model and data. There are two approaches available for the search. Randomized Search draws random combinations of parameters and evaluates them, while Grid Search checks every combination of parameters. Randomized Search is faster because it does not check all possible combinations.
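The difference between the two approaches can be sketched with scikit-learn's `GridSearchCV` and `RandomizedSearchCV` (the dataset, model, and parameter values below are illustrative, not part of the recipe):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=42)
params = {"max_depth": [2, 4, 6, 8], "min_samples_split": [2, 5, 10]}

# Grid Search: fits all 4 * 3 = 12 parameter combinations
grid = GridSearchCV(DecisionTreeClassifier(random_state=42), params, cv=3)
grid.fit(X, y)

# Randomized Search: draws only n_iter=5 combinations, so it is faster
rand = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=42), params, n_iter=5, cv=3, random_state=42
)
rand.fit(X, y)

print(len(grid.cv_results_["params"]))  # 12 candidates checked
print(len(rand.cv_results_["params"]))  # 5 candidates checked
```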

Please select the model that you would like to tune, and the recipe will propose a set of hyper parameters to tune. You can set the cross validation strategy and the evaluation metric. If verbose output is selected, each iteration will be printed. You can use the best hyper parameters to train a model on the full dataset.
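A minimal sketch of these options, assuming a scikit-learn search (the dataset and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=42)

# cross validation strategy
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

search = GridSearchCV(
    DecisionTreeClassifier(random_state=42),
    param_grid={"max_depth": [3, 5, 7]},
    cv=cv,
    scoring="accuracy",  # evaluation metric
    verbose=1,           # print progress for each iteration
)
search.fit(X, y)

# train a model on the full dataset with the best hyper parameters
best_model = DecisionTreeClassifier(random_state=42, **search.best_params_)
best_model.fit(X, y)
```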


Required packages

You need the packages below to use the code generated by this recipe. All packages are automatically installed in MLJAR Studio.


Interactive recipe

You can use the interactive recipe below to generate code. This recipe is available in MLJAR Studio.

In the recipe below, we assume that you have the following variables available in your notebook:

  • X (type DataFrame)
  • y (type Series)
  • tree_classifier (type DecisionTreeClassifier)
  • tree_regressor (type DecisionTreeRegressor)
  • forest_classifier (type RandomForestClassifier)
  • forest_regressor (type RandomForestRegressor)
  • knn_classifier (type KNeighborsClassifier)
  • knn_regressor (type KNeighborsRegressor)
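The variable types above match the scikit-learn estimators and pandas containers; a minimal sketch of how such variables could be created (using the iris dataset purely as an example):

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

data = load_iris(as_frame=True)
X = data.data    # pandas DataFrame with features
y = data.target  # pandas Series with the target

tree_classifier = DecisionTreeClassifier()
tree_regressor = DecisionTreeRegressor()
forest_classifier = RandomForestClassifier()
forest_regressor = RandomForestRegressor()
knn_classifier = KNeighborsClassifier()
knn_regressor = KNeighborsRegressor()
```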

Python code

# Python code will be here

Code explanation

  1. Create validation strategy.
  2. Setup grid with parameters that will be checked.
  3. Create search strategy object.
  4. Run hyper parameters search by fitting to the data.
  5. Display best performing score and parameters.
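The five steps above can be sketched as follows (a scikit-learn example with an illustrative model and parameter grid, not the exact recipe output):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=42)

# 1. Create validation strategy.
cv = KFold(n_splits=5, shuffle=True, random_state=42)

# 2. Setup grid with parameters that will be checked.
param_grid = {"n_estimators": [50, 100], "max_depth": [None, 5]}

# 3. Create search strategy object.
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=cv)

# 4. Run hyper parameters search by fitting to the data.
search.fit(X, y)

# 5. Display best performing score and parameters.
print(search.best_score_)
print(search.best_params_)
```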

Please be aware that this step might be time consuming, because a model is fitted for each combination of hyper parameters. What is more, each model is fitted with cross validation, so fit is called once for every cross validation split as well.
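As a quick back-of-the-envelope check (illustrative numbers, not from the recipe), the total number of model fits is the number of parameter combinations multiplied by the number of cross validation splits:

```python
# e.g. 4 values of max_depth * 3 values of min_samples_split
n_combinations = 4 * 3
n_folds = 5
total_fits = n_combinations * n_folds
print(total_fits)  # 60 model fits before the final refit on the best parameters
```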