Automated Machine Learning
Compete Mode

Achieve the best predictive accuracy with AutoML.
State-of-the-art performance.

Compete Mode

The mljar AutoML framework can work in the Compete mode.
The Compete mode is perfect for getting the best-performing ML pipeline.



Compete

The Compete mode provides a high-accuracy ML pipeline. It is a perfect mode for solutions where a small percentage of improvement has a big impact. Example use cases for the Compete mode are trading and data science competitions.

Feature Engineering


Use advanced feature engineering techniques to improve performance. Enhance your data with Golden Features and K-Means Features. Use Feature Selection to train ML models only on relevant features.

Algorithm Selection


The Compete mode uses many different ML algorithms. It ensembles and stacks them to get the best score. Both techniques are properly implemented to avoid overfitting.


Machine Learning Algorithms

The Compete mode uses the following Machine Learning algorithms:
Decision Tree, Random Forest, Extra Trees, Xgboost, LightGBM, CatBoost,
Neural Network, Nearest Neighbors, Ensemble, Stacked Ensemble.
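If you want to restrict the Compete mode to a subset of these algorithms, the AutoML constructor appears to accept an algorithms list. A minimal sketch with assumed algorithm name strings (verify the exact names in the mljar-supervised documentation):

from supervised import AutoML

# Limit the search to three gradient boosting libraries (names assumed)
automl = AutoML(mode="Compete", algorithms=["Xgboost", "LightGBM", "CatBoost"])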


The AutoML Code

Just set mode="Compete" during AutoML initialization and you are set!

"""AutoML Compete code """

from supervised import AutoML

# Initialize AutoML in Compete Mode
automl = AutoML(mode="Compete")
automl.fit(X, y)

Compete Advantages

Automated Validation Adjustment

The validation strategy is set automatically based on the time budget and machine speed. The validation can be a 75%/25% train/test split, 5-fold Cross-Validation, or 10-fold Cross-Validation.
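If you prefer to pin the validation yourself instead of relying on the automatic adjustment, the constructor accepts a validation_strategy dictionary. A minimal sketch, assuming the key names below (verify them in the mljar-supervised documentation for your version):

from supervised import AutoML

# Force 10-fold stratified Cross-Validation (key names assumed)
automl = AutoML(
    mode="Compete",
    validation_strategy={
        "validation_type": "kfold",  # or "split" for a train/test split
        "k_folds": 10,
        "shuffle": True,
        "stratify": True,
    },
)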

ML Task Detection

The AutoML can detect the type of Machine Learning task to be solved based on the target feature values. The supported ML tasks are: binary classification, multi-class classification, and regression.
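The detected task can also be set explicitly. A short sketch, assuming the constructor takes an ml_task argument with the values listed in the comment (assumed names; check the documentation):

from supervised import AutoML

# Assumed ml_task values: "auto", "binary_classification",
# "multiclass_classification", "regression"
automl = AutoML(mode="Compete", ml_task="binary_classification")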

Automated Metric Selection

Based on the selected ML task type, the proper metric for optimization will be chosen. The classification tasks will optimize LogLoss, while the regression tasks will minimize RMSE.
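The metric can also be chosen manually. A sketch assuming the constructor takes an eval_metric string (the metric name used here is an assumption; consult the documentation for supported values):

from supervised import AutoML

# Optimize AUC instead of the default LogLoss for a classification task
automl = AutoML(mode="Compete", eval_metric="auc")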

Feature Engineering

In the Compete mode, the AutoML will use feature engineering techniques such as Golden Features and K-Means Features. They improve the final model performance.
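Each technique can be switched on or off individually. A sketch assuming boolean flags named golden_features, kmeans_features, and features_selection (assumed parameter names; check the documentation):

from supervised import AutoML

# Keep Golden Features and Feature Selection, skip K-Means Features
automl = AutoML(
    mode="Compete",
    golden_features=True,
    kmeans_features=False,
    features_selection=True,
)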

Hyperparameters Tuning

Tune ML models by checking different sets of hyperparameters. The AutoML tunes ML algorithms in three steps. First, it checks the performance with a default set of hyperparameters. Then it does a random search over a defined set of hyperparameters. In the last step, it uses hill-climbing to fine-tune the hyperparameters.
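The size of each tuning step can be controlled from the constructor. A sketch assuming the parameters start_random_models, hill_climbing_steps, and top_models_to_improve (assumed names; verify them in the documentation):

from supervised import AutoML

# Try 10 random hyperparameter sets per algorithm, then refine the
# best 3 models in 2 hill-climbing steps (parameter names assumed)
automl = AutoML(
    mode="Compete",
    start_random_models=10,
    hill_climbing_steps=2,
    top_models_to_improve=3,
)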

Boost On Errors

In this technique, the ML model is trained with sample weights boosted on the previous best model's errors. It is very similar to the AdaBoost algorithm, except that there are only two iterations (one model with all sample weights equal to 1, and a second model with sample weights boosted on the previous model's errors).
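The step can be toggled explicitly. A one-line sketch assuming a boolean boost_on_errors flag (assumed name; verify against the documentation):

from supervised import AutoML

# Enable the Boost On Errors step in Compete mode (flag name assumed)
automl = AutoML(mode="Compete", boost_on_errors=True)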

Ensembling

The Compete mode uses Ensemble Averaging and Ensemble Stacking methods. They are only available when Cross-Validation is used.
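Both ensembling steps can be controlled from the constructor. A sketch assuming boolean flags train_ensemble and stack_models (assumed names; check the documentation):

from supervised import AutoML

# Keep the averaged ensemble but skip stacking (flag names assumed)
automl = AutoML(mode="Compete", train_ensemble=True, stack_models=False)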

Customization

You can further customize the Compete settings. You can easily switch selected procedures on or off by setting the proper parameters during AutoML initialization, as shown in the example below.
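For example, the switches from the sections above can be combined in a single call, together with an overall time budget. A sketch with assumed parameter names (verify them in the mljar-supervised documentation):

from supervised import AutoML

# Compete mode with selected procedures switched on/off (names assumed)
automl = AutoML(
    mode="Compete",
    golden_features=True,
    kmeans_features=True,
    boost_on_errors=False,
    stack_models=False,
    total_time_limit=4 * 3600,  # time budget in seconds
)

# X - training features, y - target values
automl.fit(X, y)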

Check more mljar features

Golden Features

K-Means Features

Model Ensembling

ML Explainability

Automatic Documentation