Physics

AI and AutoML in High Energy Physics: Collider Event Classification with mljar-supervised

  • AI in high energy physics
  • AutoML collider analysis
  • machine learning particle physics
  • mljar-supervised research
  • event classification deep learning
  • ROC AUC physics models
  • ensemble learning in physics
  • automated machine learning experiments
  • tabular data physics AI

MLJAR tools were used in the following publication.

Application of Deep Learning Technique to an Analysis of Hard Scattering Processes at Colliders

Lev Dudko, Petr Volkov, Georgii Vorotnikov, Andrei Zaborenko

Skobeltsyn Institute of Nuclear Physics, M.V. Lomonosov Moscow State University, Moscow, Russian Federation

This research explores the application of artificial intelligence and Automated Machine Learning (AutoML) to classification tasks in high energy collider physics. Using the mljar-supervised framework, the authors evaluated automated ensemble models against tuned deep neural networks for top-quark event identification and QCD background suppression. AutoML models achieved competitive ROC AUC performance and in some cases outperformed manually tuned DNNs on test datasets, demonstrating the viability of automated model selection in particle physics analysis. The study highlights how machine learning, ensemble methods, and AutoML pipelines can accelerate scientific discovery in large-scale physics experiments.
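The ROC AUC metric used to compare the AutoML ensembles against the tuned DNNs can be computed from classifier scores alone. The sketch below is illustrative (not code from the paper): it uses the rank-statistic equivalence AUC = P(score of a signal event > score of a background event), counting ties as one half.

```python
def roc_auc(scores, labels):
    """ROC AUC for a binary signal (label 1) vs background (label 0) classifier,
    computed as the fraction of signal/background score pairs ranked correctly."""
    signal = [s for s, y in zip(scores, labels) if y == 1]
    background = [s for s, y in zip(scores, labels) if y == 0]
    if not signal or not background:
        raise ValueError("need at least one signal and one background event")
    wins = 0.0
    for s in signal:
        for b in background:
            if s > b:
                wins += 1.0
            elif s == b:
                wins += 0.5  # ties count as half a correct ranking
    return wins / (len(signal) * len(background))

# Toy example: the classifier assigns higher scores to signal-like events.
scores = [0.9, 0.8, 0.3, 0.1]
labels = [1, 1, 0, 0]
print(roc_auc(scores, labels))  # 1.0 — perfect separation
```

In mljar-supervised this metric is selected via the `eval_metric="auc"` option of the `AutoML` class, so the framework optimizes and reports the same quantity directly.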

DLCP’21 – Deep Learning in Computational Physics Workshop • September 14, 2021

DOI: https://doi.org/10.48550/arXiv.2109.08520

Research Domains

Explore peer-reviewed and applied machine learning studies across diverse domains, including healthcare analytics, financial modeling, manufacturing optimization, and structured data classification problems.

Why Researchers and ML Engineers Choose MLJAR Studio

A private, AI-powered Python notebook designed for reproducible machine learning experiments, structured benchmarking, and applied research workflows - fully under your control.

Reproducible Machine Learning Experiments

Design structured pipelines, save experiment runs, and compare results across iterations with full transparency. Every validation setup, hyperparameter configuration, and model benchmark is recorded - making your research repeatable and defensible.

Local-First Execution & Data Control

Run all workflows directly on your machine. Sensitive datasets remain private, with no mandatory cloud uploads or external AI services required. Maintain full control over runtime environments and compliance requirements.

Autonomous Model Benchmarking & Optimization

Automatically compare candidate models, perform cross-validation, and run hyperparameter optimization while retaining full visibility into generated Python code and evaluation metrics. Accelerate experimentation without sacrificing methodological rigor.
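The cross-validation step described above can be sketched in plain Python: generate k-fold train/validation index splits so every candidate model is scored on identical folds. This is a hypothetical illustration, not MLJAR Studio's internal code; the function name `kfold_indices` is invented for the example.

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, valid_idx) index pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        valid = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, valid
        start += size

# Each candidate model is fit on `train` and scored on `valid`;
# the mean validation metric across folds ranks the candidates.
for train, valid in kfold_indices(10, 5):
    print(len(train), len(valid))  # 8 2 on every fold
```

Fixing the folds before comparing models is what makes the benchmark fair: differences in the validation metric reflect the models, not the data split.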

Build Research-Grade ML Workflows Locally

Run automated model benchmarking, hyperparameter optimization, and autonomous experiments while keeping full control over your data.