Education
Machine Learning and Explainable AI for Student Dropout Prediction: An AutoML Research Case Study in Higher Education
- machine learning
- artificial intelligence
- AutoML
- automated machine learning
- explainable AI
- educational data mining
- student dropout prediction
- counterfactual explanations
- SHAP
MLJAR tools were used in the following publication.
Dropout insight: Educational risk dashboard with counterfactual explanations
Marta Muñoz-Muñoz, Christian Luna, Juan A. Lara, Cristóbal Romero
University of Córdoba, Department of Computer Science and Numerical Analysis, Spain
This peer-reviewed research explores machine learning and artificial intelligence for student dropout prediction in higher education. The authors developed an interactive dashboard that integrates AutoML, SHAP-based explainable AI, and counterfactual analysis to support predictive modeling and data-driven decision-making. The system automatically selects the best model and provides both individual and group-level explanations. The study highlights how AI and automated machine learning can transform educational data analysis into actionable insights.
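The counterfactual analysis mentioned above answers the question "what minimal change would move this student out of the at-risk group?". A minimal sketch of the idea, using scikit-learn on synthetic data (the feature names and the greedy search are illustrative assumptions, not the paper's method):

```python
# Sketch of a counterfactual explanation for a dropout classifier.
# Assumption: synthetic data and illustrative feature names, not the
# actual dashboard's model or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["attendance_rate", "avg_grade", "credits_passed"]

# Synthetic cohort: higher values lower the dropout risk (label 1 = dropout).
X = rng.uniform(0, 1, size=(500, 3))
y = (X.sum(axis=1) + rng.normal(0, 0.2, 500) < 1.3).astype(int)

model = LogisticRegression().fit(X, y)

def counterfactual(x, step=0.05, max_iter=100):
    """Greedily nudge one feature at a time until the prediction flips."""
    x = x.copy()
    for _ in range(max_iter):
        if model.predict([x])[0] == 0:  # no longer predicted at risk
            return x
        # Pick the feature whose increase most reduces dropout probability.
        probs = [model.predict_proba([x + step * np.eye(3)[i]])[0, 1]
                 for i in range(3)]
        x[np.argmin(probs)] += step
        x = np.clip(x, 0.0, 1.0)
    return x

at_risk = X[model.predict(X) == 1][0]
cf = counterfactual(at_risk)
for name, before, after in zip(features, at_risk, cf):
    print(f"{name}: {before:.2f} -> {after:.2f}")
```

The per-feature deltas read as actionable advice ("raise attendance from 0.20 to 0.45"), which is the role counterfactuals play in the dashboard's individual-level explanations.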
SoftwareX • February 2, 2026
Research Domains
Explore peer-reviewed and applied machine learning studies across diverse domains, including healthcare analytics, financial modeling, manufacturing optimization, and structured data classification problems.
Why Researchers and ML Engineers Choose MLJAR Studio
A private, AI-powered Python notebook designed for reproducible machine learning experiments, structured benchmarking, and applied research workflows - fully under your control.
Reproducible Machine Learning Experiments
Design structured pipelines, save experiment runs, and compare results across iterations with full transparency. Every validation setup, hyperparameter configuration, and model benchmark is recorded - making your research repeatable and defensible.
Local-First Execution & Data Control
Run all workflows directly on your machine. Sensitive datasets remain private, with no mandatory cloud uploads or external AI services required. Maintain full control over runtime environments and compliance requirements.
Autonomous Model Benchmarking & Optimization
Automatically compare candidate models, perform cross-validation, and run hyperparameter optimization while retaining full visibility into generated Python code and evaluation metrics. Accelerate experimentation without sacrificing methodological rigor.
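The benchmarking loop described above can be sketched with plain scikit-learn, standing in for the notebook's generated code; the candidate models and parameter grids here are illustrative assumptions:

```python
# Minimal sketch of automated model comparison with cross-validation and
# hyperparameter search. Assumption: scikit-learn stand-in; models and
# grids are illustrative, not MLJAR Studio's actual search space.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "forest": (RandomForestClassifier(random_state=0),
               {"n_estimators": [50, 100], "max_depth": [3, None]}),
}

# For each candidate: 5-fold cross-validation over its hyperparameter grid.
results = {}
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5, scoring="accuracy").fit(X, y)
    results[name] = (search.best_score_, search.best_params_)
    print(f"{name}: {search.best_score_:.3f} with {search.best_params_}")

best = max(results, key=lambda k: results[k][0])
print("Best model:", best)
```

Keeping the loop as explicit Python, rather than a black-box call, is what preserves the visibility into evaluation metrics and generated code that the paragraph above emphasizes.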
Build Research-Grade ML Workflows Locally
Run automated model benchmarking, hyperparameter optimization, and autonomous experiments while keeping full control over your data.