
Healthcare

AutoML and Ensemble Learning for Post-Heart Failure Survival Prediction Using MLJAR


MLJAR tools were used in the following publication.

Amalgamation of Auto Machine Learning and Ensemble Approaches to Achieve State-of-the-Art Post-Heart Failure Survival Predictions

Ali Haider Bangash, Ali Haider Shah, Arshiya Fatima, Saiqa Zehra, Syed Mohammad Mehmood Abbas, Hashir Fahim Khawaja, Muhammad Ashraf, Adil Baloch

Shifa College of Medicine, Shifa Tameer-e-Millat University, Islamabad, Pakistan

This study, published in the American Heart Journal, explores the use of MLJAR AutoML combined with ensemble approaches to predict post-heart failure mortality. The authors evaluated multiple classification algorithms on a cohort of 299 heart failure patients across several clinical scenarios. The best-performing ensemble models achieved an AUROC of up to 0.89, demonstrating that automated machine learning combined with ensemble methods can substantially improve survival prediction and risk stratification in heart failure management.
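To make the approach concrete, here is a minimal sketch of evaluating a soft-voting ensemble with cross-validated AUROC, the metric reported in the study. This is an illustration only, not the authors' pipeline: the data are synthetic stand-ins shaped like a small clinical cohort, and the choice of base learners is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic tabular data sized like the study's cohort
# (299 rows, 12 features, binary survival outcome) -- not the real data.
X, y = make_classification(n_samples=299, n_features=12, n_informative=6,
                           weights=[0.68, 0.32], random_state=42)

# Combine heterogeneous base learners via soft voting on predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
    ],
    voting="soft",
)

# 5-fold cross-validated AUROC for the ensemble.
scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUROC: {scores.mean():.3f}")
```

In practice, an AutoML system such as MLJAR searches over many more candidate models and ensemble compositions than this fixed three-member example.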

American Heart Journal • December 1, 2021

DOI: 10.1016/j.ahj.2021.10.021

Research Domains

Explore peer-reviewed and applied machine learning studies across diverse domains, including healthcare analytics, financial modeling, manufacturing optimization, and structured data classification problems.

Why Researchers and ML Engineers Choose MLJAR Studio

A private, AI-powered Python notebook designed for reproducible machine learning experiments, structured benchmarking, and applied research workflows - fully under your control.

Reproducible Machine Learning Experiments

Design structured pipelines, save experiment runs, and compare results across iterations with full transparency. Every validation setup, hyperparameter configuration, and model benchmark is recorded - making your research repeatable and defensible.
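The record-keeping idea above can be sketched in a few lines: log each run's validation setup, hyperparameters, and metrics to an append-only file, plus a hash of the configuration so identical setups are easy to spot. The file name and schema here are hypothetical illustrations, not MLJAR Studio's own storage format.

```python
import datetime
import hashlib
import json

# One experiment run: validation setup, hyperparameters, and results.
run = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "validation": {"strategy": "5-fold CV", "shuffle": True, "seed": 42},
    "hyperparameters": {"model": "RandomForest", "n_estimators": 200, "max_depth": 8},
    "metrics": {"auroc": 0.87},
}

# A content hash of the settings makes it easy to detect when
# two runs used identical validation and hyperparameter configs.
run["config_hash"] = hashlib.sha256(
    json.dumps({k: run[k] for k in ("validation", "hyperparameters")},
               sort_keys=True).encode()
).hexdigest()[:12]

# Append as one JSON line per run, so the log doubles as a comparison table.
with open("run_log.json", "a") as f:
    f.write(json.dumps(run) + "\n")
```

Loading the JSON-lines log back into a table then gives a full, replayable history of every iteration.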

Local-First Execution & Data Control

Run all workflows directly on your machine. Sensitive datasets remain private, with no mandatory cloud uploads or external AI services required. Maintain full control over runtime environments and compliance requirements.

Autonomous Model Benchmarking & Optimization

Automatically compare candidate models, perform cross-validation, and run hyperparameter optimization while retaining full visibility into generated Python code and evaluation metrics. Accelerate experimentation without sacrificing methodological rigor.
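The benchmarking loop described above can be sketched generically: score every candidate model with the same cross-validation protocol and metric, then tune the winner with a hyperparameter search. This is a plain scikit-learn illustration of the workflow, not MLJAR Studio's internal code; the candidate set and parameter grids are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Candidate models to benchmark under identical conditions.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
}

# Same 5-fold CV and same metric for every candidate.
leaderboard = {
    name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    for name, model in candidates.items()
}

# Tune the best candidate with a small grid search.
param_grids = {
    "logreg": {"C": [0.1, 1.0, 10.0]},
    "tree": {"max_depth": [3, 5, None]},
    "forest": {"n_estimators": [100, 300]},
}
best_name = max(leaderboard, key=leaderboard.get)
search = GridSearchCV(candidates[best_name], param_grids[best_name],
                      cv=5, scoring="roc_auc")
search.fit(X, y)
print(leaderboard)
print(best_name, search.best_params_, round(search.best_score_, 3))
```

Because every step is explicit Python, the generated code and evaluation metrics remain fully inspectable, which is the point the section makes about retaining methodological rigor.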

Build Research-Grade ML Workflows Locally

Run automated model benchmarking, hyperparameter optimization, and autonomous experiments while keeping full control over your data.