Manufacturing

Integrating Domain Knowledge and AutoML for Manufacturing Cost Estimation: AI in Make-to-Order Production

  • machine learning
  • artificial intelligence
  • AutoML
  • manufacturing cost estimation
  • make-to-order production
  • text mining
  • CRISP-DM
  • industrial AI
  • ensemble models
  • predictive analytics in manufacturing

MLJAR tools were used in the following publication.

Intégration de connaissances du domaine et de l’apprentissage automatique pour l’estimation des paramètres de fabrication (Integration of Domain Knowledge and Machine Learning for the Estimation of Manufacturing Parameters)

Abdoul Rahime Diallo, Abdourahim Sylla

Arts et Métiers Institute of Technology, Université de Lorraine, LCFC, HESAM Université, Metz, France | Université Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, France

This research presents an AI-driven framework for estimating manufacturing parameters in make-to-order (MTO) production environments by combining structured ERP data, unstructured textual specifications, and expert domain knowledge. Based on the CRISP-DM methodology, the approach integrates text mining, feature engineering, and automated machine learning (AutoML) to improve fabrication time and cost prediction accuracy. Using real industrial data from over 23,000 past estimations, the study demonstrates how domain-informed data preparation and ensemble models can significantly enhance reliability and reduce expert workload. The work highlights how artificial intelligence and machine learning can transform cost estimation, bidding processes, and decision support in industrial manufacturing.

CIGI QUALITA MOSIM 2023 • June 1, 2023

DOI: 10.60662/pyvs-cj63
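The workflow the abstract describes, merging structured ERP fields with unstructured textual specifications before model training, can be sketched with a generic pipeline. The column names and toy records below are hypothetical, and a scikit-learn ensemble stands in for the paper's AutoML-selected models; this is an illustration of the approach, not the study's actual code.

```python
# Hypothetical sketch: combine structured ERP fields with free-text
# specifications, then fit an ensemble regressor for fabrication time.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for historical estimations (the study used 23,000+ records).
df = pd.DataFrame({
    "material": ["steel", "alu", "steel", "alu"],
    "quantity": [10, 5, 20, 8],
    "spec_text": [
        "weld two flanges, drill 8 holes",
        "bend sheet, anodize surface",
        "weld frame, paint finish",
        "cut profile, drill 4 holes",
    ],
    "fabrication_hours": [12.5, 6.0, 22.0, 7.5],
})

# Text mining (TF-IDF) and categorical encoding feed one feature matrix;
# numeric ERP columns pass through unchanged.
features = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["material"]),
    ("txt", TfidfVectorizer(), "spec_text"),
], remainder="passthrough")

model = Pipeline([
    ("features", features),
    ("regressor", RandomForestRegressor(n_estimators=100, random_state=0)),
])
X = df.drop(columns="fabrication_hours")
model.fit(X, df["fabrication_hours"])
pred = model.predict(X)
```

In practice an AutoML layer (such as MLJAR's) would replace the hand-picked regressor, searching over model families and hyperparameters on the prepared features.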

Research Domains

Explore peer-reviewed and applied machine learning studies across diverse domains, including healthcare analytics, financial modeling, manufacturing optimization, and structured data classification problems.

Why Researchers and ML Engineers Choose MLJAR Studio

A private, AI-powered Python notebook designed for reproducible machine learning experiments, structured benchmarking, and applied research workflows, fully under your control.

Reproducible Machine Learning Experiments

Design structured pipelines, save experiment runs, and compare results across iterations with full transparency. Every validation setup, hyperparameter configuration, and model benchmark is recorded, making your research repeatable and defensible.

Local-First Execution & Data Control

Run all workflows directly on your machine. Sensitive datasets remain private, with no mandatory cloud uploads or external AI services required. Maintain full control over runtime environments and compliance requirements.

Autonomous Model Benchmarking & Optimization

Automatically compare candidate models, perform cross-validation, and run hyperparameter optimization while retaining full visibility into generated Python code and evaluation metrics. Accelerate experimentation without sacrificing methodological rigor.
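The benchmarking loop described above, fitting several candidate models on identical cross-validation folds and keeping every metric visible, can be sketched generically. The candidates, metric, and synthetic data are illustrative assumptions, not the Python code MLJAR Studio actually generates.

```python
# Illustrative benchmarking sketch: compare candidate models under a shared
# cross-validation setup and keep the per-model metric for inspection.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)  # same folds for every model

candidates = {
    "ridge": Ridge(alpha=1.0),
    "tree": DecisionTreeRegressor(random_state=0),
    "forest": RandomForestRegressor(n_estimators=50, random_state=0),
}

# Record the mean R^2 per model so the comparison stays transparent.
leaderboard = {}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    leaderboard[name] = scores.mean()

best = max(leaderboard, key=leaderboard.get)
```

Fixing the fold object once guarantees every candidate is scored on the same splits, which is what makes the resulting leaderboard a fair comparison.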

Build Research-Grade ML Workflows Locally

Run automated model benchmarking, hyperparameter optimization, and autonomous experiments while keeping full control over your data.