
Computer Vision

AI and AutoML for Image-to-Image Translation Quality Assessment: Multi-Method Fusion in Computer Vision

  • AI in computer vision
  • AutoML image quality assessment
  • machine learning GAN evaluation
  • image-to-image translation
  • DISTS perceptual metric
  • Image Quality Assessment IQA
  • gradient boosting computer vision
  • MLJAR AutoML framework
  • model monitoring for generative AI
  • production AI validation

MLJAR tools were used in the following publication.

Paired Image-to-Image Translation Quality Assessment Using Multi-Method Fusion

Stefan Borasinski, Esin Yavuz, Sébastien Béhuret

Cyanapse Limited, Brighton, United Kingdom | ZEG.ai Ltd., London, United Kingdom

This research introduces an AI-driven Multi-Method Fusion (MMF) framework for automated quality assessment of image-to-image translation models in computer vision. By combining multiple Image Quality Assessment (IQA) metrics and training gradient-boosted ensembles (LightGBM, CatBoost, XGBoost) with MLJAR AutoML, the system predicts DISTS perceptual similarity scores without requiring ground truth images at inference time. The approach enables scalable, production-ready evaluation of GAN-generated images across tasks such as day-to-night conversion and style transfer, achieving strong predictive correlation (R² up to 0.72). This work demonstrates how machine learning, ensemble modeling, and automated hyperparameter optimization can transform AI model monitoring and validation in real-world image synthesis pipelines.
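A minimal sketch of the core idea described above: regress a full-reference perceptual target from reference-free IQA features using a gradient-boosted model. Everything here is illustrative, not the paper's implementation - synthetic features stand in for real IQA metric outputs, and scikit-learn's GradientBoostingRegressor stands in for the LightGBM/CatBoost/XGBoost ensembles tuned by MLJAR AutoML:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in: each row holds the scores of several IQA metrics
# computed on one generated image; the target is the DISTS score that
# would normally require a ground-truth reference image.
n_images, n_metrics = 500, 6
X = rng.normal(size=(n_images, n_metrics))
y = 0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] \
    + 0.05 * rng.normal(size=n_images)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted regressor as a stand-in for the tuned ensembles.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# At inference time, DISTS is predicted from the IQA features alone,
# so no ground-truth reference image is needed.
pred = model.predict(X_test)
print(f"R^2 on held-out images: {r2_score(y_test, pred):.2f}")
```

In the paper's setting, the feature columns would be real no-reference IQA scores and the target real DISTS values computed on a training set that does have references; once trained, the fused model evaluates new translations without them.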

arXiv • May 9, 2022

DOI: 10.48550/arXiv.2205.04186

Research Domains

Explore peer-reviewed and applied machine learning studies across diverse domains, including healthcare analytics, financial modeling, manufacturing optimization, and structured data classification problems.

Why Researchers and ML Engineers Choose MLJAR Studio

A private, AI-powered Python notebook designed for reproducible machine learning experiments, structured benchmarking, and applied research workflows - fully under your control.

Reproducible Machine Learning Experiments

Design structured pipelines, save experiment runs, and compare results across iterations with full transparency. Every validation setup, hyperparameter configuration, and model benchmark is recorded - making your research repeatable and defensible.

Local-First Execution & Data Control

Run all workflows directly on your machine. Sensitive datasets remain private, with no mandatory cloud uploads or external AI services required. Maintain full control over runtime environments and compliance requirements.

Autonomous Model Benchmarking & Optimization

Automatically compare candidate models, perform cross-validation, and run hyperparameter optimization while retaining full visibility into generated Python code and evaluation metrics. Accelerate experimentation without sacrificing methodological rigor.

Build Research-Grade ML Workflows Locally

Run automated model benchmarking, hyperparameter optimization, and autonomous experiments while keeping full control over your data.