NLP

AutoML and Ensemble Transformers for Sentiment Analysis in Mexican Tourism Texts

  • AutoML sentiment analysis
  • mljar-supervised NLP
  • ensemble BERT models
  • machine learning Spanish texts
  • AI tourism sentiment analysis
  • AutoML transformers
  • Macro F1 sentiment analysis
  • MAE regression NLP
  • automated hyperparameter tuning NLP
  • IberLEF 2022 challenge

MLJAR tools were used in the following publication.

AutoML and Ensemble Transformers for Sentiment Analysis in Mexican Tourism Texts

Victor Gómez-Espinos, Victor Muñiz-Sanchez, Adrian Pastor López-Monroy

Mathematics Research Center (CIMAT), Monterrey, Mexico; Mathematics Research Center (CIMAT), Guanajuato, Mexico

This study presents a hybrid framework combining ensemble BERT transformers and mljar-supervised AutoML for sentiment analysis in Mexican tourism texts. High-level contextual embeddings were extracted with an ensemble of fine-tuned Spanish BERT models and then used as features for automated machine learning. The AutoML-driven ensemble improved both classification and regression performance, achieving a Macro F1-score of up to 0.9888 for opinion-type classification and an MAE of 0.2440 for polarity prediction. The proposed methodology obtained third place in the IberLEF 2022 sentiment analysis challenge.
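The pipeline described above can be sketched at a high level: embeddings from several fine-tuned transformers are fused and handed to a downstream learner, which is scored with the same metric reported in the paper (Macro F1). The sketch below is illustrative only: it uses randomly generated arrays as stand-ins for BERT `[CLS]` embeddings, and a plain logistic regression in place of the mljar-supervised AutoML stage, so shapes, dimensions, and the fusion-by-concatenation choice are assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for [CLS] embeddings from two fine-tuned Spanish BERT models.
# In the actual study these come from transformer forward passes; here we
# simulate 768-dimensional vectors whose class means differ slightly.
n, dim = 400, 768
y = rng.integers(0, 2, size=n)
emb_a = rng.normal(0.0, 1.0, (n, dim)) + y[:, None] * 0.3
emb_b = rng.normal(0.0, 1.0, (n, dim)) + y[:, None] * 0.3

# Ensemble fusion: concatenate the embeddings from each model into one
# feature matrix for the downstream learner.
X = np.hstack([emb_a, emb_b])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Stand-in for the AutoML stage (the paper used mljar-supervised here).
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
macro_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
print(f"Macro F1: {macro_f1:.4f}")
```

In the published system, the classifier step would instead be something like mljar-supervised's `AutoML` (imported via `from supervised.automl import AutoML`), which benchmarks and tunes multiple candidate models automatically rather than fitting one fixed estimator.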

IberLEF 2022 (CEUR Workshop Proceedings) • September 1, 2022

Paper: https://ceur-ws.org/Vol-3202/restmex-paper5.pdf

Research Domains

Explore peer-reviewed and applied machine learning studies across diverse domains, including healthcare analytics, financial modeling, manufacturing optimization, and structured data classification problems.

Why Researchers and ML Engineers Choose MLJAR Studio

A private, AI-powered Python notebook designed for reproducible machine learning experiments, structured benchmarking, and applied research workflows, fully under your control.

Reproducible Machine Learning Experiments

Design structured pipelines, save experiment runs, and compare results across iterations with full transparency. Every validation setup, hyperparameter configuration, and model benchmark is recorded, making your research repeatable and defensible.

Local-First Execution & Data Control

Run all workflows directly on your machine. Sensitive datasets remain private, with no mandatory cloud uploads or external AI services required. Maintain full control over runtime environments and compliance requirements.

Autonomous Model Benchmarking & Optimization

Automatically compare candidate models, perform cross-validation, and run hyperparameter optimization while retaining full visibility into generated Python code and evaluation metrics. Accelerate experimentation without sacrificing methodological rigor.

Build Research-Grade ML Workflows Locally

Run automated model benchmarking, hyperparameter optimization, and autonomous experiments while keeping full control over your data.