MLJAR Studio Changelog

Version 1.0.0

Conversational notebooks, AutoLab experiments, and flexible AI providers

MLJAR Studio 1.0.0 introduces conversational notebooks with your own AI Data Analyst, AutoLab experiments for autonomous machine learning optimization, and flexible LLM provider settings with support for MLJAR, OpenAI, and Ollama-based open models.

Highlights

  • Added conversational notebooks with AI Data Analyst
  • Added AutoLab experiments for autonomous ML pipeline optimization
  • Added external LLM provider configuration with support for MLJAR, OpenAI, and Ollama-based models

Added

  • Added conversational notebooks with your own AI Data Analyst.
  • You can talk to the AI Data Analyst in natural language, and it returns Python code to help you analyze data.
  • The AI helps you get insights from the results while keeping the workflow reproducible.
  • Added AutoLab experiments, an autonomous AI system for optimizing machine learning pipelines.
  • AutoLab can improve feature engineering, construct new features, and assist with model interpretation.
  • Added the option to configure an external LLM provider.
  • MLJAR is the default AI provider.
  • You can also configure OpenAI with your own API keys.
  • You can connect Ollama and use open models such as Kimi, GLM, Qwen, or GPT OSS.
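For the Ollama option above, a locally running Ollama server exposes a simple REST endpoint that external-provider integrations typically target. The sketch below shows the shape of a request to Ollama's `/api/generate` endpoint; the model tag is an example, and how MLJAR Studio itself wires this up internally is not shown here.

```python
import json

# Default address of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_ollama_request(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    "stream": False asks for a single JSON response instead of a
    streamed sequence of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}


# Example payload; "qwen2.5:7b" is an illustrative model tag.
payload = build_ollama_request("qwen2.5:7b", "Describe this dataset in one sentence.")
print(json.dumps(payload))
```

Sending this payload as a POST body to `OLLAMA_URL` (e.g. with `urllib.request` or `requests`) returns the model's completion, provided the corresponding model has been pulled with the Ollama CLI first.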

Improved

  • Improved the path from asking questions about data to getting executable Python analysis and actionable results.
  • Refined the first public AI-assisted workflows for data analysis and machine learning experimentation.

Fixed

  • Stabilized the first public release for desktop usage across supported platforms.