Responsible AI Policy

Last updated: April 7, 2026

At MLJAR, we build tools that help users work with data and machine learning in a practical and accessible way. We are committed to using artificial intelligence (AI) responsibly, with a focus on transparency, user control, and clear communication of limitations. This policy reflects our approach to responsible AI and is aligned with Regulation (EU) 2024/1689 (Artificial Intelligence Act).

This policy should be read together with our Terms and Conditions and Privacy Policy.

1. Scope

This policy applies to AI-powered features in MLJAR products, including but not limited to the AutoML engine and the AI Assistant in MLJAR Studio.

2. How AI Features Work

The AutoML engine automatically builds predictive machine learning models from user-provided tabular data. It applies techniques such as cross-validation, algorithm selection, and ensembling to produce models with measurable performance metrics. Supported tasks include binary classification, multiclass classification, and regression.

The AI Assistant uses large language models (LLMs) to help users analyze data, generate or explain code, and produce workflow suggestions. It connects to external LLM providers configured by the user, such as OpenAI or locally hosted models; MLJAR does not develop or train the underlying language models.

AI outputs are generated probabilistically and may be incomplete, inaccurate, or unsuitable for a specific use case. All outputs require human review before use.
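
For illustration, the workflow above can be sketched with the open-source mljar-supervised Python package (a minimal sketch; the file name and column names are placeholders, and MLJAR Studio's built-in engine may differ in detail):

    # Minimal sketch of the AutoML workflow described above, assuming
    # the open-source mljar-supervised package; "data.csv" and the
    # "target" column are placeholders.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from supervised.automl import AutoML

    df = pd.read_csv("data.csv")
    X = df.drop(columns=["target"])
    y = df["target"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42
    )

    # The engine handles algorithm selection, cross-validation, and
    # ensembling internally; total_time_limit is a training budget
    # in seconds.
    automl = AutoML(ml_task="binary_classification", total_time_limit=300)
    automl.fit(X_train, y_train)

    # Predictions are assistance outputs and still require human
    # review before use (see Section 3).
    predictions = automl.predict(X_test)

The same interface covers multiclass classification and regression through the ml_task parameter.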

3. Human Oversight and Responsibility

AI outputs are provided for assistance purposes only. MLJAR tools are designed to support users, not replace human judgment. Users are responsible for reviewing and validating all AI-generated outputs before relying on them in production, business, legal, medical, financial, or other high-impact contexts. AI responses should not be treated as professional advice.

MLJAR tools are general-purpose. When used in sensitive or regulated domains, such as employment screening, credit scoring, healthcare, or law enforcement, users are responsible for assessing whether their use case qualifies as high-risk under applicable law (including Annex III of the EU AI Act) and for ensuring compliance with all relevant obligations.

4. Privacy and Data Handling

MLJAR Studio is designed for local, offline-first operation. By default, data processing occurs on the user’s machine and no data is transmitted externally. When users configure AI features to use an external provider, data sent to those features (such as prompts, code, or selected data) is transmitted to that provider and processed under the provider’s own terms and privacy policy; MLJAR does not control that processing.
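
For illustration, a minimal sketch of what such an external provider call can look like, assuming the official openai Python client; the model name and prompt are placeholders, and MLJAR Studio's actual integration may differ:

    # Minimal sketch of an external LLM call, assuming the openai
    # Python client; MLJAR Studio's actual integration may differ.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Everything placed in `messages` leaves the local machine and is
    # processed under the provider's own terms and privacy policy.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "user", "content": "Explain: df.groupby('a').mean()"}
        ],
    )
    print(response.choices[0].message.content)

Locally hosted models keep this traffic on the user's machine or network, preserving the offline-first behavior described above.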

5. Limitations

AI systems have inherent limitations. Users should be aware that:

6. Transparency

MLJAR informs users when AI is used in its products. AI-generated outputs are clearly indicated where applicable. Where practical, MLJAR makes AI-assisted workflows inspectable — for example, through visible generated code and notebook-based outputs. Users should treat inspectability as a key part of validation and reproducibility.

7. Prohibited Use

Users must not use MLJAR AI features to:

8. Continuous Improvement

We continuously improve our AI features based on user feedback and observed limitations. We monitor known risks and update this policy to reflect changes in our products and applicable regulations.

9. Incident Reporting

Users who identify incorrect, harmful, misleading, or unexpected outputs from any MLJAR AI feature can report them at contact@mljar.com. Reports are reviewed and addressed in a proportionate and timely manner.

10. Policy Updates

We may update this policy from time to time. Updated versions will be posted on this page with a revised “Last updated” date. Continued use of MLJAR AI features after changes are published constitutes acceptance of the updated policy.

11. Contact

If you have questions or concerns about AI in MLJAR, please contact us at contact@mljar.com.