Responsible AI Policy
Last updated: April 7, 2026
At MLJAR, we build tools that help users work with data and machine learning in a practical and accessible way. We are committed to using artificial intelligence (AI) responsibly, with a focus on transparency, user control, and clear communication of limitations. This policy reflects our approach to responsible AI and is aligned with Regulation (EU) 2024/1689 (Artificial Intelligence Act).
This policy should be read together with our Terms and Conditions and Privacy Policy.
1. Scope
This policy applies to AI-powered features in MLJAR products, including but not limited to:
– AutoML engine — automated machine learning for building predictive models from tabular data,
– AI Assistant — conversational interface for data analysis, code generation, and workflow support,
– integrations with local and cloud-based language models configured by the user.
2. How AI Features Work
The AutoML engine automatically builds predictive machine learning models from user-provided tabular data. It applies techniques such as cross-validation, algorithm selection, and ensembling to produce models with measurable performance metrics. Supported tasks include binary classification, multiclass classification, and regression.

The AI Assistant uses large language models (LLMs) to help users analyze data, generate or explain code, and produce workflow suggestions. It connects to external LLM providers configured by the user (such as OpenAI or locally hosted models). MLJAR does not develop or train the underlying language models.

AI outputs are generated probabilistically and may be incomplete, inaccurate, or unsuitable for a specific use case. All outputs require human review before use.
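The workflow described above — evaluating candidate algorithms with cross-validation to attach measurable performance metrics — can be sketched in a few lines. This is an illustrative example using scikit-learn, not MLJAR's actual implementation; the candidate models and fold count are assumptions for demonstration only:

```python
# Illustrative sketch (not MLJAR's internal code): an AutoML-style step
# that scores candidate algorithms with 5-fold cross-validation and
# selects the one with the best mean metric.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic tabular data standing in for user-provided input
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# Algorithm selection: each candidate gets a measurable CV score
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=42),
}
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

As the policy notes, a score like this reflects performance on the training distribution only; the model may still fail to generalize, so human review of the result remains necessary.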
3. Human Oversight and Responsibility
AI outputs are provided for assistance purposes only. MLJAR tools are designed to support users, not replace human judgment. Users are responsible for reviewing and validating all AI-generated outputs before relying on them in production, business, legal, medical, financial, or other high-impact contexts. AI responses should not be treated as professional advice.

MLJAR tools are general-purpose. When used in sensitive or regulated domains — such as employment screening, credit scoring, healthcare, or law enforcement — users are responsible for assessing whether their use case qualifies as high-risk under applicable law (including Annex III of the EU AI Act) and for ensuring compliance with all relevant obligations.
4. Privacy and Data Handling
MLJAR Studio is designed for local, offline-first operation. By default, data processing occurs on the user’s machine and no data is transmitted externally. When users configure AI features to use external providers, the following applies:
– Local model (e.g., Ollama): data does not leave the user’s machine.
– External provider with the user’s own API key (e.g., OpenAI): prompts and inputs are transmitted to that provider and processed according to their own terms and privacy policy. MLJAR does not receive or store this data.
– MLJAR AI add-on (cloud service): data may be processed on MLJAR infrastructure. Details are described in the Privacy Policy at mljar.com/privacy.

MLJAR does not use customer data to train its own machine learning models unless explicitly agreed with the user. Users are responsible for configuring AI providers in line with their organization’s privacy, security, and compliance requirements, and for ensuring they have a lawful basis for any data they submit to AI features.
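The three provider options above imply three distinct data destinations. The following is a purely hypothetical sketch — the function name and provider identifiers are not MLJAR's real API — summarizing where prompt data is processed in each case:

```python
# Hypothetical illustration only; not MLJAR's actual configuration API.
# Maps each provider option from this policy to where prompts are processed.
def data_destination(provider: str) -> str:
    """Return where user prompts are processed for a given provider type."""
    routes = {
        "local": "user machine only (e.g., Ollama); nothing transmitted",
        "external_api_key": "third-party provider, under its own terms and privacy policy",
        "mljar_ai_addon": "MLJAR infrastructure (see mljar.com/privacy)",
    }
    return routes[provider]

print(data_destination("local"))
```

The point of the mapping is the compliance check it enables: before submitting data, a user can confirm the configured provider routes data somewhere their organization's policies permit.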
5. Limitations
AI systems have inherent limitations. Users should be aware that:
– AutoML model quality depends on the quality and representativeness of input data. Models may not generalize to new data. Training data may contain bias that is reflected in model outputs.
– AI Assistant outputs may be incorrect, incomplete, or misleading. Generated code may contain errors and should be reviewed before use in any environment.
– MLJAR does not control the behavior or availability of third-party LLM providers.

AI functionality may change over time due to model updates, provider API changes, product improvements, or safety and reliability adjustments. MLJAR does not guarantee uninterrupted availability of any specific model, provider, or AI capability.
6. Transparency
MLJAR informs users when AI is used in its products. AI-generated outputs are clearly indicated where applicable. Where practical, MLJAR makes AI-assisted workflows inspectable — for example, through visible generated code and notebook-based outputs. Users should treat inspectability as a key part of validation and reproducibility.
7. Prohibited Use
Users must not use MLJAR AI features to:
– violate applicable laws or regulations,
– infringe third-party intellectual property or other rights,
– generate malicious code or harmful content,
– process data without a lawful basis,
– misrepresent AI-generated output as verified fact or professional advice.

Where MLJAR provides cloud-based AI services, MLJAR may limit or suspend access to those services in cases of abuse, misuse, or policy violations.
8. Continuous Improvement
We continuously improve our AI features based on user feedback and observed limitations. We monitor known risks and update this policy to reflect changes in our products and applicable regulations.
9. Incident Reporting
Users who identify incorrect, harmful, misleading, or unexpected outputs from any MLJAR AI feature can report them at contact@mljar.com. Reports are reviewed and addressed in a proportionate and timely manner.
10. Policy Updates
We may update this policy from time to time. Updated versions will be posted on this page with a revised “Last updated” date. Continued use of MLJAR AI features after changes are published constitutes acceptance of the updated policy.
11. Contact
If you have questions or concerns about AI in MLJAR, please contact us at contact@mljar.com.