A new open AI foundation model has entered the European landscape, and it comes with a bold promise: full transparency, sovereign control, and regulatory compliance out of the box. Meet APERTVS, a collaborative effort by the Swiss AI Initiative. The initiative brings together EPFL (École Polytechnique Fédérale de Lausanne), a world-leading public research university in Lausanne, Switzerland, consistently ranked in the QS top 15 and renowned for its pioneering work in AI, robotics, and computer science; ETH Zurich (Swiss Federal Institute of Technology), one of the world’s top science and engineering universities, ranked in the QS top 10 and home to 22 Nobel laureates including Albert Einstein; and the Swiss National Supercomputing Centre (CSCS), which provided over 10 million GPU hours on its “Alps” supercomputer to train the models.
What is APERTVS?
APERTVS is a large language model built on the principles of open weights, open data, and open science. Every aspect of the model (training data, code, weights, methods, and alignment) is documented and fully reproducible. The project’s tagline captures it well: “APERTVS is to AI as Open is to Source.”
Available at both 8B and 70B parameter scales, APERTVS is competitive with leading open models while being multilingual from inception, trained on over 1,000 languages. It’s designed to serve as a global foundation that organizations can build upon, fine-tune, and deploy for their specific needs.
Why it matters for Europe
What sets APERTVS apart from other open models is its deliberate focus on EU AI Act compliance. The model is built to respect data opt-outs, remove personally identifiable information (PII), and prevent memorization of training data: three critical requirements under the EU’s regulatory framework.
For European companies, this is significant. As the EU AI Act enforcement timelines approach, organizations face increasing pressure to ensure their AI systems meet compliance standards. Using a model that was designed with these requirements in mind (rather than retrofitted) can dramatically reduce regulatory risk and legal overhead.
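To make the PII-removal requirement concrete: the idea is that personal data is detected and masked before text ever reaches the training pipeline. The sketch below is purely illustrative and is not APERTVS’s actual method; the patterns, labels, and `redact_pii` helper are assumptions, and production-grade pipelines use far more sophisticated detection (NER models, checksums, locale-specific rules).

```python
import re

# Hypothetical, minimal PII redaction pass: masks email addresses and
# simple international phone numbers with typed placeholders.
# Illustrative only -- real compliance pipelines are far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+\d{2}[\s\d]{8,13}\d"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Anna at anna.keller@example.ch or +41 44 123 45 67."
print(redact_pii(sample))
```

Running this on the sample sentence yields `Contact Anna at [EMAIL] or [PHONE].` — the principle, applied at corpus scale before training, is what reduces the risk of a model reproducing personal data.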
Sovereign AI for a digital Europe
APERTVS also plays into the broader European push for digital sovereignty. Rather than relying on foundation models controlled by US-based tech giants, European businesses and institutions can adopt a model that is transparent, auditable, and free from proprietary lock-in.
This matters for sectors like healthcare, finance, and government, where data sensitivity and regulatory scrutiny are highest. With APERTVS, organizations can inspect exactly what data the model was trained on, how it was aligned, and verify compliance claims independently.
What developers should know
- Open weights at 8B and 70B parameter scales, ready for fine-tuning
- Multilingual from day one, with support for 1,000+ languages
- Reproducible: all training code, data pipelines, and alignment methods are documented
- EU AI Act aligned: built-in PII removal, opt-out compliance, and memorization prevention
- Already being used for specialized applications including translation services and research initiatives
What APERTVS is producing
Beyond the headlines, APERTVS is delivering a concrete set of products and technical innovations that make it a practical platform for developers and organizations.
Foundation models
The project offers 8B and 70B parameter models released under a permissive open-source license that allows research, education, and commercial use. Both model sizes are designed to be competitive with leading open alternatives while offering full transparency into their training process.
Training corpus
The models were trained on a massive corpus of 15 trillion tokens spanning over 1,000 languages. Roughly 40% of the training data is non-English, with deliberate inclusion of underserved languages like Swiss German and Romansh, making APERTVS one of the most linguistically diverse foundation models available.
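To put those figures in perspective, a quick back-of-the-envelope calculation, using only the numbers quoted above, shows the absolute scale of the non-English share:

```python
# Scale of the APERTVS training corpus, from the figures in this article.
total_tokens = 15_000_000_000_000   # 15 trillion tokens
non_english_pct = 40                # ~40% of the corpus is non-English

# Integer arithmetic keeps the result exact.
non_english_tokens = total_tokens * non_english_pct // 100
print(f"Non-English tokens: {non_english_tokens / 1e12:.1f} trillion")
```

That works out to roughly 6 trillion non-English tokens, which on its own exceeds the entire training corpus of many earlier open foundation models.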
Fine-tuned variants
The consortium is already producing specialized derivatives. One notable example is “Apertus for Ticino”, a translation model tailored for the Ticino region of Switzerland, demonstrating how the base model can be adapted for specific linguistic and regional needs.
Novel technical components
APERTVS has produced several technical innovations that benefit the broader open-source AI ecosystem:
- xIELU activation function: a new activation function used in the model architecture
- FP8 GEMM training: efficient mixed-precision training at scale
- AdEMAMix optimizer: an advanced optimizer for large-scale model training
- Mixtera data plane: a system for managing and mixing multilingual training data
- Parity-aware byte-pair encoding: improved tokenization for multilingual text
- INCLUDE benchmark: a new evaluation benchmark for multilingual models
- ConLID: a language identification tool for the training pipeline
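Parity-aware BPE is worth a closer look, since tokenizer fairness directly affects multilingual cost and quality: standard BPE picks merges by global frequency, which favors high-resource languages. The toy sketch below illustrates the parity idea, choosing each merge to help whichever language currently compresses worst. It is a simplified illustration under my own assumptions, not the project’s published algorithm; the function names and the max-min selection rule are illustrative.

```python
from collections import Counter

def pair_counts(tokens):
    """Count adjacent symbol pairs in a token sequence."""
    return Counter(zip(tokens, tokens[1:]))

def merge(tokens, pair):
    """Apply one BPE merge: fuse every occurrence of `pair` into one symbol."""
    fused = pair[0] + pair[1]
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(fused)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def parity_aware_bpe(corpora, num_merges):
    """Toy parity-aware BPE: each step picks the most frequent pair in the
    language whose text currently compresses worst (most tokens per char)."""
    state = {lang: list(text) for lang, text in corpora.items()}
    merges = []
    for _ in range(num_merges):
        # Worst-off language: highest tokens-per-character ratio.
        worst = max(state, key=lambda l: len(state[l]) / max(len(corpora[l]), 1))
        counts = pair_counts(state[worst])
        if not counts:
            break
        pair = counts.most_common(1)[0][0]
        merges.append(pair)
        # The chosen merge is applied to every language's corpus.
        state = {lang: merge(toks, pair) for lang, toks in state.items()}
    return merges, state

corpora = {"en": "the the the", "de": "die die die"}
merges, state = parity_aware_bpe(corpora, 4)
print(merges)
```

On this tiny two-language corpus the merges alternate between English and German pairs, so neither language ends up systematically worse tokenized, which is the intuition behind a parity-aware tokenizer.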
Distribution channels
APERTVS models are available through multiple channels to maximize accessibility:
- Hugging Face: the standard open-source model hub
- Swisscom’s sovereign Swiss AI platform: for organizations requiring data residency in Switzerland
- A public AI inference utility: providing direct API access for experimentation
- Amazon SageMaker AI: for enterprise cloud deployment
Future roadmap
The consortium has outlined plans for additional model sizes, including smaller and faster variants optimized for edge deployment. Domain-specific adaptations are planned for law, climate, health, and education. Potential multimodal capabilities (vision, audio) are also on the horizon, which would extend APERTVS beyond text-only use cases.
Looking ahead
APERTVS represents a meaningful step toward a future where powerful AI models are not just open, but also accountable and regulation-ready. For EU-based companies evaluating their AI strategy, it offers a compelling foundation: the performance of a top-tier open model with the compliance guarantees that European regulation demands.
Whether you’re building internal AI tools, customer-facing products, or research applications, APERTVS is worth watching closely. Explore the project at apertvs.ai.
