
MixtureVitae Dataset

MixtureVitae: A Permissive, High-Performance, Open-Access Pretraining Dataset

Overview

MixtureVitae is an open-source, permissive, high-quality dataset designed for pretraining large language models (LLMs) across a wide variety of modalities, domains, and languages. The goal of MixtureVitae is to accelerate the development of transparent, open-access AI while lowering legal uncertainty around copyright and data provenance. See our blog.

  • Please note: this dataset is still being uploaded in parts, so more shards will appear over time. Thank you for your patience.

Features

  • 1 Trillion+ Tokens: MixtureVitae includes over 1 trillion tokens of diverse text and multimodal content, carefully filtered for copyright-permissiveness and enriched with high-quality synthetic data.
  • Cross-Modality: Includes textual, visual, and auditory elements; sourced and generated to support multimodal and multilingual LLM training.
  • Transparent and Open: Based on publicly available data, permissive licenses (e.g. CC-BY, MIT, Apache), and public domain sources. Built with rigorous filtering and legal and ethical considerations.
  • Diversity & Balance: Includes multimodal, narrative, conversational, instructive, educational, legal, scientific, and programming content across multiple domains and languages.

Data Components

MixtureVitae comprises three main categories:

Web-Based Open Datasets (Filtered)

  • Nemotron-CC, Cosmopedia, FineWeb-Edu, TxT360, CulturaY, etc.
  • Global deduplication and permissive heuristic filtering applied (e.g. .gov domains, CC-BY keywords, spam/obscenity filtering); a simplified sketch follows below.
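
As a rough illustration, the permissive heuristic filtering described above can be sketched in a few lines of Python. The domain suffixes, license patterns, and blocklist terms below are simplified placeholders, not the exact rules used to build MixtureVitae.

import re
from urllib.parse import urlparse

# Simplified stand-ins for the real heuristics (illustrative only).
PERMISSIVE_SUFFIXES = (".gov", ".mil")
CC_BY_PATTERN = re.compile(r"creative\s+commons|cc[- ]by(?![- ]nc)", re.IGNORECASE)
BLOCKLIST = re.compile(r"\b(placeholder_spam_term|placeholder_obscene_term)\b", re.IGNORECASE)

def keep_document(url: str, text: str) -> bool:
    # Keep pages from permissive domains or with a CC-BY-style notice,
    # and drop anything that trips the spam/obscenity blocklist.
    host = urlparse(url).netloc.lower()
    permissive_domain = host.endswith(PERMISSIVE_SUFFIXES)
    cc_by_notice = bool(CC_BY_PATTERN.search(text))
    spam = bool(BLOCKLIST.search(text))
    return (permissive_domain or cc_by_notice) and not spam

# Example: keep_document("https://www.nasa.gov/news", "Public domain text ...") -> True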

Curated Datasets

  • Includes subsets and cleanups from Open License Corpus, PG-19, Freelaw, Stack v1, Euro-Pat, USPTO, Wikipedia, arXiv, OpenWebMath, Megawika, Europarl, HackerNews, and more.
  • Covers legal, scientific, technical, conversational, and multilingual data.

Synthetic Data

  • Math textbooks, Tiny-stories style narratives, Cross-language code translation, MCQ generation, Multimodal grounding, Multilingual translations, and more.

Preprocessing & Filtering

  • Permissive Filtering: Heuristic and keyword filtering to retain CC-BY, public domain, and .gov sources while excluding unsafe/unclear cases.
  • Light Global Deduplication: Prefix-based matching, since deduplication was already performed in the source corpora.
  • Sentence Deduplication: Low-information duplicate detection with WordNet substitution.
  • FastText Filtering & Classification:
    • Domain Classifier (based on FineWeb & Pile)
    • Genre/Register Classifier (TurkuNLP)
    • Math/Education quality Rankers (inspired by DeepSeekMath & Phi-3)
    • Red Pajama quality rankers
  • Quality Upsampling: Classification and ranking scores allow users to apply targeted upsampling of diverse content types (a sketch follows this list).
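
Below is a minimal sketch of how fastText quality scores could drive targeted upsampling. The model file, label names, and weighting formula are assumptions for illustration, not the exact classifiers or weights used in MixtureVitae.

import fasttext  # pip install fasttext

# Assumed local classifier file and label names (illustrative only).
quality_model = fasttext.load_model("edu_quality.bin")

def sampling_weight(text: str) -> float:
    # fastText expects a single line of text, so strip newlines first.
    labels, probs = quality_model.predict(text.replace("\n", " "), k=1)
    score = float(probs[0]) if labels[0] == "__label__high_quality" else 1.0 - float(probs[0])
    # Map the score to 1-3 repetitions: higher-quality documents are seen more often.
    return 1.0 + 2.0 * score

# documents: any iterable of {"text": ...} records (not defined here).
# weights = [sampling_weight(doc["text"]) for doc in documents]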

Dataset Size & Format

  • Over 1 trillion tokens total, not including multimodal data.
  • Multimodal shards include aligned image captions, audio transcripts, and instruction-style text.
  • Currently releasing mostly-English text shards; multimodal and translated shards will be released gradually.
  • Sharded and deduplicated to enable scalable training on clusters or in the cloud (a loading example follows this list).
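
For reference, here is a hedged example of streaming the text shards with the Hugging Face datasets library. The repository id, split name, and field names are placeholders and may not match the final release.

from datasets import load_dataset  # pip install datasets

# Placeholder repository id; substitute the actual MixtureVitae repo/config name.
ds = load_dataset("ontocord/MixtureVitae", split="train", streaming=True)

for example in ds:
    # Field names (e.g. "text", "url", "source") depend on the shard schema.
    print(sorted(example.keys()))
    break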

Links to Component Datasets

  • TBD: links to component datasets such as MixtureVitae-atomic_2024 and other MixtureVitae-* datasets.

Legal Considerations

MixtureVitae is designed with legal caution, transparency, and fair-use alignment:

  • Heavy reliance on public domain, open licenses, and US federal government content.
  • Filtering to exclude third-party copyrighted content.
  • Ethical justifications and fair use arguments applied to .gov content.
  • We do not guarantee non-infringement and disclaim legal liability — researchers are advised to consult legal experts before commercial use.

Intended Uses

  • Pretraining LLMs across text and multimodal domains.
  • Research into legal-compliant open model development.
  • Instruction tuning, alignment training, and multilingual or cross-domain generalization.

Licensing

We license our own contributions and annotations under CC-BY-SA. MixtureVitae itself includes sources under their own individual licenses:

  • Creative Commons (CC-BY, CC-BY-SA)
  • Public domain or governmental data (.gov, .mil)
  • Permissive software/data licenses (MIT, BSD, Apache)

However, as with any large corpus: use at your own legal discretion.

Contributors

This dataset was created by Ontocord.AI with support from collaborators, building on the open AI research ecosystem, and was developed as part of the Aurora-M2 project. We thank the contributors of datasets such as Nemotron-CC, Cosmopedia, FineWeb, Open License Corpus, and many others.


How to Cite

@misc{mixturevitae2025,
      title={MixtureVitae: A Fully Permissive, High-Performance, Open-Access Pretraining Dataset},
      author={Harsh Raj and Huu Nguyen and Ken Tsui and Diganta Misra and Victor May and Vu Minh Chien},
      year={2025}
}