---
language:
  - en
license: mit
tags:
  - two-tower
  - semantic-search
  - document-retrieval
  - information-retrieval
  - dual-encoder
---

# mlx7-two-tower-data

This repository contains datasets used for training Two-Tower (Dual Encoder) models for document retrieval.

## Dataset Description

The datasets provided here are structured for training dual encoder models with various negative-sampling strategies:

- `classic_triplets`: 48.2 MB
- `intra_query_neg`: 47.6 MB
- `multi_pos_multi_neg`: 126.5 MB

## Dataset Details

- `classic_triplets.parquet`: Standard triplet format with (query, positive_document, negative_document)
- `intra_query_neg.parquet`: Negative examples selected from within the same query batch
- `multi_pos_multi_neg.parquet`: Multiple positive and negative examples per query
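As a rough illustration of how the classic triplet format is consumed in training, here is a minimal sketch of a margin-based triplet loss over cosine similarities. The toy embeddings stand in for the outputs of the query and document towers; the exact loss used to train any particular model is not specified by this dataset card.

```python
import numpy as np

def triplet_margin_loss(q, d_pos, d_neg, margin=0.2):
    """Margin loss over cosine similarities: the positive document should
    score higher than the negative by at least `margin`."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(0.0, margin - cos(q, d_pos) + cos(q, d_neg))

# Toy 2-d embeddings standing in for tower outputs
q = np.array([1.0, 0.0])
d_pos = np.array([0.9, 0.1])   # close to the query
d_neg = np.array([0.0, 1.0])   # orthogonal to the query
loss = triplet_margin_loss(q, d_pos, d_neg)  # 0.0: margin already satisfied
```

Swapping `d_pos` and `d_neg` in the call above yields a positive loss, which is the signal that pushes relevant documents toward their queries during training.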

## Usage

```python
import pandas as pd

# Load a dataset
df = pd.read_parquet("classic_triplets.parquet")

# View the schema
print(df.columns)

# Example of working with the data
queries = df["q_text"].tolist()
positive_docs = df["d_pos_text"].tolist()
negative_docs = df["d_neg_text"].tolist()
```
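If your training loop expects classic triplets, rows from `multi_pos_multi_neg.parquet` can be expanded into them. The sketch below assumes list-valued positive/negative columns (the names `d_pos_texts` and `d_neg_texts` are hypothetical; check the actual schema with `df.columns` first).

```python
from itertools import product

# Hypothetical row from multi_pos_multi_neg; column names are assumptions,
# not taken from the published schema.
row = {
    "q_text": "what is a dual encoder",
    "d_pos_texts": ["A dual encoder uses two towers ...", "Two-tower models ..."],
    "d_neg_texts": ["Unrelated passage A", "Unrelated passage B"],
}

# Cross every positive with every negative to get (q, pos, neg) triplets
triplets = [
    (row["q_text"], pos, neg)
    for pos, neg in product(row["d_pos_texts"], row["d_neg_texts"])
]
print(len(triplets))  # 2 positives x 2 negatives -> 4 triplets
```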

## Data Source and Preparation

These datasets are derived from the MS MARCO passage retrieval dataset, processed to create effective training examples for two-tower models.

## Dataset Structure

The datasets follow a common schema with the following fields:

- `q_text`: Query text
- `d_pos_text`: Positive (relevant) document text
- `d_neg_text`: Negative (non-relevant) document text

Additional fields may be present in specific datasets.

## Citation

If you use this dataset in your research, please cite the original MS MARCO dataset:

```bibtex
@article{msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Nguyen, Tri and Rosenberg, Matthew and Song, Xia and Gao, Jianfeng and Tiwary, Saurabh and Majumder, Rangan and Deng, Li},
  journal={arXiv preprint arXiv:1611.09268},
  year={2016}
}
```