---
dataset_info:
  features:
    - name: query_id
      dtype: string
    - name: query
      dtype: string
    - name: positive_passages
      list:
        - name: docid
          dtype: string
        - name: text
          dtype: string
        - name: title
          dtype: string
    - name: negative_passages
      list:
        - name: docid
          dtype: string
        - name: text
          dtype: string
        - name: title
          dtype: string
    - name: subset
      dtype: string
  splits:
    - name: train
      num_bytes: 10961970907
      num_examples: 648766
  download_size: 6447294919
  dataset_size: 10961970907
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-sa-4.0
task_categories:
  - question-answering
language:
  - en
pretty_name: RLHN-680K
size_categories:
  - 100K<n<1M
---

# Dataset Card for RLHN-680K

## Dataset Description

Repository | Paper | ArXiv

RLHN is a cascading LLM framework designed to accurately relabel hard negatives in existing IR/RAG training datasets, such as MS MARCO and HotpotQA.

This Tevatron-format dataset (680K training pairs) contains the queries, the positives (including hard negatives relabeled as positives), and the remaining hard negatives for 7 datasets in the BGE training collection.

This repository contains the training pairs that can be used to fine-tune embedding, ColBERT (multi-vector), and reranker models.

The original, uncleaned dataset (lower quality; it still contains false negatives) can be found at rlhn/default-680K.

**Note:** RLHN datasets are not new training datasets, but rather existing BGE collection training datasets with the hard negatives cleaned!

## Dataset Structure

To access the data using the HuggingFace `datasets` library:

```python
import datasets

rlhn = datasets.load_dataset('rlhn/rlhn-680K')

# training set:
for data in rlhn['train']:
    query_id = data["query_id"]                            # md5 hash of the query text
    query = data["query"]                                  # query text
    subset = data["subset"]                                # source training dataset, e.g., fiqa or msmarco_passage

    # positive passages
    for positive_passage in data["positive_passages"]:
        doc_id = positive_passage["docid"]
        title = positive_passage["title"]                  # usually empty; the title is included in text
        text = positive_passage["text"]                    # contains both the title & text

    # hard negative passages
    for negative_passage in data["negative_passages"]:
        doc_id = negative_passage["docid"]
        title = negative_passage["title"]                  # usually empty; the title is included in text
        text = negative_passage["text"]                    # contains both the title & text
```
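
For contrastive fine-tuning, each example can be flattened into (query, positive, negatives) triplets. The sketch below is a minimal illustration rather than part of the dataset tooling; the `build_triplets` helper and the one-positive-per-triplet convention are assumptions, not an official training recipe:

```python
import datasets

def build_triplets(example):
    """Flatten one RLHN example into (query, positive, negatives) triplets.

    Hypothetical helper: emits one triplet per positive passage, reusing all
    hard negatives for each. Pairing conventions vary by training framework.
    """
    negatives = [n["text"] for n in example["negative_passages"]]
    return [
        {"query": example["query"], "positive": p["text"], "negatives": negatives}
        for p in example["positive_passages"]
    ]

rlhn = datasets.load_dataset('rlhn/rlhn-680K', split='train')
triplets = build_triplets(rlhn[0])
print(len(triplets), triplets[0]["query"])
```

Since the dataset is already in Tevatron format, this flattening is only needed for trainers that expect explicit triplets.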

## Original Dataset Statistics

The following table contains the number of training pairs for each training dataset included in RLHN. These numbers are for the default setting.

| Dataset         | 100K splits | 250K splits | 400K splits | 680K splits |
|-----------------|------------:|------------:|------------:|------------:|
| arguana         |       4,065 |       4,065 |       4,065 |       4,065 |
| fever           |      28,755 |      28,755 |      28,755 |      28,755 |
| fiqa            |       5,500 |       5,500 |       5,500 |       5,500 |
| hotpotqa        |      10,250 |      30,000 |      84,516 |      84,516 |
| msmarco_passage |      49,571 |     145,000 |     210,000 |     485,823 |
| nq              |       6,110 |      30,000 |      58,568 |      58,568 |
| scidocsrr       |      12,654 |      12,654 |      12,654 |      12,654 |
| **total**       |  **96,167** | **255,974** | **404,058** | **679,881** |
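
The 680K column above can be reproduced from the released split itself. A quick sanity check, assuming only that the `subset` field holds the source dataset name (as shown in the access example above):

```python
import collections
import datasets

rlhn = datasets.load_dataset('rlhn/rlhn-680K', split='train')

# Count training pairs per source dataset; this should match the 680K column.
counts = collections.Counter(rlhn["subset"])
for name, count in sorted(counts.items()):
    print(f"{name}: {count:,}")
print(f"total: {sum(counts.values()):,}")
```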

## License

The RLHN dataset is made available under the CC-BY-SA 4.0 license.

## Hashing & IDs

We generate an MD5 hash as the unique identifier (ID) for both queries and documents, using the code below:

```python
import hashlib

def get_md5_hash(text):
    """Calculates the MD5 hash of a given string.

    Args:
        text: The string to hash.

    Returns:
        The MD5 hash of the string as a hexadecimal string.
    """
    text_bytes = text.encode('utf-8')  # Encode the string to bytes
    md5_hash = hashlib.md5(text_bytes).hexdigest()
    return md5_hash
```
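
As a sanity check, an example's `query_id` should equal the hash of its query text. The snippet below assumes the hash is computed over the raw query string with no extra normalization; if any preprocessing was applied before hashing, the comparison may fail:

```python
import hashlib
import datasets

def get_md5_hash(text):
    return hashlib.md5(text.encode('utf-8')).hexdigest()

rlhn = datasets.load_dataset('rlhn/rlhn-680K', split='train')
example = rlhn[0]

# Assumption: query_id is the MD5 of the raw query text.
print(example["query_id"] == get_md5_hash(example["query"]))
```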

## Citation

```bibtex
@misc{thakur2025relabel,
      title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
      author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
      year={2025},
      eprint={2505.16967},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2505.16967},
}
```