Adding the train split

#15
by michal-stefanik - opened
Language Technology Research Group at the University of Helsinki org

@albertvillanova @lhoestq (as recent contributors to Helsinki-NLP datasets, please point me to someone else if needed),

Hi, to make Tatoeba easier to use for everyone, we would like to extend this dataset with the train split, and I am thinking of how to move this forward most elegantly.

The ideal outcome would be to have the dataset segmented into subsets and splits and searchable within Dataset Viewer (see my example dataset here), but this would require uploading the whole ~7TB collection to HF and breaking backward compatibility for current users of this dataset.
Alternatively, we could create a new Dataset and keep both running for a while.

Thank you for any thoughts!

Hi! Sorry for the late reply.

First, I noticed the dataset is still based on a dataset loading script, so it would be great to remove it and only have Parquet files instead (great for compression and filtering!).
There is a CLI tool that opens a Pull Request to convert a script-based dataset to Parquet without breaking backward compatibility:

datasets-cli convert_to_parquet Helsinki-NLP/tatoeba_mt --trust_remote_code

Then, using datasets and push_to_hub(), you can upload additional splits :)
