---
license: cc-by-4.0
---
# WanJuan·SiLu Multimodal Multilingual Corpus

## 🌏 Dataset Introduction
The newly upgraded WanJuan·SiLu Multimodal Corpus ("WanJuan·Silk Road Multimodal") brings three core improvements:
- Significantly expanded language coverage: building on the five languages open-sourced in the first WanJuan·SiLu release (Arabic, Russian, Korean, Vietnamese, and Thai), this release adds three low-resource languages (Serbian, Hungarian, and Czech), using these eight key languages to support multilingual applications worldwide.
- Fully upgraded data modalities: unlike the text-only first release of WanJuan·SiLu, WanJuan·SiLu Multimodal provides four rich modalities for all eight languages: image-text, audio-text, video-text, and dedicated SFT (supervised fine-tuning) instruction data, covering the full multimodal research pipeline. The corpus contains more than 11.5 million samples in total, with over 26,000 hours of audio and video, meeting the needs of a wide range of research tasks.
- Finely curated data for multiple scenarios: produced through mature data pipelines with safety hardening, and combined with fine-grained manual annotation and quality inspection by both machines and native-language experts, WanJuan·SiLu Multimodal meets industrial-grade data-quality standards. It includes more than 20 kinds of fine-grained, multi-dimensional classification labels and detailed text descriptions, making it usable out of the box for scenarios such as cultural tourism, commerce and trade, and science and technology education, helping developers reduce overhead and focus on value creation.
## 🚩 Open-Source Content
- More than 2 million image-text samples;
- More than 1,600 hours of audio-text data;
- More than 25,000 hours of video-text data;
- 180,000 SFT samples.
## 📚 Open-Source Data Details
| Language | Image-text data (images) | Audio duration (hours) | Video duration (hours) | SFT samples |
|---|---|---|---|---|
| Arabic | 220,000 | 200 | 1,738 | 23,000 |
| Russian | 250,000 | 212 | 3,491 | 23,000 |
| Korean | 530,000 | 202 | 3,412 | 23,000 |
| Vietnamese | 450,000 | 205 | 2,901 | 23,000 |
| Thai | 100,000 | 201 | 5,684 | 23,000 |
| Serbian | 80,000 | 206 | 2,578 | 23,000 |
| Hungarian | 220,000 | 208 | 3,470 | 23,000 |
| Czech | 270,000 | 202 | 2,453 | 23,000 |
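As a quick sanity check, the per-language figures in the table above can be aggregated to reproduce the headline totals. The numbers below are transcribed from this card; the variable and key names are illustrative, not part of any released file format:

```python
# Per-language statistics transcribed from the table above.
# Tuple order: (images, audio_hours, video_hours, sft_samples)
stats = {
    "Arabic":     (220_000, 200, 1738, 23_000),
    "Russian":    (250_000, 212, 3491, 23_000),
    "Korean":     (530_000, 202, 3412, 23_000),
    "Vietnamese": (450_000, 205, 2901, 23_000),
    "Thai":       (100_000, 201, 5684, 23_000),
    "Serbian":    ( 80_000, 206, 2578, 23_000),
    "Hungarian":  (220_000, 208, 3470, 23_000),
    "Czech":      (270_000, 202, 2453, 23_000),
}

# Sum each column across all eight languages.
images, audio_h, video_h, sft = (sum(col) for col in zip(*stats.values()))
print(images, audio_h, video_h, sft)
# -> 2120000 1636 25727 184000
```

These sums match the headline claims: over 2 million images, over 1,600 audio hours, over 25,000 video hours, and roughly 180,000 SFT samples (23,000 per language across eight languages).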
⚠⚠⚠ This repository contains the multimodal data for 3 languages (Serbian, Hungarian, and Czech). To access these resources, click the "Apply" button on the dataset file page; the data becomes available for download after the authors approve the application.
- For the other 5 languages (Arabic, Russian, Korean, Vietnamese, and Thai), download directly (no application required) from: https://opendatalab.com/OpenDataLab/WanJuanSiLu2O
- The first batch of open-source text-only corpora in five languages can be downloaded directly after logging in at the pages below:
- WanJuan-Thai: https://opendatalab.com/OpenDataLab/WanJuan-Thai
- WanJuan-Russian: https://opendatalab.com/OpenDataLab/WanJuan-Russian
- WanJuan-Korean: https://opendatalab.com/OpenDataLab/WanJuan-Korean
- WanJuan-Vietnamese: https://opendatalab.com/OpenDataLab/WanJuan-Vietnamese
- WanJuan-Arabic: https://opendatalab.com/OpenDataLab/WanJuan-Arabic
## Data Processing Features

### 📸 Image-text data
- Balanced multi-domain coverage: high-quality image-text data from Wikipedia, Wikiquote, encyclopedias, and mainstream news media in the countries of the eight languages;
- Dual-annotation innovation: the original alt-text description plus an extended description generated by a vision model, improving information richness;
- Evenly distributed across 10 high-interest categories to avoid data skew. Label composition includes: outdoor scenes, indoor scenes, urban scenes, rural scenes, text technology, natural scenery, folk traditions, adults, and food;
### 🎥 Audio-text data
- Dual ASR cross-validation for high quality: the audio-text pairs are transcribed from mainstream streaming video platforms and cross-validated by two commercial ASR engines (Google and Microsoft) to ensure high-precision transcriptions, combined with environmental-noise removal to improve sound quality;
- Real-scene speech: natural conversations containing ambient noise, close to real-world applications. Compared with similar datasets, this one has clear advantages in language coverage, conversational authenticity, and annotation quality;
- 4 broad categories: social sciences and humanities, entertainment media, knowledge and education, and life and culture;
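The dual-ASR cross-validation described above can be sketched as a transcript-agreement filter: keep a clip only when the two engines' transcripts closely agree. This is an illustrative reconstruction, not the project's actual pipeline; the real agreement metric and threshold are not disclosed in this card, so the word-error-rate comparison and the `max_wer` cutoff below are assumptions:

```python
def word_edit_distance(ref_words, hyp_words):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(ref_words), len(hyp_words)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def wer(ref, hyp):
    """Word error rate of hyp against ref (0.0 = identical)."""
    r, h = ref.split(), hyp.split()
    return word_edit_distance(r, h) / max(len(r), 1)

def cross_validate(transcript_a, transcript_b, max_wer=0.1):
    """Keep a clip only if the two ASR transcripts agree closely.
    max_wer is an assumed threshold, not a documented value."""
    return wer(transcript_a, transcript_b) <= max_wer
```

In practice the two transcripts would come from the two commercial ASR engines, and only clips passing the agreement check would be kept and annotated.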
### 📞 Video-text data
- Rich language coverage, filling data gaps: videos across the 8 languages (including Hungarian, Serbian, etc.) total more than 16,000 hours. Compared with similar datasets, this one covers many low-resource languages, filling gaps in video datasets for those languages and providing a valuable resource for multimodal research and low-resource language processing;
- Multimodal annotation system with fine-grained labels and descriptions: three annotation forms are provided: video-frame annotation, subtitle annotation, and integrated frame-plus-subtitle annotation, giving more comprehensive information for developing multimodal models. 17 types of multidimensional labels meet diverse needs;
- Label composition: first-level and secondary tags covering General, Technology and Strategy, Culture, Movies and Animation, Travel, Characters, Animal, Interviews, Scenes, Music, Games, News, Tutorials, Sports, and Others.
### 🤖 SFT instruction fine-tuning data
- Cultural adversarial samples: culturally grounded question-answer pairs designed by local residents to probe models for cultural bias;
- Hybrid quality-inspection pipeline: rule-based filters combined with model scoring to filter translated data and reduce noise in low-resource languages;
- Non-English cultural corpora (e.g., local life and traditional customs) to mitigate the stereotypes of English-dominated data;
- Five major tags: culture, code, local life, AI4S, mathematics.
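The hybrid quality-inspection pipeline (rules + model scoring) can be sketched as a two-stage filter: cheap deterministic rules run first, and surviving samples are then scored by a quality model. Everything below is illustrative; the card does not disclose the actual rules, scoring model, or thresholds, so `passes_rules`, `score_fn`, and the 0.7 cutoff are assumptions:

```python
def passes_rules(sample, min_len=5, max_len=2048):
    """Stage 1: cheap rule filters (illustrative).
    Here: both fields present and within length bounds."""
    q = sample.get("question", "")
    a = sample.get("answer", "")
    return len(q) >= min_len and min_len <= len(a) <= max_len

def filter_sft(samples, score_fn, threshold=0.7):
    """Stage 2: keep samples that pass the rules AND score at or
    above an assumed quality threshold from a scoring model."""
    return [s for s in samples
            if passes_rules(s) and score_fn(s) >= threshold]
```

A real `score_fn` would wrap a quality-scoring model; a constant or heuristic stub is enough to exercise the filter logic.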
## License

The WanJuan·SiLu Multimodal dataset is released under the CC BY 4.0 license as a whole. You may freely share and adapt this dataset under the following conditions:
- Attribution: you must give appropriate credit, provide a link to the license, and indicate whether the original dataset was modified. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions: you may not apply legal terms or technological measures that restrict others from doing anything the license permits. For the complete terms, see the full text of the CC BY 4.0 license.
## Special Notes

Please note that some subsets of this dataset may be governed by additional agreements. Before using a specific subset, be sure to read its agreement carefully to ensure compliant use. For more detailed terms, check the documents or metadata of the specific subset.

As a non-profit organization, OpenDataLab advocates a harmonious and friendly open-source community. If you find content in this open-source dataset that infringes your legal rights, please email [email protected] with a detailed description of the alleged infringement and relevant proof of ownership. We will start an investigation within 3 working days and take necessary measures (such as taking down the relevant data). However, you must ensure the authenticity of your complaint; otherwise, you alone bear the adverse consequences of the measures taken.
## Citation

If you use WanJuan·SiLu Multimodal, please cite:
@misc{he2024opendatalabempoweringgeneralartificial,
title={OpenDataLab: Empowering General Artificial Intelligence with Open Datasets},
author={Conghui He and Wei Li and Zhenjiang Jin and Chao Xu and Bin Wang and Dahua Lin},
year={2024},
eprint={2407.13773},
archivePrefix={arXiv},
primaryClass={cs.DL},
url={https://arxiv.org/abs/2407.13773},
}