---
dataset_info:
  features:
    - name: filename
      dtype: string
    - name: json_schema
      dtype: string
  splits:
    - name: WashingtonPost
      num_bytes: 2703506
      num_examples: 125
    - name: Snowplow
      num_bytes: 1649284
      num_examples: 420
    - name: Kubernetes
      num_bytes: 25517031
      num_examples: 1087
    - name: Github
      num_bytes: 54238882
      num_examples: 6335
    - name: Handwritten
      num_bytes: 346644
      num_examples: 197
    - name: Synthesized
      num_bytes: 199867
      num_examples: 450
    - name: full
      num_bytes: 84655214
      num_examples: 8614
    - name: Github_trivial
      num_bytes: 2970886
      num_examples: 570
    - name: Github_easy
      num_bytes: 2329597
      num_examples: 2035
    - name: Github_medium
      num_bytes: 8878874
      num_examples: 2121
    - name: Github_hard
      num_bytes: 23223445
      num_examples: 1405
    - name: Github_ultra
      num_bytes: 16836080
      num_examples: 204
  download_size: 44754499
  dataset_size: 223549310
configs:
  - config_name: default
    data_files:
      - split: WashingtonPost
        path: data/WashingtonPost-*
      - split: Snowplow
        path: data/Snowplow-*
      - split: Kubernetes
        path: data/Kubernetes-*
      - split: Github
        path: data/Github-*
      - split: Handwritten
        path: data/Handwritten-*
      - split: Synthesized
        path: data/Synthesized-*
      - split: full
        path: data/full-*
      - split: Github_trivial
        path: data/Github_trivial-*
      - split: Github_easy
        path: data/Github_easy-*
      - split: Github_medium
        path: data/Github_medium-*
      - split: Github_hard
        path: data/Github_hard-*
      - split: Github_ultra
        path: data/Github_ultra-*
---

# JSON Schema Witness Generation Dataset

## Overview

This dataset is a collection of JSON schema witnesses gathered from various sources, aimed at facilitating the study and evaluation of JSON schema transformations, validation, and related operations. It consists of several subsets, each derived from a different domain or source, and is structured to support comprehensive testing and benchmarking of tools and algorithms that process JSON schemas.
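The per-split example counts declared in the metadata above are internally consistent: the six source subsets sum to the `full` split, and the five `Github_*` difficulty splits sum to the `Github` subset. A quick sketch of that sanity check (the counts are copied from this card; the union interpretation is inferred from them, not stated elsewhere):

```python
# Example counts per split, as declared in this card's metadata.
SOURCE_SPLITS = {
    "WashingtonPost": 125,
    "Snowplow": 420,
    "Kubernetes": 1087,
    "Github": 6335,
    "Handwritten": 197,
    "Synthesized": 450,
}
GITHUB_DIFFICULTY_SPLITS = {
    "Github_trivial": 570,
    "Github_easy": 2035,
    "Github_medium": 2121,
    "Github_hard": 1405,
    "Github_ultra": 204,
}

# `full` (8614 examples) appears to be the union of the six source subsets.
assert sum(SOURCE_SPLITS.values()) == 8614

# The difficulty splits appear to partition the Github subset.
assert sum(GITHUB_DIFFICULTY_SPLITS.values()) == SOURCE_SPLITS["Github"]
```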

## Dataset Structure

The dataset is organized into the following subsets:

### 1. Washington Post (wp)

- **Samples:** 125
- **Description:** This subset contains JSON schema witnesses from the Washington Post dataset. The schemas have been processed and formatted for easy integration and analysis.
- **Original Directory:** `Washington Post`

### 2. Snowplow (dg)

- **Samples:** 420
- **Description:** The Snowplow subset includes 420 JSON schema witnesses, each representing a structured data format used within the Snowplow event data pipeline.
- **Original Directory:** `Snowplow`

### 3. Kubernetes (sat)

- **Samples:** 1087
- **Description:** This subset contains JSON schema witnesses from the Kubernetes ecosystem. It is particularly useful for analyzing configuration and deployment schemas.
- **Original Directory:** `Kubernetes`

### 4. Github (sat)

- **Samples:** 6335
- **Description:** The largest subset, containing 6335 samples from GitHub repositories. These schemas are well suited to studying open-source project configurations and automation.
- **Original Directory:** `Github`

### 5. Handwritten (sat)

- **Samples:** 197
- **Description:** The Handwritten subset consists of 197 manually crafted JSON schema witnesses, designed to test edge cases and complex schema configurations.
- **Original Directory:** `Handwritten`

### 6. Synthesized (sat)

- **Samples:** 450
- **Description:** This subset is synthesized from various sources to cover a broad range of JSON schema use cases, particularly focusing on containment and validation.
- **Original Directory:** `Synthesized`

### 7. Github Trivial

- **Samples:** 570
- **Description:** GitHub JSON schemas categorized as "trivial", with sizes below 10. This split covers extremely simple, minimal schemas.

### 8. Github Easy

- **Samples:** 2035
- **Description:** GitHub JSON schemas categorized as "easy", with sizes between 10 and 30. These schemas offer moderate complexity and are useful for testing general tools.

### 9. Github Medium

- **Samples:** 2121
- **Description:** GitHub JSON schemas categorized as "medium", with sizes between 30 and 100. These schemas reflect intermediate complexity.

### 10. Github Hard

- **Samples:** 1405
- **Description:** GitHub JSON schemas categorized as "hard", with sizes between 100 and 500. These are complex schemas that are challenging to process and validate.

### 11. Github Ultra

- **Samples:** 204
- **Description:** GitHub JSON schemas categorized as "ultra", with sizes greater than 500. These schemas represent the most complex and large-scale structures, ideal for stress-testing tools.
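The difficulty buckets above are defined purely by schema size. The card does not say how size is measured; as an illustration only, the sketch below assumes size is the number of nodes in the parsed schema, and reads the bucket boundaries from the descriptions above, treating each range as half-open (`10 ≤ size < 30` is "easy", and so on). The exact metric and boundary conventions should be taken from the accompanying paper.

```python
def schema_size(node) -> int:
    """Assumed size metric: total node count (objects, arrays, and
    leaves) in the parsed JSON schema. Illustrative only."""
    if isinstance(node, dict):
        return 1 + sum(schema_size(v) for v in node.values())
    if isinstance(node, list):
        return 1 + sum(schema_size(v) for v in node)
    return 1  # leaf value (string, number, bool, null)

def github_difficulty(size: int) -> str:
    """Map a size to the Github_* bucket names used by this card.
    Boundary handling (half-open ranges) is an assumption."""
    if size < 10:
        return "trivial"
    if size < 30:
        return "easy"
    if size < 100:
        return "medium"
    if size < 500:
        return "hard"
    return "ultra"
```

For example, `{"type": "string"}` has size 2 under this metric (one object node plus one leaf) and would land in the "trivial" bucket.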

## Citation

The subsets listed above are featured in the paper "Witness Generation for JSON Schema". We extend our gratitude to the authors for their valuable contributions; please refer to that paper for detailed information about the datasets and the methods used to construct them.

## License

This dataset is provided under the MIT License. Please ensure that you comply with the license terms when using or distributing this dataset.

## Acknowledgements

We would like to thank the contributors and maintainers of the JSON schema projects and the open-source community for their invaluable work and support.