---
license: mit
task_categories:
- image-to-image
- text-to-image
language:
- en
pretty_name: Robotic Action Prediction Dataset using RoboTwin
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: current_frame
dtype: image
- name: instruction
dtype: string
- name: future_frame
dtype: image
splits:
- name: train
num_bytes: 10494869755.67105
num_examples: 73042
- name: test
num_bytes: 2623790321.13095
num_examples: 18261
download_size: 13119010103
dataset_size: 13118660076.801998
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Robotic Action Prediction Dataset
## Dataset Description
This dataset contains triplets of (current observation, action instruction, future observation) for training models to predict future frames of robotic actions.
## Dataset Structure
### Data Fields
- `current_frame`: Input image (RGB) of the current observation
- `instruction`: Textual description of the action to perform
- `future_frame`: Target image (RGB) showing the expected outcome 50 frames later (see the loading sketch below)
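A minimal loading sketch using the `datasets` library. The repository id below is a placeholder; substitute this dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's Hub path.
ds = load_dataset("<user>/<robotwin-action-prediction>", split="train")

sample = ds[0]
sample["current_frame"]   # PIL.Image: current RGB observation
sample["instruction"]     # str: action description, e.g. "stack blocks"
sample["future_frame"]    # PIL.Image: observation 50 frames later
```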
### Data Splits
The dataset contains:
- Total samples: 91,303 (73,042 train / 18,261 test), drawn from 300 episodes averaging roughly 304 frames each
- Tasks:
- `block_hammer_beat`: "beat the block with the hammer"
- `block_handover`: "handover the blocks"
- `blocks_stack_easy`: "stack blocks"
### Dataset Statistics
| Task | Episodes | Frames per Episode |
|---------------------|----------|--------------------|
| block_hammer_beat | 100 | 200-300 |
| block_handover | 100 | 400-500 |
| blocks_stack_easy | 100 | 400-500 |
## Dataset Creation
### Source Data
- **Simulation Environment:** RoboTwin
- **Image Resolution:** At least 128×128 pixels
- **Frame Offset:** 50 frames between input and target (see the sketch below)
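A minimal sketch of how (current, instruction, future) triplets could be built from an episode's ordered frames with the 50-frame offset. This is an illustration only; the function name and episode format are assumptions, not the actual generation code.

```python
FRAME_OFFSET = 50  # offset between input and target frames

def build_triplets(frames, instruction):
    """Build (current, instruction, future) triplets from one episode.

    frames: ordered list of RGB images from a single episode.
    instruction: the task's textual instruction, e.g. "stack blocks".
    """
    triplets = []
    for t in range(len(frames) - FRAME_OFFSET):
        triplets.append({
            "current_frame": frames[t],
            "instruction": instruction,
            "future_frame": frames[t + FRAME_OFFSET],
        })
    return triplets
```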