# DSFL Dataset - AMI Disfluency and Laughter Events
This dataset contains segmented audio and video clips from the AMI Meeting Corpus, consisting only of disfluency and laughter events, segmented in both the audio and visual modalities.
This dataset, along with `hhoangphuoc/ami-av`, was created for my research on Audio-Visual Speech Recognition, which I am currently developing at: https://github.com/hhoangphuoc/AVSL
To reproduce the work I did to create this dataset, check out the documentation: https://github.com/hhoangphuoc/AVSL/blob/main/docs/Preprocess.md
## Summary of the Dataset
- Number of recordings: 35,731
- Has audio: True
- Has video: True
- Has lip video: True
```
Dataset({
    features: ['id', 'meeting_id', 'speaker_id', 'start_time', 'end_time', 'duration', 'has_audio', 'has_video', 'has_lip_video', 'disfluency_type', 'transcript', 'audio', 'video', 'lip_video'],
    num_rows: 35731
})
```
The dataset contains ~35k recordings of disfluency and laughter events from the AMI Corpus. However, not all recordings contain every audio/video source. In detail, the number of recordings for each source is:
- #audio: 35,731 items
- #video: 35,536 items
- #lip_video: ~28,263 items
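Given the per-item boolean flags (`has_audio`, `has_video`, `has_lip_video`), these per-source counts can be recomputed directly. A minimal sketch with toy records (the rows below are made up for illustration, not real dataset entries):

```python
# Toy records mimicking the dataset's boolean availability flags.
# The values here are made up for illustration, not real dataset rows.
records = [
    {"has_audio": True, "has_video": True,  "has_lip_video": True},
    {"has_audio": True, "has_video": True,  "has_lip_video": False},
    {"has_audio": True, "has_video": False, "has_lip_video": False},
]

# Count how many records provide each source.
counts = {
    flag: sum(r[flag] for r in records)
    for flag in ("has_audio", "has_video", "has_lip_video")
}
print(counts)  # {'has_audio': 3, 'has_video': 2, 'has_lip_video': 1}
```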
Specific to the disfluency and laughter events:
- #laughter: 469 laughter events in total; all have audio sources, 450/469 have video sources, and only 346 have lip_video.
- #disfluency: covers 10 types of disfluencies (annotated in the AMI transcripts). All have disfluent words and audio available; however, the #video and #lip_video counts vary.
## Using the dataset
The overall information of the dataset is reported in the `dsfl-segments-info.csv` file. This includes:
- `id`: unique segment_id of the event
- `meeting_id`: the meeting this event belongs to
- `speaker_id`: the speaker corresponding to the event
- `start_time`: start timestamp of the event
- `end_time`: stop timestamp of the event
- `disfluency_type`: the type of disfluency for that event; the type is `laugh` if it is laughter
- `transcript`: the corresponding disfluent word, annotated as `<laugh>` if it is a laughter event
- `audio` | `video` | `lip_video`: paths to the corresponding sources

Similarly, the `metadata.jsonl` file contains the information for each recording item in JSON Lines format, i.e. each line refers to one `segment_id`.
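A single `metadata.jsonl` line can be parsed with the standard library. The line below is a fabricated example whose field names follow the feature list above; the values are placeholders, not real dataset entries:

```python
import json

# A fabricated metadata.jsonl line; the field names follow the dataset's
# feature list, but every value here is a placeholder.
line = (
    '{"id": "ES2001a-0.00-0.01", "meeting_id": "ES2001a", '
    '"speaker_id": "A", "start_time": 0.0, "end_time": 0.01, '
    '"disfluency_type": "laugh", "transcript": "<laugh>"}'
)

record = json.loads(line)
print(record["disfluency_type"])  # laugh
print(record["end_time"] - record["start_time"])  # 0.01
```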
To use the dataset, follow these steps:
1. Download the dataset using `load_dataset("hhoangphuoc/ami-disfluency")`.
2. Alternatively, if `load_dataset` does not include the audio/video recording resources, download the following files manually:
- `audio_segments.tar.gz`: includes all audio segments
- `video_segments.tar.gz`: includes both the original videos (in the `original_video` folder) and the lip videos (in the `lips` folder)
Or you can do it using wget:

```
wget https://huggingface.co/datasets/hhoangphuoc/ami-disfluency/resolve/main/audio_segments.tar.gz
wget https://huggingface.co/datasets/hhoangphuoc/ami-disfluency/resolve/main/video_segments.tar.gz
```
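After downloading, the archives can also be unpacked with Python's built-in `tarfile` module instead of `tar` on the command line. A minimal, self-contained sketch (it builds a tiny placeholder archive so the demo runs without the real download):

```python
import tarfile
import tempfile
from pathlib import Path

def extract_segments(archive_path, dest_dir):
    """Extract a downloaded segments archive (e.g. audio_segments.tar.gz)."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest)  # on Python 3.12+, consider passing filter="data"
    return dest

# Self-contained demo: build a tiny placeholder archive, then extract it.
with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    wav = tmp / "ES2001a-0.00-0.01-audio.wav"
    wav.write_bytes(b"RIFF")  # placeholder bytes, not real audio
    archive = tmp / "audio_segments.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(wav, arcname="audio_segments/" + wav.name)
    out = extract_segments(archive, tmp / "extracted")
    print(sorted(p.name for p in out.rglob("*.wav")))  # ['ES2001a-0.00-0.01-audio.wav']
```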
The folder structure where you store the data can look like the following example:
```
path/to/folder (ami-disfluency)/
|_ dsfl-segments-info.csv
|_ audio_segments/
   |_ ES2001a-0.00-0.01-audio.wav
   |_ ...
|_ video_segments/
   |_ original_videos/
      |_ ES2001a-0.00-0.01-laugh-video.mp4
      |_ ...
   |_ lips/
      |_ ES2001a-0.00-0.01-laugh-lip_video.mp4
      |_ ...
|_ ...
```
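Following the layout above, the path to each source for a segment can be derived from its `id` and `disfluency_type`. Note that the filename pattern used here is inferred from the example tree, so treat it as an assumption rather than a guaranteed convention:

```python
from pathlib import Path

def segment_paths(root, segment_id, disfluency_type):
    """Build the expected source paths for one segment.

    The naming pattern (<id>-audio.wav, <id>-<type>-video.mp4,
    <id>-<type>-lip_video.mp4) is inferred from the example tree
    above and may not hold for every segment.
    """
    root = Path(root)
    return {
        "audio": root / "audio_segments" / f"{segment_id}-audio.wav",
        "video": (root / "video_segments" / "original_videos"
                  / f"{segment_id}-{disfluency_type}-video.mp4"),
        "lip_video": (root / "video_segments" / "lips"
                      / f"{segment_id}-{disfluency_type}-lip_video.mp4"),
    }

paths = segment_paths("ami-disfluency", "ES2001a-0.00-0.01", "laugh")
print(paths["audio"].as_posix())
# ami-disfluency/audio_segments/ES2001a-0.00-0.01-audio.wav
```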