xxayt committed · Commit 5703584 · 1 Parent(s): 44f7eaf
README.md CHANGED
@@ -1,3 +1,130 @@
- ---
- license: mit
- ---

---
license: cc-by-nc-4.0
---


## Music Grounding by Short Video E-commerce (MGSV-EC) Dataset

📄 [[Paper]](https://arxiv.org/abs/2408.16990v2)


### 📝 Dataset Summary

**MGSV-EC** is a large-scale dataset for the new task of **Music Grounding by Short Video (MGSV)**, which aims to localize the specific music segment that best serves as background music (BGM) for a given query short video.
Unlike traditional video-to-music retrieval (V2MR), MGSV requires both identifying the relevant music track and pinpointing the precise moment within that track.

The dataset contains **53,194 short e-commerce videos** paired with **35,393 music moments**, all derived from **4,050 unique music tracks**. It supports evaluation in two modes:

- **Single-music Grounding (SmG)**: the relevant music track is given, and the task is to localize the correct segment within it.
- **Music-set Grounding (MsG)**: the model must first retrieve the correct music track from the full music set and then localize its corresponding segment.

### 📐 Evaluation Protocol

| Mode | Sub-task | Metric |
|:---------------|:--------------------------------|:--------|
| *Single-music* | Grounding (SmG) | mIoU |
| *Music-set* | Video-to-Music Retrieval (V2MR) | R@$k$ |
| *Music-set* | Grounding (MsG) | MoR@$k$ |

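As a quick reference, the sketch below shows how temporal IoU between a predicted music moment and the ground-truth moment can be computed, with mIoU as the average over queries. Times are in seconds; this is only an illustration of the metric, not the official evaluation script.

```python
def temporal_iou(pred, gt):
    """pred, gt: (start, end) spans in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(preds, gts):
    """mIoU over paired lists of predicted and ground-truth (start, end) spans."""
    return sum(temporal_iou(p, g) for p, g in zip(preds, gts)) / len(preds)

# Example: a 10-second prediction overlapping 8 seconds of a 10-second ground-truth moment.
print(temporal_iou((32.0, 42.0), (30.0, 40.0)))  # 8 / 12 ≈ 0.667
```
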
---

### 📊 Dataset Statistics

| **Split** | **#Music Tracks** | *Avg. Music Duration (sec)* | **#Query Videos** | *Avg. Video Duration (sec)* | **#Moments** |
|---------|----------------|----------------------|---------|----------------------|-----------|
| Total | 4,050 | 138.9 ± 69.6 | 53,194 | 23.9 ± 10.7 | 35,393 |
| *Train* | 3,496 | 138.3 ± 69.4 | 49,194 | 24.0 ± 10.7 | 31,660 |
| *Val* | 2,000 | 139.6 ± 70.0 | 2,000 | 22.8 ± 10.8 | 2,000 |
| *Test* | 2,000 | 139.9 ± 70.1 | 2,000 | 22.6 ± 10.7 | 2,000 |

- 🎵 Music type ratio: **~60% songs**, **~40% instrumental**
- 📹 Frame rate: 34 FPS; resolution: 1080×1920

### 📁 Data Format

Each row in the CSV files represents a query video paired with a music track and a localized music moment. The columns are as follows (a minimal loading example is given after the table):

| Column Name | Description |
|:-------------|--------------|
| video_id | Unique identifier for the short query video. |
| music_id | Unique identifier for the associated music track. |
| video_start | Start time of the video segment within the full video. |
| video_end | End time of the video segment within the full video. |
| music_start | Start time of the music segment within the full track. |
| music_end | End time of the music segment within the full track. |
| music_total_duration | Total duration of the music track. |
| video_segment_duration | Duration of the video segment. |
| music_segment_duration | Duration of the music segment. |
| music_path | Relative path to the music track file. |
| video_total_duration | Total duration of the video. |
| video_width | Width of the video frame. |
| video_height | Height of the video frame. |
| video_total_frames | Total number of frames in the video. |
| video_frame_rate | Frame rate of the video. |
| video_category | Category label of the video content (e.g., "美妆" (beauty), "美食" (food)). |

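A minimal sketch of reading the annotations with pandas, using the CSV paths released under `dataset/MGSV-EC/`; the printed fields are only for illustration.

```python
import pandas as pd

# Load the training annotations.
df = pd.read_csv("dataset/MGSV-EC/train_data.csv")

row = df.iloc[0]
print(f"video {row['video_id']} -> music {row['music_id']}")
print(f"music moment: [{row['music_start']}, {row['music_end']}] "
      f"of a track with total duration {row['music_total_duration']}")
print(f"video segment duration: {row['video_segment_duration']}")
```
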
### 🧩 Feature Directory Structure

For each video-music pair, we provide pre-extracted visual and audio features in [MGSV_feature.zip](./MGSV_feature.zip) for efficient training. The features are stored in the following directory structure:

```shell
[Your data feature path]
.
├── ast_feature2p5
│   ├── ast_feature/   # Audio segment features extracted by AST (Audio Spectrogram Transformer)
│   └── ast_mask/      # Segment-level masks indicating valid audio positions
└── vit_feature1
    ├── vit_feature/   # Frame-level visual features extracted by CLIP-ViT (ViT-B/32)
    └── vit_mask/      # Frame-level masks indicating valid visual positions
```

Each `.pt` file corresponds to a single sample. After batching, the loaded tensors have the following shapes:
- frame_feats: `[B, max_v_frames, 512]`
- frame_masks: `[B, max_v_frames]`, where 1 marks a valid frame and 0 marks padding (used for masking during batching)
- segment_feats: `[B, max_snippet_num, 768]`
- segment_masks: `[B, max_snippet_num]`, indicating valid audio segments

Note:
- These pre-extracted features are compatible with our released PyTorch dataloader [dataloader_MGSV_EC_feature.py](./dataloader/dataloader_MGSV_EC_feature.py).
- Feature file paths are not stored in the CSV. Instead, users should specify the base directories via the following arguments (a single-sample loading sketch is given below):
  - frame_frozen_feature_path: `[Your data feature path]/vit_feature1`
  - music_frozen_feature_path: `[Your data feature path]/ast_feature2p5`

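A minimal sketch of loading the features for one sample, assuming `MGSV_feature.zip` has been extracted to `MGSV_feature/` and using hypothetical identifiers; the masking step mirrors the released dataloader.

```python
import os
import torch

feature_root = "MGSV_feature"              # assumed extraction path of MGSV_feature.zip
video_id, music_id = "123456", "654321"    # hypothetical identifiers taken from the CSV

# Frame-level visual features (CLIP-ViT) and their validity mask.
frame_feats = torch.load(os.path.join(feature_root, "vit_feature1", "vit_feature", f"{video_id}.pt"), map_location="cpu")
frame_mask = torch.load(os.path.join(feature_root, "vit_feature1", "vit_mask", f"{video_id}.pt"), map_location="cpu")
frame_feats = frame_feats.masked_fill(frame_mask.unsqueeze(-1) == 0, 0)  # zero out padded frame positions

# Segment-level audio features (AST) and their validity mask.
segment_feats = torch.load(os.path.join(feature_root, "ast_feature2p5", "ast_feature", f"{music_id}.pt"), map_location="cpu")
segment_mask = torch.load(os.path.join(feature_root, "ast_feature2p5", "ast_mask", f"{music_id}.pt"), map_location="cpu")
segment_feats = segment_feats.masked_fill(segment_mask.unsqueeze(-1) == 0, 0)  # zero out padded audio segments

print(frame_feats.shape, segment_feats.shape)
```
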

---

### 📖 Citation

If you use this dataset in your research, please cite:

```bibtex
@article{xin2024mgsv,
  title={Music Grounding by Short Video},
  author={Xin, Zijie and Wang, Minquan and Liu, Jingyu and Chen, Quan and Ma, Ye and Jiang, Peng and Li, Xirong},
  journal={arXiv preprint arXiv:2408.16990},
  year={2024}
}
```

### 📜 License

This dataset is released under **CC BY-NC 4.0** and is intended **for non-commercial academic research and educational purposes only**.
For commercial licensing or any use beyond research, please contact the authors.

📥 **Raw Videos / Music Tracks Access**
The raw video and music files are not publicly available due to copyright and privacy constraints.
Researchers interested in obtaining the full media content can contact **Kuaishou Technology** at [[email protected]](mailto:[email protected]).

📬 **Contact for Issues**
For any dataset-related questions or problems (e.g., corrupted files or loading errors), please reach out to [[email protected]](mailto:[email protected]).

dataloader/data_dataloaders_feature.py ADDED
@@ -0,0 +1,72 @@
import torch
from torch.utils.data import DataLoader
from dataloader.dataloader_MGSV_EC_feature import MGSV_EC_DataLoader


def dataloader_MGSV_EC_train(args):
    MGSV_EC_trainset = MGSV_EC_DataLoader(
        csv_path=args.train_csv,
        args=args,
    )
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        MGSV_EC_trainset, num_replicas=args.world_size, rank=args.rank
    )
    dataloader = DataLoader(
        MGSV_EC_trainset,
        batch_size=args.batch_size_train // args.gpu_num,
        num_workers=args.num_workers,
        shuffle=(train_sampler is None),
        sampler=train_sampler,
        drop_last=True,
        pin_memory=True,
    )
    return dataloader, len(MGSV_EC_trainset), train_sampler


def dataloader_MGSV_EC_val(args):
    MGSV_EC_valset = MGSV_EC_DataLoader(
        csv_path=args.val_csv,
        args=args,
    )
    val_sampler = torch.utils.data.distributed.DistributedSampler(
        MGSV_EC_valset, num_replicas=args.world_size, rank=args.rank
    )
    dataloader = DataLoader(
        MGSV_EC_valset,
        batch_size=args.batch_size_val // args.gpu_num,
        num_workers=args.num_workers,
        shuffle=(val_sampler is None),
        sampler=val_sampler,
        drop_last=False,
    )
    return dataloader, len(MGSV_EC_valset), val_sampler


def dataloader_MGSV_EC_test(args):
    # NOTE: args.val_csv is reused here; point it at the test CSV when evaluating on the test split.
    MGSV_EC_testset = MGSV_EC_DataLoader(
        csv_path=args.val_csv,
        args=args,
    )
    test_sampler = torch.utils.data.distributed.DistributedSampler(
        MGSV_EC_testset, num_replicas=args.world_size, rank=args.rank
    )
    dataloader = DataLoader(
        MGSV_EC_testset,
        batch_size=args.batch_size_val // args.gpu_num,
        num_workers=args.num_workers,
        shuffle=(test_sampler is None),
        sampler=test_sampler,
        drop_last=False,
    )
    return dataloader, len(MGSV_EC_testset), test_sampler


# Registry mapping dataset names to their split-specific dataloader builders.
DATALOADER_DICT = {}
DATALOADER_DICT["kuai50k_uni"] = {
    "train": dataloader_MGSV_EC_train,
    "val": dataloader_MGSV_EC_val,
    "test": dataloader_MGSV_EC_test
}
# DATALOADER_DICT["kuai50k_vmr"] = {
#     "train": dataloader_MGSV_EC_train,
#     "val": dataloader_MGSV_EC_val,
#     "test": dataloader_MGSV_EC_test
# }
# DATALOADER_DICT["kuai50k_mr"] = {
#     "train": dataloader_MGSV_EC_train,
#     "val": dataloader_MGSV_EC_val,
#     "test": dataloader_MGSV_EC_test
# }
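A hedged usage sketch for the registry above: the argument names follow the loaders, but the values (batch sizes, paths, `max_m_duration`) are hypothetical, and `world_size=1, rank=0` keeps the `DistributedSampler` usable in a single-process run.

```python
from argparse import Namespace

from dataloader.data_dataloaders_feature import DATALOADER_DICT

# Hypothetical arguments for a single-process run; adjust paths and sizes to your setup.
args = Namespace(
    train_csv="dataset/MGSV-EC/train_data.csv",
    val_csv="dataset/MGSV-EC/val_data.csv",
    batch_size_train=32,
    batch_size_val=32,
    gpu_num=1,
    num_workers=4,
    world_size=1,
    rank=0,
    max_m_duration=240,  # assumed maximum music length in seconds
    frame_frozen_feature_path="MGSV_feature/vit_feature1",
    music_frozen_feature_path="MGSV_feature/ast_feature2p5",
)

train_loader, train_size, train_sampler = DATALOADER_DICT["kuai50k_uni"]["train"](args)
val_loader, val_size, _ = DATALOADER_DICT["kuai50k_uni"]["val"](args)
```
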
dataloader/dataloader_MGSV_EC_feature.py ADDED
@@ -0,0 +1,75 @@
import os
from torch.utils.data import Dataset
import torch
import pandas as pd

class MGSV_EC_DataLoader(Dataset):
    def __init__(
        self,
        csv_path,
        args=None,
    ):
        self.args = args
        self.csv_data = pd.read_csv(csv_path)

    def __len__(self):
        return len(self.csv_data)

    def get_cw_propotion(self, gt_spans, max_m_duration):
        '''
        Convert a [start, end] span into a normalized (center, width) pair relative to max_m_duration.
        Inputs:
            gt_spans: [1, 2]
            max_m_duration: float
        '''
        gt_spans[:, 1] = torch.clamp(gt_spans[:, 1], max=max_m_duration)
        center_propotion = (gt_spans[:, 0] + gt_spans[:, 1]) / 2.0 / max_m_duration  # [1]
        width_propotion = (gt_spans[:, 1] - gt_spans[:, 0]) / max_m_duration  # [1]
        return torch.stack([center_propotion, width_propotion], dim=-1)  # [1, 2]

    def __getitem__(self, idx):
        # ids
        video_id = self.csv_data['video_id'].to_numpy()[idx]
        music_id = self.csv_data['music_id'].to_numpy()[idx]
        # durations
        # v_duration = self.csv_data['video_total_duration'].to_numpy()[idx]
        m_duration = self.csv_data['music_total_duration'].to_numpy()[idx]
        m_duration = float(m_duration)
        # video moment start/end
        video_start_time = self.csv_data['video_start'].to_numpy()[idx]
        video_end_time = self.csv_data['video_end'].to_numpy()[idx]
        # music moment start/end
        music_start_time = self.csv_data['music_start'].to_numpy()[idx]
        music_end_time = self.csv_data['music_end'].to_numpy()[idx]
        gt_windows_list = [(music_start_time, music_end_time)]
        gt_windows = torch.Tensor(gt_windows_list)  # [1, 2]
        # meta information
        meta_map = {
            "video_id": str(video_id),
            "music_id": str(music_id),
            "v_duration": torch.tensor(video_end_time - video_start_time),
            "m_duration": torch.tensor(m_duration),
            "gt_moment": gt_windows,  # [1, 2]
        }
        # target spans in normalized (center, width) form
        spans_target = self.get_cw_propotion(gt_windows, self.args.max_m_duration)  # [1, 2]

        # load pre-extracted frame features and masks (zero out padded positions)
        video_feature_path = os.path.join(self.args.frame_frozen_feature_path, 'vit_feature', f'{video_id}.pt')
        video_mask_path = os.path.join(self.args.frame_frozen_feature_path, 'vit_mask', f'{video_id}.pt')
        frame_feats = torch.load(video_feature_path, map_location='cpu')
        frame_mask = torch.load(video_mask_path, map_location='cpu')
        frame_feats = frame_feats.masked_fill(frame_mask.unsqueeze(-1) == 0, 0)  # [bs, max_frame_num, 512]

        # load pre-extracted audio segment features and masks (zero out padded positions)
        music_feature_path = os.path.join(self.args.music_frozen_feature_path, 'ast_feature', f'{music_id}.pt')
        music_mask_path = os.path.join(self.args.music_frozen_feature_path, 'ast_mask', f'{music_id}.pt')
        segment_feats = torch.load(music_feature_path, map_location='cpu')
        segment_mask = torch.load(music_mask_path, map_location='cpu')
        segment_feats = segment_feats.masked_fill(segment_mask.unsqueeze(-1) == 0, 0)  # [bs, max_snippet_num, 768]

        data_map = {
            "frame_feats": frame_feats,
            "frame_mask": frame_mask,
            "segment_feats": segment_feats,
            "segment_mask": segment_mask,
        }
        return data_map, meta_map, spans_target
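The `spans_target` returned above encodes each ground-truth moment as a normalized (center, width) pair. Predictions in the same format can be mapped back to absolute times with a small helper like the hypothetical `decode_cw` below (not part of the released code).

```python
import torch

def decode_cw(spans_cw, max_m_duration):
    """Inverse of get_cw_propotion: map normalized (center, width) pairs back to (start, end) times.

    spans_cw: [N, 2] tensor of (center, width) proportions; max_m_duration: float (same unit as the CSV times).
    """
    center = spans_cw[:, 0] * max_m_duration
    width = spans_cw[:, 1] * max_m_duration
    start = (center - width / 2).clamp(min=0)
    end = (center + width / 2).clamp(max=max_m_duration)
    return torch.stack([start, end], dim=-1)  # [N, 2]

# Example: (center=0.5, width=0.1) on a 200-second track maps back to the span [90, 110].
print(decode_cw(torch.tensor([[0.5, 0.1]]), 200.0))
```
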
dataset/MGSV-EC/test_data.csv ADDED
dataset/MGSV-EC/train_data.csv ADDED
dataset/MGSV-EC/val_data.csv ADDED