Tony Fang committed
Commit 900cef8 · 1 Parent(s): d67ed13

added identification benchmark

.gitignore CHANGED
@@ -1,4 +1,5 @@
 *.png
+*.jpg
 *.txt
 *.out
 *.pt
@@ -10,4 +11,6 @@ object_detector_benchmark/yolo_benchmark/*
 object_detector_benchmark/8_calves_arrow/
 object_detector_benchmark/8_calves_coco/
 object_detector_benchmark/transformer_benchmark/runs
-object_detector_benchmark/transformer_benchmark/__pycache__
+*__pycache__
+!requirement.txt
+*.mdb
README.md CHANGED
@@ -66,11 +66,14 @@ df = pd.read_pickle("pmfeed_4_3_16_bboxes_and_labels.pkl")
 
 ## Usage
 ### Dataset Download:
+Step 1: install the conda environment from
+`requirement.txt`
 
-Step 1: install git-lfs:
+
+Step 2: install git-lfs:
 `git lfs install`
 
-Step 2:
+Step 3:
 `git clone git@hf.co:datasets/tonyFang04/8-calves`
 
 ### Object Detection
@@ -86,14 +89,13 @@ Step 2:
 | Test | 67,760 | Final evaluation |
 
 ### Benchmarking YOLO Models:
-Step 1: install albumentations. Check out [Albumentations' website](https://www.albumentations.ai/docs/) for more information.
 
-Step 2:
+Step 1:
 `cd 8-calves/object_detector_benchmark`. Run
 `./create_yolo_dataset.sh` and
 `create_yolo_testset.py`. This creates a YOLO dataset with the 500/100/67760 train/val/test split recommended above.
 
-Step 3: install ultralytics. Check out [Ultralytics's website](https://github.com/ultralytics/ultralytics) for more information. Find the `Albumentations` class in the `data/augment.py` file in ultralytics source code. And replace the default transforms to:
+Step 2: find the `Albumentations` class in the `data/augment.py` file in the ultralytics source code, and replace the default transforms with:
 
 ```
 # Transforms
@@ -109,8 +111,8 @@ T = [
 ]
 ```
 
-Step 4:
-run the yolo detectors following the following steps:
+Step 3:
+run the YOLO detectors with the following commands:
 
 ```
 cd yolo_benchmark
@@ -120,6 +122,22 @@ Model_Name=yolov9t
 yolo cfg=experiment.yaml model=$Model_Name.yaml name=$Model_Name
 ```
 
+### Benchmark Transformer-Based Models:
+
+Step 1: run the following commands to convert the data into YOLO format, then into COCO, then into Arrow:
+```
+cd 8-calves/object_detector_benchmark
+./create_yolo_dataset.sh
+python create_yolo_testset.py
+python yolo_to_coco.py
+python data_wrangling.py
+```
+
+Step 2: run the following commands to train:
+```
+cd transformer_benchmark
+python train.py --config Configs/conditional_detr.yaml
+```
 
 ### Identity Classification
 - Use `tracklet_id` (1-8) from the PKL file as labels.
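
As a hedged illustration of that labeling step (the file and column names come from this repository; shifting the IDs to 0-7 is an assumption for typical classifiers, not part of the released code):

```
import pandas as pd

# Annotations shipped with the dataset (see the read_pickle call above).
df = pd.read_pickle("pmfeed_4_3_16_bboxes_and_labels.pkl")

# tracklet_id runs from 1 to 8; shift to 0-based labels for most classifiers.
labels = df["tracklet_id"].astype(int) - 1
print(sorted(labels.unique()))  # expected: [0, 1, 2, 3, 4, 5, 6, 7]
```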
identification_benchmark/big_model_inference/data_loading.py ADDED
@@ -0,0 +1,143 @@
import lmdb
import io
import re
from PIL import Image
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import unittest
from tqdm import tqdm

torch.multiprocessing.set_sharing_strategy('file_system')


class LMDBImageDataset(Dataset):
    def __init__(self, lmdb_path, transform=None, limit=None):
        """
        Args:
            lmdb_path (str): Path to the LMDB directory.
            transform (callable, optional): Optional transform to be applied on an image.
            limit (int or float, optional): If a float between 0 and 1, keeps that
                fraction of keys. If an int, keeps that many keys.
        """
        # Open the LMDB environment in read-only mode.
        self.env = lmdb.open(lmdb_path, readonly=True, lock=False, readahead=False)
        self.transform = transform

        # Retrieve all keys from the LMDB database.
        with self.env.begin() as txn:
            keys = [key.decode('utf-8') for key, _ in txn.cursor()]

        # Sort key function that extracts frame number and cow id from the filename.
        def sort_key(filename):
            # Expected pattern: "pmfeed_4_3_16_frame_10000_cow_1.jpg"
            match = re.search(r'frame_(\d+)_cow_(\d+)', filename)
            if match:
                frame = int(match.group(1))
                cow = int(match.group(2))
                return (frame, cow)
            return (float('inf'), float('inf'))

        # Sort the keys chronologically (by frame number, then cow id).
        keys = sorted(keys, key=sort_key)

        # Apply the limit if provided.
        if limit is not None:
            if isinstance(limit, float):
                if 0 <= limit <= 1:
                    cutoff = int(len(keys) * limit)
                    keys = keys[:cutoff]
                else:
                    raise ValueError("If limit is a float, it must be between 0 and 1.")
            elif isinstance(limit, int):
                keys = keys[:limit]
            else:
                raise TypeError("limit must be either a float or an integer.")

        self.keys = keys

    def __getitem__(self, index):
        # Get the key and image data.
        key_str = self.keys[index]
        key = key_str.encode('utf-8')
        with self.env.begin() as txn:
            image_bytes = txn.get(key)

        # Convert binary image data to a PIL Image.
        image = Image.open(io.BytesIO(image_bytes)).convert('RGB')

        if self.transform:
            image = self.transform(image)

        # Extract the cow id from the filename.
        match = re.search(r'frame_(\d+)_cow_(\d+)', key_str)
        if match:
            cow_id = int(match.group(2))
        else:
            cow_id = -1  # Default value when the pattern is not found.

        return image, cow_id

    def __len__(self):
        return len(self.keys)


class TestLMDBImageDataset(unittest.TestCase):
    def test_dataset_length(self):
        # Example transform: resize and convert to tensor.
        transform = transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.ToTensor(),
        ])

        # Path to your LMDB directory.
        lmdb_path = '../lmdb_all_crops_pmfeed_4_3_16'
        dataset = LMDBImageDataset(lmdb_path=lmdb_path, transform=transform, limit=20)
        self.assertEqual(len(dataset), 20)
        self.assertEqual(dataset.keys, ['pmfeed_4_3_16_frame_1_cow_1.jpg', 'pmfeed_4_3_16_frame_1_cow_2.jpg', 'pmfeed_4_3_16_frame_1_cow_3.jpg', 'pmfeed_4_3_16_frame_1_cow_4.jpg', 'pmfeed_4_3_16_frame_1_cow_5.jpg', 'pmfeed_4_3_16_frame_1_cow_6.jpg', 'pmfeed_4_3_16_frame_1_cow_7.jpg', 'pmfeed_4_3_16_frame_1_cow_8.jpg', 'pmfeed_4_3_16_frame_2_cow_1.jpg', 'pmfeed_4_3_16_frame_2_cow_2.jpg', 'pmfeed_4_3_16_frame_2_cow_3.jpg', 'pmfeed_4_3_16_frame_2_cow_4.jpg', 'pmfeed_4_3_16_frame_2_cow_5.jpg', 'pmfeed_4_3_16_frame_2_cow_6.jpg', 'pmfeed_4_3_16_frame_2_cow_7.jpg', 'pmfeed_4_3_16_frame_2_cow_8.jpg', 'pmfeed_4_3_16_frame_3_cow_1.jpg', 'pmfeed_4_3_16_frame_3_cow_2.jpg', 'pmfeed_4_3_16_frame_3_cow_3.jpg', 'pmfeed_4_3_16_frame_3_cow_4.jpg'])
        dataset = LMDBImageDataset(lmdb_path=lmdb_path, transform=transform, limit=100)
        self.assertEqual(len(dataset), 100)
        self.assertEqual(dataset.keys[-10:], ['pmfeed_4_3_16_frame_12_cow_3.jpg', 'pmfeed_4_3_16_frame_12_cow_4.jpg', 'pmfeed_4_3_16_frame_12_cow_5.jpg', 'pmfeed_4_3_16_frame_12_cow_6.jpg', 'pmfeed_4_3_16_frame_12_cow_7.jpg', 'pmfeed_4_3_16_frame_12_cow_8.jpg', 'pmfeed_4_3_16_frame_13_cow_1.jpg', 'pmfeed_4_3_16_frame_13_cow_2.jpg', 'pmfeed_4_3_16_frame_13_cow_3.jpg', 'pmfeed_4_3_16_frame_13_cow_4.jpg'])
        dataset = LMDBImageDataset(lmdb_path=lmdb_path, transform=transform)
        self.assertEqual(len(dataset), 537908)
        dataset = LMDBImageDataset(lmdb_path=lmdb_path, transform=transform, limit=0.5)
        self.assertEqual(len(dataset), 268954)
        dataset = LMDBImageDataset(lmdb_path=lmdb_path, transform=transform, limit=0.3)
        self.assertEqual(len(dataset), 161372)

    def test_data_loading(self):
        # Example transform: resize and convert to tensor.
        transform = transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.ToTensor(),
        ])

        # Path to your LMDB directory.
        lmdb_path = '../lmdb_all_crops_pmfeed_4_3_16'

        # Create the dataset over all keys.
        dataset = LMDBImageDataset(lmdb_path=lmdb_path, transform=transform)
        # Create a DataLoader.
        dataloader = DataLoader(
            dataset,
            batch_size=256,
            shuffle=False,
            num_workers=8,
        )

        # Iterate over the whole dataset, collecting the labels.
        ground_truths = []
        for images, cow_ids in tqdm(dataloader, unit='batch'):
            ground_truths.append(cow_ids)

        ground_truths = torch.cat(ground_truths, dim=0)
        self.assertEqual(len(ground_truths), 537908)
        self.assertEqual(set(ground_truths.tolist()), {1, 2, 3, 4, 5, 6, 7, 8})


if __name__ == "__main__":
    unittest.main()
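
A minimal usage sketch for this loader (the path matches the tests above; the batch size and `limit` are illustrative choices):

```
from torch.utils.data import DataLoader
from torchvision import transforms
from data_loading import LMDBImageDataset

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Keep the first half of the chronologically sorted crops.
dataset = LMDBImageDataset('../lmdb_all_crops_pmfeed_4_3_16', transform=transform, limit=0.5)
loader = DataLoader(dataset, batch_size=64, shuffle=False, num_workers=4)

images, cow_ids = next(iter(loader))
print(images.shape)  # torch.Size([64, 3, 224, 224])
print(cow_ids[:8])   # cow IDs in 1..8
```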
identification_benchmark/big_model_inference/inference_resnet.py ADDED
@@ -0,0 +1,67 @@
import torch
from torchvision import transforms, models
from data_loading import LMDBImageDataset
from torch.utils.data import DataLoader
from tqdm import tqdm
import argparse

torch.multiprocessing.set_sharing_strategy('file_system')

def main():
    # Parse command line arguments.
    parser = argparse.ArgumentParser(description="Compute ResNet embeddings")
    parser.add_argument('--resnet_type', type=str, default='resnet152',
                        help="Type of ResNet model to use (e.g., resnet18, resnet34, resnet50, resnet101, resnet152)")
    parser.add_argument('--lmdb_path', type=str, default='../lmdb_all_crops_pmfeed_4_3_16',
                        help="Path to the LMDB image dataset")
    args = parser.parse_args()

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Create the dataset and dataloader.
    dataset = LMDBImageDataset(
        lmdb_path=args.lmdb_path,
        transform=transform,
        limit=None
    )
    dataloader = DataLoader(
        dataset,
        batch_size=128,
        shuffle=False,
        num_workers=8,
    )

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Dynamically load the specified ResNet model.
    resnet_constructor = getattr(models, args.resnet_type)
    model = resnet_constructor(weights='IMAGENET1K_V1')
    # Remove the last fully-connected layer to obtain embeddings.
    model = list(model.children())[:-1]
    model = torch.nn.Sequential(*model)
    model.to(device)
    model.eval()

    all_embeddings = []
    all_cow_ids = []

    # Loop through the dataset and compute embeddings.
    with torch.no_grad():
        for images, cow_ids in tqdm(dataloader, unit='batch'):
            images = images.to(device)
            image_features = model(images)
            # Flatten (B, C, 1, 1) to (B, C); unlike squeeze(), this keeps the
            # batch dimension even if the last batch holds a single sample.
            image_features = torch.flatten(image_features, 1)
            all_embeddings.append(image_features.cpu())
            all_cow_ids.append(cow_ids)

    # Concatenate and save all embeddings.
    embeddings = torch.cat(all_embeddings, dim=0)
    torch.save(embeddings, f"{args.resnet_type}_embeddings.pt")
    all_cow_ids = torch.cat(all_cow_ids, dim=0)
    torch.save(all_cow_ids, "all_cow_ids.pt")

if __name__ == '__main__':
    main()
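
A sketch of consuming the saved artifacts (file names follow the `torch.save` calls above; 2048 is the known pooled-feature width of resnet152, while smaller ResNets differ):

```
import torch

# Produced by: python inference_resnet.py --resnet_type resnet152
embeddings = torch.load("resnet152_embeddings.pt")  # shape (N, 2048) for resnet152
cow_ids = torch.load("all_cow_ids.pt")              # shape (N,), values in 1..8

assert embeddings.shape[0] == cow_ids.shape[0]
print(embeddings.shape, cow_ids.unique())
```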
identification_benchmark/big_model_inference/inference_transformers.py ADDED
@@ -0,0 +1,98 @@
import torch
import argparse
import re
from transformers import AutoProcessor, AutoModel
from torchvision import transforms
from data_loading import LMDBImageDataset
from torch.utils.data import DataLoader
from tqdm import tqdm

torch.multiprocessing.set_sharing_strategy('file_system')

def infer_image_size(model_name):
    """
    Infer image size from the model name.
    Looks for a trailing hyphen followed by digits (e.g., "-336").
    Defaults to 224 if not found.
    """
    match = re.search(r'-([0-9]+)$', model_name)
    if match:
        return int(match.group(1))
    else:
        return 224

def collate_fn(batch):
    # Keep PIL images in plain lists; the Hugging Face processor batches them.
    images, labels = zip(*batch)
    return list(images), list(labels)

def main():
    parser = argparse.ArgumentParser(description="Compute embeddings for a Hugging Face model")
    parser.add_argument('--model_name', type=str, default="facebook/vit-mae-base",
                        help="Hugging Face model name, e.g., facebook/vit-mae-base or openai/clip-vit-base-patch14-336")
    parser.add_argument('--lmdb_path', type=str, default='../lmdb_all_crops_pmfeed_4_3_16',
                        help="Path to the LMDB image dataset")
    parser.add_argument('--batch_size', type=int, default=128)
    parser.add_argument('--num_workers', type=int, default=8)
    args = parser.parse_args()

    # Infer image size from the model name.
    image_size = infer_image_size(args.model_name)
    print(f"Inferred image size: {image_size}")

    transform = transforms.Compose([
        transforms.Resize((image_size, image_size)),
    ])

    # Create the dataset and dataloader.
    dataset = LMDBImageDataset(
        lmdb_path=args.lmdb_path,
        transform=transform,
        limit=None
    )
    dataloader = DataLoader(
        dataset,
        batch_size=args.batch_size,
        shuffle=False,
        num_workers=args.num_workers,
        collate_fn=collate_fn
    )

    # Load the model and processor.
    model_name = args.model_name
    processor = AutoProcessor.from_pretrained(model_name, do_normalize=False)
    model = AutoModel.from_pretrained(model_name)
    print(f"Using model: {model_name}")

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    model.eval()

    all_embeddings = []
    all_cow_ids = []

    # Loop through the dataset and compute embeddings.
    with torch.no_grad():
        for images, cow_ids in tqdm(dataloader, unit='batch'):
            inputs = processor(images=images, return_tensors="pt")
            inputs = inputs.to(device)
            # Pick the embedding appropriate for each model family.
            if "clip-vit" in model_name:
                image_features = model.get_image_features(**inputs)
            elif "vit-mae" in model_name:
                # Mean of the last hidden state serves as the image embedding.
                image_features = model(**inputs).last_hidden_state.mean(dim=1)
            else:
                image_features = model(**inputs).pooler_output
            all_embeddings.append(image_features.cpu())
            all_cow_ids.extend(cow_ids)

    # Concatenate and save the embeddings.
    embeddings = torch.cat(all_embeddings, dim=0)
    output_file = f"{model_name.replace('/', '_')}_embeddings.pt"
    torch.save(embeddings, output_file)
    print(f"Embeddings saved to {output_file}")

if __name__ == '__main__':
    main()
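
For reference, the naming convention `infer_image_size` assumes behaves like this:

```
from inference_transformers import infer_image_size

print(infer_image_size("openai/clip-vit-base-patch14-336"))  # 336, from the trailing "-336"
print(infer_image_size("facebook/vit-mae-base"))             # 224, the default
```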
identification_benchmark/classification/data_loading.py ADDED
@@ -0,0 +1,138 @@
import torch
from torch.utils.data import TensorDataset
import tensorflow as tf
import tensorflow_datasets as tfds
import jax.numpy as jnp


def get_datasets(
    features_path='../big_model_inference/resnet18_embeddings.pt',
    labels_path='../big_model_inference/all_cow_ids.pt'
):
    embeddings_np = torch.load(features_path)
    all_cow_ids = torch.load(labels_path) - 1  # shift labels from 1..8 to 0..7

    # Set the seed for reproducibility.
    seed = 42
    torch.manual_seed(seed)

    # Shuffle the sample indices.
    num_samples = len(embeddings_np)
    indices = torch.randperm(num_samples)

    # Calculate split indices: 0.1% train, ~20% val, the rest test.
    train_end = int(0.001 * num_samples)
    val_end = int(0.2 * num_samples)
    train_indices = indices[:train_end]
    val_indices = indices[train_end:val_end]
    test_indices = indices[val_end:]

    # Create datasets for each split.
    train_dataset = TensorDataset(embeddings_np[train_indices], all_cow_ids[train_indices])
    val_dataset = TensorDataset(embeddings_np[val_indices], all_cow_ids[val_indices])
    test_dataset = TensorDataset(embeddings_np[test_indices], all_cow_ids[test_indices])

    print(f"Train set: {len(train_dataset)} samples")
    print(f"Validation set: {len(val_dataset)} samples")
    print(f"Test set: {len(test_dataset)} samples")
    return train_dataset, val_dataset, test_dataset


def get_time_series(
    features_path='../big_model_inference/resnet18_embeddings.pt',
    labels_path='../big_model_inference/all_cow_ids.pt'
):
    embeddings_np = torch.load(features_path)
    all_cow_ids = torch.load(labels_path) - 1

    # Chronological split: first 33% train, next 33% val, the rest test.
    num_samples = len(embeddings_np)
    train_end = int(0.33 * num_samples)
    val_end = int(0.66 * num_samples)

    # Create datasets for each split.
    train_dataset = TensorDataset(embeddings_np[:train_end], all_cow_ids[:train_end])
    val_dataset = TensorDataset(embeddings_np[train_end:val_end], all_cow_ids[train_end:val_end])
    test_dataset = TensorDataset(embeddings_np[val_end:], all_cow_ids[val_end:])

    print(f"Train set: {len(train_dataset)} samples")
    print(f"Validation set: {len(val_dataset)} samples")
    print(f"Test set: {len(test_dataset)} samples")
    return train_dataset, val_dataset, test_dataset


def get_time_series_tf(
    features_path='../big_model_inference/resnet18_embeddings.pt',
    labels_path='../big_model_inference/all_cow_ids.pt'
):
    embeddings_np = torch.load(features_path)
    all_cow_ids = torch.load(labels_path) - 1
    embeddings_np = embeddings_np.numpy()
    all_cow_ids = all_cow_ids.numpy()

    # Chronological split, as in get_time_series.
    num_samples = len(embeddings_np)
    train_end = int(0.33 * num_samples)
    val_end = int(0.66 * num_samples)

    # Create datasets for each split.
    train_dataset = tf.data.Dataset.from_tensor_slices((embeddings_np[:train_end], all_cow_ids[:train_end]))
    val_dataset = tf.data.Dataset.from_tensor_slices((embeddings_np[train_end:val_end], all_cow_ids[train_end:val_end]))
    test_dataset = tf.data.Dataset.from_tensor_slices((embeddings_np[val_end:], all_cow_ids[val_end:]))

    print(f"Train set: {len(train_dataset)} samples")
    print(f"Validation set: {len(val_dataset)} samples")
    print(f"Test set: {len(test_dataset)} samples")

    batch_size = 32

    train_dataset = train_dataset.shuffle(len(train_dataset)).batch(
        batch_size,
        num_parallel_calls=tf.data.AUTOTUNE
    ).prefetch(tf.data.AUTOTUNE)

    val_dataset = val_dataset.batch(
        batch_size,
        num_parallel_calls=tf.data.AUTOTUNE
    ).prefetch(tf.data.AUTOTUNE)

    test_dataset = test_dataset.batch(
        batch_size,
        num_parallel_calls=tf.data.AUTOTUNE
    ).prefetch(tf.data.AUTOTUNE)

    # Expose the pipelines as numpy iterators for JAX/Flax training.
    train_dataset = tfds.as_numpy(train_dataset)
    val_dataset = tfds.as_numpy(val_dataset)
    test_dataset = tfds.as_numpy(test_dataset)
    return train_dataset, val_dataset, test_dataset, len(embeddings_np[0])

if __name__ == "__main__":
    train_dataset, val_dataset, test_dataset, in_features = get_time_series_tf(
        features_path='../big_model_inference/facebook_dinov2_base_embeddings.pt'
    )
    print(f"in features : {in_features}")
    for batch in train_dataset:
        batch = {
            'feature': jnp.array(batch[0]),
            'label': jnp.array(batch[1])
        }
        print(batch)
        break
    for batch in val_dataset:
        print(batch)
        break
    for batch in test_dataset:
        print(batch)
        break
identification_benchmark/classification/knn_evaluation.py ADDED
@@ -0,0 +1,75 @@
import faiss
import torch
import os, glob

def get_results(features_path):
    print(features_path)
    embeddings_np = torch.load(features_path).numpy()
    all_cow_ids = torch.load("../big_model_inference/all_cow_ids.pt").numpy()

    # Split chronologically: index the first half, query with the second half.
    mid_point = len(embeddings_np) // 2
    embeddings_np_first_half = embeddings_np[:mid_point]
    embeddings_np_second_half = embeddings_np[mid_point:]

    all_cow_ids_first_half = all_cow_ids[:mid_point]
    all_cow_ids_second_half = all_cow_ids[mid_point:]

    # Build an IVF-PQ index over the first half.
    d = embeddings_np_first_half.shape[1]  # Embedding dimensionality.
    nlist = 100  # Number of clusters (you can tune this parameter).

    m = 8  # Number of subquantizers (must be a divisor of d).
    nbits = 8  # Bits per subquantizer.

    flat_index = faiss.IndexFlatL2(d)
    index_ivf = faiss.IndexIVFPQ(flat_index, d, nlist, m, nbits)
    index_ivf.nprobe = 10
    index_ivf.train(embeddings_np_first_half)
    index_ivf.add(embeddings_np_first_half)

    # Retrieve 6 neighbors; the top-1/top-5 checks below use the first 5.
    k = 6
    distances, indices = index_ivf.search(embeddings_np_second_half, k)

    # Calculate top-1 and top-5 accuracy.
    top1_correct = 0
    top5_correct = 0

    for i, indices_row in enumerate(indices):
        query_id = all_cow_ids_second_half[i]

        # Get cow IDs for the retrieved results.
        retrieved_ids = [all_cow_ids_first_half[idx] for idx in indices_row]

        # Top-1: check if the first result matches the query ID.
        if retrieved_ids[0] == query_id:
            top1_correct += 1

        # Top-5: check if any of the first 5 results match the query ID.
        if query_id in retrieved_ids[:5]:
            top5_correct += 1

    # Calculate accuracy rates.
    top1_accuracy = top1_correct / len(embeddings_np_second_half)
    top5_accuracy = top5_correct / len(embeddings_np_second_half)

    print(f"Top-1 Accuracy: {top1_accuracy:.4f}")
    print(f"Top-5 Accuracy: {top5_accuracy:.4f}")

# Evaluate every embedding file in the inference directory.
directory = '../big_model_inference'  # Replace with your directory path.
pattern = os.path.join(directory, '*.pt')
exclude_file = 'all_cow_ids.pt'
for features_path in glob.glob(pattern):
    if os.path.basename(features_path) != exclude_file:
        get_results(features_path)
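
Because IVF-PQ search is approximate, a useful sanity check is exact L2 search under the same half/half protocol; this sketch uses only FAISS calls already present above and is an illustration, not part of the benchmark:

```
import faiss
import torch

embeddings = torch.load("../big_model_inference/resnet18_embeddings.pt").numpy()
mid = len(embeddings) // 2

# Brute-force (exact) L2 index over the first half; no training step needed.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings[:mid])

# Exact top-5 neighbors for the second half; compare against the IVF-PQ results.
distances, indices = index.search(embeddings[mid:], 5)
```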
identification_benchmark/classification/model.py ADDED
@@ -0,0 +1,19 @@
from flax import nnx  # The Flax NNX API.

class LinearClassifier(nnx.Module):
    def __init__(self, in_features, out_features, rngs: nnx.Rngs):
        self.linear = nnx.Linear(in_features, out_features, rngs=rngs)

    def __call__(self, x):
        return self.linear(x)


if __name__ == "__main__":
    # Instantiate the model.
    model = LinearClassifier(
        in_features=512,
        out_features=8,
        rngs=nnx.Rngs(0)
    )
    # Visualize it.
    nnx.display(model)
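
A quick shape check for the probe (a sketch; the 512-wide dummy batch matches the `__main__` instantiation above):

```
import jax.numpy as jnp
from flax import nnx
from model import LinearClassifier

model = LinearClassifier(in_features=512, out_features=8, rngs=nnx.Rngs(0))
logits = model(jnp.ones((4, 512)))
print(logits.shape)  # (4, 8): one logit per cow identity
```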
identification_benchmark/classification/train.py ADDED
@@ -0,0 +1,116 @@
import glob
import os
# os.environ["JAX_PLATFORMS"] = "cpu"  # Must be set before importing jax.
from model import LinearClassifier
from flax import nnx
import optax
from data_loading import get_time_series_tf
import jax.numpy as jnp
from copy import deepcopy


def loss_fn(model: LinearClassifier, batch):
    logits = model(batch['feature'])
    loss = optax.softmax_cross_entropy_with_integer_labels(
        logits=logits, labels=batch['label']
    ).mean()
    return loss, logits

@nnx.jit
def train_step(model: LinearClassifier, optimizer: nnx.Optimizer, batch):
    """Train for a single step."""
    grad_fn = nnx.value_and_grad(loss_fn, has_aux=True)
    (loss, logits), grads = grad_fn(model, batch)
    optimizer.update(grads)  # In-place updates.

@nnx.jit
def eval_step(model: LinearClassifier, metrics: nnx.MultiMetric, batch):
    loss, logits = loss_fn(model, batch)
    metrics.update(loss=loss, logits=logits, labels=batch['label'])  # In-place updates.


def get_results(features_path):
    print(features_path)

    train_loader, val_loader, test_loader, in_features = get_time_series_tf(
        features_path=features_path
    )

    model = LinearClassifier(
        in_features=in_features,
        out_features=8,
        rngs=nnx.Rngs(0)
    )

    learning_rate = 0.005
    optimizer = nnx.Optimizer(
        model,
        optax.adamw(learning_rate=learning_rate)
    )

    metrics = nnx.MultiMetric(
        accuracy=nnx.metrics.Accuracy(),
        loss=nnx.metrics.Average('loss'),
    )

    epochs = 100

    best_accuracy = 0.0
    best_model = deepcopy(model)
    patience = 0
    # Training and validation loop with early stopping.
    for epoch in range(epochs):
        if patience == 10:
            break
        for batch in train_loader:
            batch = {
                'feature': jnp.array(batch[0]),
                'label': jnp.array(batch[1])
            }
            train_step(model, optimizer, batch)
        for batch in val_loader:
            batch = {
                'feature': jnp.array(batch[0]),
                'label': jnp.array(batch[1])
            }
            eval_step(model, metrics, batch)
        # Log the validation metrics.
        results = metrics.compute()

        accuracy = results['accuracy'].item()
        if accuracy > best_accuracy:
            best_accuracy = accuracy
            best_model = deepcopy(model)
            patience = 0
        else:
            patience += 1
        metrics.reset()  # Reset the metrics for the next training epoch.

    print(f"best eval accuracy: {best_accuracy}")

    # Evaluate the best checkpoint on the test split.
    for batch in test_loader:
        batch = {
            'feature': jnp.array(batch[0]),
            'label': jnp.array(batch[1])
        }
        eval_step(best_model, metrics, batch)

    results = metrics.compute()
    accuracy = results['accuracy'].item()
    print(f"test accuracy: {accuracy}")


# Evaluate every embedding file in the inference directory.
directory = '../big_model_inference'  # Replace with your directory path.
pattern = os.path.join(directory, '*.pt')
exclude_file = 'all_cow_ids.pt'
for features_path in glob.glob(pattern):
    if os.path.basename(features_path) != exclude_file:
        get_results(features_path)
identification_benchmark/construct_lmdb.py ADDED
@@ -0,0 +1,66 @@
import os
import lmdb
import re
import multiprocessing
from tqdm import tqdm

def sort_key(filename):
    # Extract frame number and cow id from filenames like:
    # "pmfeed_4_3_16_frame_10000_cow_1.jpg"
    match = re.search(r'frame_(\d+)_cow_(\d+)', filename)
    if match:
        frame_number = int(match.group(1))
        cow_id = int(match.group(2))
        return (frame_number, cow_id)
    return (float('inf'), float('inf'))

def read_image(args):
    image_folder, image_name = args
    image_path = os.path.join(image_folder, image_name)
    try:
        with open(image_path, 'rb') as f:
            image_data = f.read()
        return (image_name, image_data)
    except Exception as e:
        print(f"Error reading {image_name}: {e}")
        return None

def main():
    # Define the paths.
    image_folder = 'all_crops_pmfeed_4_3_16'
    lmdb_path = 'lmdb_all_crops_pmfeed_4_3_16'

    # Create the LMDB directory if it doesn't exist.
    if not os.path.exists(lmdb_path):
        os.makedirs(lmdb_path)

    # List and sort the JPEG files.
    image_files = [f for f in os.listdir(image_folder) if f.endswith('.jpg')]
    sorted_files = sorted(image_files, key=sort_key)

    # Process every image (an earlier sanity check used only the first 20).
    sanity_files = sorted_files

    # Prepare arguments for multiprocessing.
    args = [(image_folder, image_name) for image_name in sanity_files]

    # Use a multiprocessing Pool to read images concurrently.
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        results = list(tqdm(pool.imap(read_image, args), total=len(args), desc="Reading images"))

    # Filter out any failed reads.
    results = [res for res in results if res is not None]

    # Open the LMDB environment with an appropriate map size (e.g., 10GB).
    map_size = 10 * 1024 * 1024 * 1024  # 10GB in bytes
    env = lmdb.open(lmdb_path, map_size=map_size)

    # Write the results into LMDB using a single write transaction.
    with env.begin(write=True) as txn:
        for key, value in tqdm(results, desc="Writing to LMDB"):
            txn.put(key.encode('utf-8'), value)

    print("LMDB database creation complete for all images!")

if __name__ == '__main__':
    main()
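
To spot-check the resulting database, a read-back sketch (the key format matches the crop filenames produced by `crop_pmfeed_4_3_16.py` below; the specific key is illustrative):

```
import lmdb

env = lmdb.open('lmdb_all_crops_pmfeed_4_3_16', readonly=True, lock=False)
with env.begin() as txn:
    data = txn.get(b'pmfeed_4_3_16_frame_1_cow_1.jpg')  # raw JPEG bytes, or None if absent
print("missing" if data is None else f"{len(data)} bytes")
```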
identification_benchmark/crop_pmfeed_4_3_16.py ADDED
@@ -0,0 +1,90 @@
import cv2
import pandas as pd
import pickle
import os

# Files
pickle_filename = "../pmfeed_4_3_16_bboxes_and_labels.pkl"
video_filename = "../pmfeed_4_3_16.mp4"
output_dir = "all_crops_pmfeed_4_3_16"

# Create the output directory if it doesn't exist
os.makedirs(output_dir, exist_ok=True)

# Load the bounding boxes DataFrame from the pickle file
with open(pickle_filename, "rb") as f:
    df = pickle.load(f)

# Open the video file
cap = cv2.VideoCapture(video_filename)
if not cap.isOpened():
    raise IOError(f"Cannot open video file {video_filename}")

# Get video dimensions
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print(f"Video dimensions: {frame_width}x{frame_height}")

# Initialize sliding window pointers and frame counter
num_rows = len(df)
i = 0
frames_processed = 0
# max_frames = 3  # only process the first 3 frames

while i < num_rows:
    # Get the current frame_id for this sliding window
    current_frame_id = int(df.iloc[i]["frame_id"])
    j = i
    # Move j until the frame_id changes
    while j < num_rows and df.iloc[j]["frame_id"] == current_frame_id:
        j += 1

    # Seek the video to the appropriate frame (frame_id is assumed to be 1-indexed)
    cap.set(cv2.CAP_PROP_POS_FRAMES, current_frame_id - 1)
    ret, frame = cap.read()
    if not ret:
        print(f"Warning: Could not read frame {current_frame_id}")
        i = j
        continue

    # Process all bounding boxes for this frame (from row i to j-1)
    for index in range(i, j):
        row = df.iloc[index]
        # Coordinates are normalized: (center x, center y, width, height)
        x_center = row["x"]
        y_center = row["y"]
        bbox_width = row["w"]
        bbox_height = row["h"]

        # Convert normalized coordinates to absolute pixel values
        left = int((x_center - bbox_width / 2) * frame_width)
        top = int((y_center - bbox_height / 2) * frame_height)
        right = int((x_center + bbox_width / 2) * frame_width)
        bottom = int((y_center + bbox_height / 2) * frame_height)

        # Clamp the coordinates to within the frame dimensions
        left = max(left, 0)
        top = max(top, 0)
        right = min(right, frame_width)
        bottom = min(bottom, frame_height)

        # Skip if the resulting crop dimensions are invalid
        if right - left <= 0 or bottom - top <= 0:
            print(f"Warning: Invalid crop dimensions for frame {current_frame_id}, tracklet {row['tracklet_id']}")
            continue

        # Crop the image
        crop_img = frame[top:bottom, left:right]

        # Save the crop as "pmfeed_4_3_16_frame_<frame_id>_cow_<tracklet_id>.jpg"
        filename = f"pmfeed_4_3_16_frame_{current_frame_id}_cow_{int(row['tracklet_id'])}.jpg"
        output_path = os.path.join(output_dir, filename)
        cv2.imwrite(output_path, crop_img)
        print(f"Saved crop: {output_path}")

    frames_processed += 1
    i = j

# Release the video capture
cap.release()
print("Cropping all frames completed.")
requirement.txt ADDED
@@ -0,0 +1,201 @@
# packages in environment at /user/work/xf16910/.conda/envs/py310:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 2.2.1 pypi_0 pypi
accelerate 1.5.2 pypi_0 pypi
aiohappyeyeballs 2.6.1 pypi_0 pypi
aiohttp 3.11.15 pypi_0 pypi
aiosignal 1.3.2 pypi_0 pypi
albucore 0.0.23 pypi_0 pypi
albumentations 2.0.5 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
array-record 0.7.1 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
async-timeout 5.0.1 pypi_0 pypi
attrs 25.3.0 pypi_0 pypi
bzip2 1.0.8 h4bc722e_7 conda-forge
c-ares 1.34.5 hb9d3cd8_0 conda-forge
ca-certificates 2025.1.31 hbcca054_0 conda-forge
certifi 2025.1.31 pypi_0 pypi
charset-normalizer 3.4.1 pypi_0 pypi
chex 0.1.89 pypi_0 pypi
contourpy 1.3.1 pypi_0 pypi
curl 7.88.1 hdc1c0ab_1 conda-forge
cycler 0.12.1 pypi_0 pypi
datasets 3.5.0 pypi_0 pypi
dill 0.3.8 pypi_0 pypi
dm-tree 0.1.9 pypi_0 pypi
docstring-parser 0.16 pypi_0 pypi
einops 0.8.1 pypi_0 pypi
etils 1.12.2 pypi_0 pypi
expat 2.7.0 h5888daf_0 conda-forge
faiss-gpu 1.7.2 pypi_0 pypi
filelock 3.18.0 pypi_0 pypi
flatbuffers 25.2.10 pypi_0 pypi
flax 0.10.4 pypi_0 pypi
fonttools 4.56.0 pypi_0 pypi
frozenlist 1.5.0 pypi_0 pypi
fsspec 2024.12.0 pypi_0 pypi
gast 0.6.0 pypi_0 pypi
gettext 0.23.1 h5888daf_0 conda-forge
gettext-tools 0.23.1 h5888daf_0 conda-forge
git 2.45.2 pl5340h9abc3c3_1 anaconda
git-lfs 1.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.71.0 pypi_0 pypi
h5py 3.13.0 pypi_0 pypi
huggingface-hub 0.29.3 pypi_0 pypi
humanize 4.12.2 pypi_0 pypi
idna 3.10 pypi_0 pypi
immutabledict 4.2.1 pypi_0 pypi
importlib-resources 6.5.2 pypi_0 pypi
jax 0.5.3 pypi_0 pypi
jax-cuda12-pjrt 0.5.3 pypi_0 pypi
jax-cuda12-plugin 0.5.3 pypi_0 pypi
jaxlib 0.5.3 pypi_0 pypi
jinja2 3.1.6 pypi_0 pypi
joblib 1.4.2 pypi_0 pypi
keras 3.9.1 pypi_0 pypi
keyutils 1.6.1 h166bdaf_0 conda-forge
kiwisolver 1.4.8 pypi_0 pypi
krb5 1.20.1 h81ceb04_0 conda-forge
ld_impl_linux-64 2.43 h712a8e2_4 conda-forge
libasprintf 0.23.1 h8e693c7_0 conda-forge
libasprintf-devel 0.23.1 h8e693c7_0 conda-forge
libclang 18.1.1 pypi_0 pypi
libcurl 7.88.1 hdc1c0ab_1 conda-forge
libedit 3.1.20250104 pl5321h7949ede_0 conda-forge
libev 4.33 hd590300_2 conda-forge
libexpat 2.7.0 h5888daf_0 conda-forge
libffi 3.4.6 h2dba641_0 conda-forge
libgcc 14.2.0 h767d61c_2 conda-forge
libgcc-ng 14.2.0 h69a702a_2 conda-forge
libgettextpo 0.23.1 h5888daf_0 conda-forge
libgettextpo-devel 0.23.1 h5888daf_0 conda-forge
libgomp 14.2.0 h767d61c_2 conda-forge
liblzma 5.6.4 hb9d3cd8_0 conda-forge
liblzma-devel 5.6.4 hb9d3cd8_0 conda-forge
libnghttp2 1.58.0 h47da74e_1 conda-forge
libnsl 2.0.1 hd590300_0 conda-forge
libsqlite 3.46.0 hde9e2c9_0 conda-forge
libssh2 1.11.0 h0841786_0 conda-forge
libstdcxx 14.2.0 h8f9b012_2 conda-forge
libstdcxx-ng 14.2.0 h4852527_2 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libxcrypt 4.4.36 hd590300_1 conda-forge
libzlib 1.2.13 h4ab18f5_6 conda-forge
lightning-utilities 0.14.2 pypi_0 pypi
lmdb 1.6.2 pypi_0 pypi
markdown 3.7 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 3.0.2 pypi_0 pypi
matplotlib 3.10.1 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
ml-dtypes 0.5.1 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
msgpack 1.1.0 pypi_0 pypi
multidict 6.3.0 pypi_0 pypi
multiprocess 0.70.16 pypi_0 pypi
namex 0.0.8 pypi_0 pypi
ncurses 6.5 h2d0b736_3 conda-forge
nest-asyncio 1.6.0 pypi_0 pypi
networkx 3.4.2 pypi_0 pypi
numpy 1.26.4 pypi_0 pypi
nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
nvidia-cuda-nvcc-cu12 12.8.93 pypi_0 pypi
nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
nvidia-nccl-cu12 2.21.5 pypi_0 pypi
nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
opencv-python 4.11.0.86 pypi_0 pypi
opencv-python-headless 4.11.0.86 pypi_0 pypi
openssl 3.5.0 h7b32b05_0 conda-forge
opt-einsum 3.4.0 pypi_0 pypi
optax 0.2.4 pypi_0 pypi
optree 0.14.1 pypi_0 pypi
orbax-checkpoint 0.11.10 pypi_0 pypi
packaging 24.2 pypi_0 pypi
pandas 2.2.3 pypi_0 pypi
pcre2 10.42 hcad00b1_0 conda-forge
perl 5.32.1 7_hd590300_perl5 conda-forge
pillow 11.1.0 pypi_0 pypi
pip 25.0.1 pyh8b19718_0 conda-forge
promise 2.3 pypi_0 pypi
propcache 0.3.1 pypi_0 pypi
protobuf 3.20.3 pypi_0 pypi
psutil 7.0.0 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pyarrow 19.0.1 pypi_0 pypi
pycocotools 2.0.8 pypi_0 pypi
pydantic 2.11.1 pypi_0 pypi
pydantic-core 2.33.0 pypi_0 pypi
pygments 2.19.1 pypi_0 pypi
pyparsing 3.2.3 pypi_0 pypi
python 3.10.14 hd12c33a_0_cpython conda-forge
python-dateutil 2.9.0.post0 pypi_0 pypi
pytz 2025.2 pypi_0 pypi
pyyaml 6.0.2 pypi_0 pypi
readline 8.2 h8c095d6_2 conda-forge
regex 2024.11.6 pypi_0 pypi
requests 2.32.3 pypi_0 pypi
rich 13.9.4 pypi_0 pypi
safetensors 0.5.3 pypi_0 pypi
scikit-learn 1.6.1 pypi_0 pypi
scipy 1.15.2 pypi_0 pypi
seaborn 0.13.2 pypi_0 pypi
setuptools 75.8.2 pyhff2d567_0 conda-forge
simple-parsing 0.1.7 pypi_0 pypi
simplejson 3.20.1 pypi_0 pypi
simsimd 6.2.1 pypi_0 pypi
six 1.17.0 pypi_0 pypi
stringzilla 3.12.3 pypi_0 pypi
sympy 1.13.1 pypi_0 pypi
tensorboard 2.19.0 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
tensorflow 2.19.0 pypi_0 pypi
tensorflow-datasets 4.9.8 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.37.1 pypi_0 pypi
tensorflow-metadata 1.16.1 pypi_0 pypi
tensorstore 0.1.73 pypi_0 pypi
termcolor 2.5.0 pypi_0 pypi
tf-keras 2.19.0 pypi_0 pypi
threadpoolctl 3.6.0 pypi_0 pypi
timm 1.0.15 pypi_0 pypi
tk 8.6.14 h39e8969_0 anaconda
tokenizers 0.21.1 pypi_0 pypi
toml 0.10.2 pypi_0 pypi
toolz 1.0.0 pypi_0 pypi
torch 2.6.0 pypi_0 pypi
torchmetrics 1.7.0 pypi_0 pypi
torchvision 0.21.0 pypi_0 pypi
tqdm 4.67.1 pypi_0 pypi
transformers 4.50.3 pypi_0 pypi
treescope 0.1.9 pypi_0 pypi
triton 3.2.0 pypi_0 pypi
typing-extensions 4.13.0 pypi_0 pypi
typing-inspection 0.4.0 pypi_0 pypi
tzdata 2025.2 pypi_0 pypi
ultralytics 8.3.99 pypi_0 pypi
ultralytics-thop 2.0.14 pypi_0 pypi
urllib3 2.3.0 pypi_0 pypi
werkzeug 3.1.3 pypi_0 pypi
wrapt 1.17.2 pypi_0 pypi
wheel 0.45.1 pyhd8ed1ab_1 conda-forge
xxhash 3.5.0 pypi_0 pypi
xz 5.6.4 hbcc6ac9_0 conda-forge
xz-gpl-tools 5.6.4 hbcc6ac9_0 conda-forge
xz-tools 5.6.4 hb9d3cd8_0 conda-forge
yarl 1.18.3 pypi_0 pypi
zipp 3.21.0 pypi_0 pypi
zlib 1.2.13 h4ab18f5_6 conda-forge
zstd 1.5.6 ha6fb4c9_0 conda-forge