tayhan committed
Commit 4c609ee · 1 Parent(s): 127ee62
This view is limited to 50 files because it contains too many changes.
.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ . filter=lfs diff=lfs merge=lfs -text
+ *.pdf filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ figs/*.png filter=lfs diff=lfs merge=lfs -text
Dockerfile ADDED
@@ -0,0 +1,6 @@
+ FROM conda/miniconda3
+ RUN apt-get install -y git
+ RUN git clone https://github.com/Vision-CAIR/MiniGPT-4.git
+ WORKDIR /MiniGPT-4
+ RUN conda env create -f environment.yml
+ RUN conda activate minigpt4
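One caveat with the Dockerfile committed above: `conda activate` has no effect inside a `RUN` step (each step runs in a fresh, non-interactive shell), and `apt-get install` in a fresh image usually needs a preceding `apt-get update`. Below is a hedged sketch of an equivalent Dockerfile that works around both points; it is not the committed file, and the final `CMD` is a hypothetical default the commit does not define:

```dockerfile
FROM conda/miniconda3

# A fresh image has no package lists cached, so update before installing git.
RUN apt-get update && apt-get install -y git

RUN git clone https://github.com/Vision-CAIR/MiniGPT-4.git
WORKDIR /MiniGPT-4
RUN conda env create -f environment.yml

# `conda activate` does not persist across RUN steps, so route later commands
# through the environment with `conda run` instead of activating it.
SHELL ["conda", "run", "-n", "minigpt4", "/bin/bash", "-c"]
RUN python -c "import torch; print(torch.__version__)"  # sanity check inside the env

# Hypothetical default command; adjust the config path and GPU id as needed.
CMD ["conda", "run", "-n", "minigpt4", "python", "demo.py", "--cfg-path", "eval_configs/minigpt4_eval.yaml", "--gpu-id", "0"]
```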
LICENSE.md ADDED
@@ -0,0 +1,14 @@
+ BSD 3-Clause License
+
+ Copyright 2023 Deyao Zhu
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
LICENSE_Lavis.md ADDED
@@ -0,0 +1,14 @@
+ BSD 3-Clause License
+
+ Copyright (c) 2022 Salesforce, Inc.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of Salesforce.com nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
MiniGPT_4.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6d3843b238d5cceb7fd1f6d07582196e18fafa3ed02b65d9fbf089532819d1c
+ size 6616060
PrepareVicuna.md ADDED
@@ -0,0 +1,35 @@
+ ## How to Prepare Vicuna Weights
+ Vicuna is an open-source LLaMA-based LLM whose performance is close to ChatGPT.
+ We currently use the v0 version of Vicuna-13B.
+
+ To prepare Vicuna's weights, first download Vicuna's **delta** weights from [https://huggingface.co/lmsys/vicuna-13b-delta-v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0).
+ If you have git-lfs installed (https://git-lfs.com), this can be done with:
+
+ ```
+ git lfs install
+ git clone https://huggingface.co/lmsys/vicuna-13b-delta-v0  # more powerful, needs at least 24 GB of GPU memory
+ # or
+ git clone https://huggingface.co/lmsys/vicuna-7b-delta-v0  # smaller, needs 12 GB of GPU memory
+ ```
+
+ Note that this is not the working weight itself, but the difference between the working weight and the original LLaMA-13B weight. (Due to LLaMA's license, we cannot distribute LLaMA weights.)
+
+ Then, you need to obtain the original LLaMA-7B or LLaMA-13B weights in the HuggingFace format,
+ either by following the instructions provided by HuggingFace
+ [here](https://huggingface.co/docs/transformers/main/model_doc/llama) or from the Internet.
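If you only have the original (non-HuggingFace) LLaMA checkpoints, the conversion script shipped with `transformers` produces the HuggingFace-format weights mentioned above. A sketch of the usual invocation with placeholder paths; the script's exact location can differ between `transformers` versions, so follow the linked documentation:

```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights \
    --model_size 13B \
    --output_dir /path/to/llama-13b-hf/
```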
+
+ When these two weights are ready, we can use tools from Vicuna's team to create the real working weights.
+ First, install their library version that is compatible with Vicuna v0:
+
+ ```
+ pip install git+https://github.com/lm-sys/FastChat.git@v0.1.10
+ ```
+
+ Then, run the following command to create the final working weights:
+
+ ```
+ python -m fastchat.model.apply_delta --base /path/to/llama-13bOR7b-hf/ --target /path/to/save/working/vicuna/weight/ --delta /path/to/vicuna-13bOR7b-delta-v0/
+ ```
+
+ Now you are good to go!
+
README.md CHANGED
@@ -1,10 +1,170 @@
- ---
- title: Minigpt Final
- emoji: 📊
- colorFrom: indigo
- colorTo: blue
- sdk: static
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
+ [Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), [Xiang Li](https://xiangli.ac.cn), and [Mohamed Elhoseiny](https://www.mohamed-elhoseiny.com/). *Equal Contribution
+
+ **King Abdullah University of Science and Technology**
+
+ <a href='https://minigpt-4.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2304.10592'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> <a href='https://huggingface.co/spaces/Vision-CAIR/minigpt4'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a> <a href='https://huggingface.co/Vision-CAIR/MiniGPT-4'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a> [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1OK4kYsZphwt5DXchKkzMBjYF6jnkqh4R?usp=sharing) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://www.youtube.com/watch?v=__tftoxpBAw&feature=youtu.be)
+
+
+ ## News
+ We now provide a pretrained MiniGPT-4 aligned with Vicuna-7B! The demo's GPU memory consumption can now be as low as 12 GB.
+
+
+ ## Online Demo
+
+ Click the image to chat with MiniGPT-4 about your images
+ [![demo](figs/online_demo.png)](https://minigpt-4.github.io)
+
+
+ ## Examples
+ |   |   |
+ :-------------------------:|:-------------------------:
+ ![find wild](figs/examples/wop_2.png) | ![write story](figs/examples/ad_2.png)
+ ![solve problem](figs/examples/fix_1.png) | ![write Poem](figs/examples/rhyme_1.png)
+
+ More examples can be found in the [project page](https://minigpt-4.github.io).
+
+
+
+ ## Introduction
+ - MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer (a minimal sketch of such a projection follows this list).
+ - We train MiniGPT-4 in two stages. The first, traditional pretraining stage is trained on roughly 5 million aligned image-text pairs for about 10 hours on 4 A100s. After the first stage, Vicuna is able to understand the image, but its generation ability is heavily impacted.
+ - To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs with the model itself and ChatGPT together. Based on this, we then create a small (3,500 pairs in total) yet high-quality dataset.
+ - The second finetuning stage is trained on this dataset in a conversation template to significantly improve its generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes on a single A100.
+ - MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.
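As a concrete illustration of that single projection layer, here is a minimal PyTorch sketch; the dimensions and class name are hypothetical and not the repository's actual module:

```python
import torch
import torch.nn as nn


class VisionToLLMProjection(nn.Module):
    """One linear layer mapping frozen visual features into the LLM's embedding space."""

    def __init__(self, vis_dim: int = 768, llm_dim: int = 5120):
        super().__init__()
        # The only trainable component in this alignment scheme: a single linear map.
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (batch, num_query_tokens, vis_dim) from the frozen BLIP-2 Q-Former.
        # Returns (batch, num_query_tokens, llm_dim), consumed by the frozen LLM as soft prompts.
        return self.proj(vis_tokens)


# Example: 32 query tokens per image projected into a (hypothetical) 5120-d LLM embedding space.
features = torch.randn(2, 32, 768)
print(VisionToLLMProjection()(features).shape)  # torch.Size([2, 32, 5120])
```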
+
+
+ ![overview](figs/overview.png)
+
+
+ ## Getting Started
+ ### Installation
+
+ **1. Prepare the code and the environment**
+
+ Clone our repository, create a Python environment, and activate it via the following commands
+
+ ```bash
+ git clone https://github.com/Vision-CAIR/MiniGPT-4.git
+ cd MiniGPT-4
+ conda env create -f environment.yml
+ conda activate minigpt4
+ ```
+
+
+ **2. Prepare the pretrained Vicuna weights**
+
+ The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
+ Please refer to our instructions [here](PrepareVicuna.md)
+ to prepare the Vicuna weights.
+ The final weights should sit in a single folder with a structure similar to the following:
+
+ ```
+ vicuna_weights
+ ├── config.json
+ ├── generation_config.json
+ ├── pytorch_model.bin.index.json
+ ├── pytorch_model-00001-of-00003.bin
+ ...
+ ```
+
+ Then, set the path to the Vicuna weights in the model config file
+ [here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
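As an illustration only, the edited line would end up looking roughly like the snippet below; the key name is an assumption, so confirm it against minigpt4/configs/models/minigpt4.yaml in your checkout:

```yaml
# minigpt4/configs/models/minigpt4.yaml -- Line 16 (key name assumed, verify in the file)
llama_model: "/path/to/vicuna_weights/"
```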
+
+ **3. Prepare the pretrained MiniGPT-4 checkpoint**
+
+ Download the pretrained checkpoint matching the Vicuna model you prepared.
+
+ | Checkpoint Aligned with Vicuna 13B | Checkpoint Aligned with Vicuna 7B |
+ :------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------:
+ [Download](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link) | [Download](https://drive.google.com/file/d/1RY9jV0dyqLX-o38LrumkKRh6Jtaop58R/view?usp=sharing)
+
+
+ Then, set the path to the pretrained checkpoint in the evaluation config file
+ in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 11.
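The matching line looks roughly like the following; the `ckpt` key is the one visible in eval_configs/minigpt4_eval.yaml later in this commit, with the path swapped for your own download location:

```yaml
# eval_configs/minigpt4_eval.yaml -- Line 11
ckpt: '/path/to/pretrained_minigpt4.pth'
```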
+
+
+
+ ### Launching Demo Locally
+
+ Try out our demo [demo.py](demo.py) on your local machine by running
+
+ ```
+ python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
+ ```
+
+ To save GPU memory, Vicuna loads in 8-bit by default, with a beam search width of 1.
+ This configuration requires about 23 GB of GPU memory for Vicuna 13B and 11.5 GB for Vicuna 7B.
+ For more powerful GPUs, you can run the model
+ in 16-bit by setting `low_resource` to `False` in the config file
+ [minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
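Concretely, the memory/precision trade-off is the `low_resource` flag in that same file (shown later in this commit); a sketch of the two settings:

```yaml
# eval_configs/minigpt4_eval.yaml
low_resource: True     # load Vicuna in 8-bit to save GPU memory
# low_resource: False  # load in 16-bit on a more powerful GPU (the value set in this commit's config)
```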
+
+ Thanks to [@WangRongsheng](https://github.com/WangRongsheng), you can also run our code on [Colab](https://colab.research.google.com/drive/1OK4kYsZphwt5DXchKkzMBjYF6jnkqh4R?usp=sharing)
+
+
+ ### Training
+ The training of MiniGPT-4 contains two alignment stages.
+
+ **1. First pretraining stage**
+
+ In the first pretraining stage, the model is trained on image-text pairs from the Laion and CC datasets
+ to align the vision and language models. To download and prepare the datasets, please check
+ our [first stage dataset preparation instructions](dataset/README_1_STAGE.md).
+ After the first stage, the visual features are mapped so they can be understood by the language
+ model.
+ To launch the first stage training, run the following command. In our experiments, we use 4 A100s.
+ You can change the save path in the config file
+ [train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml)
+
+ ```bash
+ torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
+ ```
+
+ A MiniGPT-4 checkpoint with only stage-one training can be downloaded
+ [here (13B)](https://drive.google.com/file/d/1u9FRRBB3VovP1HxCAlpD9Lw4t4P6-Yq8/view?usp=share_link) or [here (7B)](https://drive.google.com/file/d/1HihQtCEXUyBM1i9DQbaK934wW3TZi-h5/view?usp=share_link).
+ Compared to the model after stage two, this checkpoint frequently generates incomplete and repeated sentences.
+
+
+ **2. Second finetuning stage**
+
+ In the second stage, we use a small, high-quality image-text pair dataset created by ourselves
+ and convert it to a conversation format to further align MiniGPT-4.
+ To download and prepare our second stage dataset, please check our
+ [second stage dataset preparation instructions](dataset/README_2_STAGE.md).
+ To launch the second stage alignment,
+ first specify the path to the checkpoint file trained in stage 1 in
+ [train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
+ You can also specify the output path there.
+ Then, run the following command. In our experiments, we use 1 A100.
+
+ ```bash
+ torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
+ ```
+
+ After the second stage alignment, MiniGPT-4 is able to talk about the image coherently and in a user-friendly way.
+
+
+
+
+ ## Acknowledgement
+
+ + [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check out this great open-source work if you don't know it yet!
+ + [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ + [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna with only 13B parameters is just amazing. And it is open-source!
+
+
+ If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
+ ```bibtex
+ @article{zhu2023minigpt,
+   title={MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models},
+   author={Zhu, Deyao and Chen, Jun and Shen, Xiaoqian and Li, Xiang and Elhoseiny, Mohamed},
+   journal={arXiv preprint arXiv:2304.10592},
+   year={2023}
+ }
+ ```
+
+
+ ## License
+ This repository is under the [BSD 3-Clause License](LICENSE.md).
+ Many codes are based on [Lavis](https://github.com/salesforce/LAVIS), which is
+ under the BSD 3-Clause License [here](LICENSE_Lavis.md).
__pycache__/demo.cpython-39.pyc ADDED
Binary file (114 Bytes)
 
api.py ADDED
@@ -0,0 +1,107 @@
+ import argparse
+ import os
+ import random
+ from flask import Flask, redirect, url_for, request
+
+ import numpy as np
+ import torch
+ import torch.backends.cudnn as cudnn
+ import gradio as gr
+
+ from minigpt4.common.config import Config
+ from minigpt4.common.dist_utils import get_rank
+ from minigpt4.common.registry import registry
+ from minigpt4.conversation.conversation import Chat, CONV_VISION
+
+ # imports modules for registration
+ from minigpt4.datasets.builders import *
+ from minigpt4.models import *
+ from minigpt4.processors import *
+ from minigpt4.runners import *
+ from minigpt4.tasks import *
+ from PIL import Image
+ import requests
+
+
+ from huggingface_hub import login
+ login("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")
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description="Demo")
+     parser.add_argument("--cfg-path", required=True, help="path to configuration file.")
+     parser.add_argument("--gpu-id", type=int, default=0, help="specify the gpu to load the model.")
+     parser.add_argument(
+         "--options",
+         nargs="+",
+         help="override some settings in the used config, the key-value pair "
+         "in xxx=yyy format will be merged into config file (deprecate), "
+         "change to --cfg-options instead.",
+     )
+     args = parser.parse_args()
+     return args
+
+
+ def setup_seeds(config):
+     seed = config.run_cfg.seed + get_rank()
+
+     random.seed(seed)
+     np.random.seed(seed)
+     torch.manual_seed(seed)
+
+     cudnn.benchmark = False
+     cudnn.deterministic = True
+
+
+
+ # ========================================
+ #             Model Initialization
+ # ========================================
+
+ print('Initializing Chat')
+ args = parse_args()
+ cfg = Config(args)
+
+ model_config = cfg.model_cfg
+ model_config.device_8bit = args.gpu_id
+ model_cls = registry.get_model_class(model_config.arch)
+ model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
+
+ vis_processor_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train
+ vis_processor = registry.get_processor_class(vis_processor_cfg.name).from_config(vis_processor_cfg)
+ chat = Chat(model, vis_processor, device='cuda:{}'.format(args.gpu_id))
+ print('Initialization Finished')
+
+
+ #
+
+ # curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "user_message=Response in json format with keys image_description, name, objects, object_name, object_color. " http://127.0.0.1:5000
+ # curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "user_message=describe the image" http://127.0.0.1:5000
+
+ #curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "user_message=Response in json format with keys image_description, name, objects, object_name, object_color. " http://127.0.0.1:5000
+
+ app = Flask(__name__)
+ app.config["DEBUG"] = False
+
+
+ @app.route('/', methods = ['POST', 'GET'])
+ def home():
+     user_message = request.form['user_message']
+     image = Image.open(requests.get(request.form['image'], stream=True).raw)
+
+     print(user_message)
+     chat_state = CONV_VISION.copy()
+     chat_state.messages = []
+     img_list = []
+     llm_message = chat.upload_img(image, chat_state, img_list)
+     chat.ask(user_message, chat_state)
+
+     llm_message = chat.answer(conv=chat_state,
+                               img_list=img_list,
+                               num_beams=5,
+                               temperature=1,
+                               max_new_tokens=600,
+                               max_length=2000)[0]
+     return llm_message
+
+ app.run(host='0.0.0.0')
+
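A minimal client sketch for the endpoint defined above, matching the two form fields the route reads (`user_message` and an `image` URL that the server downloads itself); the host and image URL are placeholders:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:5000/",
    data={
        "user_message": "Describe the image.",
        "image": "https://example.com/sample.jpg",  # the API fetches this URL server-side
    },
)
print(resp.text)  # the model's reply as plain text
```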
dataset/README_1_STAGE.md ADDED
@@ -0,0 +1,96 @@
+ ## Download the filtered Conceptual Captions, SBU, LAION datasets
+
+ ### Pre-training datasets download:
+ We use the filtered synthetic captions prepared by BLIP. For more details about the dataset, please refer to [BLIP](https://github.com/salesforce/BLIP).
+
+ It requires about 2.3 TB to store the LAION and CC3M+CC12M+SBU datasets.
+
+ Image source | Filtered synthetic caption by ViT-L
+ --- | :---:
+ CC3M+CC12M+SBU | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/ccs_synthetic_filtered_large.json">Download</a>
+ LAION115M | <a href="https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/laion_synthetic_filtered_large.json">Download</a>
+
+ This will download two JSON files:
+ ```
+ ccs_synthetic_filtered_large.json
+ laion_synthetic_filtered_large.json
+ ```
+
+ ## Prepare the data step by step
+
+
+ ### Set up the dataset folder and move the annotation files to the data storage folder
+ ```
+ export MINIGPT4_DATASET=/YOUR/PATH/FOR/LARGE/DATASET/
+ mkdir ${MINIGPT4_DATASET}/cc_sbu
+ mkdir ${MINIGPT4_DATASET}/laion
+ mv ccs_synthetic_filtered_large.json ${MINIGPT4_DATASET}/cc_sbu
+ mv laion_synthetic_filtered_large.json ${MINIGPT4_DATASET}/laion
+ ```
+
+ ### Copy the conversion scripts to the data storage folder
+ ```
+ cp convert_cc_sbu.py ${MINIGPT4_DATASET}/cc_sbu
+ cp download_cc_sbu.sh ${MINIGPT4_DATASET}/cc_sbu
+ cp convert_laion.py ${MINIGPT4_DATASET}/laion
+ cp download_laion.sh ${MINIGPT4_DATASET}/laion
+ ```
+
+
+ ### Convert the laion and cc_sbu annotation files to the img2dataset format
+ ```
+ cd ${MINIGPT4_DATASET}/cc_sbu
+ python convert_cc_sbu.py
+
+ cd ${MINIGPT4_DATASET}/laion
+ python convert_laion.py
+ ```
+
+ ### Download the datasets with img2dataset
+ ```
+ cd ${MINIGPT4_DATASET}/cc_sbu
+ sh download_cc_sbu.sh
+ cd ${MINIGPT4_DATASET}/laion
+ sh download_laion.sh
+ ```
+
+
+ The final dataset structure:
+
+ ```
+ .
+ ├── ${MINIGPT4_DATASET}
+ │   ├── cc_sbu
+ │   │   ├── convert_cc_sbu.py
+ │   │   ├── download_cc_sbu.sh
+ │   │   ├── ccs_synthetic_filtered_large.json
+ │   │   ├── ccs_synthetic_filtered_large.tsv
+ │   │   └── cc_sbu_dataset
+ │   │       ├── 00000.tar
+ │   │       ├── 00000.parquet
+ │   │       ...
+ │   ├── laion
+ │   │   ├── convert_laion.py
+ │   │   ├── download_laion.sh
+ │   │   ├── laion_synthetic_filtered_large.json
+ │   │   ├── laion_synthetic_filtered_large.tsv
+ │   │   └── laion_dataset
+ │   │       ├── 00000.tar
+ │   │       ├── 00000.parquet
+ │   │       ...
+ ...
+ ```
+
+
+ ## Set up the dataset configuration files
+
+ Then, set up the LAION dataset loading path
+ [here](../minigpt4/configs/datasets/laion/defaults.yaml#L5) at Line 5 as
+ ${MINIGPT4_DATASET}/laion/laion_dataset/{00000..10488}.tar
+
+ and the Conceptual Caption and SBU datasets loading path
+ [here](../minigpt4/configs/datasets/cc_sbu/defaults.yaml#L5) at Line 5 as
+ ${MINIGPT4_DATASET}/cc_sbu/cc_sbu_dataset/{00000..01255}.tar
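For illustration, the edited lines might look roughly like the snippet below; the key name is an assumption based on LAVIS-style dataset configs, so verify it in the two files linked above and substitute the real absolute path for ${MINIGPT4_DATASET}:

```yaml
# minigpt4/configs/datasets/laion/defaults.yaml -- Line 5 (key name assumed)
storage: /YOUR/PATH/FOR/LARGE/DATASET/laion/laion_dataset/{00000..10488}.tar

# minigpt4/configs/datasets/cc_sbu/defaults.yaml -- Line 5 (key name assumed)
storage: /YOUR/PATH/FOR/LARGE/DATASET/cc_sbu/cc_sbu_dataset/{00000..01255}.tar
```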
+
+
+
dataset/README_2_STAGE.md ADDED
@@ -0,0 +1,19 @@
+ ## Second Stage Data Preparation
+
+ Our second stage dataset can be downloaded from
+ [here](https://drive.google.com/file/d/1nJXhoEcy3KTExr17I7BXqY5Y9Lx_-n-9/view?usp=share_link).
+ After extraction, you will get a data folder with the following structure:
+
+ ```
+ cc_sbu_align
+ ├── filter_cap.json
+ └── image
+     ├── 2.jpg
+     ├── 3.jpg
+     ...
+ ```
+
+ Put the folder in any path you want.
+ Then, set up the dataset path in the dataset config file
+ [here](../minigpt4/configs/datasets/cc_sbu/align.yaml#L5) at Line 5.
+
dataset/convert_cc_sbu.py ADDED
@@ -0,0 +1,20 @@
+ import json
+ import csv
+
+ # specify input and output file paths
+ input_file = 'ccs_synthetic_filtered_large.json'
+ output_file = 'ccs_synthetic_filtered_large.tsv'
+
+ # load JSON data from input file
+ with open(input_file, 'r') as f:
+     data = json.load(f)
+
+ # extract header and data from JSON
+ header = data[0].keys()
+ rows = [x.values() for x in data]
+
+ # write data to TSV file
+ with open(output_file, 'w') as f:
+     writer = csv.writer(f, delimiter='\t')
+     writer.writerow(header)
+     writer.writerows(rows)
dataset/convert_laion.py ADDED
@@ -0,0 +1,20 @@
+ import json
+ import csv
+
+ # specify input and output file paths
+ input_file = 'laion_synthetic_filtered_large.json'
+ output_file = 'laion_synthetic_filtered_large.tsv'
+
+ # load JSON data from input file
+ with open(input_file, 'r') as f:
+     data = json.load(f)
+
+ # extract header and data from JSON
+ header = data[0].keys()
+ rows = [x.values() for x in data]
+
+ # write data to TSV file
+ with open(output_file, 'w') as f:
+     writer = csv.writer(f, delimiter='\t')
+     writer.writerow(header)
+     writer.writerows(rows)
dataset/download_cc_sbu.sh ADDED
@@ -0,0 +1,6 @@
+ #!/bin/bash
+
+ img2dataset --url_list ccs_synthetic_filtered_large.tsv --input_format "tsv"\
+     --url_col "url" --caption_col "caption" --output_format webdataset\
+     --output_folder cc_sbu_dataset --processes_count 16 --thread_count 128 --image_size 256 \
+     --enable_wandb True
dataset/download_laion.sh ADDED
@@ -0,0 +1,6 @@
+ #!/bin/bash
+
+ img2dataset --url_list laion_synthetic_filtered_large.tsv --input_format "tsv"\
+     --url_col "url" --caption_col "caption" --output_format webdataset\
+     --output_folder laion_dataset --processes_count 16 --thread_count 128 --image_size 256 \
+     --enable_wandb True
demo.py ADDED
@@ -0,0 +1,154 @@
+ import argparse
+ import os
+ import random
+ import flask
+
+ import numpy as np
+ import torch
+ import torch.backends.cudnn as cudnn
+ import gradio as gr
+
+ from minigpt4.common.config import Config
+ from minigpt4.common.dist_utils import get_rank
+ from minigpt4.common.registry import registry
+ from minigpt4.conversation.conversation import Chat, CONV_VISION
+
+ # imports modules for registration
+ from minigpt4.datasets.builders import *
+ from minigpt4.models import *
+ from minigpt4.processors import *
+ from minigpt4.runners import *
+ from minigpt4.tasks import *
+
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description="Demo")
+     parser.add_argument("--cfg-path", required=True, help="path to configuration file.")
+     parser.add_argument("--gpu-id", type=int, default=0, help="specify the gpu to load the model.")
+     parser.add_argument(
+         "--options",
+         nargs="+",
+         help="override some settings in the used config, the key-value pair "
+         "in xxx=yyy format will be merged into config file (deprecate), "
+         "change to --cfg-options instead.",
+     )
+     args = parser.parse_args()
+     return args
+
+
+ def setup_seeds(config):
+     seed = config.run_cfg.seed + get_rank()
+
+     random.seed(seed)
+     np.random.seed(seed)
+     torch.manual_seed(seed)
+
+     cudnn.benchmark = False
+     cudnn.deterministic = True
+
+
+ # ========================================
+ #             Model Initialization
+ # ========================================
+
+ print('Initializing Chat')
+ args = parse_args()
+ cfg = Config(args)
+
+ model_config = cfg.model_cfg
+ model_config.device_8bit = args.gpu_id
+ model_cls = registry.get_model_class(model_config.arch)
+ model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
+
+ vis_processor_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train
+ vis_processor = registry.get_processor_class(vis_processor_cfg.name).from_config(vis_processor_cfg)
+ chat = Chat(model, vis_processor, device='cuda:{}'.format(args.gpu_id))
+ print('Initialization Finished')
+
+ # ========================================
+ #             Gradio Setting
+ # ========================================
+
+ def gradio_reset(chat_state, img_list):
+     if chat_state is not None:
+         chat_state.messages = []
+     if img_list is not None:
+         img_list = []
+     return None, gr.update(value=None, interactive=True), gr.update(placeholder='Please upload your image first', interactive=False),gr.update(value="Upload & Start Chat", interactive=True), chat_state, img_list
+
+ def upload_img(gr_img, text_input, chat_state):
+     if gr_img is None:
+         return None, None, gr.update(interactive=True), chat_state, None
+     chat_state = CONV_VISION.copy()
+     img_list = []
+     llm_message = chat.upload_img(gr_img, chat_state, img_list)
+     return gr.update(interactive=False), gr.update(interactive=True, placeholder='Type and press Enter'), gr.update(value="Start Chatting", interactive=False), chat_state, img_list
+
+ def gradio_ask(user_message, chatbot, chat_state):
+     if len(user_message) == 0:
+         return gr.update(interactive=True, placeholder='Input should not be empty!'), chatbot, chat_state
+     chat.ask(user_message, chat_state)
+     chatbot = chatbot + [[user_message, None]]
+     return '', chatbot, chat_state
+
+
+ def gradio_answer(chatbot, chat_state, img_list, num_beams, temperature):
+     llm_message = chat.answer(conv=chat_state,
+                               img_list=img_list,
+                               num_beams=num_beams,
+                               temperature=temperature,
+                               max_new_tokens=300,
+                               max_length=2000)[0]
+     chatbot[-1][1] = llm_message
+     return chatbot, chat_state, img_list
+
+ title = """<h1 align="center">Demo of MiniGPT-4</h1>"""
+ description = """<h3>This is the demo of MiniGPT-4. Upload your images and start chatting!</h3>"""
+ article = """<p><a href='https://minigpt-4.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a></p><p><a href='https://github.com/Vision-CAIR/MiniGPT-4'><img src='https://img.shields.io/badge/Github-Code-blue'></a></p><p><a href='https://raw.githubusercontent.com/Vision-CAIR/MiniGPT-4/main/MiniGPT_4.pdf'><img src='https://img.shields.io/badge/Paper-PDF-red'></a></p>
+ """
+
+ #TODO show examples below
+
+ with gr.Blocks() as demo:
+     gr.Markdown(title)
+     gr.Markdown(description)
+     gr.Markdown(article)
+
+     with gr.Row():
+         with gr.Column(scale=0.5):
+             image = gr.Image(type="pil")
+             upload_button = gr.Button(value="Upload & Start Chat", interactive=True, variant="primary")
+             clear = gr.Button("Restart")
+
+             num_beams = gr.Slider(
+                 minimum=1,
+                 maximum=10,
+                 value=1,
+                 step=1,
+                 interactive=True,
+                 label="beam search numbers)",
+             )
+
+             temperature = gr.Slider(
+                 minimum=0.1,
+                 maximum=2.0,
+                 value=1.0,
+                 step=0.1,
+                 interactive=True,
+                 label="Temperature",
+             )
+
+         with gr.Column():
+             chat_state = gr.State()
+             img_list = gr.State()
+             chatbot = gr.Chatbot(label='MiniGPT-4')
+             text_input = gr.Textbox(label='User', placeholder='Please upload your image first', interactive=False)
+
+     upload_button.click(upload_img, [image, text_input, chat_state], [image, text_input, upload_button, chat_state, img_list])
+
+     text_input.submit(gradio_ask, [text_input, chatbot, chat_state], [text_input, chatbot, chat_state]).then(
+         gradio_answer, [chatbot, chat_state, img_list, num_beams, temperature], [chatbot, chat_state, img_list]
+     )
+     clear.click(gradio_reset, [chat_state, img_list], [chatbot, image, text_input, upload_button, chat_state, img_list], queue=False)
+
+ demo.launch(share=True, enable_queue=True)
environment.yml ADDED
@@ -0,0 +1,63 @@
+ name: minigpt4
+ channels:
+   - pytorch
+   - defaults
+   - anaconda
+ dependencies:
+   - python=3.9
+   - cudatoolkit
+   - pip
+   - pytorch=1.12.1
+   - pytorch-mutex=1.0=cuda
+   - torchaudio=0.12.1
+   - torchvision=0.13.1
+   - pip:
+     - accelerate==0.16.0
+     - aiohttp==3.8.4
+     - aiosignal==1.3.1
+     - async-timeout==4.0.2
+     - attrs==22.2.0
+     - bitsandbytes==0.37.0
+     - cchardet==2.1.7
+     - chardet==5.1.0
+     - contourpy==1.0.7
+     - cycler==0.11.0
+     - filelock==3.9.0
+     - fonttools==4.38.0
+     - frozenlist==1.3.3
+     - huggingface-hub==0.13.4
+     - importlib-resources==5.12.0
+     - kiwisolver==1.4.4
+     - matplotlib==3.7.0
+     - multidict==6.0.4
+     - openai==0.27.0
+     - packaging==23.0
+     - psutil==5.9.4
+     - pycocotools==2.0.6
+     - pyparsing==3.0.9
+     - python-dateutil==2.8.2
+     - pyyaml==6.0
+     - regex==2022.10.31
+     - tokenizers==0.13.2
+     - tqdm==4.64.1
+     - transformers==4.28.0
+     - timm==0.6.13
+     - spacy==3.5.1
+     - webdataset==0.2.48
+     - scikit-learn==1.2.2
+     - scipy==1.10.1
+     - yarl==1.8.2
+     - zipp==3.14.0
+     - omegaconf==2.3.0
+     - opencv-python==4.7.0.72
+     - iopath==0.1.10
+     - decord==0.6.0
+     - tenacity==8.2.2
+     - peft
+     - pycocoevalcap
+     - sentence-transformers
+     - umap-learn
+     - notebook
+     - gradio==3.24.1
+     - gradio-client==0.0.8
+     - wandb
eval_configs/minigpt4_eval.yaml ADDED
@@ -0,0 +1,25 @@
+ model:
+   arch: mini_gpt4
+   model_type: pretrain_vicuna
+   freeze_vit: True
+   freeze_qformer: True
+   max_txt_len: 160
+   end_sym: "###"
+   low_resource: False
+   prompt_path: "prompts/alignment.txt"
+   prompt_template: '###Human: {} ###Assistant: '
+   ckpt: '/app/MiniGPT-4/pretrained_minigpt4.pth'
+
+
+ datasets:
+   cc_sbu_align:
+     vis_processor:
+       train:
+         name: "blip2_image_eval"
+         image_size: 224
+     text_processor:
+       train:
+         name: "blip_caption"
+
+ run:
+   task: image_text_pretrain
examples/ad_1.png ADDED

Git LFS Details

  • SHA256: ef7b83cfdd5dc4f78b1845c6a620631788c6bd48cb10f8606e9b31076e655812
  • Pointer size: 131 Bytes
  • Size of remote file: 389 kB
examples/ad_2.png ADDED

Git LFS Details

  • SHA256: 39f1afdbdaf392dad6ec90de7f1623d1f3b541d782716dca4658fc5357fe1682
  • Pointer size: 131 Bytes
  • Size of remote file: 468 kB
examples/cook_1.png ADDED

Git LFS Details

  • SHA256: 482a78d5e676ac6660318a9a4f322fd8dc66cb1e0332a624f8fa62e28dbb3330
  • Pointer size: 131 Bytes
  • Size of remote file: 551 kB
examples/cook_2.png ADDED

Git LFS Details

  • SHA256: c2dbe40a905886d8b7036e6995e4a4bfbb48b48818b250f419e06d44455576c4
  • Pointer size: 131 Bytes
  • Size of remote file: 600 kB
examples/describe_1.png ADDED

Git LFS Details

  • SHA256: d10b3a8c6529dc86f0c2dd8793345f8c2e6257a66d236863e4c1a03e80ef98d5
  • Pointer size: 131 Bytes
  • Size of remote file: 696 kB
examples/describe_2.png ADDED

Git LFS Details

  • SHA256: b16180e612d28687e3ff3b697f6c93a2c85937aa438b8406d7573a56d6d293dc
  • Pointer size: 131 Bytes
  • Size of remote file: 568 kB
examples/fact_1.png ADDED

Git LFS Details

  • SHA256: 745fae1e807a0550d019e9076a506dc0d289efaa8052a36ab1735611cc5401bb
  • Pointer size: 131 Bytes
  • Size of remote file: 479 kB
examples/fact_2.png ADDED

Git LFS Details

  • SHA256: 3bfbad89a077466b6f580c12a28086c5beeaca0db9fc6c3c603d82ddd56b720d
  • Pointer size: 131 Bytes
  • Size of remote file: 674 kB
examples/fix_1.png ADDED

Git LFS Details

  • SHA256: 1d6704fc44a19039157621ec2c3be89bfd0cdbf2901fb5eb3532973b1c79e97b
  • Pointer size: 131 Bytes
  • Size of remote file: 707 kB
examples/fix_2.png ADDED

Git LFS Details

  • SHA256: fbba787e21974ce718fb7d9ac78fec24c646de17602efd8be64261e7d9cae1ba
  • Pointer size: 131 Bytes
  • Size of remote file: 600 kB
examples/fun_1.png ADDED

Git LFS Details

  • SHA256: af7390eac23468f6313a73d5a0f6972cf4fab2481dab7cf5c44a9f0dcdd87d35
  • Pointer size: 131 Bytes
  • Size of remote file: 730 kB
examples/fun_2.png ADDED

Git LFS Details

  • SHA256: 9f80d9ee8cd1b45165793d2e08ed3605c1dba79f7c735fa0418942575c7ba23f
  • Pointer size: 131 Bytes
  • Size of remote file: 611 kB
examples/logo_1.png ADDED

Git LFS Details

  • SHA256: b05d317d40296c754e004c9839c4a6743470cabbf1d1ac1eb55b08db8180fa8c
  • Pointer size: 131 Bytes
  • Size of remote file: 194 kB
examples/op_1.png ADDED

Git LFS Details

  • SHA256: a733feba1c432611760362f05a0b34d378a2c1925ec4cb4964274fd83c564d19
  • Pointer size: 131 Bytes
  • Size of remote file: 618 kB
examples/op_2.png ADDED

Git LFS Details

  • SHA256: 482cf00a72249d387c0b0a06bf0aeb5d37b5d1bfcfcf1f92f72c20e0a540aecb
  • Pointer size: 131 Bytes
  • Size of remote file: 649 kB
examples/people_1.png ADDED

Git LFS Details

  • SHA256: 6bdfe44c1b6177933714c81f074990e2646fe85a92673138b4d74135e9dd85f6
  • Pointer size: 131 Bytes
  • Size of remote file: 255 kB
examples/people_2.png ADDED

Git LFS Details

  • SHA256: 3c39bfad76722c3101531d799ead8724efecde77d8c7dd0b7743dce49f19422a
  • Pointer size: 131 Bytes
  • Size of remote file: 313 kB
examples/rhyme_1.png ADDED

Git LFS Details

  • SHA256: 0e0e757d0ecbfa2e5658f23b6488a722686f37abea4abfc84e366c8a223b5531
  • Pointer size: 131 Bytes
  • Size of remote file: 602 kB
examples/rhyme_2.png ADDED

Git LFS Details

  • SHA256: 57f279184258012c9794b0d635abddb7bcdfcf13b43c2d62bdb3c96f83840df3
  • Pointer size: 131 Bytes
  • Size of remote file: 824 kB
examples/story_1.png ADDED

Git LFS Details

  • SHA256: 8e7da2a22e5cd1488fe20e9a20c4e9018586eee61a69bf941f4eb2318e5c54d7
  • Pointer size: 131 Bytes
  • Size of remote file: 873 kB
examples/story_2.png ADDED

Git LFS Details

  • SHA256: 6376e6deb27845b3050b7e4fd79a6b4ea9aeada764fda85a604b19d572b7442d
  • Pointer size: 131 Bytes
  • Size of remote file: 581 kB
examples/web_1.png ADDED

Git LFS Details

  • SHA256: 82b0a417ebb7ef9611d60656bd3a9dfd776e6ec0e396b13a8b47a87e65d08ffa
  • Pointer size: 131 Bytes
  • Size of remote file: 729 kB
examples/wop_1.png ADDED

Git LFS Details

  • SHA256: 8d43e8197182792b0d9fd9e62b986ffd085ea85c1ce4cc32ac317584acfe8776
  • Pointer size: 131 Bytes
  • Size of remote file: 532 kB
examples/wop_2.png ADDED

Git LFS Details

  • SHA256: 740dcc86b42c0c0cccfac89b031202477a6cde3fded6b9a8ca3b7e7f46ed3df1
  • Pointer size: 131 Bytes
  • Size of remote file: 579 kB
figs/examples/ad_1.png ADDED

Git LFS Details

  • SHA256: ef7b83cfdd5dc4f78b1845c6a620631788c6bd48cb10f8606e9b31076e655812
  • Pointer size: 131 Bytes
  • Size of remote file: 389 kB
figs/examples/ad_2.png ADDED

Git LFS Details

  • SHA256: 39f1afdbdaf392dad6ec90de7f1623d1f3b541d782716dca4658fc5357fe1682
  • Pointer size: 131 Bytes
  • Size of remote file: 468 kB
figs/examples/cook_1.png ADDED

Git LFS Details

  • SHA256: 482a78d5e676ac6660318a9a4f322fd8dc66cb1e0332a624f8fa62e28dbb3330
  • Pointer size: 131 Bytes
  • Size of remote file: 551 kB
figs/examples/cook_2.png ADDED

Git LFS Details

  • SHA256: c2dbe40a905886d8b7036e6995e4a4bfbb48b48818b250f419e06d44455576c4
  • Pointer size: 131 Bytes
  • Size of remote file: 600 kB
figs/examples/describe_1.png ADDED

Git LFS Details

  • SHA256: d10b3a8c6529dc86f0c2dd8793345f8c2e6257a66d236863e4c1a03e80ef98d5
  • Pointer size: 131 Bytes
  • Size of remote file: 696 kB
figs/examples/describe_2.png ADDED

Git LFS Details

  • SHA256: b16180e612d28687e3ff3b697f6c93a2c85937aa438b8406d7573a56d6d293dc
  • Pointer size: 131 Bytes
  • Size of remote file: 568 kB
figs/examples/fact_1.png ADDED

Git LFS Details

  • SHA256: 745fae1e807a0550d019e9076a506dc0d289efaa8052a36ab1735611cc5401bb
  • Pointer size: 131 Bytes
  • Size of remote file: 479 kB
figs/examples/fact_2.png ADDED

Git LFS Details

  • SHA256: 3bfbad89a077466b6f580c12a28086c5beeaca0db9fc6c3c603d82ddd56b720d
  • Pointer size: 131 Bytes
  • Size of remote file: 674 kB