|
--- |
|
library_name: transformers |
|
license: cc-by-nc-4.0 |
|
language: |
|
- ko |
|
--- |
|
<p align="left"> |
|
<img src="https://huggingface.co/algograp-Inc/algograpV4/resolve/main/[email protected]" width="50%"/> |
|
</p>
|
|
|
# algograp-Inc/algograpV4 |
|
|
|
|
|
|
|
|
|
|
## Model Details |
|
|
|
- **Developed by:** algograp-Inc |
|
- **License:** cc-by-nc-4.0 |
|
|
|
## Hardware and Software |
|
|
|
* **Hardware**: We utilized a single node with 4× NVIDIA H100 GPUs.
|
* **Training Factors**: We fine-tuned this model using the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) together with the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer) and [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index).
|
|
|
## Method |
|
- This model was trained using the learning method introduced in the [SOLAR paper](https://arxiv.org/pdf/2312.15166.pdf). |
|
|
|
## Base Model |
|
- [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) |
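
## Usage

Since the card declares `library_name: transformers`, the model can be loaded with the standard causal-LM API. The sketch below is a minimal, hedged example: the dtype, device placement, and generation settings are illustrative assumptions, and the prompt format should be verified against the base model's card.

```python
# Minimal usage sketch (assumes the standard transformers causal-LM API;
# dtype/device settings are illustrative, not prescribed by the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "algograp-Inc/algograpV4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to fit a 10.8B model on one large GPU
    device_map="auto",
)

prompt = "안녕하세요."  # Korean: "Hello."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```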