morinoko-inari committed
Commit daa2552 · verified · 1 Parent(s): a22ee08

Model save

Files changed (2):
  1. README.md +10 -9
  2. generation_config.json +1 -1
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 license: apache-2.0
 base_model: Helsinki-NLP/opus-mt-ja-en
 tags:
@@ -17,9 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.6632
-- Bleu: 0.3309
-- Chrf: 65.6928
+- Loss: 1.6714
+- Bleu: 0.3300
+- Chrf: 64.8493
 
 ## Model description
 
@@ -42,7 +43,7 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 3
 
@@ -50,14 +51,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Bleu   | Chrf    |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
-| No log        | 1.0   | 23   | 1.7526          | 0.3133 | 64.6743 |
-| No log        | 2.0   | 46   | 1.6831          | 0.3396 | 66.1705 |
-| No log        | 3.0   | 69   | 1.6632          | 0.3309 | 65.6928 |
+| No log        | 1.0   | 23   | 1.7593          | 0.3283 | 64.8523 |
+| No log        | 2.0   | 46   | 1.6915          | 0.3260 | 64.6705 |
+| No log        | 3.0   | 69   | 1.6714          | 0.3300 | 64.8493 |
 
 
 ### Framework versions
 
-- Transformers 4.31.0
+- Transformers 4.51.3
 - Pytorch 2.6.0
 - Datasets 3.5.0
-- Tokenizers 0.13.3
+- Tokenizers 0.21.1
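For context, the optimizer and schedule listed in the updated card can be sketched with PyTorch and `transformers`. This is a minimal illustration, not the training script: the learning rate is not shown in this diff, so `2e-5` below is an assumption, and a tiny linear layer stands in for the actual opus-mt-ja-en model. The 69 total steps come from the training table (3 epochs × 23 steps).

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Tiny stand-in module; the card fine-tuned Helsinki-NLP/opus-mt-ja-en.
model = torch.nn.Linear(4, 4)

# Hyperparameters from the card: adamw_torch, betas=(0.9, 0.999), epsilon=1e-08.
# lr=2e-5 is an assumption -- the learning rate is not part of this diff.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=2e-5, betas=(0.9, 0.999), eps=1e-8
)

# lr_scheduler_type: linear, over 3 epochs x 23 steps/epoch = 69 steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=69
)

for _ in range(69):
    optimizer.step()   # would follow loss.backward() in a real loop
    scheduler.step()

# The scheduled lr has decayed linearly to 0.0 by the final step.
print(scheduler.get_last_lr())
```

In practice the `Trainer` builds this pair automatically from `TrainingArguments(optim="adamw_torch", lr_scheduler_type="linear", ...)`; the sketch only makes the decay schedule explicit.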
generation_config.json CHANGED
@@ -12,5 +12,5 @@
   "num_beams": 6,
   "pad_token_id": 60715,
   "renormalize_logits": true,
-  "transformers_version": "4.31.0"
+  "transformers_version": "4.51.3"
 }
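The generation settings above can be reproduced programmatically with `transformers.GenerationConfig` — a minimal sketch using only the values visible in this diff (the full file may contain additional keys not shown in the hunk):

```python
from transformers import GenerationConfig

# Mirrors the fields of generation_config.json visible in this commit.
gen_config = GenerationConfig(
    num_beams=6,            # beam search with 6 beams
    pad_token_id=60715,
    renormalize_logits=True,  # renormalize after logits processors
)
print(gen_config.num_beams)
```

When the checkpoint is loaded with `from_pretrained`, this config is picked up automatically and applied to `model.generate()` unless overridden per call.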