Update README.md
README.md
CHANGED
@@ -1,3 +1,9 @@
+---
+metrics:
+- precision
+pipeline_tag: image-classification
+library_name: keras
+---
 # Final Project in AI Engineering, DVAE26, ht24
 #### by Marlene Kulowatz
@@ -62,7 +68,7 @@ The model was evaluated on the following metrics:
 Accuracy is the most direct measure for evaluating image recognition models, particularly when comparing different Convolutional Neural Networks (CNNs), because it reflects the proportion of images whose objects or patterns the model identifies correctly. It is most informative when the classes are reasonably balanced.
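To make the metric concrete, here is a minimal sketch (random stand-in data and an arbitrary toy architecture, not the project's model or dataset) showing that the accuracy Keras reports is simply the fraction of correctly classified images:

```python
# Toy sketch, not the project's code: compare Keras' built-in accuracy
# with the plain correct-predictions / total-predictions ratio.
import numpy as np
from tensorflow import keras

x = np.random.rand(32, 64, 64, 3).astype("float32")  # 32 dummy RGB images
y = np.random.randint(0, 10, size=32)                 # 10 dummy class labels

model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 3)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

_, acc = model.evaluate(x, y, verbose=0)              # built-in accuracy metric
preds = model.predict(x, verbose=0).argmax(axis=1)
manual_acc = float((preds == y).mean())               # correct / total
print(acc, manual_acc)                                # the two values agree
```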
 #### Deployment
-Finally, the code was deployed on the platform "Hugging Face"
+Finally, the code was deployed to the Hugging Face platform. It is publicly accessible and can be found by searching for "maykulo" and "final_project_ai_engineering".
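A model hosted on the Hub can then be pulled back down with the `huggingface_hub` client. The sketch below is hypothetical: the repo id is inferred from the search terms above, and the file name "model.keras" is an assumption that may not match the actual repository.

```python
# Hypothetical sketch of retrieving the deployed Keras model from the Hub.
from huggingface_hub import hf_hub_download
from tensorflow import keras

model_path = hf_hub_download(
    repo_id="maykulo/final_project_ai_engineering",  # assumed repo id
    filename="model.keras",                          # assumed file name
)
model = keras.models.load_model(model_path)
model.summary()
```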
 ### Data Quality Analysis
 #### Analysis of data quality
@@ -77,6 +83,7 @@ The data quality was evaluated throughout the whole process. Visual verification
 - Documentation: The code is well documented, and the project maintains a README file with detailed project objectives and results.
 - Hyperparameter Tuning: Different configurations of learning rate, optimizer, and architecture were tested; see the sketch after this list.
 - Testing: Unit tests were implemented, along with extensive logging.
+- Version Control: The Jupyter notebooks were uploaded to a repository and updated continuously, with descriptive commit messages.
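The kind of sweep described in the Hyperparameter Tuning item might look roughly like the sketch below; the grid values, the toy architecture, and the placeholder `x_train`/`y_train` names are assumptions rather than the project's actual configuration.

```python
# Illustrative hyperparameter sweep over learning rate and optimizer.
from tensorflow import keras

def build_model():
    # Stand-in architecture; the real project model differs.
    return keras.Sequential([
        keras.layers.Input(shape=(64, 64, 3)),
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(10, activation="softmax"),
    ])

results = {}
for lr in (1e-2, 1e-3, 1e-4):
    for opt_cls in (keras.optimizers.Adam, keras.optimizers.SGD):
        model = build_model()
        model.compile(optimizer=opt_cls(learning_rate=lr),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # Training lines are left commented out because they need the real data:
        # history = model.fit(x_train, y_train, validation_split=0.2, epochs=5)
        # results[(lr, opt_cls.__name__)] = max(history.history["val_accuracy"])
```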
 ### Key Outcomes
 #### Results
@@ -87,6 +94,4 @@ The results indicate that the simple CNN is too basic to fully benefit from data
 When data augmentation is used, the training data becomes more diverse. Simpler models may excel in situations where the data is already simple.
 - *Trade-Off Between Simplicity and Generalization*: There is a trade-off between model simplicity (which works well for non-augmented data) and model complexity (which handles augmented data better).
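A common way to introduce this extra diversity in Keras is to place random preprocessing layers in front of the network; the transforms and toy architecture below are assumptions, not necessarily what the project used.

```python
# Sketch of data augmentation with Keras preprocessing layers.
from tensorflow import keras

augmentation = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(0.1),
    keras.layers.RandomZoom(0.1),
])

# The augmentation block applies its random transforms only during training,
# which is what makes the effective training set more diverse.
model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 3)),
    augmentation,
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])
```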
 #### General
 On a personal level, this was the first time I implemented and applied a CNN. Watching how much longer the computations take as soon as a single layer is added helped deepen my understanding, and it was impressive to see how small changes in the model architecture can have a large impact on the results. I also learned that the field is very broad and that there is much more to learn.