Kaushik066 committed
Commit f02d1fe · verified · 1 Parent(s): 7223d1f

Update app.py
Files changed (1): app.py (+25, −27)
app.py CHANGED
@@ -193,34 +193,32 @@ prod_dl = DataLoader(prod_ds, batch_size=BATCH_SIZE)
 #prod_inputs = next(iter(prod_dl))
 #st.write(prod_inputs['pixel_values'].shape)
 
-
-st.markdown("# AI Face Recognition app for automated employee attendance")
 about_tab, app_tab = st.tabs(["About the app", "Face Recognition"])
-## About the app Tab
-#with about_tab:
-# st.markdown(
-# """
-# ## Product Description/Objective
-# An AI face recognition app for automated employee attendance uses advanced facial recognition technology to accurately and efficiently track employee attendance.
-# By simply scanning employees' faces upon arrival and departure, the app eliminates the need for traditional timecards or biometric devices, reducing errors and fraud.
-# It provides real-time attendance data, enhances workplace security, and streamlines HR processes for greater productivity and accuracy.
-#
-# ## How does it work ?
-# Our app leverages Google's advanced **Vision Transformer (ViT)** architecture, trained on the **LFW (Labeled Faces in the Wild) dataset**, to deliver highly accurate employee attendance tracking through facial recognition.
-# The AI model intelligently extracts distinct facial features and compares them to the stored data of registered employees. When an employee’s face is scanned, the model analyzes the key features, and a confidence score is generated.
-# A high score indicates a match, confirming the employee’s identity and marking their attendance automatically. This seamless, secure process ensures precise tracking while minimizing errors and enhancing workplace efficiency.
-#
-# ### About the architecture.
-# The Vision Transformer (ViT) is a deep learning architecture designed for image classification tasks, which applies transformer models—originally developed for natural language processing (NLP)—to images.
-# ViT divides an image into fixed-size non-overlapping patches. Each patch is flattened into a 1D vector, which is then linearly embedded into a higher-dimensional space. The patch embeddings are processed using a standard transformer encoder.
-# This consists of layers with multi-head self-attention and feed-forward networks. The transformer is capable of learning global dependencies across the entire image.
-# The Vision Transformer outperforms traditional convolutional neural networks (CNNs) on large-scale datasets, especially when provided with sufficient training data and computational resources.
-#
-# ### About the Dataset.
-# Labeled Faces in the Wild (LFW) is a well-known dataset used primarily for evaluating face recognition algorithms. It consists of a collection of facial images of famous individuals from the web.
-# LFW contains 13,000+ labeled images of 5,749 different individuals. The faces are collected from various sources, with images often showing individuals in different lighting, poses, and backgrounds.
-# LFW is typically used for face verification and face recognition tasks. The goal is to determine if two images represent the same person or not.
-# """)
+# About the app Tab
+with about_tab:
+    st.markdown(
+"""
+## Product Description/Objective
+An AI face recognition app for automated employee attendance uses advanced facial recognition technology to accurately and efficiently track employee attendance.
+By simply scanning employees' faces upon arrival and departure, the app eliminates the need for traditional timecards or biometric devices, reducing errors and fraud.
+It provides real-time attendance data, enhances workplace security, and streamlines HR processes for greater productivity and accuracy.
+
+## How does it work ?
+Our app leverages Google's advanced **Vision Transformer (ViT)** architecture, trained on the **LFW (Labeled Faces in the Wild) dataset**, to deliver highly accurate employee attendance tracking through facial recognition.
+The AI model intelligently extracts distinct facial features and compares them to the stored data of registered employees. When an employee’s face is scanned, the model analyzes the key features, and a confidence score is generated.
+A high score indicates a match, confirming the employee’s identity and marking their attendance automatically. This seamless, secure process ensures precise tracking while minimizing errors and enhancing workplace efficiency.
+
+### About the architecture.
+The Vision Transformer (ViT) is a deep learning architecture designed for image classification tasks, which applies transformer models—originally developed for natural language processing (NLP)—to images.
+ViT divides an image into fixed-size non-overlapping patches. Each patch is flattened into a 1D vector, which is then linearly embedded into a higher-dimensional space. The patch embeddings are processed using a standard transformer encoder.
+This consists of layers with multi-head self-attention and feed-forward networks. The transformer is capable of learning global dependencies across the entire image.
+The Vision Transformer outperforms traditional convolutional neural networks (CNNs) on large-scale datasets, especially when provided with sufficient training data and computational resources.
+
+### About the Dataset.
+Labeled Faces in the Wild (LFW) is a well-known dataset used primarily for evaluating face recognition algorithms. It consists of a collection of facial images of famous individuals from the web.
+LFW contains 13,000+ labeled images of 5,749 different individuals. The faces are collected from various sources, with images often showing individuals in different lighting, poses, and backgrounds.
+LFW is typically used for face verification and face recognition tasks. The goal is to determine if two images represent the same person or not.
+""")
 
 # Gesture recognition Tab
 with app_tab:
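The matching step described in the "How does it work ?" text (compare extracted features against stored employee data, generate a confidence score, accept above a threshold) can be sketched as follows. This is a minimal illustration, not code from app.py: the function name `match_employee`, the gallery layout, and the 0.8 threshold are all assumptions, and cosine similarity stands in for whatever comparison the app actually uses.

```python
import numpy as np

def match_employee(query, gallery, threshold=0.8):
    """Compare a scanned face's embedding against stored employee
    embeddings; return (best_id, score) if the best cosine similarity
    clears the threshold, else (None, score)."""
    best_id, best_score = None, -1.0
    for emp_id, ref in gallery.items():
        # Cosine similarity acts as the confidence score.
        score = float(np.dot(query, ref) /
                      (np.linalg.norm(query) * np.linalg.norm(ref)))
        if score > best_score:
            best_id, best_score = emp_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy 2-D embeddings; real face embeddings are much higher-dimensional.
gallery = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
emp, score = match_employee(np.array([0.9, 0.1]), gallery)  # matches "alice"
```

A high score confirms the identity and the attendance record is written; a score below the threshold is treated as no match.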
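The ViT patch-embedding step described in the "About the architecture" text (split into fixed-size non-overlapping patches, flatten each to a 1D vector, project linearly to a higher-dimensional space) can be sketched in NumPy. This is illustrative only, with assumed sizes (16×16 patches, 64-dim embedding) and a random projection in place of the learned one:

```python
import numpy as np

def patch_embed(image, patch_size=16, embed_dim=64, rng=None):
    """Split an (H, W, C) image into non-overlapping patches, flatten
    each patch to a 1D vector, and project it linearly to embed_dim."""
    rng = rng or np.random.default_rng(0)
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    # Cut into (num_patches, patch_size * patch_size * c) flattened patches.
    patches = (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, patch_size * patch_size * c)
    )
    # Linear embedding into a higher-dimensional space (learned in a real ViT).
    proj = rng.standard_normal((patches.shape[1], embed_dim))
    return patches @ proj

tokens = patch_embed(np.zeros((224, 224, 3)))  # shape (196, 64): 14 x 14 patches
```

The resulting sequence of patch tokens is what the transformer encoder's self-attention layers consume, which is how ViT learns global dependencies across the entire image.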