Kaushik066 committed
Commit e6bd663 · verified · 1 Parent(s): 6b0bc5a

Update app.py

Files changed (1)
1. app.py +56 -56
app.py CHANGED
@@ -194,62 +194,62 @@ prod_dl = DataLoader(prod_ds, batch_size=BATCH_SIZE)
 #st.write(prod_inputs['pixel_values'].shape)
 
 
-st.markdown("# AI Face Recognition app for automated employee attendance")
-about_tab, app_tab = st.tabs(["About the app", "Face Recognition"])
-# About the app Tab
-with about_tab:
-    st.markdown(
-        """
-        ## Product Description/Objective
-        An AI face recognition app for automated employee attendance uses advanced facial recognition technology to accurately and efficiently track employee attendance.
-        By simply scanning employees' faces upon arrival and departure, the app eliminates the need for traditional timecards or biometric devices, reducing errors and fraud.
-        It provides real-time attendance data, enhances workplace security, and streamlines HR processes for greater productivity and accuracy.
-
-        ## How does it work ?
-        Our app leverages Google's advanced **Vision Transformer (ViT)** architecture, trained on the **LFW (Labeled Faces in the Wild) dataset**, to deliver highly accurate employee attendance tracking through facial recognition.
-        The AI model intelligently extracts distinct facial features and compares them to the stored data of registered employees. When an employee’s face is scanned, the model analyzes the key features, and a confidence score is generated.
-        A high score indicates a match, confirming the employee’s identity and marking their attendance automatically. This seamless, secure process ensures precise tracking while minimizing errors and enhancing workplace efficiency.
-
-        ### About the architecture.
-        The Vision Transformer (ViT) is a deep learning architecture designed for image classification tasks, which applies transformer models—originally developed for natural language processing (NLP)—to images.
-        ViT divides an image into fixed-size non-overlapping patches. Each patch is flattened into a 1D vector, which is then linearly embedded into a higher-dimensional space. The patch embeddings are processed using a standard transformer encoder.
-        This consists of layers with multi-head self-attention and feed-forward networks. The transformer is capable of learning global dependencies across the entire image.
-        The Vision Transformer outperforms traditional convolutional neural networks (CNNs) on large-scale datasets, especially when provided with sufficient training data and computational resources.
-
-        ### About the Dataset.
-        Labeled Faces in the Wild (LFW) is a well-known dataset used primarily for evaluating face recognition algorithms. It consists of a collection of facial images of famous individuals from the web.
-        LFW contains 13,000+ labeled images of 5,749 different individuals. The faces are collected from various sources, with images often showing individuals in different lighting, poses, and backgrounds.
-        LFW is typically used for face verification and face recognition tasks. The goal is to determine if two images represent the same person or not.
-        """)
+#st.markdown("# AI Face Recognition app for automated employee attendance")
+#about_tab, app_tab = st.tabs(["About the app", "Face Recognition"])
+## About the app Tab
+#with about_tab:
+#    st.markdown(
+#        """
+#        ## Product Description/Objective
+#        An AI face recognition app for automated employee attendance uses advanced facial recognition technology to accurately and efficiently track employee attendance.
+#        By simply scanning employees' faces upon arrival and departure, the app eliminates the need for traditional timecards or biometric devices, reducing errors and fraud.
+#        It provides real-time attendance data, enhances workplace security, and streamlines HR processes for greater productivity and accuracy.
+#
+#        ## How does it work ?
+#        Our app leverages Google's advanced **Vision Transformer (ViT)** architecture, trained on the **LFW (Labeled Faces in the Wild) dataset**, to deliver highly accurate employee attendance tracking through facial recognition.
+#        The AI model intelligently extracts distinct facial features and compares them to the stored data of registered employees. When an employee’s face is scanned, the model analyzes the key features, and a confidence score is generated.
+#        A high score indicates a match, confirming the employee’s identity and marking their attendance automatically. This seamless, secure process ensures precise tracking while minimizing errors and enhancing workplace efficiency.
+#
+#        ### About the architecture.
+#        The Vision Transformer (ViT) is a deep learning architecture designed for image classification tasks, which applies transformer models—originally developed for natural language processing (NLP)—to images.
+#        ViT divides an image into fixed-size non-overlapping patches. Each patch is flattened into a 1D vector, which is then linearly embedded into a higher-dimensional space. The patch embeddings are processed using a standard transformer encoder.
+#        This consists of layers with multi-head self-attention and feed-forward networks. The transformer is capable of learning global dependencies across the entire image.
+#        The Vision Transformer outperforms traditional convolutional neural networks (CNNs) on large-scale datasets, especially when provided with sufficient training data and computational resources.
+#
+#        ### About the Dataset.
+#        Labeled Faces in the Wild (LFW) is a well-known dataset used primarily for evaluating face recognition algorithms. It consists of a collection of facial images of famous individuals from the web.
+#        LFW contains 13,000+ labeled images of 5,749 different individuals. The faces are collected from various sources, with images often showing individuals in different lighting, poses, and backgrounds.
+#        LFW is typically used for face verification and face recognition tasks. The goal is to determine if two images represent the same person or not.
+#        """)
 
 # Gesture recognition Tab
-with app_tab:
-    # Read image from Camera
-    enable = st.checkbox("Enable camera")
-    picture = st.camera_input("Take a picture", disabled=not enable)
-    if picture is not None:
-        #img = Image.open(picture)
-        #picture.save(webcam_path, "JPEG")
-        #st.write('Image saved as:',webcam_path)
-
-        ## Create DataLoader for Webcam Image
-        webcam_ds = dataset_prod_obj.create_dataset(picture, webcam=True)
-        webcam_dl = DataLoader(webcam_ds, batch_size=BATCH_SIZE)
-
-        ## Testing the dataloader
-        #prod_inputs = next(iter(webcam_dl))
-        #st.write(prod_inputs['pixel_values'].shape)
-
-        with st.spinner("Wait for it...", show_time=True):
-            # Run the predictions
-            prediction = prod_function(model_pretrained, prod_dl, webcam_dl)
-            predictions = torch.cat(prediction, 0).to(device)
-            match_idx = torch.argmin(predictions)
-            st.write(predictions)
-            st.write(image_paths)
-
-            # Display the results
-            if predictions[match_idx] <= 0.3:
-                st.write('Welcome: ',image_paths[match_idx].split('/')[-1].split('.')[0])
-            else:
-                st.write("Match not found")
+#with app_tab:
+# Read image from Camera
+enable = st.checkbox("Enable camera")
+picture = st.camera_input("Take a picture", disabled=not enable)
+if picture is not None:
+    #img = Image.open(picture)
+    #picture.save(webcam_path, "JPEG")
+    #st.write('Image saved as:',webcam_path)
+
+    ## Create DataLoader for Webcam Image
+    webcam_ds = dataset_prod_obj.create_dataset(picture, webcam=True)
+    webcam_dl = DataLoader(webcam_ds, batch_size=BATCH_SIZE)
+
+    ## Testing the dataloader
+    #prod_inputs = next(iter(webcam_dl))
+    #st.write(prod_inputs['pixel_values'].shape)
+
+    with st.spinner("Wait for it...", show_time=True):
+        # Run the predictions
+        prediction = prod_function(model_pretrained, prod_dl, webcam_dl)
+        predictions = torch.cat(prediction, 0).to(device)
+        match_idx = torch.argmin(predictions)
+        st.write(predictions)
+        st.write(image_paths)
+
+        # Display the results
+        if predictions[match_idx] <= 0.3:
+            st.write('Welcome: ',image_paths[match_idx].split('/')[-1].split('.')[0])
+        else:
+            st.write("Match not found")