import gradio as gr

def render_eval_info():
    text = r"""

            We use **Equal Error Rate (EER %)**, a standard metric used in biometric and anti-spoofing systems.

            ### **What is EER?**  
            Equal Error Rate (EER) is a performance metric used to evaluate biometric systems. It represents the point at which the **False Acceptance Rate (FAR)** and **False Rejection Rate (FRR)** are equal. A lower EER indicates a more accurate system.

            #### **False Acceptance Rate (FAR)**  
            FAR is the proportion of **unauthorized** users incorrectly accepted by the system.  

            $FAR = \frac{\text{False Acceptances}}{\text{Total Imposter Attempts}}$

            #### **False Rejection Rate (FRR)**  
            FRR is the proportion of **genuine** users incorrectly rejected by the system.  

            $FRR = \frac{\text{False Rejections}}{\text{Total Genuine Attempts}}$


            - EER is the point at which FAR and FRR are equal.
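
            The crossing point described above can be estimated directly from a set of scores. Below is a minimal NumPy sketch (illustrative only, not part of the df_arena toolkit) that assumes higher scores mean "more bonafide"; it sweeps every observed score as a threshold and returns the point where FAR and FRR are closest to equal:

            ```py
            import numpy as np

            def compute_eer(bonafide_scores, spoof_scores):
                # Candidate thresholds: every observed score.
                thresholds = np.sort(np.concatenate([bonafide_scores, spoof_scores]))
                # FRR: fraction of genuine (bonafide) scores rejected at each threshold.
                frr = np.array([(bonafide_scores < t).mean() for t in thresholds])
                # FAR: fraction of imposter (spoof) scores accepted at each threshold.
                far = np.array([(spoof_scores >= t).mean() for t in thresholds])
                # EER: midpoint of FAR and FRR where the two rates are closest.
                idx = np.argmin(np.abs(far - frr))
                return (far[idx] + frr[idx]) / 2
            ```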

            ### How to compute your own EER score file?

            To streamline the evaluation process across many models and datasets, we have
            developed the df_arena_toolkit, which can be used to compute score files for evaluation.
            The tool is available at https://github.com/Speech-Arena/speech_df_arena.

            ### Usage
            #### 1. Data Preparation 
            Create a metadata.csv for your desired dataset in the following format:

            ```
            file_name,label
            /path/to/audio1,spoof
            /path/to/audio2,bonafide
            ...

            ```
            NOTE: Labels must be "spoof" for spoofed samples and "bonafide" for real samples,
                and all file_name paths must be absolute.
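
            For reference, a metadata file in this format can be generated with a short Python snippet (illustrative only; the paths below are placeholders and must be replaced with absolute paths to your audio files):

            ```py
            import csv

            # Placeholder entries: replace with absolute paths to your own audio files.
            rows = [
                ("/path/to/audio1", "spoof"),
                ("/path/to/audio2", "bonafide"),
            ]

            with open("metadata.csv", "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["file_name", "label"])  # header row
                writer.writerows(rows)
            ```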

            #### 2. Evaluation

            Example usage:
            ```sh
            python evaluation.py --model_name wavlm_ecapa \
                                --batch_size 32 \
                                --protocol_file_path /path/to/metadata.csv \
                                --model_path /path/to/model.ckpt \
                                --out_score_file_name scores.txt \
                                --trim pad \
                                --num_workers 4
            ```

"""
    return gr.Markdown(text, latex_delimiters=[{ "left": "$", "right": "$", "display": True }])