Essay-Grader committed
Commit 7c65163 · 1 Parent(s): 07d8432
Files changed (5)
  1. Procfile +1 -0
  2. README.md +41 -42
  3. app.py +4 -3
  4. verify_model.py +1 -0
  5. yaml +1 -1
Procfile CHANGED
@@ -1 +1,2 @@
+
 web: uvicorn app:app --host 0.0.0.0 --port $PORT
README.md CHANGED
@@ -8,18 +8,19 @@ app_file: Dockerfile
 pinned: false
 ---
 
-# Essay Grader API
+# Detection and Plagiarism Check API 🕵️‍♂️
 
-This API uses advanced AI models to evaluate essays for:
-- AI-generated content detection (identifies if content was written by AI)
-- Internal plagiarism detection (identifies repetitive patterns within the text)
+This API uses advanced AI models to evaluate essays for:
+
+- **AI Content Detection**: Identifies if the content was written by an AI.
+- **Internal Plagiarism Detection**: Detects repetitive patterns and similarities within the text.
+
+---
 
 ## Endpoints
 
-### `GET /health`
-Checks the API health status and model loading state.
+### `GET /health`
+Check the API health and model loading status.
 
 **Response:**
 ```json
@@ -28,70 +29,68 @@ Checks the API health status and model loading state.
 "hub_accessible": true,
 "pdf_processing": true
 }
-```
 
-### `POST /analyze`
+
+📄 POST /analyze
 Upload a PDF essay for comprehensive analysis.
 
-**Request:**
-- Content-Type: multipart/form-data
-- Body: file (PDF document)
+Request:
+
+Content-Type: multipart/form-data
+
+Body: file (PDF document)
 
 **Response:**
-```json
-{
-"ai_content_detection": {
-"label": "Human-written",
-"confidence": 92.5
-},
-"internal_plagiarism_score": 18.3,
-"max_similarity_between_chunks": 45.2,
-"chunks_analyzed": 12
-}
-```
-
-# Narrowed Response
-
-**Response:**
-```json
-{
-"ai_content_detection": {
-"confidence": 92.5
-},
-"internal_plagiarism_score": 18.3,
-}
-```
+
+{
+"analysis": {
+"ai_detection": {
+"human_written": 47.22,
+"ai_generated": 52.78
+},
+"plagiarism_score": 0
+},
+"status": "success"
+}
 
 ## Usage Examples
 
 ### Using cURL:
-```bash
+
 curl -X 'POST' \
-'https://yourusername-essay-grader-api.hf.space/analyze' \
+'https://essay-grader-detection-and-plagiarism-check.hf.space/analyze' \
 -H 'accept: application/json' \
 -H 'Content-Type: multipart/form-data' \
 -F 'file=@your_essay.pdf'
-```
 
 ### Using Python Requests:
-```python
+
 import requests
 
-url = "https://yourusername-essay-grader-api.hf.space/analyze"
+url = "https://essay-grader-detection-and-plagiarism-check.hf.space/analyze"
 files = {"file": open("your_essay.pdf", "rb")}
 
 response = requests.post(url, files=files)
 result = response.json()
 print(result)
-```
 
-## Technical Details
-
-This API uses:
-- RoBERTa-based models for AI content detection
-- Sentence transformers for semantic analysis
-- PyPDF2 for PDF text extraction
-
-The application is built with FastAPI and deployed on Hugging Face Spaces.
-
-```Created by: Christian Mpambira(BED-COM-22-20)```
+### Technical Stack
+
+AI Content Detection: RoBERTa-based custom fine-tuned model.
+
+Internal Plagiarism Detection: SentenceTransformers (semantic similarity analysis).
+
+PDF Text Extraction: PyPDF2.
+
+Framework: FastAPI.
+
+Deployment: Docker + Hugging Face Spaces.
+
+Credits:
+
+Created by: Christian Mpambira (BED-COM-22-20)
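The reshaped `/analyze` response above can be consumed like this — a minimal sketch that hard-codes the sample payload from the new README for illustration (a live call would use `requests.post` as in the usage example; the `summarize` helper is hypothetical, not part of the API):

```python
# Sample /analyze payload, copied from the README's "Response" example.
response_json = {
    "analysis": {
        "ai_detection": {"human_written": 47.22, "ai_generated": 52.78},
        "plagiarism_score": 0,
    },
    "status": "success",
}


def summarize(result: dict) -> str:
    """Turn an /analyze response into a one-line verdict."""
    det = result["analysis"]["ai_detection"]
    # Whichever percentage is larger decides the label.
    label = "AI-generated" if det["ai_generated"] > det["human_written"] else "Human-written"
    return f"{label} ({max(det.values()):.2f}% confidence)"


print(summarize(response_json))  # → AI-generated (52.78% confidence)
```

Note that the new schema nests detection scores under `analysis.ai_detection`, so clients written against the old flat `ai_content_detection` shape need updating.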
app.py CHANGED
@@ -1,4 +1,5 @@
 # app.py: AI Detection and Plagiarism Check API
+
 from fastapi import FastAPI, UploadFile, File, HTTPException, BackgroundTasks
 from fastapi.responses import JSONResponse
 from sentence_transformers import SentenceTransformer
@@ -27,7 +28,7 @@ app = FastAPI(
 description="API for AI Content Detection and Plagiarism Checking",
 version="1.0.0",
 docs_url="/docs",
-redoc_url=None
+redoc_url="/redoc"
 )
 
 # Configuration Constants
@@ -250,9 +251,9 @@ async def health_check() -> Dict[str, Any]:
 
 @app.get("/")
 async def root():
-"""Root endpoint"""
 return {
+"""Root endpoint"""
 "service": "Essay Analysis API",
 "version": "1.0.0",
 "endpoints": ["/analyze", "/health", "/reload-models"]
 }
verify_model.py CHANGED
@@ -1,3 +1,4 @@
+#Verify_model.py
 from transformers import AutoModelForSequenceClassification, AutoTokenizer
 
 model = AutoModelForSequenceClassification.from_pretrained(
yaml CHANGED
@@ -1,4 +1,4 @@
-# Readme metadata in your Space
+# Readme metadata file
 ---
 title: Essay Grader
 emoji: 🔍