---
title: Mip Csv Analyser
emoji: πŸš€
colorFrom: yellow
colorTo: gray
sdk: streamlit
sdk_version: 1.28.1
app_file: app.py
pinned: false
---

# Batch Run Analyzer

A comprehensive Streamlit application for analyzing batch run results from CSV or XLSX files, visualizing pass/fail statistics, and comparing runs across different environments.

## Features

- Support for both CSV and XLSX file formats
- Multiple analysis modes:
  - **Multi**: Analyze multiple files from different environments
  - **Compare**: Compare two files to identify differences in scenario outcomes
  - **Weekly**: Generate weekly trend reports
  - **Multi-Env Compare**: Compare scenarios across multiple environments
- Detailed statistics on passing and failing scenarios
- Visual charts for failure counts by functional area
- Interactive filtering by functional area and status
- Time spent analysis per functional area
- Error Message analysis

## Setup and Installation

1. Clone this repository:
   ```bash
   git clone <repository-url>
   cd batch-run-csv-analyser
   ```

2. Install the required dependencies:
   ```bash
   pip install -r requirements.txt
   ```

3. Run the application:
   ```bash
   streamlit run app.py
   ```

## File Format Support

### CSV Format (Legacy)
The application still supports the original CSV format with the following columns:
- Functional area
- Scenario Name
- Start datetime
- End datetime
- Status
- Error Message

### XLSX Format (New)
The application now supports XLSX files with step-level data:
- Feature Name
- Scenario Name
- Step
- Result
- Time Stamp
- Duration (ms)
- Error Message

The application will automatically detect the file format based on the file extension and process it accordingly.
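
As a rough illustration of that detection step, a loader along these lines would work (the function name, aggregation rule, and status strings here are assumptions, not the application's actual code):

```python
import pandas as pd

def load_results(path: str) -> pd.DataFrame:
    """Load a results file, picking the parser from the file extension (illustrative sketch)."""
    if path.lower().endswith(".xlsx"):
        steps = pd.read_excel(path)  # step-level rows: Feature Name, Scenario Name, Step, Result, ...
        # Assumed rule: a scenario passes only if every one of its steps passed.
        return (
            steps.groupby(["Feature Name", "Scenario Name"])["Result"]
            .apply(lambda s: "PASSED" if s.str.upper().eq("PASSED").all() else "FAILED")
            .rename("Status")
            .reset_index()
        )
    # Legacy CSV files already have one row per scenario.
    return pd.read_csv(path)
```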

## Usage

1. Start the application with `streamlit run app.py`
2. Use the sidebar to select the desired analysis mode
3. Upload the necessary files based on the selected mode
4. Follow the on-screen instructions for filtering and analysis

## Analysis Modes

### Multi Mode
Upload files from multiple environments for individual analysis. View statistics, filter by functional area, and see charts of failing scenarios.
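
For a sense of what the per-file chart involves, failure counts by functional area can be derived with a simple group-by (the file name and status values here follow the CSV layout above and are purely illustrative):

```python
import pandas as pd
import streamlit as st

df = pd.read_csv("batch_run.csv")  # hypothetical results file in the legacy CSV layout
failed = df[df["Status"].str.upper() == "FAILED"]
failures_by_area = failed.groupby("Functional area").size().sort_values(ascending=False)

st.subheader("Failures by functional area")
st.bar_chart(failures_by_area)
```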

### Compare Mode
Upload two files to compare scenario statuses between them. The application will identify:
- Consistent failures (failed in both files)
- New failures (passed in the older file, failed in the newer)
- New passes (failed in the older file, passed in the newer)
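
A minimal sketch of that comparison, assuming both files have already been loaded into DataFrames with `Scenario Name` and `Status` columns (names and status values are illustrative):

```python
import pandas as pd

def compare_runs(older: pd.DataFrame, newer: pd.DataFrame) -> pd.DataFrame:
    """Label each shared scenario by how its status changed between two runs (sketch)."""
    merged = older.merge(newer, on="Scenario Name", suffixes=("_old", "_new"))

    def classify(row: pd.Series) -> str:
        old_failed = row["Status_old"].upper() == "FAILED"
        new_failed = row["Status_new"].upper() == "FAILED"
        if old_failed and new_failed:
            return "Consistent failure"
        if new_failed:
            return "New failure"
        if old_failed:
            return "New pass"
        return "Consistent pass"

    merged["Change"] = merged.apply(classify, axis=1)
    return merged
```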

### Weekly Mode
Upload files from multiple dates to see trend reports. Filter by environment and functional area, and view detailed statistics for each day.

### Multi-Env Compare Mode
Compare scenarios across multiple environments to identify inconsistencies in test coverage.

## Notes

- Filename format matters for date extraction in Weekly mode: the application tries patterns such as `name_YYYYMMDD_HHMMSS`, `name_YYYYMMDD`, or any 8-digit sequence that resembles a date (see the sketch below).
- For XLSX files, all steps within a scenario are aggregated to determine the overall scenario status.
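
The date extraction described in the first note could be approximated like this (the exact patterns and their precedence in the app may differ):

```python
import re
from datetime import datetime
from typing import Optional

def extract_date(filename: str) -> Optional[datetime]:
    """Pull a YYYYMMDD date out of a results filename, if one is present (illustrative sketch)."""
    # Covers name_YYYYMMDD_HHMMSS, name_YYYYMMDD, and any bare 8-digit run.
    match = re.search(r"(\d{8})", filename)
    if not match:
        return None
    try:
        return datetime.strptime(match.group(1), "%Y%m%d")
    except ValueError:
        return None  # 8 digits that do not form a valid calendar date

print(extract_date("uat_20240115_093000.xlsx"))  # 2024-01-15 00:00:00
```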

## Troubleshooting

If you encounter issues:
1. Ensure the file format follows the expected structure
2. Check the logs for specific error messages
3. Try processing smaller files first to verify functionality

# Jira Integration for Test Analysis

This application provides a Streamlit interface for analyzing test results and creating Jira tasks for failed scenarios.

## Setup

1. Clone the repository
2. Install dependencies:
```bash
pip install -r requirements.txt
```

3. Create a `.env` file in the root directory with the following variables:
```env
JIRA_SERVER=your_jira_server_url
GROQ_API_KEY=your_groq_api_key
```

## Environment Variables

- `JIRA_SERVER`: Your Jira server URL (e.g., https://jira.yourdomain.com)
- `GROQ_API_KEY`: Your Groq API key for AI functionality
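
A minimal sketch of how these variables might be loaded at startup, assuming `python-dotenv` is available via requirements.txt (not a copy of the application's actual code):

```python
import os

from dotenv import load_dotenv

# Locally this reads the .env file; on Hugging Face Spaces the variables
# come from the Space's settings instead.
load_dotenv()

JIRA_SERVER = os.getenv("JIRA_SERVER")
GROQ_API_KEY = os.getenv("GROQ_API_KEY")

if not JIRA_SERVER or not GROQ_API_KEY:
    raise RuntimeError("JIRA_SERVER and GROQ_API_KEY must be set")
```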

## Running the Application

```bash
streamlit run jira_integration.py
```

## Features

- Jira authentication and session management
- Test scenario analysis
- Automated Jira task creation (see the sketch after this list)
- Sprint statistics tracking
- Functional area mapping
- Customer field mapping
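
As an example of the task-creation feature listed above, the `jira` package can create an issue roughly like this (the project key, issue type, and credential handling are placeholders, not the app's actual field mapping):

```python
from jira import JIRA

def create_failure_task(server: str, user: str, token: str,
                        scenario: str, error_message: str):
    """Create a Jira task for a failed scenario (illustrative sketch)."""
    client = JIRA(server=server, basic_auth=(user, token))
    return client.create_issue(fields={
        "project": {"key": "TEST"},        # placeholder project key
        "issuetype": {"name": "Task"},
        "summary": f"Failed scenario: {scenario}",
        "description": f"Error message:\n{error_message}",
    })
```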

## Deployment

This application is designed to be deployed on Hugging Face Spaces. When deploying:

1. Add the environment variables in the Hugging Face Spaces settings
2. Ensure all dependencies are listed in requirements.txt
3. The application will automatically use the environment variables from Hugging Face Spaces

## Security Notes

- Never commit the `.env` file to version control
- Keep your Jira credentials secure
- Use environment variables for all sensitive information