This is a Multi-Agent Sprint Assistant that takes a product feature request and autonomously understands it, breaks it into tasks, assigns roles, generates code, writes tests, reviews its own output, and evaluates quality. The result simulates a real Agile sprint cycle using AI agents.
```mermaid
flowchart TD
    A(["User Input"])
    A --> |Feature Request| B["Defining LLM for Text Generation"]
    B --> C{"Defining Agents"}
    C --> D["Product Manager"]
    C --> E["Architect"]
    C --> F["Developer"]
    C --> G["Tester"]
    C --> H["Reviewer"]
    D --> |Managed By| I["LangGraph Orchestrator"]
    E --> |Managed By| I
    F --> |Managed By| I
    G --> |Managed By| I
    H --> |Managed By| I
    I --> J["Agent Evaluation"]
    J --> K["Final Sprint Report"]
```
- Code Complexity: The cyclomatic complexity of the generated code. It counts the independent paths created by branching constructs such as `if`/`else` and loops; lower values generally indicate simpler, higher-quality code.
- Test Coverage: Measures how many test cases the LLM generates; the more test cases, the higher the coverage.
- Semantic Similarity to Feature: Measures how similar the generated code is to the feature requested by the user, computed with an embedding model.
- LLM Reflection Score: The LLM rates the correctness, maintainability, and completeness of the generated code on a scale of 1-10.
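As an illustration of the Code Complexity metric, cyclomatic complexity can be approximated with Python's standard `ast` module by counting branch points. This is a simplified sketch under assumed node choices, not the project's actual evaluator (a library such as `radon` handles more cases).

```python
import ast

# Simplified cyclomatic-complexity estimate: start at 1 (one straight-line
# path) and add 1 for every branching construct found in the AST.
# The set of node types counted here is an assumption of this sketch.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for _ in range(n):
        pass
    return "ok"
"""
# One `if` plus one `for` on top of the base path: complexity 3.
```

Usage: `cyclomatic_complexity(sample)` returns 3, while a straight-line snippet like `"x = 1"` returns the baseline of 1.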