Akarsh1/Autonomous-Software-Engineering-Sprint-Assistant
Autonomous-Software-Engineering-Sprint-Assistant

This is a multi-agent sprint assistant that takes a product feature request and autonomously understands it, breaks it into tasks, assigns roles, generates code, writes tests, reviews its own output, and evaluates quality, simulating a real Agile sprint cycle with AI agents.

Project Architecture

```mermaid
flowchart TD;
        A(["User Input"])
        A --> |Feature Request| B["Define LLM for Text Generation"]
        B --> C{"Defining Agents"}
        C --> D["Product Manager"]
        C --> E["Architect"]
        C --> F["Developer"]
        C --> G["Tester"]
        C --> H["Reviewer"]
        D --> |Managed By| I["LangGraph Orchestrator"]
        E --> |Managed By| I
        F --> |Managed By| I
        G --> |Managed By| I
        H --> |Managed By| I
        I --> J["Agent Evaluation"]
        J --> K["Final Sprint Report"]
```

Evaluation Metrics

  1. Code Complexity: Cyclomatic complexity of the generated code, measured from the number of branch points such as if/else statements and loops; lower is better.
  2. Test Coverage: How many test cases the LLM generates; more test cases mean higher coverage.
  3. Semantic Similarity to Feature: Similarity between the generated code and the user's feature request, computed with an embedding model.
  4. LLM Reflection Score: Correctness, maintainability, and completeness of the generated code, rated by the LLM on a scale of 1-10.
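Metric 1 can be approximated with the standard-library `ast` module: cyclomatic complexity is roughly 1 plus the number of branch points. This is a sketch of the idea, not necessarily how the project computes it (a dedicated tool such as radon may be used instead).

```python
import ast

# Branch-introducing AST nodes: each adds one path through the code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source):
    """Estimate cyclomatic complexity as 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "neg"
    elif x == 0:
        return "zero"
    for _ in range(x):
        pass
    return "pos"
"""

# if + elif + for = 3 branch points, so the estimate is 4.
score = cyclomatic_complexity(sample)
```

The `elif` counts as a second `ast.If` node nested in the first one's `orelse`, which is why the sample scores 4 rather than 3.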