diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 0000000..2aaf0a3 --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,62 @@ +# SAP CodeJam: Code-Based AI Agents + +Hands-on workshop for building multi-agent AI systems on SAP BTP using **CrewAI** and **LiteLLM** connected to **SAP Generative AI Hub** (AI Core). The scenario: investigate an art heist by orchestrating specialized agents that analyse evidence, appraise stolen items, and identify the culprit. + +## Architecture + +Three-agent sequential crew in `project/Python/`: + +| Agent | Tool | Purpose | +|---|---|---| +| Appraiser | `call_rpt1()` | Predict item categories & insurance values via SAP-RPT-1 ML model | +| Evidence Analyst | `call_grounding_service()` | RAG queries over evidence documents via SAP Grounding Service | +| Lead Detective | _(none)_ | Synthesise findings and name the culprit | + +Key files in `project/Python/solution/`: +- `investigator_crew.py` — `@CrewBase` class; agents and tasks defined with `@agent`, `@task`, `@crew` decorators; tools use `@tool("name")` +- `main.py` — entry point; calls `crew().kickoff(inputs={...})` +- `rpt_client.py` — OAuth2 client for SAP-RPT-1 (client-credentials grant, Bearer auth) +- `payload.py` — structured art-item data with `[PREDICT]` placeholders for RPT-1 +- `config/agents.yaml`, `config/tasks.yaml` — YAML definitions (method names must match decorator names) + +Evidence documents (plain text, loaded into the Grounding Service pipeline) are in `exercises/data/documents/`. 
+ +## Build & Run + +```bash +# Activate virtual env (Windows) +.\env\Scripts\Activate.ps1 + +# Install dependencies (only needed once) +pip install litellm crewai python-dotenv + +# Run the crew +python main.py +``` + +Requires a `.env` file in `project/Python/starter-project/` (copy structure from the exercise docs): + +``` +AICORE_CLIENT_ID= +AICORE_CLIENT_SECRET= +AICORE_AUTH_URL= +RPT1_DEPLOYMENT_URL= +AICORE_RESOURCE_GROUP= +AICORE_BASE_URL= +``` + +## Conventions + +- **CrewAI YAML config**: agent/task names in `agents.yaml` / `tasks.yaml` must exactly match the Python method names decorated with `@agent` / `@task`. +- **Tool pattern**: tools are plain functions decorated with `@tool("Descriptive Name")`. Return error messages as strings so the LLM can handle failures gracefully. +- **LLM model strings**: use `sap/` format (e.g. `sap/gpt-4o`) matching deployments in SAP AI Launchpad. +- **Process**: always `Process.sequential` — tasks pass outputs as context to the next task in order. +- **RPT-1 payload**: `[PREDICT]` string is the placeholder for values to be inferred; schema (dtype, categories, value ranges) must be exact. + +## Pitfalls + +- **Hardcoded Grounding pipeline ID** in `call_grounding_service()` — replace with your own vector DB pipeline ID from SAP AI Launchpad before running. +- **Token refresh**: `RPT1Client` fetches the OAuth token once at init; long-running crews may hit expiry — re-instantiate if needed. +- **No `.env` validation** at startup — credential errors only surface on the first API call. +- **YAML / decorator name mismatch** causes silent CrewAI failures with no clear error message. +- **Grounding pipeline must be pre-loaded** with the evidence documents; empty pipelines return no results and agents will hallucinate. 
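As a quick illustration of the `[PREDICT]` convention (the field names below are invented for illustration; the real payload schema must match the deployed RPT-1 model exactly):

```python
# Hypothetical row shape; only the "[PREDICT]" marker itself is the real convention.
payload_row = {
    "item_name": "Landscape oil painting, 19th century",
    "category": "[PREDICT]",         # to be inferred by SAP-RPT-1
    "insurance_value": "[PREDICT]",  # to be inferred by SAP-RPT-1
}

def predicted_fields(row: dict) -> list:
    """List the columns the model is asked to fill in."""
    return [key for key, value in row.items() if value == "[PREDICT]"]

print(predicted_fields(payload_row))  # → ['category', 'insurance_value']
```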
diff --git a/.github/instructions/python-sap-tools.instructions.md b/.github/instructions/python-sap-tools.instructions.md new file mode 100644 index 0000000..987aef8 --- /dev/null +++ b/.github/instructions/python-sap-tools.instructions.md @@ -0,0 +1,125 @@ +--- +description: "Use when writing Python code for SAP AI Core integrations: CrewAI agents, LiteLLM models, SAP-RPT-1 API calls, Grounding Service queries, OAuth2 token handling. Covers tool patterns, model string format, error handling, and YAML/decorator conventions." +applyTo: "**/*.py" +--- + +# Python SAP Tools Conventions + +## LLM Model Strings + +Use the `sap/` format matching the deployment name in SAP AI Launchpad: + +```python +llm = LLM(model="sap/gpt-4o") +``` + +Never use bare provider strings like `"gpt-4o"` or `"openai/gpt-4o"` for SAP AI Core deployments. + +## CrewAI Agent & Task Pattern + +Agents and their tasks are defined in YAML; the Python method names **must exactly match** the YAML keys: + +```python +@CrewBase +class MyCrew(): + agents_config = "config/agents.yaml" + tasks_config = "config/tasks.yaml" + + @agent + def appraiser_agent(self) -> Agent: # key in agents.yaml: appraiser_agent + return Agent(config=self.agents_config["appraiser_agent"], tools=[call_rpt1]) + + @task + def appraisal_task(self) -> Task: # key in tasks.yaml: appraisal_task + return Task(config=self.tasks_config["appraisal_task"]) + + @crew + def crew(self) -> Crew: + return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential) +``` + +Always use `Process.sequential`; task outputs are automatically passed as context to subsequent tasks. + +## Tool Pattern + +Tools are module-level functions decorated with `@tool`. 
Return error messages as plain strings so the LLM can recover gracefully — never raise exceptions out of a tool:
+
+```python
+import json
+
+from crewai.tools import tool
+
+# rpt1_client is assumed to be an already-initialised RPT1Client instance
+@tool("call_rpt1")
+def call_rpt1(payload: dict) -> str:
+    """Docstring is the tool description shown to the agent."""
+    try:
+        response = rpt1_client.post_request(json_payload=payload)
+        if response.status_code == 200:
+            return json.dumps(response.json(), indent=2)
+        return f"Error {response.status_code}: {response.text}"
+    except Exception as e:
+        return f"Error calling RPT-1: {str(e)}"
+```
+
+## OAuth2 Client-Credentials Pattern (SAP AI Core)
+
+```python
+import os, requests
+
+data = {
+    "grant_type": "client_credentials",
+    "client_id": os.getenv("AICORE_CLIENT_ID"),
+    "client_secret": os.getenv("AICORE_CLIENT_SECRET"),
+}
+headers = {"Content-Type": "application/x-www-form-urlencoded"}
+resp = requests.post(os.getenv("AICORE_AUTH_URL"), data=data, headers=headers, timeout=30)
+resp.raise_for_status()
+token = resp.json()["access_token"]
+```
+
+Subsequent requests use `"Authorization": f"Bearer {token}"` and `"AI-Resource-Group": resource_group`.
+
+> **Token expiry**: the token is fetched once at init. For long-running crews, re-instantiate the client to refresh.
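The token-expiry caveat can be handled with a small wrapper along these lines. This is a sketch with hypothetical names (`RefreshingToken` is not part of the workshop code); `fetch_token` would wrap the client-credentials POST shown above and return the `access_token` together with the `expires_in` value from the token response:

```python
import time

class RefreshingToken:
    """Hypothetical helper: caches the access token and refreshes it
    shortly before the expiry reported by the auth server."""

    def __init__(self, fetch_token, skew_seconds=60):
        self._fetch_token = fetch_token  # callable returning (token, expires_in_seconds)
        self._skew = skew_seconds
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh when unset, or within `skew_seconds` of the recorded expiry
        if self._token is None or time.time() >= self._expires_at - self._skew:
            self._token, expires_in = self._fetch_token()
            self._expires_at = time.time() + expires_in
        return self._token
```

An `RPT1Client` variant could then call `token.get()` before each request instead of caching the token once at init.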
+
+## Grounding Service Query Pattern
+
+```python
+import json
+
+from crewai.tools import tool
+from gen_ai_hub.document_grounding.client import RetrievalAPIClient
+from gen_ai_hub.document_grounding.models.retrieval import RetrievalSearchInput, RetrievalSearchFilter
+from gen_ai_hub.orchestration.models.document_grounding import DataRepositoryType
+
+@tool("call_grounding_service")
+def call_grounding_service(user_question: str) -> str:
+    """Query evidence documents via SAP Grounding Service."""
+    client = RetrievalAPIClient()
+    search_filter = RetrievalSearchFilter(
+        id="vector",
+        dataRepositoryType=DataRepositoryType.VECTOR.value,
+        dataRepositories=[""],  # Replace with pipeline ID from SAP AI Launchpad
+        searchConfiguration={"maxChunkCount": 5},
+    )
+    search_input = RetrievalSearchInput(query=user_question, filters=[search_filter])
+    response = client.search(search_input)
+    return json.dumps(response.model_dump(), indent=2)
+```
+
+Replace the empty string in `dataRepositories` with the vector DB pipeline ID from SAP AI Launchpad before running.
+
+## RPT-1 Payload
+
+Use the string `"[PREDICT]"` as the placeholder for values the model should infer. The schema (dtype, categories, value ranges) must match the deployed model exactly. Pass the payload dict directly from `main.py` inputs via the crew kickoff:
+
+```python
+crew.kickoff(inputs={"payload": payload, "query": "Who stole the art?"})
+```
+
+## Environment Variables
+
+Load `.env` once at the top of the entry-point module, before instantiating any client:
+
+```python
+from pathlib import Path
+from dotenv import load_dotenv
+
+load_dotenv(dotenv_path=Path(__file__).parent / ".env")
+```
+
+Required keys: `AICORE_CLIENT_ID`, `AICORE_CLIENT_SECRET`, `AICORE_AUTH_URL`, `RPT1_DEPLOYMENT_URL`, `AICORE_RESOURCE_GROUP`, `AICORE_BASE_URL`.
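To surface credential problems at startup instead of on the first API call (one of the pitfalls noted in `copilot-instructions.md`), a fail-fast check can be added to the entry point. This is a sketch; the helper name is ours, and `REQUIRED_KEYS` mirrors the list above:

```python
import os

# The six keys listed above; check them up front so a missing credential
# fails at startup rather than deep inside the first tool call.
REQUIRED_KEYS = [
    "AICORE_CLIENT_ID", "AICORE_CLIENT_SECRET", "AICORE_AUTH_URL",
    "RPT1_DEPLOYMENT_URL", "AICORE_RESOURCE_GROUP", "AICORE_BASE_URL",
]

def missing_env_keys(env=os.environ) -> list:
    """Return the required keys that are unset or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

# Example startup guard (after load_dotenv):
# missing = missing_env_keys()
# if missing:
#     raise SystemExit(f"Missing .env values: {', '.join(missing)}")
```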
diff --git a/exercises/Python/01-setup-dev-space.md b/exercises/Python/01-setup-dev-space.md index 7591836..92ba160 100644 --- a/exercises/Python/01-setup-dev-space.md +++ b/exercises/Python/01-setup-dev-space.md @@ -88,7 +88,14 @@ RPT1_DEPLOYMENT_URL="https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v python3 --version ``` -> ⚠️ **Important**: CrewAI requires Python 3.10 or newer (up to 3.13). If your Python version is 3.9 or older, you'll need to install a compatible version. On macOS, you can use [Homebrew](https://brew.sh/) to install Python 3.11: `brew install python@3.11`, then use `python3.11` instead of `python3` in the commands below. +> ⚠️ **Important**: CrewAI requires Python 3.10 or newer (up to 3.13). If your Python version is 3.9 or older, install a compatible version first. +> +> - **macOS**: Use [Homebrew](https://brew.sh/) to install Python 3.11: `brew install python@3.11`, then use `python3.11` in the commands below. +> - **Linux**: Install Python 3.11 with your distro package manager, for example: +> - Ubuntu/Debian: `sudo apt update && sudo apt install python3.11 python3.11-venv` +> - Fedora/RHEL: `sudo dnf install python3.11` +> Then use `python3.11` in the commands below. +> - **Windows**: Install Python 3.11 from the [official Python downloads page](https://www.python.org/downloads/windows/) and make sure **Add python.exe to PATH** is enabled during installation. Then use `python` (or `py -3.11`) in the commands below. 
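If you prefer to check the requirement programmatically, a small sketch mirroring the supported range (3.10 up to 3.13) looks like this; save it to a file and run it with the interpreter you plan to use:

```python
import sys

def crewai_supported(version_info=sys.version_info) -> bool:
    """True when the interpreter falls in CrewAI's supported range (3.10 through 3.13)."""
    return (3, 10) <= version_info[:2] <= (3, 13)

print(sys.version.split()[0], "supported:", crewai_supported())
```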
👉 Create a virtual environment using the following command: @@ -96,12 +103,43 @@ python3 --version python3 -m venv ~/projects/codejam-code-based-agents/env --upgrade-deps ``` +Or use the variant that matches your OS/shell if not in BAS: + +```bash +# macOS / Linux +python3 -m venv ~/projects/codejam-code-based-agents/env --upgrade-deps +``` + +```powershell +# Windows (PowerShell) +python -m venv .\env --upgrade-deps +``` + 👉 Activate the `env` virtual environment like this and make sure it is activated: ```bash source ~/projects/codejam-code-based-agents/env/bin/activate ``` +Use the activation command for your environment: + +```bash +# macOS / Linux +source ~/projects/codejam-code-based-agents/env/bin/activate +``` + +```powershell +# Windows (PowerShell) +.\env\Scripts\Activate.ps1 +``` + +```cmd +:: Windows (Command Prompt) +.\env\Scripts\activate.bat +``` + +> ℹ️ If PowerShell blocks script execution, run `Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass` and then run `./env/Scripts/Activate.ps1` again. + ![venv](/exercises/data/images/venv.png) 👉 Install LiteLLM, CrewAI, and python-dotenv using the following `pip install` commands. @@ -114,6 +152,6 @@ pip install litellm crewai python-dotenv ![bas-message](/exercises/data/images/virtual-env-python-bas-warning.png) -## Let's start coding! +## Let's start coding [Next exercise](02-build-a-basic-agent.md) diff --git a/exercises/Python/02-build-a-basic-agent.md b/exercises/Python/02-build-a-basic-agent.md index 0bdc7fc..59c3e04 100644 --- a/exercises/Python/02-build-a-basic-agent.md +++ b/exercises/Python/02-build-a-basic-agent.md @@ -109,10 +109,40 @@ if __name__ == "__main__": 👉 Execute the crew with the basic agent: -> ☝️ Make sure you're in the repository root directory (e.g., `codejam-code-based-agents-1`) when running this command. If you're already in the `starter-project` folder, use `python basic_agent.py` instead. 
+> ☝️ Make sure you're in the repository root directory (e.g., `codejam-code-based-agents-1`) when running this command. If you're already in the `starter-project` folder, use the appropriate command for your OS.
+
+**From repository root:**
 
 ```bash
-python project/Python/starter-project/basic_agent.py
+# macOS / Linux / BAS
+python3 ./project/Python/starter-project/basic_agent.py
+```
+
+```powershell
+# Windows (PowerShell)
+python .\project\Python\starter-project\basic_agent.py
+```
+
+```cmd
+:: Windows (Command Prompt)
+python .\project\Python\starter-project\basic_agent.py
+```
+
+**From starter-project folder:**
+
+```bash
+# macOS / Linux
+python3 basic_agent.py
+```
+
+```powershell
+# Windows (PowerShell)
+python basic_agent.py
+```
+
+```cmd
+:: Windows (Command Prompt)
+python basic_agent.py
 ```
 
 You should see:
@@ -136,7 +166,7 @@ You created and ran a working AI agent that:
 
 The basic workflow is:
 
-```
+```text
 Task → Agent (Role/Goal/Backstory) → LLM Processing (GPT-4o) → Response → Output
 ```
 
@@ -167,7 +197,25 @@ In the following exercises, you will:
 
 **Issue**: `ModuleNotFoundError: No module named 'crewai'`
 
-- **Solution**: Ensure you're in the correct Python environment: `source venv/bin/activate` and run `pip install crewai litellm`
+- **Solution**: Ensure you're in the correct Python environment and run the install command.
+ +```bash +# macOS / Linux - Activate environment and install +source ~/projects/codejam-code-based-agents/env/bin/activate +pip install crewai litellm +``` + +```powershell +# Windows (PowerShell) - Activate environment and install +.\env\Scripts\Activate.ps1 +pip install crewai litellm +``` + +```cmd +# Windows (Command Prompt) - Activate environment and install +.\env\Scripts\activate.bat +pip install crewai litellm +``` --- diff --git a/exercises/Python/03-add-your-first-tool.md b/exercises/Python/03-add-your-first-tool.md index f1e9f19..616ab0f 100644 --- a/exercises/Python/03-add-your-first-tool.md +++ b/exercises/Python/03-add-your-first-tool.md @@ -289,10 +289,40 @@ if __name__ == "__main__": 👉 Run your crew to test it. -> ☝️ Make sure you're in the repository root directory (e.g., `codejam-code-based-agents`) when running this command. If you're already in the `starter-project` folder, use `python basic_agent.py` instead. +> ☝️ Make sure you run from the starter-project folder, or use the full path from the repository root. + +**From repository root:** + +```bash +# macOS / Linux +python3 ./project/Python/starter-project/basic_agent.py +``` + +```powershell +# Windows (PowerShell) +python .\project\Python\starter-project\basic_agent.py +``` + +```cmd +# Windows (Command Prompt) +python .\project\Python\starter-project\basic_agent.py +``` + +**From starter-project folder:** ```bash -python project/Python/starter-project/basic_agent.py +# macOS / Linux +python3 basic_agent.py +``` + +```powershell +# Windows (PowerShell) +python basic_agent.py +``` + +```cmd +# Windows (Command Prompt) +python basic_agent.py ``` ☝️ You added an input variable to your agent but the agent is still not using a tool. Let's build the actual tool next. @@ -541,10 +571,40 @@ RPT1_DEPLOYMENT_URL="https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v 👉 Run your crew to test it. 
-> ☝️ Make sure you're in the repository root directory (e.g., `codejam-code-based-agents`) when running this command. If you're already in the `starter-project` folder, use `python basic_agent.py` instead. +> ☝️ Make sure you run from the starter-project folder, or use the full path from the repository root. + +**From starter-project folder:** ```bash -python project/Python/starter-project/basic_agent.py +# macOS / Linux +python3 basic_agent.py +``` + +```powershell +# Windows (PowerShell) +python basic_agent.py +``` + +```cmd +# Windows (Command Prompt) +python basic_agent.py +``` + +**From repository root:** + +```bash +# macOS / Linux +python3 ./project/Python/starter-project/basic_agent.py +``` + +```powershell +# Windows (PowerShell) +python .\project\Python\starter-project\basic_agent.py +``` + +```cmd +# Windows (Command Prompt) +python .\project\Python\starter-project\basic_agent.py ``` 👉 Understand the output of the agent using SAP-RPT-1 as a tool. @@ -565,7 +625,7 @@ You extended your agent with: ### The Tool Flow -``` +```text Agent Task → LLM Reasoning → Tool Decision → Tool Execution → Result → Agent Processing → LLM Reasoning → Output ``` diff --git a/exercises/Python/04-building-multi-agent-system.md b/exercises/Python/04-building-multi-agent-system.md index 8c86be1..0a9a8c1 100644 --- a/exercises/Python/04-building-multi-agent-system.md +++ b/exercises/Python/04-building-multi-agent-system.md @@ -218,7 +218,7 @@ if __name__ == "__main__": ### Step 1: Adding a New Agent -We also have a lot of evidence in our evidence database. You can check the documents that are part of the evidence [here](exercises/data/documents). +We also have a lot of evidence in our evidence database. You can check the documents that are part of the evidence [here](/exercises/data/documents). To analyze the evidence and find all the information on our three suspects: @@ -294,8 +294,38 @@ analyze_evidence_task: 👉 Run your crew to test it. 
+**From repository root:**
+
 ```bash
-python project/Python/starter-project/main.py
+# macOS / Linux
+python3 ./project/Python/starter-project/main.py
+```
+
+```powershell
+# Windows (PowerShell)
+python .\project\Python\starter-project\main.py
+```
+
+```cmd
+:: Windows (Command Prompt)
+python .\project\Python\starter-project\main.py
+```
+
+**From starter-project folder:**
+
+```bash
+# macOS / Linux
+python3 main.py
+```
+
+```powershell
+# Windows (PowerShell)
+python main.py
+```
+
+```cmd
+:: Windows (Command Prompt)
+python main.py
 ```
 
 > 💡 **Note:** The Evidence Analyst currently uses `call_rpt1` as a placeholder tool to make the code functional. This isn't the right tool for evidence analysis. You'll replace it with the `call_grounding_service` tool in Exercise 05 to give the agent proper access to evidence documents.
diff --git a/exercises/Python/05-add-the-grounding-service.md b/exercises/Python/05-add-the-grounding-service.md
index d478bc3..218fa0c 100644
--- a/exercises/Python/05-add-the-grounding-service.md
+++ b/exercises/Python/05-add-the-grounding-service.md
@@ -78,70 +78,36 @@ Before your agent can search documents, they must be prepared:
 
 When your agent asks a question, here's what happens:
 
+```mermaid
+flowchart TD
+    A["Agent Question: #quot;What evidence exists about Marcus Chen?#quot;"]
+    B["1. Convert Query to Vector Embedding<br/>#quot;Marcus Chen evidence#quot; → [0.23, -0.45, 0.87, ...]"]
+    C["2. Search Vector Database (Similarity Search)<br/>Find document chunks with similar vectors<br/>(Cosine similarity scores 0.0 - 1.0)"]
+    D["3. Retrieve Top 5 Most Relevant Chunks<br/>✓ MARCUS_TERMINATION_LETTER.txt (score: 0.92)<br/>✓ SECURITY_LOG.txt - Marcus entries (score: 0.88)<br/>✓ BANK_RECORDS.txt - Marcus account (score: 0.85)<br/>✓ MARCUS_EXIT_LOG.txt (score: 0.83)<br/>✓ PHONE_RECORDS.txt - Marcus calls (score: 0.79)"]
+    E["Return to Agent"]
+
+    A --> B --> C --> D --> E
 ```
-┌─────────────────────────────────────────────────────────────┐
-│ Agent Question: "What evidence exists about Marcus Chen?" │
-└─────────────────────────┬───────────────────────────────────┘
- ↓
-┌─────────────────────────────────────────────────────────────┐
-│ 1. Convert Query to Vector Embedding │
-│ "Marcus Chen evidence" → [0.23, -0.45, 0.87, ...] │
-└─────────────────────────┬───────────────────────────────────┘
- ↓
-┌─────────────────────────────────────────────────────────────┐
-│ 2. Search Vector Database (Similarity Search) │
-│ Find document chunks with similar vectors │
-│ (Cosine similarity scores 0.0 - 1.0) │
-└─────────────────────────┬───────────────────────────────────┘
- ↓
-┌─────────────────────────────────────────────────────────────┐
-│ 3. Retrieve Top 5 Most Relevant Chunks │
-│ ✓ MARCUS_TERMINATION_LETTER.txt (score: 0.92) │
-│ ✓ SECURITY_LOG.txt - Marcus entries (score: 0.88) │
-│ ✓ BANK_RECORDS.txt - Marcus account (score: 0.85) │
-│ ✓ MARCUS_EXIT_LOG.txt (score: 0.83) │
-│ ✓ PHONE_RECORDS.txt - Marcus calls (score: 0.79) │
-└─────────────────────────┬───────────────────────────────────┘
- ↓
- Return to Agent
-```
+
+If Mermaid doesn't render in your viewer, see the plain-text version in [ASCII Fallbacks](#ascii-fallbacks).
 
 > ⚡ **Speed:** Vector search is incredibly fast—searches millions of documents!
 
 #### **Phase 3: Context-Enhanced Response**
 
-```
-┌─────────────────────────────────────────────────┐
-│ Retrieved Document Chunks (with text) │
-│ ──────────────────────────────────────── │
-│ Chunk 1: "Marcus Chen was terminated on..." │
-│ Chunk 2: "Security logs show Marcus accessed..."│
-│ Chunk 3: "Bank records indicate deposits of..."
│
-└──────────────────┬──────────────────────────────┘
- ↓
- Pass as Context to LLM
- ↓
-┌─────────────────────────────────────────────────┐
-│ LLM Prompt: │
-│ "Based ONLY on these documents, answer: │
-│ What evidence exists about Marcus Chen? │
-│ │
-│ Documents: │
-│ [chunks inserted here]" │
-└──────────────────┬──────────────────────────────┘
- ↓
-┌─────────────────────────────────────────────────┐
-│ LLM generates answer grounded in facts: │
-│ │
-│ "According to MARCUS_TERMINATION_LETTER.txt, │
-│ Marcus was fired on 2024-01-15 for │
-│ 'unauthorized access.' SECURITY_LOG.txt shows │
-│ he entered secured areas 3 times after hours..." │
-└──────────────────┬──────────────────────────────┘
- ↓
- Agent receives factual response
+```mermaid
+flowchart TD
+    A["Retrieved Document Chunks (with text)<br/>Chunk 1: #quot;Marcus Chen was terminated on...#quot;<br/>Chunk 2: #quot;Security logs show Marcus accessed...#quot;<br/>Chunk 3: #quot;Bank records indicate deposits of...#quot;"]
+    B["Pass as Context to LLM"]
+    C["LLM Prompt:<br/>#quot;Based ONLY on these documents, answer:<br/>What evidence exists about Marcus Chen?<br/>Documents:<br/>[chunks inserted here]#quot;"]
+    D["LLM generates answer grounded in facts:<br/>#quot;According to MARCUS_TERMINATION_LETTER.txt,<br/>Marcus was fired on 2024-01-15 for<br/>'unauthorized access.' SECURITY_LOG.txt shows<br/>he entered secured areas 3 times after hours...#quot;"]
+    E["Agent receives factual response"]
+
+    A --> B --> C --> D --> E
 ```
 
+If Mermaid doesn't render in your viewer, see the plain-text version in [ASCII Fallbacks](#ascii-fallbacks).
+
 > 🎯 **Key Insight:** The LLM can **only** use information from the retrieved chunks. It can't make things up which is called hallucination!
 
 ### The Grounding Pipeline
@@ -421,8 +387,38 @@ def evidence_analyst_agent(self) -> Agent:
 
 👉 Run your crew to test the grounding service!
 
+**From repository root:**
+
+```bash
+# macOS / Linux
+python3 ./project/Python/starter-project/main.py
+```
+
+```powershell
+# Windows (PowerShell)
+python .\project\Python\starter-project\main.py
+```
+
+```cmd
+:: Windows (Command Prompt)
+python .\project\Python\starter-project\main.py
+```
+
+**From starter-project folder:**
+
 ```bash
-python project/Python/starter-project/main.py
+# macOS / Linux
+python3 main.py
+```
+
+```powershell
+# Windows (PowerShell)
+python main.py
+```
+
+```cmd
+:: Windows (Command Prompt)
+python main.py
 ```
 
 Your Evidence Analyst should now search through actual evidence documents and cite specific sources (like "MARCUS_TERMINATION_LETTER.txt") instead of making up information!
@@ -443,7 +439,7 @@ You integrated a grounding service tool with your agent that:
 
 ### The Grounding Flow
 
-```
+```text
 User Query → LLM Reasoning → Agent Processing → Grounding Tool Call → Vector Search → Document Chunks → Agent Processing → LLM Reasoning → Output
 ```
 
@@ -514,6 +510,74 @@ In the following exercises, you will:
 
 ---
 
+## ASCII Fallbacks
+
+### Phase 2: Query Processing (ASCII)
+
+```text
+┌─────────────────────────────────────────────────────────────┐
+│ Agent Question: "What evidence exists about Marcus Chen?" │
+└─────────────────────────┬───────────────────────────────────┘
+ ↓
+┌─────────────────────────────────────────────────────────────┐
+│ 1. Convert Query to Vector Embedding │
+│ "Marcus Chen evidence" → [0.23, -0.45, 0.87, ...]
│ +└─────────────────────────┬───────────────────────────────────┘ + ↓ +┌─────────────────────────────────────────────────────────────┐ +│ 2. Search Vector Database (Similarity Search) │ +│ Find document chunks with similar vectors │ +│ (Cosine similarity scores 0.0 - 1.0) │ +└─────────────────────────┬───────────────────────────────────┘ + ↓ +┌─────────────────────────────────────────────────────────────┐ +│ 3. Retrieve Top 5 Most Relevant Chunks │ +│ ✓ MARCUS_TERMINATION_LETTER.txt (score: 0.92) │ +│ ✓ SECURITY_LOG.txt - Marcus entries (score: 0.88) │ +│ ✓ BANK_RECORDS.txt - Marcus account (score: 0.85) │ +│ ✓ MARCUS_EXIT_LOG.txt (score: 0.83) │ +│ ✓ PHONE_RECORDS.txt - Marcus calls (score: 0.79) │ +└─────────────────────────┬───────────────────────────────────┘ + ↓ + Return to Agent +``` + +### Phase 3: Context-Enhanced Response (ASCII) + +```text +┌─────────────────────────────────────────────────┐ +│ Retrieved Document Chunks (with text) │ +│ ──────────────────────────────────────── │ +│ Chunk 1: "Marcus Chen was terminated on..." │ +│ Chunk 2: "Security logs show Marcus accessed..."│ +│ Chunk 3: "Bank records indicate deposits of..." │ +└──────────────────┬──────────────────────────────┘ + ↓ + Pass as Context to LLM + ↓ +┌─────────────────────────────────────────────────┐ +│ LLM Prompt: │ +│ "Based ONLY on these documents, answer: │ +│ What evidence exists about Marcus Chen? │ +│ │ +│ Documents: │ +│ [chunks inserted here]" │ +└──────────────────┬──────────────────────────────┘ + ↓ +┌─────────────────────────────────────────────────┐ +│ LLM generates answer grounded in facts: │ +│ │ +│ "According to MARCUS_TERMINATION_LETTER.txt, │ +│ Marcus was fired on 2024-01-15 for │ +│ 'unauthorized access.' SECURITY_LOG.txt shows │ +│ he entered secured areas 3 times after hours..." 
│ +└──────────────────┬──────────────────────────────┘ + ↓ + Agent receives factual response +``` + +--- + ## Resources - [SAP AI Core Grounding Management](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/document-grounding) diff --git a/exercises/Python/06-solve-the-crime.md b/exercises/Python/06-solve-the-crime.md index b8c4dc6..f88608c 100644 --- a/exercises/Python/06-solve-the-crime.md +++ b/exercises/Python/06-solve-the-crime.md @@ -119,8 +119,38 @@ result = InvestigatorCrew().crew().kickoff(inputs={ 👉 Run your crew to start the investigation! +**From repository root:** + +```bash +# macOS / Linux +python3 ./project/Python/starter-project/main.py +``` + +```powershell +# Windows (PowerShell) +python .\project\Python\starter-project\main.py +``` + +```cmd +# Windows (Command Prompt) +python .\project\Python\starter-project\main.py +``` + +**From starter-project folder:** + ```bash -python project/Python/starter-project/main.py +# macOS / Linux +python3 main.py +``` + +```powershell +# Windows (PowerShell) +python main.py +``` + +```cmd +# Windows (Command Prompt) +python main.py ``` > ⏱️ **This may take 2-5 minutes** as your agents: @@ -160,7 +190,41 @@ If your Lead Detective identifies the wrong suspect, you'll need to refine your - ❌ Avoid vague instructions like "solve the crime" without guidance - ❌ Don't assume agents know which evidence is most important -👉 After updating prompts in the YAML files, run `python project/Python/starter-project/main.py` again +👉 After updating prompts in the YAML files, run the crew again: + +**From repository root:** + +```bash +# macOS / Linux +python3 ./project/Python/starter-project/main.py +``` + +```powershell +# Windows (PowerShell) +python .\project\Python\starter-project\main.py +``` + +```cmd +# Windows (Command Prompt) +python .\project\Python\starter-project\main.py +``` + +**From starter-project folder:** + +```bash +# macOS / Linux +python3 main.py +``` + +```powershell +# Windows (PowerShell) 
+python main.py +``` + +```cmd +# Windows (Command Prompt) +python main.py +``` 👉 Verify the answer with the instructor @@ -180,7 +244,7 @@ You created a complete multi-agent system where: ### The Investigation Flow -``` +```text Lead Detective → Evidence Analysis → Grounding Search → Suspect Investigation ↓ Loss Appraisal → RPT-1 Predictions → Value Determination @@ -236,14 +300,15 @@ Congratulations on completing the CodeJam! You've successfully built a sophistic **Issue**: `AttributeError: 'NoneType' object has no attribute 'get'` when running main.py - **Solution**: This error occurs when YAML configuration has incorrect indentation. Check your `config/tasks.yaml` file and ensure all fields (`description`, `expected_output`, `agent`) are indented with **2 spaces** under each task name: - ```yaml - solve_crime: - description: > # ← Must be indented 2 spaces - Your task description - expected_output: > # ← Must be indented 2 spaces - Expected output - agent: lead_detective_agent # ← Must be indented 2 spaces - ``` + +```yaml +solve_crime: + description: > # ← Must be indented 2 spaces + Your task description + expected_output: > # ← Must be indented 2 spaces + Expected output + agent: lead_detective_agent # ← Must be indented 2 spaces +``` **Issue**: Agent is not using the Grounding Service tool
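A common cause of a silent Grounding failure, noted earlier, is a pipeline ID that is still empty. A fail-fast guard along these lines (a hypothetical helper, not part of the workshop code) turns that misconfiguration into a clear error before the crew starts:

```python
def ensure_pipeline_id(pipeline_id: str) -> str:
    """Raise a clear error while the Grounding pipeline ID is still the placeholder."""
    if not pipeline_id or not pipeline_id.strip():
        raise ValueError(
            "Grounding pipeline ID is empty. Set the vector DB pipeline ID "
            "from SAP AI Launchpad before running the crew."
        )
    return pipeline_id
```

Calling it at the top of `call_grounding_service()` makes the placeholder problem visible immediately instead of letting the agent hallucinate over empty results.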