Commit fbb7016

Merge pull request #2881 from burdeazy/burdeazy-feature-s3-lambda-agentcore
New pattern - s3-lambda-agentcore
2 parents d2f641a + 6ab15ad commit fbb7016

15 files changed: +784 −0 lines changed

s3-lambda-agentcore/.gitignore

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
metadata
invoke_agent.zip

s3-lambda-agentcore/Readme.md

Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@
# Amazon S3 to AWS Lambda to Amazon Bedrock AgentCore

This pattern creates an AWS Lambda function that invokes an agent in Amazon Bedrock AgentCore Runtime when an object is uploaded to an Amazon S3 bucket.

This Terraform template creates two S3 buckets (input and output), an AWS Lambda function, and an agent in AgentCore Runtime.

Learn more about this pattern at Serverless Land Patterns: https://serverlessland.com/patterns/s3-lambda-agentcore

Important: this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the AWS Pricing page for details. You are responsible for any AWS costs incurred. No warranty is implied in this example.

## Requirements

* [Create an AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) if you do not already have one and log in. The IAM user that you use must have sufficient permissions to make necessary AWS service calls and manage AWS resources.
* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) installed and configured
* [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed
* [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) installed
* [Docker](https://docs.docker.com/get-docker/) installed and running (required for building the agent container image)

## Deployment Instructions

1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository:

    `git clone https://github.com/aws-samples/serverless-patterns`

2. Change directory to the pattern directory:

    `cd serverless-patterns/s3-lambda-agentcore`

3. From the command line, initialize Terraform to download and install the providers defined in the configuration:

    `terraform init`

4. From the command line, apply the configuration in the deploy.tf file:

    `terraform apply`

    1. When prompted, enter `yes` to confirm the deployment.

    2. Note the outputs from the deployment process; these contain the resource names and/or ARNs used for testing.
## How it works

S3 invokes the Lambda function when an object is created or updated, passing metadata about the new object in the event argument of the Lambda invocation.

The Lambda function invokes the agent, passing the S3 URI of the uploaded file.

The agent categorizes the file as architecture, operations, or other, extracts some metadata, and returns the results to the Lambda function as JSON.

The Lambda function then writes the metadata to the S3 output bucket.
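The Lambda function's code is not shown in this diff; a minimal sketch of the event-parsing step it performs might look like the following. The function name and prompt wording are illustrative (only the `prompt` payload key is taken from the agent's entrypoint), and the event shape follows the standard S3 event notification format:

```python
import json

def build_agent_payload(s3_event: dict) -> dict:
    """Extract the uploaded object's location from an S3 event notification
    and build the prompt payload for the AgentCore runtime invocation."""
    record = s3_event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Note: object keys arrive URL-encoded in S3 notifications; a real handler
    # should decode them (e.g. with urllib.parse.unquote_plus).
    key = record["s3"]["object"]["key"]
    uri = f"s3://{bucket}/{key}"
    return {"prompt": f"Classify the document at {uri}"}

# Example S3 event fragment (shape follows the S3 notification format)
event = {"Records": [{"s3": {"bucket": {"name": "input-bucket"},
                             "object": {"key": "runbook.md"}}}]}
print(json.dumps(build_agent_payload(event)))
```

The resulting payload is what the agent's `@app.entrypoint` handler reads via `payload.get("prompt")`.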
## Testing

Ensure you're in the correct directory (`cd serverless-patterns/s3-lambda-agentcore`). Then run the following script to test with the files in the `./test-files` folder.

```bash
# upload test files to the input bucket
aws s3 cp ./test-files/ s3://$(terraform output -raw s3_input_bucket)/ --recursive
# wait for the agent to process the files
sleep 10
# download the metadata from the output bucket
aws s3 cp s3://$(terraform output -raw s3_output_bucket)/ ./metadata/ --recursive
```

You can view the metadata in `./metadata`.

## Cleanup

1. Ensure you're in the correct directory (`cd serverless-patterns/s3-lambda-agentcore`).

2. Delete all created resources:

    `terraform destroy`

3. When prompted, enter `yes` to confirm the destruction.

4. Confirm all created resources have been deleted:

    `terraform show`

---

Copyright 2026 Amazon.com, Inc. or its affiliates. All Rights Reserved.

SPDX-License-Identifier: MIT-0
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
FROM python:3

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD [ "opentelemetry-instrument", "python", "main.py" ]

s3-lambda-agentcore/agent/main.py

Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@
from strands import Agent
from strands_tools import use_aws, current_time
from strands.models import BedrockModel
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from pydantic import BaseModel, Field
from typing import List, Literal
from datetime import datetime, timezone

app = BedrockAgentCoreApp()

# Define structured output schema
class FileMetadata(BaseModel):
    filename: str = Field(description="The name of the file")
    system: str = Field(description="The system or service the file relates to")
    keywords: List[str] = Field(description="List of relevant keywords or subjects")

class FileClassification(BaseModel):
    category: Literal["architecture", "operations", "other"] = Field(description="The category of the file")
    metadata: FileMetadata = Field(description="Metadata about the file")
    reasoning: str = Field(description="The reasoning behind the categorization")
    time: str = Field(description="The UTC timestamp of the categorization")

model_id = "us.amazon.nova-pro-v1:0"
model = BedrockModel(
    model_id=model_id,
)

agent = Agent(
    model=model,
    tools=[use_aws, current_time],
    system_prompt="""
You are an IT documentation classifier. Your task is to categorize documentation files into one of three categories and extract relevant metadata.

CATEGORIES:

1. **architecture** - System design and technical architecture documentation including:
   - System architecture diagrams and design documents
   - Reference architectures
   - API specifications and interface definitions
   - Data models, database schemas, and ER diagrams
   - Technology stack decisions and architecture decision records (ADRs)
   - Component interaction diagrams and sequence diagrams
   - Infrastructure architecture and network topology
   - Security architecture and authentication flows

2. **operations** - Operational procedures and runbooks including:
   - Deployment procedures and release processes
   - Troubleshooting guides and incident response playbooks
   - Monitoring and alerting setup documentation
   - Backup and recovery procedures
   - Configuration management and environment setup
   - Maintenance schedules and operational checklists
   - On-call procedures and escalation paths

3. **other** - All other documentation including:
   - Meeting notes and minutes
   - Project plans and timelines
   - Training materials and user guides
   - General reference documents
   - Administrative documentation

TASK:

For each file, analyze its content and provide:
- **category**: One of "architecture", "operations", or "other"
- **metadata**:
  - **filename**: The name of the file
  - **system**: The primary system, service, or component the document relates to
  - **keywords**: A list of relevant technical keywords or topics covered

Base your categorization on the document's primary purpose and content. If a document covers multiple areas, choose the category that best represents its main focus.
"""
)

@app.entrypoint
def strands_agent_bedrock(payload):
    """
    Invoke the agent with a payload and return structured output
    """
    user_input = payload.get("prompt")
    response = agent(user_input, structured_output_model=FileClassification)
    return response.structured_output.model_dump()

if __name__ == "__main__":
    app.run()
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
strands-agents
strands-agents-tools
uv
boto3
bedrock-agentcore<=0.1.5
bedrock-agentcore-starter-toolkit==0.1.14
aws-opentelemetry-distro>=0.10.0

s3-lambda-agentcore/bin/build.sh

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
#!/bin/bash

# Fail fast
set -e

# This is the order of arguments
ECR_BASE_ARN=${1}
BUILD_FOLDER=${2}
IMAGE_NAME=${3}
IMAGE_URI=${4}
TARGET_AWS_REGION=${5}
MYTAG=$(date +%Y%m%d%H%M%S)

# Check that aws is installed
which aws >/dev/null || {
  echo 'ERROR: aws-cli is not installed'
  exit 1
}

# Check that docker is installed and running
which docker >/dev/null && docker ps >/dev/null || {
  echo 'ERROR: docker is not running'
  exit 1
}

# Log in to ECR
aws ecr get-login-password --region ${TARGET_AWS_REGION} | docker login --username AWS --password-stdin ${ECR_BASE_ARN} || {
  echo 'ERROR: aws ecr login failed'
  exit 1
}

# Build image
docker build --no-cache -t ${IMAGE_NAME} ${BUILD_FOLDER} --platform linux/arm64

# Tag and push
docker tag ${IMAGE_NAME}:latest ${IMAGE_URI}:latest
docker tag ${IMAGE_URI}:latest ${IMAGE_URI}:${MYTAG}
docker push ${IMAGE_URI}:latest
docker push ${IMAGE_URI}:${MYTAG}

echo "Tags used for the ${IMAGE_NAME} image: latest, ${MYTAG}"
