# Neo4j Context Providers

Neo4j offers two context providers for the Agent Framework, each serving a different purpose:

| | [Neo4j Memory](../neo4j_memory/README.md) | [Neo4j GraphRAG](../neo4j_graphrag/README.md) |
|---|---|---|
| **What it does** | Read-write memory — stores conversations, builds knowledge graphs, learns from interactions | Read-only retrieval from a pre-existing knowledge base with optional graph traversal |
| **Data source** | Agent interactions (grows over time) | Pre-loaded documents and indexes |
| **Package** | [`neo4j-agent-memory`](https://github.com/neo4j-labs/agent-memory) | [`agent-framework-neo4j`](https://github.com/neo4j-labs/neo4j-maf-provider) |
| **Database setup** | Empty — creates its own schema | Requires pre-indexed documents with vector or fulltext indexes |
| **Example use case** | "Remember my preferences", "What did we discuss last time?" | "Search our documents", "What risks does Acme Corp face?" |

## Which should I use?

**Use [Neo4j Memory](../neo4j_memory/README.md)** when your agent needs to remember things across sessions — user preferences, past conversations, extracted entities, and reasoning traces. The memory provider writes to the database on every interaction, building a knowledge graph that grows over time.

**Use [Neo4j GraphRAG](../neo4j_graphrag/README.md)** when your agent needs to search an existing knowledge base — documents, articles, product catalogs — and optionally enrich results by traversing graph relationships. The GraphRAG provider is read-only and does not modify your data.

You can use both together: GraphRAG for domain knowledge retrieval, Memory for personalization and learning.
# Neo4j GraphRAG Context Provider Examples

The [Neo4j GraphRAG context provider](https://github.com/neo4j-labs/neo4j-maf-provider) retrieves relevant documents from Neo4j vector and fulltext indexes and optionally enriches results by traversing graph relationships, giving agents access to connected knowledge that flat document search cannot provide.

This is a **read-only retrieval provider** — it queries a pre-existing knowledge base and does not modify data. For persistent agent memory that grows from interactions, see the [Neo4j Memory Provider](../neo4j_memory/README.md). For help choosing between the two, see the [Neo4j Context Providers overview](../neo4j/README.md).

## Examples

- **Vector search**: Semantic similarity search using embeddings to retrieve conceptually related document chunks
- **Fulltext search**: Keyword search using BM25 scoring — no embedder required
- **Hybrid search**: Vector and fulltext search combined, so results match on both semantic similarity and exact keywords
- **Graph-enriched search**: Any search mode combined with a custom Cypher `retrieval_query` to traverse related entities

For full runnable examples, see the [Neo4j GraphRAG Provider samples](https://github.com/neo4j-labs/neo4j-maf-provider/tree/main/python/samples).

## Installation

```bash
pip install agent-framework-neo4j
```

## Prerequisites

### Required Resources

1. **Neo4j database** with a vector or fulltext index containing your documents
- [Neo4j AuraDB](https://neo4j.com/cloud/auradb/) (managed) or self-hosted
- Documents must be indexed with a vector or fulltext index
2. **Azure AI Foundry project** with a model deployment (for the agent's chat model)
3. **For vector/hybrid search**: An embedding model endpoint (e.g., Azure AI `text-embedding-ada-002`)
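If you are indexing documents yourself, a Neo4j vector index can be created with Cypher along these lines. This is a hedged sketch: the `Chunk` label, `embedding` property, and 1536 dimensions (matching `text-embedding-ada-002`) are assumptions chosen to line up with the examples below; adjust them to your own schema.

```cypher
CREATE VECTOR INDEX chunkEmbeddings IF NOT EXISTS
FOR (c:Chunk) ON c.embedding
OPTIONS {indexConfig: {
  `vector.dimensions`: 1536,
  `vector.similarity_function`: 'cosine'
}}
```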

### Authentication

- Neo4j: Username/password authentication
- Azure AI: Uses `DefaultAzureCredential` for embeddings and chat model

Run `az login` for Azure authentication.

## Configuration

### Environment Variables

**Neo4j** (auto-loaded by `Neo4jSettings`):
- `NEO4J_URI`: Neo4j connection URI (e.g., `neo4j+s://your-instance.databases.neo4j.io`)
- `NEO4J_USERNAME`: Database username
- `NEO4J_PASSWORD`: Database password

**Azure AI** (auto-loaded by `AzureAISettings`):
- `AZURE_AI_PROJECT_ENDPOINT`: Azure AI Foundry project endpoint
- `AZURE_AI_MODEL_DEPLOYMENT_NAME`: Chat model deployment name (e.g., `gpt-4o`)
- `AZURE_AI_EMBEDDING_NAME`: Embedding model name (default: `text-embedding-ada-002`)
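For local development, these can be exported in the shell before running the samples. The values below are placeholders; substitute your own instance details and deployment names.

```shell
# Neo4j connection (placeholders - replace with your instance details)
export NEO4J_URI="neo4j+s://your-instance.databases.neo4j.io"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="your-password"

# Azure AI Foundry project and model deployments
export AZURE_AI_PROJECT_ENDPOINT="https://your-project.services.ai.azure.com/api/projects/your-project"
export AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o"
export AZURE_AI_EMBEDDING_NAME="text-embedding-ada-002"
```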

## Code Example

### Vector Search with Graph Enrichment

```python
import asyncio
import os

from agent_framework import Agent
from agent_framework.azure import AzureAIClient
from agent_framework_neo4j import AzureAIEmbedder, AzureAISettings, Neo4jContextProvider, Neo4jSettings
from azure.identity import DefaultAzureCredential
from azure.identity.aio import AzureCliCredential

neo4j_settings = Neo4jSettings()
azure_settings = AzureAISettings()

# Embeds the agent's queries before they are matched against the vector index.
embedder = AzureAIEmbedder(
    endpoint=azure_settings.inference_endpoint,
    credential=DefaultAzureCredential(),
    model=azure_settings.embedding_model,
)

provider = Neo4jContextProvider(
    uri=neo4j_settings.uri,
    username=neo4j_settings.username,
    password=neo4j_settings.get_password(),
    index_name="chunkEmbeddings",
    index_type="vector",
    embedder=embedder,
    top_k=5,
    # Optional Cypher run after the index lookup; `node` and `score` are bound
    # to each hit, so the query can traverse to related entities.
    retrieval_query="""
    MATCH (node)-[:FROM_DOCUMENT]->(doc:Document)<-[:FILED]-(company:Company)
    RETURN node.text AS text, score, company.name AS company, doc.title AS title
    ORDER BY score DESC
    """,
)


async def main() -> None:
    async with (
        provider,
        AzureAIClient(
            credential=AzureCliCredential(),
            project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        ) as client,
        Agent(
            client=client,
            name="FinancialAnalyst",
            instructions="You are a financial analyst assistant.",
            context_providers=[provider],
        ) as agent,
    ):
        session = agent.create_session()
        response = await agent.run("What risks does Acme Corp face?", session=session)
        print(response.text)


asyncio.run(main())
```

### Fulltext Search (No Embedder Required)

```python
provider = Neo4jContextProvider(
    uri=neo4j_settings.uri,
    username=neo4j_settings.username,
    password=neo4j_settings.get_password(),
    index_name="search_chunks",
    index_type="fulltext",
    top_k=5,
)
```

### Hybrid Search

```python
provider = Neo4jContextProvider(
    uri=neo4j_settings.uri,
    username=neo4j_settings.username,
    password=neo4j_settings.get_password(),
    index_name="chunkEmbeddings",
    index_type="hybrid",
    fulltext_index_name="chunkFulltext",
    embedder=embedder,
    top_k=5,
)
```

## Additional Resources

- [Neo4j GraphRAG Provider Repository](https://github.com/neo4j-labs/neo4j-maf-provider)
- [Neo4j GraphRAG Python Library](https://neo4j.com/docs/neo4j-graphrag-python/current/)
- [Neo4j Vector Index Documentation](https://neo4j.com/docs/cypher-manual/current/indexes/semantic-indexes/vector-indexes/)
# Neo4j Memory Context Provider Examples

[Neo4j Agent Memory](https://github.com/neo4j-labs/agent-memory) is a graph-native memory system for AI agents that stores conversations, builds knowledge graphs from interactions, and lets agents learn from their own reasoning — all backed by Neo4j.

This is a **read-write memory provider** — it grows over time as the agent interacts with users. For read-only retrieval from an existing knowledge base, see the [Neo4j GraphRAG Provider](../neo4j_graphrag/README.md). For help choosing between the two, see the [Neo4j Context Providers overview](../neo4j/README.md).

## Examples

- **Basic memory**: Store conversations and recall context across sessions
- **Memory with tools**: Give the agent tools to search memory, remember preferences, and find entity connections in the knowledge graph

For a full runnable example, see the [retail assistant sample](https://github.com/neo4j-labs/agent-memory/tree/main/examples/microsoft_agent_retail_assistant).

## Installation

```bash
pip install neo4j-agent-memory[microsoft-agent]
```

## Prerequisites

### Required Resources

1. **Neo4j database** (empty — the memory provider creates its own schema)
- [Neo4j AuraDB](https://neo4j.com/cloud/auradb/) (managed) or self-hosted
- No pre-existing indexes or data required
2. **Azure AI Foundry project** with a model deployment (for the agent's chat model)
3. **Embedding model** — supports OpenAI, Azure AI, or other providers for semantic search over memories

### Authentication

- Neo4j: Username/password authentication
- Azure AI: Uses `DefaultAzureCredential`

Run `az login` for Azure authentication.

## Configuration

### Environment Variables

**Neo4j:**
- `NEO4J_URI`: Neo4j connection URI (e.g., `neo4j+s://your-instance.databases.neo4j.io`)
- `NEO4J_USERNAME`: Database username
- `NEO4J_PASSWORD`: Database password

**Azure AI:**
- `AZURE_AI_PROJECT_ENDPOINT`: Azure AI Foundry project endpoint
- `AZURE_AI_MODEL_DEPLOYMENT_NAME`: Chat model deployment name (e.g., `gpt-4o`)

**Embeddings (pick one):**
- `OPENAI_API_KEY`: For OpenAI embeddings
- Or configure Azure AI embeddings via `AZURE_AI_PROJECT_ENDPOINT`
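As with the GraphRAG provider, these can be exported in the shell before running the samples (placeholder values shown):

```shell
# Embedding backend: OPENAI_API_KEY shown as one option
export OPENAI_API_KEY="sk-your-key"

# Azure AI Foundry project for the chat model
export AZURE_AI_PROJECT_ENDPOINT="https://your-project.services.ai.azure.com/api/projects/your-project"
export AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o"
```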

## Code Example

```python
import asyncio
import os

from agent_framework import Agent
from agent_framework.azure import AzureAIClient
from azure.identity.aio import AzureCliCredential
from neo4j_agent_memory import MemoryClient, MemorySettings
from neo4j_agent_memory.integrations.microsoft_agent import (
    Neo4jMicrosoftMemory,
    create_memory_tools,
)


async def main() -> None:
    settings = MemorySettings(...)
    memory_client = MemoryClient(settings)

    async with memory_client:
        # Scope reads and writes to one user's memory graph.
        memory = Neo4jMicrosoftMemory.from_memory_client(
            memory_client=memory_client,
            session_id="user-123",
        )
        tools = create_memory_tools(memory)

        async with (
            AzureAIClient(
                credential=AzureCliCredential(),
                project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
            ) as client,
            Agent(
                client=client,
                name="MemoryAssistant",
                instructions="You are a helpful assistant with persistent memory.",
                tools=tools,
                context_providers=[memory.context_provider],
            ) as agent,
        ):
            session = agent.create_session()
            response = await agent.run(
                "Remember that I prefer window seats on flights.", session=session
            )
            print(response.text)


asyncio.run(main())
```

`create_memory_tools()` returns callable `FunctionTool` instances that the framework auto-invokes during streaming — no manual tool dispatch is needed. The core tools are: `search_memory`, `remember_preference`, `recall_preferences`, `search_knowledge`, `remember_fact`, and `find_similar_tasks`. Optional GDS graph algorithm tools (`find_connection_path`, `find_similar_items`, `find_important_entities`) are included when a `GDSConfig` is provided.

## Additional Resources

- [Neo4j Agent Memory Repository](https://github.com/neo4j-labs/agent-memory)
- [Neo4j AuraDB](https://neo4j.com/cloud/auradb/)