secure-local-rag

A secure, locally-managed Retrieval-Augmented Generation (RAG) system built with Spring Boot, Spring AI, Ollama, and MongoDB Atlas.
This project demonstrates how to build a RAG application that keeps all AI processing local and enforces role-based access control at the vector retrieval step, ensuring sensitive data is never exposed to external services.

The full tutorial is Secure Local RAG with Role-Based Access: Spring AI, Ollama & MongoDB.


What this app does

  1. Run Ollama locally (inside Docker) to host both embedding and chat models.
  2. Store documents and their vector embeddings in MongoDB Atlas.
  3. Use Spring AI to integrate MongoDB Vector Search with RAG logic.
  4. Apply role-based filters at query time so users only retrieve documents they are permitted to see.
  5. Expose a simple REST API for secure question answering over the local AI + RAG pipeline.

Prerequisites

  • MongoDB Atlas account with a cluster (M0 or higher)
  • Docker installed (to run Ollama)
  • Java 21+ and Maven 3.9+
  • Basic understanding of Spring Boot

Setup and Configuration

Start Ollama (Docker)

Create a docker-compose.yml:

version: '3.8'
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
volumes:
  ollama_data:

Start Ollama:

docker compose up -d

Pull models:

docker exec -it ollama ollama pull llama3.2
docker exec -it ollama ollama pull nomic-embed-text

Dependencies

Use Spring Initializr with:

  • Spring Web
  • Spring AI (Ollama & Vector Store)
  • Spring Data MongoDB

Add these starters:

<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-ollama</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-vector-store-mongodb-atlas</artifactId>
</dependency>

application.properties

Configure MongoDB and Ollama:

spring.application.name=securerag

spring.data.mongodb.uri=${MONGODB_URI}
spring.data.mongodb.database=rag

spring.ai.vectorstore.mongodb.collection-name=vector_store
spring.ai.vectorstore.mongodb.initialize-schema=true
spring.ai.vectorstore.mongodb.metadata-fields-to-filter=department,access_level,roles

spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.embedding.model=nomic-embed-text
spring.ai.ollama.chat.options.model=llama3.2

Core Logic

ChatService

Handles secure query processing:

public String sendSecureMessage(String message, String userRole, String department) {
    QuestionAnswerAdvisor advisor = QuestionAnswerAdvisor.builder(vectorStore)
        .searchRequest(SearchRequest.builder().topK(5).build())
        .build();

    ChatClient filteredChatClient = ChatClient.builder(chatModel)
        .defaultAdvisors(advisor)
        .build();

    String filterExpression = createAccessFilterExpression(userRole, department);

    return filteredChatClient.prompt()
        .user(message)
        .advisors(a -> a.param(QuestionAnswerAdvisor.FILTER_EXPRESSION, filterExpression))
        .call()
        .content();
}

The filter expression ensures that only documents matching the user's role and department are considered during retrieval.
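The helper createAccessFilterExpression is not shown in the snippet above. A minimal sketch, assuming the metadata fields (department, access_level, roles) configured in application.properties and Spring AI's portable filter expression syntax, might look like:

```java
// Hypothetical sketch of createAccessFilterExpression (not part of the
// snippet above). It builds a Spring AI portable filter expression string
// over the metadata fields declared in application.properties.
public class AccessFilter {

    // e.g. createAccessFilterExpression("engineer", "engineering")
    //   -> "roles in ['engineer'] && department == 'engineering'"
    static String createAccessFilterExpression(String userRole, String department) {
        return String.format("roles in ['%s'] && department == '%s'",
                userRole, department);
    }
}
```

In a production app you would also validate or escape the role and department values before interpolating them into the expression.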


REST API

Expose secure RAG endpoints:

@RestController
@RequestMapping("/chat")
public class ChatController {
    private final ChatService chatService;

    public ChatController(ChatService chatService) {
        this.chatService = chatService;
    }

    @PostMapping("/secure")
    public String sendSecureMessage(@RequestBody ChatRequest request) {
        return chatService.sendSecureMessage(request.getMessage(), request.getUserRole(), request.getDepartment());
    }
}
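The controller relies on a ChatRequest DTO that is not shown above. A minimal version matching the getters the controller calls (getMessage, getUserRole, getDepartment) could be:

```java
// Hypothetical request body for POST /chat/secure, matching the getters
// the controller reads. Field names map to the incoming JSON keys.
public class ChatRequest {
    private String message;
    private String userRole;
    private String department;

    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }

    public String getUserRole() { return userRole; }
    public void setUserRole(String userRole) { this.userRole = userRole; }

    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }
}
```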

Running the App

  1. Start your MongoDB Atlas cluster.
  2. Run Ollama locally via Docker.
  3. Build and start the Spring Boot app:
mvn clean spring-boot:run
  4. Load some documents into the vector store.
  5. Send POST requests to /chat/secure with a message and user role/department to get secure RAG responses.
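As an illustration, a request to the secure endpoint might look like the following (the field names follow the getters the ChatController reads; host, port, and values are assumptions for a default local setup):

```shell
# Illustrative request against a locally running instance.
# Adjust host/port and the role/department values to your setup.
curl -X POST http://localhost:8080/chat/secure \
  -H "Content-Type: application/json" \
  -d '{
        "message": "What is our vacation policy?",
        "userRole": "hr",
        "department": "human-resources"
      }'
```

A user with a different role/department in the payload would retrieve answers only from the documents their metadata filter permits.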

Notes

  • This project keeps data local and under your control with Ollama.
  • It enforces role-based filters at vector search time so users only see allowed content.
  • You can extend it with additional metadata filters, more roles, and richer document types.
