---
title: Understand the Arm MCP Server for AI-driven x86-to-Arm migration
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

The Arm MCP Server is a tool that enables AI-powered developer tools to become Arm cloud migration and optimization experts. It implements the Model Context Protocol (MCP), an open standard that allows AI assistants to access external tools and data sources.

Think of the Arm MCP Server as a bridge between AI coding assistants and Arm-specific migration and optimization tools. It allows AI agents to use real, structured capabilities instead of guessing.

By connecting your AI coding assistant to the Arm MCP Server, you gain access to Arm-specific knowledge, container image inspection tools, and code analysis capabilities that streamline the process of migrating applications from x86 to Arm.

## How to interact with the Arm MCP Server

The Arm MCP Server supports different interaction styles depending on the complexity of the migration task, from quick checks to fully automated workflows:

- Direct AI chat: quick, exploratory checks
- Prompt files: repeatable, structured workflows
- Agentic workflows: fully autonomous multi-step migrations

## Direct AI chat

You can ask your AI assistant natural language questions, and it will automatically use the MCP tools when appropriate. For example:

```text
Check if the nginx:latest Docker image supports Arm architecture
```

## Prompt files

Many AI coding tools support prompt files that provide structured instructions. These files can reference MCP tools and guide the AI through complex workflows like full codebase migrations.

## Agentic workflows

Tools like GitHub Copilot Agent Mode, Claude Code, Kiro, and OpenAI Codex support autonomous agent workflows where the AI can execute multi-step migration tasks with minimal intervention. These fully agentic workflows can be combined with prompt files and direct chat to create a powerful development system.

## Available Arm MCP Server tools

The Arm MCP Server provides several specialized tools for migration and optimization, and these are detailed below.

{{% notice Note %}}
You don't need all of these tools immediately. You'll start by using image inspection and knowledge lookup tools, and encounter the others as workflows become more advanced.
{{% /notice %}}
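
For example, compatibility questions to the knowledge lookup tool work best when phrased explicitly, in the same style the prompt file later in this Learning Path uses:

```text
Is the redis Python package compatible with Arm architecture?
```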

### knowledge_base_search

### sysreport_instructions

Provides instructions for installing and using sysreport, a tool that obtains system information.

## Setting up the Arm MCP Server

To use the Arm MCP Server with an AI coding assistant, you need to configure the assistant to connect to the MCP server. Connecting your assistant allows it to query Arm-specific tools, documentation, and capabilities exposed through the Model Context Protocol (MCP).

The required configuration steps vary by AI coding assistant. Refer to the installation guides below for step-by-step instructions on connecting the following AI coding assistants to the Arm MCP server:

- [GitHub Copilot](/install-guides/github-copilot/)
- [Gemini CLI](/install-guides/gemini/)
- [Kiro CLI](/install-guides/kiro-cli/)
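
Although each assistant has its own configuration format, an MCP server entry generally looks like the following sketch (the file location, server name, and URL here are illustrative placeholders, not the real endpoint; take the actual values from the install guide for your tool):

```json
{
  "servers": {
    "arm-mcp": {
      "type": "http",
      "url": "https://example.com/arm-mcp-endpoint"
    }
  }
}
```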

## What you've accomplished and what's next

In this section, you've learned about the Arm MCP Server and its available tools for migration and optimization. You've also seen the different ways to interact with it: direct AI chat, prompt files, and agentic workflows.

In the next section, you'll use direct AI chat with the Arm MCP Server to check Docker base images for Arm compatibility.
---
title: Verify Docker image compatibility with Arm using AI
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Checking base images for Arm compatibility

This section demonstrates just one example of using direct AI chat with the Arm MCP Server. You can use similar natural language prompts to check library compatibility, search for Arm documentation, or analyze code for migration issues.

A common first step when migrating a containerized application to Arm is verifying that the base container images support the arm64 architecture. The Arm MCP Server simplifies this process by allowing you to ask this question directly using a natural language prompt, without manually inspecting image manifests or registry metadata.

Direct AI chat works best as a fast decision gate: it helps you rule out incompatible base images and configurations before investing time in deeper migration or automation work.

## Example: Legacy CentOS 6 application

Consider an application built on CentOS 6, a legacy Linux distribution that has reached end of life (EOL). This example represents a typical x86-optimized, compute-heavy benchmark application that you might encounter when migrating older workloads.

Before examining the Dockerfile, note that it contains several x86-specific elements that need attention during migration: the `centos:6` base image (which might lack arm64 support), the `-mavx2` compiler flag for x86 AVX2 SIMD instructions, and C++ source files with x86 intrinsics.

Copy this Dockerfile into VS Code using GitHub Copilot or another agentic IDE connected to the Arm MCP Server:

```dockerfile
FROM centos:6

# ... (build steps truncated in this view)

RUN chmod +x start.sh
CMD ["./start.sh"]
```

## Using the Arm MCP Server to check compatibility

With the Arm MCP Server connected to your AI assistant, you can quickly verify base image compatibility using a simple natural language prompt:

```text
Check this base image for Arm compatibility
```

The AI assistant will use the `check_image` or `skopeo` tool to inspect the image and return a report. For `centos:6`, you'd discover that this legacy image doesn't support `arm64` architecture.

## What you've accomplished and what's next

In this section, you've used direct AI chat with the Arm MCP Server to check Docker base images for Arm compatibility. You've seen how a simple natural language prompt can quickly identify compatibility issues without manually inspecting image manifests.

In the next section, you'll migrate x86 SIMD code to Arm using a fully agentic workflow with prompt files.
---
title: Automate x86 code migration to Arm using AI prompt files
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Migrating SIMD code with AI assistance

When migrating applications from x86 to Arm, you might encounter SIMD (Single Instruction, Multiple Data) code that is written using architecture-specific intrinsics. On x86 platforms, SIMD is commonly implemented with SSE, AVX, or AVX2 intrinsics, while Arm platforms use NEON and SVE intrinsics to provide similar vectorized capabilities. Updating this code manually can be time-consuming and challenging. By combining the Arm MCP Server with a well-defined prompt file, you can automate much of this work and guide an AI assistant through a structured, architecture-aware migration of your codebase.

## Sample x86 code with AVX2 intrinsics

{{% notice Note %}}You don't need to understand every detail of this code to follow the migration workflow. It's included to represent the kind of architecture-specific SIMD logic commonly found in real-world applications. {{% /notice %}}

The following example shows a matrix multiplication implementation using x86 AVX2 intrinsics. This is representative of performance-critical code found in compute benchmarks and scientific workloads. Copy this code into a file named `matrix_operations.cpp`:

```cpp
// ... (AVX2 implementation listing truncated in this view)

int main() {
    // ...
}
```

Prompt files act as executable migration playbooks. They encode a repeatable process that the AI can follow reliably, rather than relying on one-off instructions or guesswork.

## The Arm migration prompt file

To automate migration, you can define a prompt file that instructs the AI assistant how to analyze and transform the project using the Arm MCP Server. Prompt files encode best practices, tool usage, and migration strategy, allowing the AI assistant to operate fully autonomously through complex multi-step workflows.

Create the following example prompt file to use with GitHub Copilot at `.github/prompts/arm-migration.prompt.md`:
```markdown
---
tools: ['search/codebase', 'edit/editFiles', 'arm-mcp/skopeo', 'arm-mcp/check_image', 'arm-mcp/knowledge_base_search', 'arm-mcp/migrate_ease_scan', 'arm-mcp/mca', 'arm-mcp/sysreport_instructions']
description: 'Scan a project and migrate to Arm architecture'
---

Your goal is to migrate a codebase from x86 to Arm. Use the MCP server tools to help you with this. Check for x86-specific dependencies (such as build flags, intrinsics, and libraries) and change them to Arm architecture equivalents. Review Dockerfiles, version files, and other dependencies, ensuring compatibility and optimizing performance.

Steps to follow:
* Look in all Dockerfiles and use the check_image and/or skopeo tools to verify Arm compatibility, changing the base image if necessary.
* Look at the packages installed by the Dockerfile and send each package to the knowledge_base_search tool to check each package for Arm compatibility. If a package isn't compatible, change it to a compatible version. When invoking the tool, explicitly ask "Is [package] compatible with Arm architecture?" where [package] is the name of the package.
* Look at the contents of any requirements.txt files line-by-line and send each line to the knowledge_base_search tool to check each package for Arm compatibility. If a package isn't compatible, change it to a compatible version. When invoking the tool, explicitly ask "Is [package] compatible with Arm architecture?" where [package] is the name of the package.
* Look at the codebase that you have access to, and determine what the language used is.
* Run the migrate_ease_scan tool on the codebase, using the appropriate language scanner based on what language the codebase uses, and apply the suggested changes. Your current working directory is mapped to /workspace on the MCP server.
* OPTIONAL: If you have access to build tools, rebuild the project for Arm, if you're running on an Arm-based runner. Fix any compilation errors.
* OPTIONAL: If you have access to any benchmarks or integration tests for the codebase, run these and report the timing improvements to the user.

Pitfalls to avoid:

* Don't confuse a software version with a language wrapper package version. For example, when checking the Python Redis client, check the Python package name "redis" rather than the Redis server version. Setting the Python Redis package version to the Redis server version in requirements.txt will fail.
* NEON lane indices must be compile-time constants, not variables.

If you have good versions to update for the Dockerfile, requirements.txt, and other files, change them immediately without asking for confirmation.

Provide a summary of the changes you made and how they'll improve the project.
```

## Running the migration

With the prompt file in place and the Arm MCP Server connected, invoke the migration workflow from your AI assistant.

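In GitHub Copilot, prompt files are typically invoked as slash commands named after the prompt file, so for `arm-migration.prompt.md` the invocation looks like this:

```text
/arm-migration
```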
The assistant will:
* Remove architecture-specific build flags
* Update container and dependency configurations as needed

## Verify the migration

After reviewing and accepting the changes, build and run the application on an Arm system:

```bash
g++ -O2 -o benchmark matrix_operations.cpp main.cpp -std=c++11
./benchmark
```

If everything works, the output is similar to:

```bash
ARM-Optimized Matrix Operations Benchmark
Matrix size: 200x200
Time: 12 ms
Result sum: 2.01203e+08
```

If compilation or runtime issues occur, feed the errors back to the AI assistant. This iterative loop allows the agent to refine the migration until the application is correct, performant, and Arm-native.
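
A follow-up prompt for that loop can be as simple as pasting the failure back into chat (the placeholder below stands for your actual compiler output):

```text
The build failed with the following error. Fix the NEON code and explain the change:

<paste compiler output here>
```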

## What you've accomplished and what's next

In this section, you've used a prompt file to guide an AI assistant through a fully automated migration of x86 AVX2 SIMD code to Arm NEON. You've seen how structured instructions enable the assistant to analyze, transform, and verify architecture-specific code.

In the next section, you'll learn how to configure different agentic AI systems with similar migration workflows.
