diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/_index.md b/content/learning-paths/servers-and-cloud-computing/opentelemetry/_index.md new file mode 100644 index 0000000000..a36578580a --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/opentelemetry/_index.md @@ -0,0 +1,77 @@ +--- +title: Deploy OpenTelemetry on Google Cloud C4A Arm-based Axion VMs + +minutes_to_complete: 40 + +who_is_this_for: This learning path is designed for DevOps engineers, platform engineers, and software developers who want to deploy and observe a cloud-native microservice on Arm64-based Google Cloud C4A Axion processors using OpenTelemetry, along with industry-standard observability tools. + +learning_objectives: + - Provision a SUSE Linux Enterprise Server (SLES) virtual machine on Google Cloud C4A Arm-based Axion processors + - Install and configure Docker and Docker Compose on an Arm64 environment + - Build and run an Arm-native Python Flask microservice in containers + - Instrument the microservice using OpenTelemetry SDK for distributed tracing and metrics + - Deploy and configure OpenTelemetry Collector for telemetry processing + - Integrate Prometheus for metrics collection and visualization + - Integrate Jaeger for distributed tracing and service visualization + - Expose and validate observability dashboards + - Generate traffic to observe telemetry data flow and performance behavior + - Understand observability best practices on Arm-based cloud infrastructure + +prerequisites: + - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled + - Basic familiarity with Python and Flask + - Basic understanding of containers and Kubernetes concepts + +author: Pareena Verma + +##### Tags +skilllevels: Introductory +subjects: Performance and Architecture +cloud_service_providers: Google Cloud + +armips: +- Neoverse + +tools_software_languages: +- Flask +- Docker +- Prometheus +- Jaeger + +operatingsystems: +- Linux + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ + +further_reading: + - resource: + title: Google Cloud documentation + link: https://cloud.google.com/docs + type: documentation + + - resource: + title: OpenTelemetry documentation + link: https://opentelemetry.io/docs/ + type: documentation + + - resource: + title: Prometheus documentation + link: https://prometheus.io/docs/introduction/overview/ + type: documentation + + - resource: + title: Jaeger documentation + link: https://www.jaegertracing.io/docs/ + type: documentation + + - resource: + title: Docker documentation + link: https://docs.docker.com/ + type: documentation + +weight: 1 +layout: "learningpathall" +learning_path_main_page: yes +--- diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/opentelemetry/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/opentelemetry/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. 
title: "Next Steps" # Always the same, html page title.
layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
---
diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/background.md b/content/learning-paths/servers-and-cloud-computing/opentelemetry/background.md
new file mode 100644
index 0000000000..d2db077b5b
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/opentelemetry/background.md
@@ -0,0 +1,37 @@
---
title: Get started with OpenTelemetry on Google Axion C4A

weight: 2

layout: "learningpathall"
---

## Explore Axion C4A Arm instances in Google Cloud

Google Axion C4A is a family of Arm-based virtual machines built on Google's custom Axion CPU, which is based on Arm Neoverse V2 cores. Designed for energy-efficient, high-performance computing, these virtual machines are well suited to modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.

The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud.

To learn more, see the Google blog [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu).

## Explore OpenTelemetry on Google Axion C4A (Arm Neoverse V2)

OpenTelemetry is an open-source observability framework that provides standardized APIs, SDKs, and tools for collecting telemetry data such as traces, metrics, and logs from cloud-native applications. It is a CNCF (Cloud Native Computing Foundation) project and is widely adopted for building vendor-neutral observability solutions.

OpenTelemetry enables developers to instrument applications once and export telemetry data to multiple backend systems, including Prometheus, Grafana, Jaeger, Zipkin, and cloud monitoring platforms.

Running OpenTelemetry on Google Axion C4A Arm-based infrastructure allows you to achieve high-throughput telemetry processing with improved performance per watt and reduced infrastructure costs, making it well suited to modern distributed systems.

Common use cases include application performance monitoring (APM), distributed tracing, infrastructure metrics collection, log aggregation, and monitoring microservices architectures.

To learn more, visit the [OpenTelemetry website](https://opentelemetry.io/) and explore the [OpenTelemetry documentation](https://opentelemetry.io/docs/).

## What you've accomplished and what's next

In this section, you learned about:
- Google Axion C4A Arm-based VMs and their performance characteristics
- The OpenTelemetry observability framework and its role in modern cloud-native monitoring
- How the Arm architecture enables cost-effective, high-performance telemetry processing

Next, you'll provision a Google Axion C4A Arm-based virtual machine on Google Cloud to host the OpenTelemetry stack.
diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/firewall-setup.md b/content/learning-paths/servers-and-cloud-computing/opentelemetry/firewall-setup.md
new file mode 100644
index 0000000000..1d76165738
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/opentelemetry/firewall-setup.md
@@ -0,0 +1,63 @@
---
title: Create a firewall rule on GCP
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Configure GCP firewall for OpenTelemetry

To allow inbound traffic to the Flask application and the observability components, you must create firewall rules in the Google Cloud Console.

{{% notice Note %}} For more information about GCP setup, see [Getting started with Google Cloud Platform](/learning-paths/servers-and-cloud-computing/csp/google/).{{% /notice %}}

## Required Ports

| Service | Port | Purpose |
|--------|------|---------|
| Flask Application | 8080 | Application traffic |
| Jaeger UI | 16686 | Trace visualization |
| Prometheus UI | 9090 | Metrics dashboard |
| OTLP gRPC | 4317 | Telemetry ingestion |
| OTLP HTTP | 4318 | Telemetry ingestion |
| Collector Metrics | 8889 | Prometheus scrape endpoint |

## Create a Firewall Rule in GCP

To expose the Flask application (port 8080), create a firewall rule.

Navigate to the [Google Cloud Console](https://console.cloud.google.com/), go to **VPC Network > Firewall**, and select **Create firewall rule**.

![Google Cloud Console VPC Network Firewall page showing existing firewall rules and Create Firewall Rule button alt-txt#center](images/firewall-rule1.png "Create a firewall rule")

Next, create the firewall rule that exposes TCP port 8080.
Set the **Name** of the new rule to "allow-tcp-8080". Select the network you intend to attach to your VM (the default VPC network is named "default", but your organization might use a custom network).

Set **Direction of traffic** to "Ingress". Set **Allow on match** to "Allow" and **Targets** to "Specified target tags".

![Google Cloud Console firewall rule creation form showing name field, network selection, direction set to Ingress, and targets set to Specified target tags alt-txt#center](images/network-rule2.png "Creating the TCP/8080 firewall rule")

Next, enter "allow-tcp-8080" in the **Target tags** text field. Set **Source IPv4 ranges** to "0.0.0.0/0". This allows access from any address, which is convenient for testing; for production use, restrict it to trusted ranges.

![Google Cloud Console firewall rule form showing target tags field with allow-tcp-8080 entered and source IPv4 ranges set to 0.0.0.0/0 alt-txt#center](images/network-rule3.png "Creating the TCP/8080 firewall rule")

Finally, select **Specified protocols and ports** under the **Protocols and ports** section. Select the **TCP** checkbox, enter "8080" in the **Ports** text field, and select **Create**.

![Google Cloud Console firewall rule form showing protocols and ports section with TCP selected and port 8080 specified alt-txt#center](images/network-port.png "Specifying the TCP port to expose")

{{% notice Note %}}
The above steps demonstrate how to open port **8080** for the Flask application.

You can repeat the same workflow to create firewall rules for the following OpenTelemetry services:

- **16686** → Jaeger UI
- **9090** → Prometheus UI
- **4317** → OTLP gRPC endpoint
- **4318** → OTLP HTTP endpoint
- **8889** → Collector metrics endpoint

Change the rule name and port number accordingly (for example: `allow-tcp-16686`, `allow-tcp-9090`, and so on).
{{% /notice %}}

Because these rules target specific tags, they only apply to instances that carry those tags. Add the matching network tags (for example, `allow-tcp-8080`) to your VM by editing the instance in **Compute Engine > VM instances** and updating the **Network tags** field.

You can now proceed to deploy and validate the OpenTelemetry stack.
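If you would rather script these rules than click through the Console, the gcloud CLI equivalent looks roughly like the sketch below. It assumes the default VPC network, the `allow-tcp-<port>` naming used above, and a hypothetical instance name and zone; adjust all values to match your environment.

```bash
# Create one ingress rule per service port (assumes the default VPC network)
for port in 8080 16686 9090 4317 4318 8889; do
  gcloud compute firewall-rules create "allow-tcp-${port}" \
    --network=default \
    --direction=INGRESS \
    --allow="tcp:${port}" \
    --source-ranges=0.0.0.0/0 \
    --target-tags="allow-tcp-${port}"
done

# Attach the matching tags to the VM so the rules apply to it
# (replace the instance name and zone with your own values)
gcloud compute instances add-tags my-otel-vm \
  --zone=us-central1-a \
  --tags=allow-tcp-8080,allow-tcp-16686,allow-tcp-9090,allow-tcp-4317,allow-tcp-4318,allow-tcp-8889
```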
diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/firewall-rule1.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/firewall-rule1.png new file mode 100644 index 0000000000..e1ab8aecb5 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/firewall-rule1.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-pubip-ssh.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-pubip-ssh.png new file mode 100644 index 0000000000..558745de3e Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-pubip-ssh.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-shell.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-shell.png new file mode 100644 index 0000000000..7e2fc3d1b5 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-shell.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-vm.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-vm.png new file mode 100644 index 0000000000..0d1072e20d Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/gcp-vm.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/jaeger-ui.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/jaeger-ui.png new file mode 100644 index 0000000000..c93c35353d Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/jaeger-ui.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-port.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-port.png new file mode 100644 index 0000000000..4c9180b44a Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-port.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-rule2.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-rule2.png new file mode 100644 index 0000000000..4901590c80 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-rule2.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-rule3.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-rule3.png new file mode 100644 index 0000000000..4f1f75cd84 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/network-rule3.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/prometheus.png b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/prometheus.png new file mode 100644 index 0000000000..8f0a70c7e7 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/opentelemetry/images/prometheus.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/instance.md b/content/learning-paths/servers-and-cloud-computing/opentelemetry/instance.md new file mode 100644 index 0000000000..80b98e73c0 --- /dev/null +++ 
b/content/learning-paths/servers-and-cloud-computing/opentelemetry/instance.md
@@ -0,0 +1,50 @@
---
title: Create a Google Axion C4A Arm virtual machine on GCP
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Provision a Google Axion C4A Arm VM

In this section, you'll create a Google Axion C4A Arm-based virtual machine on Google Cloud Platform. You'll use the `c4a-standard-4` machine type, which provides 4 vCPUs and 16 GB of memory. This VM will host your OpenTelemetry application.

{{% notice Note %}}
For help with GCP setup, see the Learning Path [Getting started with Google Cloud Platform](/learning-paths/servers-and-cloud-computing/csp/google/).
{{% /notice %}}

## Provision a Google Axion C4A Arm VM in Google Cloud Console

To create a virtual machine based on the C4A instance type:
- Navigate to the [Google Cloud Console](https://console.cloud.google.com/).
- Go to **Compute Engine > VM Instances** and select **Create Instance**.
- Under **Machine configuration**:
  - Populate fields such as **Instance name**, **Region**, and **Zone**.
  - Set **Series** to `C4A`.
  - Select `c4a-standard-4` as the machine type.

![Screenshot of the Google Cloud Console showing the Machine configuration section. The Series dropdown is set to C4A and the machine type c4a-standard-4 is selected alt-txt#center](images/gcp-vm.png "Configuring machine type to C4A in Google Cloud Console")

- Under **OS and storage**, select **Change**, and then choose an Arm64-based operating system image.
  - For this Learning Path, select **SUSE Linux Enterprise Server**.
  - For the license type, choose **Pay as you go**.
  - Increase **Size (GB)** from **10** to **100** to allocate sufficient disk space.
  - Select **Choose** to apply the changes.
- Under **Networking**, enable **Allow HTTP traffic** and **Allow HTTPS traffic** so the instance can accept web traffic.
- Select **Create** to launch the virtual machine.

After the instance starts, select **SSH** next to the VM in the instance list to open a browser-based terminal session.

![Google Cloud Console VM instances page displaying running instance with green checkmark and SSH button in the Connect column alt-txt#center](images/gcp-pubip-ssh.png "Connecting to a running C4A VM using SSH")

A new browser window opens with a terminal connected to your VM.

![Browser-based SSH terminal window with black background showing Linux command prompt and Google Cloud branding at top alt-txt#center](images/gcp-shell.png "Terminal session connected to the VM")

## What you've accomplished and what's next

In this section, you provisioned a Google Axion C4A Arm VM and connected to it using SSH.

Next, you'll create firewall rules so you can reach the Flask application and the observability dashboards from your browser.
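If you prefer to provision the VM from the command line instead of the Console, the workflow above maps to a single `gcloud` command. The sketch below is illustrative only: the instance name, zone, and especially the SLES Arm64 image family and project are placeholders, so confirm the values available to your project before running it.

```bash
# List SLES images built for Arm64 to find a valid --image-family/--image-project pair
gcloud compute images list --filter="family~sles AND architecture=ARM64"

# Create the C4A VM (instance name, zone, and image values below are placeholders)
gcloud compute instances create otel-c4a-vm \
  --zone=us-central1-a \
  --machine-type=c4a-standard-4 \
  --image-family=sles-15-arm64 \
  --image-project=suse-cloud \
  --boot-disk-size=100GB
```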
diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/otel-observability.md b/content/learning-paths/servers-and-cloud-computing/opentelemetry/otel-observability.md new file mode 100644 index 0000000000..b90386fff0 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/opentelemetry/otel-observability.md @@ -0,0 +1,263 @@ +--- +title: OpenTelemetry Observability Stack & Telemetry Pipeline on Arm64 +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Overview + +In this section, you will deploy and connect the **OpenTelemetry Collector**, **Prometheus**, and **Jaeger** to collect, store, and visualize telemetry data generated by the Flask microservice running on Arm64 infrastructure. + +By the end of this part, you will have a complete observability pipeline for metrics and distributed tracing. + +## Architecture Overview + +```text +Flask Microservice (Arm64) + | + | OpenTelemetry SDK + v +OpenTelemetry Collector + | | +Metrics → Prometheus +Traces → Jaeger +``` + +The Flask application sends telemetry to the OpenTelemetry Collector, which routes metrics to Prometheus and traces to Jaeger for monitoring and visualization. + + +## Network & Firewall Requirements + +Ensure the following ports are open on your VM firewall: + +| Service | Port | Purpose | +| ----------------- | ----- | -------------------------- | +| Prometheus | 9090 | Metrics dashboard UI | +| Jaeger UI | 16686 | Distributed tracing UI | +| Collector Metrics | 8889 | Prometheus scrape endpoint | +| OTLP gRPC | 4317 | Telemetry ingestion (gRPC) | +| OTLP HTTP | 4318 | Telemetry ingestion (HTTP) | + +These ports enable telemetry ingestion and provide web interfaces for monitoring metrics and traces. + +## Configure OpenTelemetry Collector + +This configuration defines how telemetry data is received from the Flask application and exported to Prometheus and Jaeger. + +```bash +nano otel-collector-config.yaml +``` + +### otel-collector-config.yaml + +```yaml +receivers: + otlp: + protocols: + grpc: + endpoint: 0.0.0.0:4317 + http: + endpoint: 0.0.0.0:4318 + +exporters: + otlp: + endpoint: jaeger:4317 + tls: + insecure: true + + prometheus: + endpoint: 0.0.0.0:8889 + +service: + pipelines: + traces: + receivers: [otlp] + exporters: [otlp] + + metrics: + receivers: [otlp] + exporters: [prometheus] +``` + +The Collector now receives OTLP telemetry and routes traces to Jaeger while exposing metrics for Prometheus scraping. + + +## Configure Prometheus + +This configuration instructs Prometheus to scrape metrics from the OpenTelemetry Collector. + +```bash +nano prometheus.yml +``` + +### prometheus.yml + +```yaml +global: + scrape_interval: 5s + +scrape_configs: + - job_name: "otel-collector" + static_configs: + - targets: ["otel-collector:8889"] +``` + +Prometheus will now periodically collect metrics generated by the Flask application via the Collector. + +## Create Docker Compose Observability Stack + +Docker Compose orchestrates the Flask service, Collector, Prometheus, and Jaeger in a single deployment. + +```bash +nano docker-compose.yml +``` + +### docker-compose.yml + +```yaml +services: + otel-demo-app: + build: . 
    ports:
      - "8080:8080"
    depends_on:
      - otel-collector

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
      - "8889:8889"

  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
    ports:
      - "9090:9090"
```

All observability components and the Flask application are now defined for automated deployment.

## Launch the Observability Stack

Build the Flask image and start all services.

```bash
docker compose up --build -d
```

Verify running containers:

```bash
docker ps
```

The output is similar to:
```output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
944ce1f16551 otel-demo-otel-demo-app "python app.py" 21 hours ago Up 5 seconds 0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp otel-demo-otel-demo-app-1
6cdc86f7d1a9 prom/prometheus:latest "/bin/prometheus --c…" 21 hours ago Up 5 seconds 0.0.0.0:9090->9090/tcp, [::]:9090->9090/tcp otel-demo-prometheus-1
a88d6979db39 otel/opentelemetry-collector-contrib:latest "/otelcol-contrib --…" 21 hours ago Up 5 seconds 0.0.0.0:4317-4318->4317-4318/tcp, [::]:4317-4318->4317-4318/tcp, 0.0.0.0:8889->8889/tcp, [::]:8889->8889/tcp, 55679/tcp otel-demo-otel-collector-1
f4f7776b2201 jaegertracing/all-in-one:latest "/go/bin/all-in-one-…" 21 hours ago Up 5 seconds 4317-4318/tcp, 9411/tcp, 14250/tcp, 14268/tcp, 0.0.0.0:16686->16686/tcp, [::]:16686->16686/tcp otel-demo-jaeger-1
```

### Expected Services

* otel-demo-app
* otel-collector
* jaeger
* prometheus

The full observability stack is now running in containers on Arm64.

## Generate Application Traffic

Send requests to the Flask service to produce telemetry data. Replace `<EXTERNAL_IP>` with the external IP address of your VM.

```bash
curl http://<EXTERNAL_IP>:8080
```

Run a loop to generate more traffic:

```bash
for i in {1..10}; do curl http://<EXTERNAL_IP>:8080; done
```

Each request generates traces and increments custom metrics.

The output is similar to:
```output
gcpuser@otel-suse-arm64:~/otel-demo> for i in {1..10}; do curl http://34.58.132.15:8080; done
Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!Hello OpenTelemetry!gcpuser@otel-suse-arm64:~/otel-demo>
```

## Validate Metrics in Prometheus

Open Prometheus in your browser:

```text
http://<EXTERNAL_IP>:9090
```

### Suggested Queries

* up
* demo_requests_total

Successful query results confirm that metrics are flowing correctly through the pipeline.

#### Prometheus Metrics View

![Prometheus web UI showing query results for metrics scraped from the OpenTelemetry Collector alt-txt#center](images/prometheus.png "Prometheus metrics view")

This image confirms that Prometheus is successfully scraping metrics from the OpenTelemetry Collector, including the custom `demo_requests_total` counter generated by the Flask application.

## Validate Traces in Jaeger

Open the Jaeger UI:

```text
http://<EXTERNAL_IP>:16686
```

Select the service:

```text
flask-arm-service
```

Select **Find Traces** to view request traces.

You should now see distributed traces generated by the Flask microservice.
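If traces or metrics do not appear, it helps to confirm that telemetry is reaching the Collector before debugging the UIs. The commands below are a minimal check run on the VM; the service name `otel-collector` and port `8889` come from the Docker Compose file above, and the `grep` pattern assumes the `demo_requests_total` counter defined in the application.

```bash
# Show recent Collector logs to confirm it started and accepted OTLP data
docker compose logs --tail=50 otel-collector

# The Collector's Prometheus exporter endpoint should list the custom request counter
curl -s http://localhost:8889/metrics | grep demo_requests
```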
#### Jaeger Distributed Tracing View

![Jaeger UI showing traces for the flask-arm-service collected through the OpenTelemetry pipeline alt-txt#center](images/jaeger-ui.png "Jaeger distributed tracing view")

This image shows distributed traces generated by the Flask microservice and collected via the OpenTelemetry pipeline, visualized in the Jaeger UI.

## What You Have Accomplished

- Flask app accessible on port 8080
- OpenTelemetry Collector receiving telemetry
- Prometheus scraping metrics successfully
- Jaeger displaying distributed traces

You now have a complete OpenTelemetry observability pipeline running on Arm64 infrastructure.
diff --git a/content/learning-paths/servers-and-cloud-computing/opentelemetry/otel-setup.md b/content/learning-paths/servers-and-cloud-computing/opentelemetry/otel-setup.md
new file mode 100644
index 0000000000..c511f59353
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/opentelemetry/otel-setup.md
@@ -0,0 +1,215 @@
---
title: OpenTelemetry Environment & Application Setup on Arm64
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## OpenTelemetry Environment & Application Setup

In this section, you will prepare an **Arm64-based SUSE Linux virtual machine** with container tooling and deploy an **instrumented Python Flask microservice** that emits OpenTelemetry traces and metrics. This forms the foundation for building a complete observability pipeline in the upcoming steps.

## Architecture Overview

This setup includes a lightweight application and telemetry flow as shown below:

```text
Flask Microservice (Arm64)
      |
      | OpenTelemetry SDK
      v
OpenTelemetry Collector
```
The Flask application generates telemetry data using the OpenTelemetry SDK and sends it to an OpenTelemetry Collector for further processing and visualization.

## Network & Firewall Requirements
Ensure the following port is open on your VM firewall:

| Service | Port | Purpose |
|-----------|------|-----------------------------|
| Flask App | 8080 | Application HTTP traffic |

Opening port **8080** allows external access to the Flask microservice running inside the container.

## Install Docker on SUSE Arm64

Docker is required to run containerized services on the Arm-based VM.

```bash
sudo zypper refresh
sudo zypper install -y docker
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker $USER
newgrp docker
```

### Verify Installation

```bash
docker --version
```

Docker Engine is now installed and configured to run without sudo for the current user.

## Install Docker Compose (v2)

Docker Compose is used to orchestrate multi-container applications.

```bash
sudo curl -L https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-aarch64 \
  -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose
```

### Verify Installation

```bash
docker-compose --version
```

Docker Compose v2 is now installed and ready to manage multi-service deployments. The standalone binary is invoked as `docker-compose`; if the `docker compose` plugin syntax used later in this Learning Path is not available on your system, substitute `docker-compose` in those commands.

## Create Project Workspace
Create a dedicated directory for the OpenTelemetry demo application.

```bash
mkdir ~/otel-demo
cd ~/otel-demo
```

This directory will store the Flask application code, dependencies, and container configuration.

## Build an Instrumented Flask Application
This Flask service is integrated with the OpenTelemetry SDK to emit distributed traces and metrics.
```bash
vi app.py
```

### File: app.py

```python
from flask import Flask
import time

from opentelemetry import trace, metrics
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Identify this service in traces and metrics
resource = Resource.create({
    "service.name": "flask-arm-service"
})

# Configure tracing: export spans to the Collector over OTLP gRPC
trace_provider = TracerProvider(resource=resource)
trace.set_tracer_provider(trace_provider)

trace_exporter = OTLPSpanExporter(endpoint="otel-collector:4317", insecure=True)
trace_provider.add_span_processor(
    BatchSpanProcessor(trace_exporter)
)

# Configure metrics: export readings to the Collector every 5 seconds
metric_exporter = OTLPMetricExporter(endpoint="otel-collector:4317", insecure=True)

metric_reader = PeriodicExportingMetricReader(
    metric_exporter,
    export_interval_millis=5000
)

meter_provider = MeterProvider(
    resource=resource,
    metric_readers=[metric_reader]
)

metrics.set_meter_provider(meter_provider)

meter = metrics.get_meter(__name__)

# Custom counter incremented on every request
request_counter = meter.create_counter(
    name="demo_requests_total",
    description="Total number of requests"
)

app = Flask(__name__)
# Automatically create a span for each incoming HTTP request
FlaskInstrumentor().instrument_app(app)

@app.route("/")
def hello():
    request_counter.add(1)
    time.sleep(0.2)
    return "Hello OpenTelemetry!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The Flask service now automatically generates traces for HTTP requests and custom metrics for request counts. The OTLP exporters send data to `otel-collector:4317`, which resolves to the Collector service defined in the Docker Compose file used in the next section.

## Define Python Dependencies
Create a file to list all required Python packages.

```bash
vi requirements.txt
```

### File: requirements.txt

```
flask
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp
opentelemetry-instrumentation-flask
```

This ensures all OpenTelemetry and Flask libraries are installed consistently inside the container.

## Create Application Docker Image

Build an Arm64-compatible container image for the Flask service.

```bash
vi Dockerfile
```

### File: Dockerfile

```dockerfile
FROM python:3.10-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]
```

This Dockerfile packages the instrumented Flask application into a lightweight Arm64-compatible container. The `python:3.10-slim` base image is multi-architecture, so building it on the C4A VM produces an Arm64 image.

## What You Have Accomplished

- Installed Docker and Docker Compose on an Arm64 SUSE VM
- Created an OpenTelemetry-instrumented Flask microservice
- Defined application dependencies
- Built a container-ready application image

### What's Next

In the next section, you will deploy the **OpenTelemetry Collector and observability stack** to receive, process, and visualize the telemetry data generated by this application.
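Before moving on, you can optionally confirm that the image builds cleanly on the Arm64 host. This is a quick sanity check rather than part of the main flow; the tag `otel-demo-app:test` is arbitrary, and the container is only used to verify that the Python dependencies import, since the application itself is started later by Docker Compose where it can reach the Collector.

```bash
# Build the image on the C4A VM; the host architecture makes this an Arm64 image
docker build -t otel-demo-app:test .

# Confirm the image architecture and that the key Python packages import
docker image inspect otel-demo-app:test --format '{{.Architecture}}'
docker run --rm otel-demo-app:test python -c "import flask, opentelemetry; print('imports OK')"
```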