diff --git a/content/en/docs/getting-started/install-cozystack/_index.md b/content/en/docs/getting-started/install-cozystack/_index.md
new file mode 100644
index 00000000..c9079c04
--- /dev/null
+++ b/content/en/docs/getting-started/install-cozystack/_index.md
@@ -0,0 +1,56 @@
+---
+title: "3. Install and Configure Cozystack"
+linkTitle: "3. Install Cozystack"
+description: "Install Cozystack, configure storage and networking, and access the dashboard."
+weight: 20
+---
+
+This guide walks you through installing Cozystack on top of a Kubernetes cluster.
+
+## Choose Your Version
+
+Cozystack has two major versions with different configuration approaches:
+
+### [Cozystack v1.x (Recommended)]({{% ref "./v1" %}})
+
+The latest version using **Package-based configuration**.
+
+- Unified configuration through a single Package resource
+- New bundle variants: `isp-full`, `isp-hosted`, `distro-full`
+- Managed by `cozystack-operator`
+- Recommended for new installations
+
+[→ Install Cozystack v1.x]({{% ref "./v1" %}})
+
+### [Cozystack v0.x (Legacy)]({{% ref "./v0" %}})
+
+Previous version using **ConfigMap-based configuration**.
+
+- Configuration through ConfigMap in `cozy-system`
+- Bundle names: `paas-full`, `paas-hosted`
+- For existing v0.x installations
+
+[→ Install Cozystack v0.x]({{% ref "./v0" %}})
+
+---
+
+## What You'll Learn
+
+Both installation guides cover:
+
+1. Preparing configuration (Package for v1.x or ConfigMap for v0.x)
+2. Installing Cozystack
+3. Configuring storage with LINSTOR
+4. Setting up networking (MetalLB or public IPs)
+5. Deploying etcd, ingress, and monitoring
+6. Accessing the Cozystack dashboard
+
+## Prerequisites
+
+Before starting, ensure you have completed:
+
+- [Requirements: infrastructure and tools]({{% ref "../requirements" %}})
+- [1. Install Talos Linux]({{% ref "../install-talos" %}})
+- [2. Install Kubernetes]({{% ref "../install-kubernetes" %}})
+
+Choose the version that matches your needs and proceed with the appropriate installation guide.
diff --git a/content/en/docs/getting-started/install-cozystack.md b/content/en/docs/getting-started/install-cozystack/v0.md
similarity index 97%
rename from content/en/docs/getting-started/install-cozystack.md
rename to content/en/docs/getting-started/install-cozystack/v0.md
index 2a1f33a4..27210c61 100644
--- a/content/en/docs/getting-started/install-cozystack.md
+++ b/content/en/docs/getting-started/install-cozystack/v0.md
@@ -1,13 +1,18 @@
---
-title: "3. Install and Configure Cozystack"
-linkTitle: "3. Install Cozystack"
-description: "Install Cozystack, get administrative access, perform basic configuration, and enable the UI dashboard."
+title: "Install Cozystack v0.x"
+linkTitle: "v0.x (Legacy)"
+description: "Install Cozystack v0.x using ConfigMap-based configuration, get administrative access, and enable the UI dashboard."
weight: 20
---
+
+{{< alert color="info" >}}
+**Version Note**: This guide is for Cozystack v0.x installations using ConfigMap configuration.
+For v1.x installations, see [Install Cozystack v1.x]({{% ref "./v1" %}}).
+{{< /alert >}}
+
## Objectives
-In this step of the tutorial, we'll install Cozystack on top of a [Kubernetes cluster, prepared in the previous step]({{% ref "./install-kubernetes" %}}).
+In this step of the tutorial, we'll install Cozystack v0.x on top of a [Kubernetes cluster, prepared in the previous step]({{% ref "../install-kubernetes" %}}).
The tutorial will guide you through the following stages:
diff --git a/content/en/docs/getting-started/install-cozystack/v1.md b/content/en/docs/getting-started/install-cozystack/v1.md
new file mode 100644
index 00000000..fb67039f
--- /dev/null
+++ b/content/en/docs/getting-started/install-cozystack/v1.md
@@ -0,0 +1,659 @@
+---
+title: "Install Cozystack v1.x"
+linkTitle: "v1.x (Recommended)"
+description: "Install Cozystack v1.x using Package-based configuration, set up storage and networking, and access the dashboard."
+weight: 10
+---
+
+{{< alert color="info" >}}
+**Version Note**: This guide is for Cozystack v1.x installations using Package-based configuration.
+For v0.x installations, see [Install Cozystack v0.x]({{% ref "./v0" %}}).
+{{< /alert >}}
+
+## Objectives
+
+In this step of the tutorial, we'll install Cozystack v1.x on top of a [Kubernetes cluster, prepared in the previous step]({{% ref "../install-kubernetes" %}}).
+
+The tutorial will guide you through the following stages:
+
+1. Prepare a Cozystack Package configuration
+1. Install Cozystack by applying the configuration
+1. Configure storage
+1. Configure networking
+1. Deploy etcd, ingress, and the monitoring stack in the root tenant
+1. Finalize deployment and access Cozystack dashboard
+
+## 1. Prepare a Package Configuration
+
+Cozystack v1.x uses a Package resource for configuration instead of a ConfigMap.
+Take the example below and save it as **cozystack-package.yaml**:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+ namespace: cozy-system
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://api.example.org:6443"
+ exposedServices:
+ - dashboard
+ - api
+```
+
+Before applying, adjust the following:
+
+1. Replace `example.org` in `publishing.host` and `publishing.apiServerEndpoint` with a routable fully qualified domain name (FQDN) that you control.
+   If you have a public IP but no FQDN, use [nip.io](https://nip.io/) with dash notation, as shown in the sketch after this list.
+2. Use the same values for the networking CIDRs as in the previous step, where you bootstrapped the Kubernetes cluster with Talm or `talosctl`.
+   The values provided in the example are sane defaults that work in most cases.
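+
+For example, with a hypothetical public IP `192.0.2.10`, dash notation gives you the resolvable hostname `192-0-2-10.nip.io` without owning a domain. You can verify resolution before writing it into the Package (assumes `dig` is installed):
+
+```bash
+# 192-0-2-10.nip.io should resolve to 192.0.2.10; if it does, use it as
+# publishing.host and in publishing.apiServerEndpoint (https://192-0-2-10.nip.io:6443)
+dig +short 192-0-2-10.nip.io
+```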
+
+You don't need to change the other values in this config during the tutorial.
+Still, let's review what each of them means:
+
+- `metadata.name` and `metadata.namespace` define that this is the main Package resource for Cozystack platform.
+- `spec.variant: "isp-full"` specifies the bundle variant; `isp-full` is the most complete set of components and replaces the v0.x `paas-full` bundle.
+- `publishing.host` is the main domain for all services created under Cozystack, such as the dashboard, Grafana, Keycloak, etc.
+- `publishing.apiServerEndpoint` is the Kubernetes API endpoint used for generating kubeconfig files. Use routable addresses instead of local-only addresses.
+- `publishing.exposedServices` defines which services to expose via ingress (dashboard and API in this example).
+- `networking.*` fields configure internal Kubernetes networking and must match your cluster bootstrap configuration.
+
+You can learn more about Package configuration in the [Package Configuration reference]({{% ref "/docs/operations/configuration/package" %}}).
+
+{{% alert color="info" %}}
+Cozystack gathers anonymous usage statistics by default. Learn more about what data is collected and how to opt out in the [Telemetry Documentation]({{% ref "/docs/operations/configuration/telemetry" %}}).
+{{% /alert %}}
+
+
+## 2. Install Cozystack
+
+Next, we will install Cozystack and check that the installation is complete and successful.
+
+
+### 2.1. Create Namespace and Apply Configuration
+
+Create a namespace `cozy-system`, then apply the Package created in the previous step:
+
+```bash
+kubectl create ns cozy-system
+kubectl apply -f cozystack-package.yaml
+```
+
+
+### 2.2. Apply Installer
+
+Apply the installer configuration.
+This file defines the Cozystack version.
+For this tutorial, just take the latest stable version available on GitHub:
+
+```bash
+kubectl apply -f https://github.com/cozystack/cozystack/releases/latest/download/cozystack-installer.yaml
+```
+
+While the installation runs, you can follow the installer logs:
+
+```bash
+kubectl logs -n cozy-system deploy/cozystack -f
+```
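+
+Optionally, you can block until the installer deployment reports ready instead of watching the logs (a convenience step, not required):
+
+```bash
+kubectl wait deploy/cozystack -n cozy-system --for=condition=Available --timeout=10m
+```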
+
+
+### 2.3. Check Installation Status
+
+Wait for a while, then check the status of the installation:
+
+```bash
+kubectl get hr -A
+```
+
+Wait and check again until you see `True` on each line, as in this example:
+
+```console
+NAMESPACE NAME AGE READY STATUS
+cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
+cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
+cozy-cilium cilium 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
+cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
+cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
+cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
+cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
+cozy-linstor linstor 4m1s True Release reconciliation succeeded
+cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
+cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
+cozy-metallb metallb 4m1s True Release reconciliation succeeded
+cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
+cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
+cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
+cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
+cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
+cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
+tenant-root tenant-root 4m1s True Release reconciliation succeeded
+```
+
+The list of components in your installation may be different from the example above,
+as it depends on your configuration and Cozystack version.
+
+Once every component shows `READY: True`, we're ready to proceed with configuring the subsystems.
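+
+If you'd rather not re-run the check by hand, a small polling loop does the same thing (a convenience sketch built on the command above):
+
+```bash
+# Poll every 10 seconds until no HelmRelease reports a non-True READY column
+while kubectl get hr -A --no-headers | grep -v True > /dev/null; do
+  echo "waiting for HelmReleases to reconcile..."
+  sleep 10
+done
+echo "all HelmReleases are ready"
+```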
+
+
+## 3. Configure Storage
+
+Kubernetes needs a storage subsystem to provide persistent volumes to applications, but it doesn't include one of its own.
+Cozystack provides [LINSTOR](https://github.com/LINBIT/linstor-server) as a storage subsystem.
+
+In the following steps, we'll access the LINSTOR interface, create storage pools, and define storage classes.
+
+
+### 3.1. Check Storage Devices
+
+1. Set up an alias to access LINSTOR:
+
+ ```bash
+ alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
+ ```
+
+1. List your nodes and check their readiness:
+
+ ```bash
+ linstor node list
+ ```
+
+ Example output shows node names and state:
+
+ ```console
+ +-------------------------------------------------------+
+ | Node | NodeType | Addresses | State |
+ |=======================================================|
+ | srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
+ | srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
+ | srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+ +-------------------------------------------------------+
+ ```
+
+1. List available empty devices:
+
+ ```bash
+ linstor physical-storage list
+ ```
+
+ Example output shows the same node names:
+
+ ```console
+ +--------------------------------------------+
+ | Size | Rotational | Nodes |
+ |============================================|
+ | 107374182400 | True | srv3[/dev/sdb] |
+ | | | srv1[/dev/sdb] |
+ | | | srv2[/dev/sdb] |
+ +--------------------------------------------+
+ ```
+
+### 3.2. Create Storage Pools
+
+1. Create storage pools using ZFS:
+
+ ```bash
+ linstor ps cdp zfs srv1 /dev/sdb --pool-name data --storage-pool data
+ linstor ps cdp zfs srv2 /dev/sdb --pool-name data --storage-pool data
+ linstor ps cdp zfs srv3 /dev/sdb --pool-name data --storage-pool data
+ ```
+
+   It is [recommended](https://github.com/LINBIT/linstor-server/issues/463#issuecomment-3401472020)
+   to set `failmode=continue` on ZFS storage pools so that DRBD handles disk failures instead of ZFS:
+
+ ```bash
+ kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv1 -- zpool set failmode=continue data
+ kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv2 -- zpool set failmode=continue data
+ kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv3 -- zpool set failmode=continue data
+ ```
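+
+   To confirm the setting took effect, query the pool property on each node (shown for one node; repeat for the others):
+
+   ```bash
+   kubectl exec -n cozy-linstor ds/linstor-satellite.srv1 -- zpool get failmode data
+   ```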
+
+1. Check the results by listing the storage pools:
+
+ ```bash
+ linstor sp l
+ ```
+
+ Example output:
+
+ ```console
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ | StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
+ |=====================================================================================================================================|
+ | DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
+ | DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
+ | DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
+ | data | srv1 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv1;data |
+ | data | srv2 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv2;data |
+ | data | srv3 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv3;data |
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ ```
+
+### 3.3. Create Storage Classes
+
+Finally, we can create a couple of storage classes, one of which will be the default class.
+
+
+1. Create a file with storage class definitions.
+ Below is a sane default example providing two classes: `local` (default) and `replicated`.
+
+ **storageclasses.yaml:**
+
+ ```yaml
+ ---
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: local
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+ provisioner: linstor.csi.linbit.com
+ parameters:
+ linstor.csi.linbit.com/storagePool: "data"
+ linstor.csi.linbit.com/layerList: "storage"
+ linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
+ volumeBindingMode: WaitForFirstConsumer
+ allowVolumeExpansion: true
+ ---
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: replicated
+ provisioner: linstor.csi.linbit.com
+ parameters:
+ linstor.csi.linbit.com/storagePool: "data"
+ linstor.csi.linbit.com/autoPlace: "3"
+ linstor.csi.linbit.com/layerList: "drbd storage"
+ linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
+ property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
+ property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
+ property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
+ property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
+ volumeBindingMode: Immediate
+ allowVolumeExpansion: true
+ ```
+
+1. Apply the storage class configuration:
+
+ ```bash
+ kubectl create -f storageclasses.yaml
+ ```
+
+1. Check that the storage classes were successfully created:
+
+ ```bash
+ kubectl get storageclasses
+ ```
+
+ Example output:
+
+ ```console
+ NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+ local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
+ replicated linstor.csi.linbit.com Delete Immediate true 11m
+ ```
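+
+Optionally, run an end-to-end smoke test with a throwaway PVC (hypothetical name; delete it afterwards):
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: storage-smoke-test
+  namespace: default
+spec:
+  accessModes: ["ReadWriteOnce"]
+  resources:
+    requests:
+      storage: 1Gi
+EOF
+# With volumeBindingMode: WaitForFirstConsumer, the PVC stays Pending
+# until a consuming pod is scheduled; that is expected for class "local"
+kubectl get pvc storage-smoke-test -n default
+kubectl delete pvc storage-smoke-test -n default
+```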
+
+
+## 4. Configure Networking
+
+Next, we will configure how the Cozystack cluster can be accessed.
+This step has two options depending on your available infrastructure:
+
+- For your own bare metal or self-hosted VMs, choose the MetalLB option.
+ MetalLB is Cozystack's default load balancer.
+- For VMs and dedicated servers from cloud providers, choose the public IP setup.
+ [Most cloud providers don't support MetalLB](https://metallb.universe.tf/installation/clouds/).
+
+ Check out the [provider-specific installation]({{% ref "/docs/install/providers" %}}) section.
+ It may have instructions for your provider, which you can use to deploy a production-ready cluster.
+
+### 4.a. MetalLB Setup
+
+Cozystack uses three types of IP addresses:
+
+- Node IPs: persistent and valid only within the cluster.
+- Virtual floating IP: used to access one of the nodes in the cluster and valid only within the cluster.
+- External access IPs: used by LoadBalancers to expose services outside the cluster.
+
+Services with external IPs may be exposed in two modes: L2 and BGP.
+L2 mode is simpler, but it requires that all nodes belong to a single L2 domain, and it does not balance load well.
+BGP mode takes more setup, as you need BGP peers ready to accept announcements, but it enables proper load balancing and offers more options for choosing IP address ranges.
+
+Select a range of unused IPs for the services; in this example, we'll use the `192.168.100.200-192.168.100.250` range.
+If you use L2 mode, these IPs should either be from the same network as the nodes or have all the necessary routes to them.
+
+For BGP mode, you will also need the BGP peer IP address and the local and remote AS numbers. Here we'll use `192.168.20.254` as the peer IP, with 65000 as the local and 65001 as the remote AS number.
+
+Create and apply a file describing an address pool.
+
+**metallb-ip-address-pool.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ addresses:
+ # used to expose services outside the cluster
+ - 192.168.100.200-192.168.100.250
+ autoAssign: true
+ avoidBuggyIPs: false
+```
+
+```bash
+kubectl create -f metallb-ip-address-pool.yml
+```
+
+Create and apply resources needed for an L2 or a BGP advertisement.
+
+{{< tabs name="metallb_announce" >}}
+{{% tab name="L2 mode" %}}
+L2Advertisement uses the name of the IPAddressPool resource we created previously.
+
+**metallb-l2-advertisement.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ ipAddressPools:
+ - cozystack
+```
+
+
+Apply changes.
+
+```bash
+kubectl create -f metallb-l2-advertisement.yml
+```
+{{% /tab %}}
+{{% tab name="BGP mode" %}}
+First, create a separate BGPPeer resource for **each** peer.
+
+**metallb-bgp-peer.yml**
+```yaml
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+ name: peer1
+ namespace: cozy-metallb
+spec:
+ myASN: 65000
+ peerASN: 65001
+ peerAddress: 192.168.20.254
+```
+
+
+Next, create a single BGPAdvertisement resource.
+
+**metallb-bgp-advertisement.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ ipAddressPools:
+ - cozystack
+```
+
+Apply changes.
+
+```bash
+kubectl create -f metallb-bgp-peer.yml
+kubectl create -f metallb-bgp-advertisement.yml
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+
+Now that MetalLB is configured, enable `ingress` in the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '
+{"spec":{
+ "ingress": true
+}}'
+```
+
+To confirm successful configuration, check the HelmReleases `ingress` and `ingress-nginx-system`:
+
+```bash
+kubectl -n tenant-root get hr ingress ingress-nginx-system
+```
+
+Example of correct output:
+```console
+NAME AGE READY STATUS
+ingress 47m True Helm upgrade succeeded for release tenant-root/ingress.v3 with chart ingress@1.8.0
+ingress-nginx-system 47m True Helm upgrade succeeded for release tenant-root/ingress-nginx-system.v2 with chart cozy-ingress-nginx@0.35.1
+```
+
+Next, check the state of the `root-ingress-controller` service:
+
+```bash
+kubectl -n tenant-root get svc root-ingress-controller
+```
+
+The service should be deployed as `TYPE: LoadBalancer` and have a correct external IP:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller LoadBalancer 10.96.91.83 192.168.100.200 80/TCP,443/TCP 48m
+```
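+
+If you use a real domain rather than nip.io, point a wildcard DNS record (`*.example.org`) at this external IP so that per-service hostnames such as `dashboard.example.org` resolve. A quick check (assumes `dig` is available):
+
+```bash
+dig +short dashboard.example.org
+# expect the ingress external IP, e.g. 192.168.100.200
+```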
+
+### 4.b. Node Public IP Setup
+
+If your cloud provider does not support MetalLB, you can expose the ingress controller using external IPs assigned to your nodes.
+
+If public IPs are attached directly to the nodes, specify them.
+If public IPs are provided via 1:1 NAT, as in some clouds, use the IP addresses of the **external** network interfaces.
+
+Here we will use `192.168.100.11`, `192.168.100.12`, and `192.168.100.13`.
+
+First, add the IPs to expose to the `publishing.externalIPs` field of the Package:
+
+```bash
+kubectl patch package cozystack.cozystack-platform -n cozy-system --type merge -p '{
+  "spec": {
+    "components": {
+      "platform": {
+        "values": {
+          "publishing": {
+            "externalIPs": ["192.168.100.11", "192.168.100.12", "192.168.100.13"]
+          }
+        }
+      }
+    }
+  }
+}'
+```
+
+Next, enable `ingress` for the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '{
+ "spec":{
+ "ingress": true
+ }
+}'
+```
+
+Finally, add external IPs to the `externalIPs` list in the Ingress configuration:
+
+```bash
+kubectl patch -n tenant-root ingresses.apps.cozystack.io ingress --type=merge -p '{
+ "spec":{
+ "externalIPs": [
+ "192.168.100.11",
+ "192.168.100.12",
+ "192.168.100.13"
+ ]
+ }
+}'
+```
+
+After that, your Ingress will be available on the specified IPs.
+Verify it as follows:
+
+```bash
+kubectl get svc -n tenant-root root-ingress-controller
+```
+
+The service should be deployed as `TYPE: ClusterIP` and list all the specified external IPs:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller ClusterIP 10.96.91.83 192.168.100.11,192.168.100.12,192.168.100.13 80/TCP,443/TCP 48m
+```
+
+## 5. Finalize Installation
+
+### 5.1. Set Up Root Tenant Services
+
+Enable `etcd` and `monitoring` for the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '
+{"spec":{
+ "monitoring": true,
+ "etcd": true
+}}'
+```
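+
+As with the platform components, you can watch the new releases reconcile (release names are assumed to match the services enabled above):
+
+```bash
+kubectl -n tenant-root get hr etcd monitoring
+```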
+
+### 5.2. Check the Cluster State and Composition
+
+Check the provisioned persistent volumes:
+
+```bash
+kubectl get pvc -n tenant-root
+```
+
+Example output:
+
+```console
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local 2m10s
+data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local 115s
+data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local 91s
+grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local 2m41s
+grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local 113s
+vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local 2m21s
+vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local 2m21s
+vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local 2m41s
+vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local 2m40s
+vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local 2m21s
+vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local 2m21s
+vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local 2m41s
+vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local 2m41s
+```
+
+Check that all pods are running:
+
+```bash
+kubectl get pod -n tenant-root
+```
+
+Example output:
+
+```console
+NAME READY STATUS RESTARTS AGE
+etcd-0 1/1 Running 0 2m1s
+etcd-1 1/1 Running 0 106s
+etcd-2 1/1 Running 0 82s
+grafana-db-1 1/1 Running 0 119s
+grafana-db-2 1/1 Running 0 13s
+grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
+grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
+root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
+root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
+root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
+vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
+vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
+vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
+vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
+vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
+vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
+vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
+vmselect-longterm-0 1/1 Running 0 2m12s
+vmselect-longterm-1 1/1 Running 0 2m12s
+vmselect-shortterm-0 1/1 Running 0 2m31s
+vmselect-shortterm-1 1/1 Running 0 2m30s
+vmstorage-longterm-0 1/1 Running 0 2m12s
+vmstorage-longterm-1 1/1 Running 0 2m12s
+vmstorage-shortterm-0 1/1 Running 0 2m32s
+vmstorage-shortterm-1 1/1 Running 0 2m31s
+```
+
+Get the public IP of the ingress controller:
+
+```bash
+kubectl get svc -n tenant-root root-ingress-controller
+```
+
+Example output:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
+```
+
+### 5.3. Access the Cozystack Dashboard
+
+If you kept `dashboard` in the `publishing.exposedServices` list of your Package, the Cozystack dashboard is already available at this point:
+
+```yaml
+publishing:
+  exposedServices:
+    - dashboard
+    - api
+```
+
+If your initial Package did not include it, patch the Package with the following command:
+
+```bash
+kubectl patch package cozystack.cozystack-platform -n cozy-system --type merge -p '{
+  "spec": {"components": {"platform": {"values": {"publishing": {"exposedServices": ["dashboard", "api"]}}}}}
+}'
+```
+
+Open `dashboard.example.org` to access the system dashboard, where `example.org` is the domain you set in `publishing.host`.
+There you will see a login window which expects an authentication token.
+
+Get the authentication token for `tenant-root`:
+
+```bash
+kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
+```
+
+Log in using the token.
+Now you can use the dashboard as an administrator.
+
+Later on, you will be able to:
+
+- Set up OIDC to authenticate with it instead of tokens.
+- Create user tenants and grant users access to them via tokens or OIDC.
+
+### 5.4. Access Metrics in Grafana
+
+Open `grafana.example.org` to access system monitoring, where `example.org` is the domain you set in `publishing.host`.
+In this example, `grafana.example.org` resolves to 192.168.100.200.
+
+- Login: `admin`
+- Password: retrieve it with the following command:
+
+ ```bash
+ kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
+ ```
+
+## Next Step
+
+Continue the Cozystack tutorial by [creating a user tenant]({{% ref "/docs/getting-started/create-tenant" %}}).
diff --git a/content/en/docs/operations/configuration/configmap.md b/content/en/docs/operations/configuration/configmap.md
index 5389bbfa..15c1d1dd 100644
--- a/content/en/docs/operations/configuration/configmap.md
+++ b/content/en/docs/operations/configuration/configmap.md
@@ -1,15 +1,20 @@
---
-title: "Cozystack ConfigMap Reference"
-linkTitle: "ConfigMap"
-description: "Reference for the Cozystack ConfigMap, which defines key configuration values for a Cozystack installation and operations."
+title: "Cozystack ConfigMap Reference (v0.x)"
+linkTitle: "ConfigMap (v0.x)"
+description: "Reference for the Cozystack ConfigMap used in v0.x installations"
weight: 10
aliases:
- /docs/install/cozystack/configmap
---
-This page explains the role of Cozystack's main ConfigMap and provides a full reference for its values.
+{{< alert color="info" >}}
+**Version Note**: This guide applies to Cozystack v0.x installations.
+For v1.x and later, see [Package Configuration (v1.x)]({{% ref "/docs/operations/configuration/package" %}}).
+{{< /alert >}}
+
-Cozystack's main configuration is defined by a single [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/).
+This page explains the role of Cozystack's main ConfigMap in v0.x and provides a full reference for its values.
+
+Cozystack v0.x configuration is defined by a single [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/).
This ConfigMap includes [Cozystack bundle]({{% ref "/docs/operations/configuration/bundles" %}}) and [components setup]({{% ref "/docs/operations/configuration/components" %}}),
key network settings, exposed services, and other options.
diff --git a/content/en/docs/operations/configuration/package.md b/content/en/docs/operations/configuration/package.md
new file mode 100644
index 00000000..2d3d0128
--- /dev/null
+++ b/content/en/docs/operations/configuration/package.md
@@ -0,0 +1,392 @@
+---
+title: "Cozystack Package Configuration (v1.x)"
+linkTitle: "Package (v1.x)"
+description: "Configuration reference for Cozystack v1.x using Package-based architecture"
+weight: 5
+aliases:
+ - /docs/install/cozystack/package
+ - /docs/operations/configuration/v1
+---
+
+This page explains the Package-based configuration system introduced in Cozystack v1.x and provides a complete reference for configuring your Cozystack installation.
+
+{{< alert color="info" >}}
+**Version Note**: This guide applies to Cozystack v1.x and later.
+For v0.x installations using ConfigMap, see [ConfigMap Reference]({{% ref "/docs/operations/configuration/configmap" %}}).
+{{< /alert >}}
+
+## Overview
+
+Cozystack v1.x introduces a unified Package-based architecture managed by `cozystack-operator`.
+Instead of multiple ConfigMaps for different aspects of configuration, v1.x uses a single `Package` resource that defines all platform settings.
+
+### Key Changes from v0.x
+
+| v0.x Approach | v1.x Approach |
+|---------------|---------------|
+| ConfigMap `cozystack` in `cozy-system` | Package `cozystack.cozystack-platform` in `cozy-system` |
+| Bundle names: `paas-full`, `paas-hosted` | Bundle variants: `isp-full`, `isp-hosted`, `distro-full` |
+| Separate ConfigMaps for branding/scheduling | Unified Package with all configuration |
+| Multiple `values-` entries | Nested `components.platform.values` structure |
+
+## Minimal Configuration Example
+
+The simplest Package configuration for a new Cozystack installation:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+ namespace: cozy-system
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://192.168.1.10:6443"
+```
+
+Replace `example.org` with your actual domain and adjust network CIDRs if needed.
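+
+Save the manifest under any file name, then create the `cozy-system` namespace (if it doesn't exist yet) and apply it:
+
+```bash
+kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
+kubectl apply -f cozystack-package.yaml
+```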
+
+## Full Configuration Example
+
+Complete Package configuration showing all available options:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+ namespace: cozy-system
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ bundles:
+ system:
+ enabled: true
+ variant: "isp-full"
+ iaas:
+ enabled: true
+ paas:
+ enabled: true
+ naas:
+ enabled: true
+ disabledPackages: []
+ enabledPackages: []
+
+ networking:
+ clusterDomain: "cozy.local"
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+ kubeovn:
+ MASTER_NODES: ""
+
+ publishing:
+ host: "example.org"
+ ingressName: tenant-root
+ exposedServices:
+ - api
+ - dashboard
+ - vm-exportproxy
+ - cdi-uploadproxy
+ apiServerEndpoint: "https://api.example.org:6443"
+ externalIPs: []
+ certificates:
+ issuerType: http01 # or "cloudflare"
+
+ authentication:
+ oidc:
+ enabled: false
+ keycloakExtraRedirectUri: ""
+
+ scheduling:
+ globalAppTopologySpreadConstraints: ""
+
+ branding: {}
+
+ registries: {}
+
+ resources:
+ cpuAllocationRatio: 10
+ memoryAllocationRatio: 1
+ ephemeralStorageAllocationRatio: 40
+```
+
+## Configuration Reference
+
+### Package Variants
+
+The `spec.variant` field determines which bundle of components to install:
+
+| Variant | Description | Use Case |
+|---------|-------------|----------|
+| `isp-full` | Full platform with all system components | Production ISP/hosting deployments |
+| `isp-full-generic` | Full platform with generic settings | Testing and development |
+| `isp-hosted` | Hosted variant without system components | Multi-tenant hosted environments |
+| `distro-full` | Distribution variant | Custom distributions |
+
+### bundles
+
+Controls which functional bundles are enabled:
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `system.enabled` | boolean | `true` | Enable system bundle |
+| `system.variant` | string | `isp-full` | System bundle variant |
+| `iaas.enabled` | boolean | `true` | Enable IaaS bundle (virtualization, storage) |
+| `paas.enabled` | boolean | `true` | Enable PaaS bundle (databases, message queues) |
+| `naas.enabled` | boolean | `true` | Enable NaaS bundle (networking services) |
+| `disabledPackages` | array | `[]` | List of packages to disable |
+| `enabledPackages` | array | `[]` | List of additional packages to enable |
+
+### networking
+
+Defines cluster networking configuration:
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `clusterDomain` | string | `cozy.local` | Kubernetes cluster DNS domain |
+| `podCIDR` | string | `10.244.0.0/16` | Pod network CIDR |
+| `podGateway` | string | `10.244.0.1` | Pod network gateway IP |
+| `serviceCIDR` | string | `10.96.0.0/16` | Service network CIDR |
+| `joinCIDR` | string | `100.64.0.0/16` | Join network CIDR for tenant isolation |
+| `kubeovn.MASTER_NODES` | string | `""` | KubeOVN master nodes (auto-detected if empty) |
+
+{{< note >}}
+Network CIDRs must match those configured during Kubernetes bootstrap with Talm or talosctl.
+{{< /note >}}
+
+### publishing
+
+Controls service exposure and certificates:
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `host` | string | `example.org` | Root domain for all Cozystack services |
+| `ingressName` | string | `tenant-root` | Ingress class name |
+| `exposedServices` | array | `[api, dashboard, vm-exportproxy, cdi-uploadproxy]` | Services to expose via ingress |
+| `apiServerEndpoint` | string | `""` | Kubernetes API endpoint for kubeconfig generation |
+| `externalIPs` | array | `[]` | External IPs for service exposure (when not using MetalLB) |
+| `certificates.issuerType` | string | `http01` | Certificate issuer: `http01` or `cloudflare` |
+
+**Available exposed services:**
+- `api` - Kubernetes API proxy
+- `dashboard` - Cozystack web UI
+- `keycloak` - OIDC authentication
+- `grafana` - Monitoring dashboards
+- `vm-exportproxy` - VM export service
+- `cdi-uploadproxy` - VM image upload service
+
+### authentication
+
+OIDC and authentication settings:
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `oidc.enabled` | boolean | `false` | Enable Keycloak OIDC authentication |
+| `oidc.keycloakExtraRedirectUri` | string | `""` | Additional redirect URI for Keycloak |
+
+### scheduling
+
+Cluster scheduling configuration:
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `globalAppTopologySpreadConstraints` | string | `""` | Global topology spread constraints for applications |
+
+### resources
+
+Resource allocation and overcommit ratios:
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `cpuAllocationRatio` | number | `10` | CPU overcommit ratio (10 = 10:1) |
+| `memoryAllocationRatio` | number | `1` | Memory overcommit ratio (1 = 1:1, no overcommit) |
+| `ephemeralStorageAllocationRatio` | number | `40` | Ephemeral storage overcommit ratio |
+
+{{< alert color="warning" >}}
+**Overcommit ratios** allow allocating more virtual resources than physically available.
+For example, `cpuAllocationRatio: 10` lets a node with 16 physical cores hand out up to 160 vCPUs.
+Use conservative values for production workloads. Higher ratios increase density but may impact performance.
+{{< /alert >}}
+
+## Runtime Configuration Changes
+
+Update configuration without reinstalling Cozystack by patching the Package resource.
+
+### Enable OIDC Authentication
+
+```bash
+kubectl patch package cozystack.cozystack-platform -n cozy-system --type merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "authentication": {
+ "oidc": {
+ "enabled": true
+ }
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+### Expose Additional Services
+
+Add Keycloak to exposed services:
+
+```bash
+kubectl patch package cozystack.cozystack-platform -n cozy-system --type merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "publishing": {
+ "exposedServices": ["api", "dashboard", "keycloak"]
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+### Change Resource Allocation Ratios
+
+Adjust CPU overcommit ratio:
+
+```bash
+kubectl patch package cozystack.cozystack-platform -n cozy-system --type merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "resources": {
+ "cpuAllocationRatio": 5
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+### Disable Specific Packages
+
+Disable a package (e.g., `clickhouse`):
+
+```bash
+kubectl patch package cozystack.cozystack-platform -n cozy-system --type merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "bundles": {
+ "disabledPackages": ["clickhouse"]
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+## Viewing Current Configuration
+
+Get the current Package configuration:
+
+```bash
+kubectl get package cozystack.cozystack-platform -n cozy-system -o yaml
+```
+
+View specific configuration values:
+
+```bash
+kubectl get package cozystack.cozystack-platform -n cozy-system \
+ -o jsonpath='{.spec.components.platform.values}' | yq
+```
+
+## Migration from v0.x
+
+To migrate from v0.x ConfigMap to v1.x Package:
+
+1. **Export existing configuration:**
+ ```bash
+ kubectl get cm cozystack -n cozy-system -o yaml > cozystack-v0-config.yaml
+ ```
+
+2. **Create an equivalent Package resource** using the mapping table below
+
+3. **Apply the new Package:**
+ ```bash
+ kubectl apply -f cozystack-v1-package.yaml
+ ```
+
+### Configuration Mapping
+
+| v0.x ConfigMap Key | v1.x Package Path |
+|--------------------|-------------------|
+| `bundle-name: paas-full` | `spec.variant: isp-full` |
+| `root-host` | `spec.components.platform.values.publishing.host` |
+| `api-server-endpoint` | `spec.components.platform.values.publishing.apiServerEndpoint` |
+| `expose-services` | `spec.components.platform.values.publishing.exposedServices` |
+| `ipv4-pod-cidr` | `spec.components.platform.values.networking.podCIDR` |
+| `ipv4-pod-gateway` | `spec.components.platform.values.networking.podGateway` |
+| `ipv4-svc-cidr` | `spec.components.platform.values.networking.serviceCIDR` |
+| `ipv4-join-cidr` | `spec.components.platform.values.networking.joinCIDR` |
+| `bundle-enable` | `spec.components.platform.values.bundles.enabledPackages` |
+| `bundle-disable` | `spec.components.platform.values.bundles.disabledPackages` |
+
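+To pull the relevant values out of the old ConfigMap before translating them, a loop over the key names from the table works (adjust the key list to your setup):
+
+```bash
+for key in root-host api-server-endpoint expose-services ipv4-pod-cidr ipv4-svc-cidr; do
+  printf '%s: ' "$key"
+  kubectl get cm cozystack -n cozy-system -o jsonpath="{.data.$key}"
+  echo
+done
+```
+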
+## Troubleshooting
+
+### Package Not Reconciling
+
+Check Package status:
+```bash
+kubectl describe package cozystack.cozystack-platform -n cozy-system
+```
+
+Check operator logs:
+```bash
+kubectl logs -n cozy-system deploy/cozystack-operator -f
+```
+
+### Configuration Not Applied
+
+Verify Package is being watched:
+```bash
+kubectl get package -A
+```
+
+Check HelmRelease status:
+```bash
+kubectl get hr -A | grep -v True
+```
+
+### Invalid Configuration
+
+Validate Package syntax:
+```bash
+kubectl apply --dry-run=server -f cozystack-package.yaml
+```
+
+## Related Documentation
+
+- [Cozystack Bundles Reference]({{% ref "/docs/operations/configuration/bundles" %}})
+- [Components Configuration]({{% ref "/docs/operations/configuration/components" %}})
+- [ConfigMap Reference (v0.x)]({{% ref "/docs/operations/configuration/configmap" %}})
+- [Getting Started: Install Cozystack]({{% ref "/docs/getting-started/install-cozystack" %}})
diff --git a/content/en/docs/virtualization/proxmox-migration.md b/content/en/docs/virtualization/proxmox-migration.md
new file mode 100644
index 00000000..7673663a
--- /dev/null
+++ b/content/en/docs/virtualization/proxmox-migration.md
@@ -0,0 +1,212 @@
+---
+title: "Migrating Virtual Machines from Proxmox"
+linkTitle: "Proxmox Migration"
+description: "Step-by-step guide to migrating virtual machines from Proxmox VE to Cozystack"
+weight: 65
+---
+
+This guide describes the process of migrating virtual machines from Proxmox VE to Cozystack by exporting VM disk images and uploading them to the target environment.
+
+{{< note >}}
+Migration is performed by exporting VM disks to files and uploading them to Cozystack.
+VM state and snapshots are not preserved during migration.
+{{< /note >}}
+
+## Prerequisites
+
+Before starting the migration, ensure you have:
+
+1. **KubeVirt client `virtctl`** installed on your local machine:
+ - Installation guide: [KubeVirt User Guide - Virtctl Client Tool](https://kubevirt.io/user-guide/user_workloads/virtctl_client_tool/)
+
+2. **Upload proxy access configured** in your Cozystack cluster:
+ - Modify your Cozystack ConfigMap to enable `cdi-uploadproxy`:
+ ```bash
+ kubectl patch cm -n cozy-system cozystack --type merge -p='{"data":{
+ "expose-services": "dashboard,cdi-uploadproxy"
+ }}'
+ ```
+ - Configure the CDI upload proxy endpoint in your Cozystack values:
+ ```yaml
+ values-cdi: |
+ uploadProxyURL: https://cdi-uploadproxy.example.org
+ ```
+
+3. **DNS or hosts file configuration** for upload proxy access:
+   - If needed, add an entry to `/etc/hosts` on your local machine, pointing the hostname at your ingress controller's external IP:
+   ```
+   <ingress-external-ip> cdi-uploadproxy.example.org
+ ```
+
+## Step 1: Export VM Disks from Proxmox
+
+Before exporting, ensure the virtual machines are stopped in Proxmox.
+
+Export the VM disk to a file in qcow2 format (or another format supported by KubeVirt):
+
+```bash
+# Example: convert a VM disk from Proxmox storage to qcow2.
+# The source path varies by storage backend; this assumes an LVM-thin
+# volume for VM 100 (a hypothetical example).
+qemu-img convert -O qcow2 /dev/pve/vm-100-disk-0 /tmp/vm-disk.qcow2
+```
+
+The output should be a disk image file (e.g., `vm-disk.qcow2`) ready for upload.
+
+{{< note >}}
+Specific commands for exporting disks may vary depending on your Proxmox storage backend and configuration.
+Refer to [Proxmox VE documentation](https://pve.proxmox.com/wiki/Qm_status) for details.
+{{< /note >}}
+
+## Step 2: Create a VMDisk for Upload
+
+Create a `VMDisk` resource in Cozystack with `source.upload` to prepare for image upload:
+
+```yaml
+apiVersion: apps.cozystack.io/v1alpha1
+kind: VMDisk
+metadata:
+ name: proxmox-vm-disk
+ namespace: tenant-root
+spec:
+ source:
+ upload: {}
+ storage: 10Gi
+ storageClass: replicated
+```
+
+Apply the manifest:
+
+```bash
+kubectl apply -f vmdisk-upload.yaml
+```
+
+Monitor the disk creation status:
+
+```bash
+kubectl get vmdisk -n tenant-root
+kubectl describe vmdisk proxmox-vm-disk -n tenant-root
+```
+
+## Step 3: Upload the Disk Image
+
+Once the VMDisk is created and ready for upload, use `virtctl` to upload the disk image:
+
+```bash
+virtctl image-upload dv vm-disk-proxmox-vm-disk \
+ -n tenant-root \
+ --image-path=./vm-disk.qcow2 \
+ --uploadproxy-url https://cdi-uploadproxy.example.org \
+ --insecure
+```
+
+{{< note >}}
+The DataVolume name follows the pattern `vm-disk-<vmdisk-name>`.
+If your VMDisk is named `proxmox-vm-disk`, the DataVolume will be `vm-disk-proxmox-vm-disk`.
+{{< /note >}}
+
+Wait for the upload to complete. You can monitor the progress:
+
+```bash
+kubectl get dv -n tenant-root
+kubectl describe dv vm-disk-proxmox-vm-disk -n tenant-root
+```
+
+The upload is complete when the status shows `Succeeded`.
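+
+Alternatively, block until CDI marks the DataVolume ready instead of polling (CDI sets a `Ready` condition on DataVolumes):
+
+```bash
+kubectl wait dv vm-disk-proxmox-vm-disk -n tenant-root \
+  --for=condition=Ready --timeout=30m
+```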
+
+## Step 4: Create a VMInstance
+
+After the disk upload is complete, create a VMInstance to boot from the uploaded disk:
+
+```yaml
+apiVersion: apps.cozystack.io/v1alpha1
+kind: VMInstance
+metadata:
+ name: migrated-vm
+ namespace: tenant-root
+spec:
+ running: true
+ instanceType: u1.medium
+ disks:
+ - name: proxmox-vm-disk
+ # Optional: configure network, cloud-init, etc.
+```
+
+Apply the manifest:
+
+```bash
+kubectl apply -f vminstance.yaml
+```
+
+Verify the VM is running:
+
+```bash
+kubectl get vm -n tenant-root
+kubectl get vmi -n tenant-root
+```
+
+## Step 5: Access the Migrated VM
+
+Access the VM console using virtctl:
+
+```bash
+# Serial console
+virtctl console vm-instance-migrated-vm -n tenant-root
+
+# VNC access
+virtctl vnc vm-instance-migrated-vm -n tenant-root
+
+# SSH (if configured)
+virtctl ssh user@vm-instance-migrated-vm -n tenant-root
+```
+
+## Migration Checklist
+
+Use this checklist to track your migration progress:
+
+- [ ] Export VM disks from Proxmox (qcow2 or compatible format)
+- [ ] Install `virtctl` on your local machine
+- [ ] Configure upload proxy access in Cozystack
+- [ ] Add DNS/hosts entry for upload proxy (if needed)
+- [ ] Create VMDisk with `source.upload` in Cozystack
+- [ ] Upload disk image using `virtctl image-upload`
+- [ ] Wait for upload to complete (status: Succeeded)
+- [ ] Create VMInstance with the uploaded disk
+- [ ] Verify VM boots successfully
+- [ ] Test VM connectivity and functionality
+
+## Troubleshooting
+
+### Upload Fails with Connection Error
+
+**Problem:** `virtctl image-upload` fails with connection refused or timeout.
+
+**Solution:**
+- Verify upload proxy is accessible: `curl -k https://cdi-uploadproxy.example.org`
+- Check `/etc/hosts` entry matches the upload proxy IP
+- Ensure Cozystack ConfigMap has `expose-services: "dashboard,cdi-uploadproxy"`
+
+### Upload Stuck at 0%
+
+**Problem:** Upload starts but never progresses.
+
+**Solution:**
+- Check DataVolume status: `kubectl describe dv vm-disk-<vmdisk-name> -n tenant-root`
+- Verify storage class has available capacity
+- Check CDI pod logs: `kubectl logs -n cozy-kubevirt-cdi -l app=cdi-uploadproxy`
+
+### VM Fails to Boot After Migration
+
+**Problem:** The VM starts but fails to boot properly.
+
+**Solution:**
+- Check VM disk is attached as the first disk in VMInstance spec
+- Verify disk format is compatible (qcow2, raw)
+- Review VM logs: `virtctl console vm-instance-<vm-name> -n tenant-root`
+- Ensure VM drivers are compatible with KubeVirt (VirtIO recommended)
+
+## Next Steps
+
+After successful migration:
+
+- Configure [cloud-init]({{% ref "/docs/virtualization/virtual-machine" %}}) for automated VM setup
+- Review [instance types and profiles]({{% ref "/docs/virtualization/resources" %}}) for optimal resource allocation
+- Consider creating [golden images]({{% ref "/docs/virtualization/vm-image" %}}) for future VM deployments