**`docs/baremetal/index.md`**

# Baremetal Automation

The bare metal automation in IronCore is designed to provide a comprehensive solution for managing physical servers
in a Kubernetes-native way. It leverages the power of Kubernetes Custom Resource Definitions (CRDs) to automate:

- **Discovery**: Automatically detect and register bare metal servers.
- **Provisioning**: Deploy and configure servers using Ignition.
- **Day-2 Operations**: Manage BIOS, firmware, and hardware inventory declaratively.
- **3rd Party Integrations**: Seamlessly integrate with existing vendor-specific management tools.
- **Kubernetes Support**: Run Kubernetes on bare metal servers with support for Cluster API and Gardener.

## Core Components

The core components of the bare metal automation in IronCore include:

- [**Metal Operator**](https://github.com/ironcore-dev/metal-operator): The central component that manages the lifecycle of bare metal servers.
- [**Boot Operator**](https://github.com/ironcore-dev/boot-operator): An iPXE and HTTP boot server that provides boot images and Ignition configurations.
- [**FeDHCP**](https://github.com/ironcore-dev/fedhcp): A DHCP server that provides in-band and out-of-band network configuration to bare metal servers.

## Concepts and Usage Guides

Usage guides and concepts for the `metal-operator` API types can be found in the [metal-operator documentation](https://ironcore-dev.github.io/metal-operator/concepts/).

## Prerequisites

The current implementation of the bare metal automation in IronCore requires the following prerequisites:

- In-band and out-of-band network connectivity to the bare metal servers.
- A management server in the out-of-band network that can communicate with the bare metal servers.
**`docs/baremetal/kubernetes/gardener.md`**

Apart from the [Cluster API Provider for Bare Metal](/baremetal/kubernetes/capi), IronCore also integrates with
[Gardener](https://gardener.cloud), a Kubernetes-native project for managing Kubernetes clusters at scale.

There are two main components in the Gardener integration with IronCore:

- **Machine Controller Manager (MCM)**: This component is responsible for managing the lifecycle of machines in a Kubernetes cluster. It uses the `metal-operator` API types to provision and manage bare metal servers.
- **Gardener Extension Provider**: This component provides the necessary integration points for Gardener to manage bare metal clusters.

## Machine Controller Manager (MCM)

The [machine-controller-manager-provider-ironcore](https://github.com/ironcore-dev/machine-controller-manager-provider-ironcore-metal)
is responsible for managing the lifecycle of `Nodes` in a Kubernetes cluster. In essence, the MCM translates a
Gardener `Machine` resource into `ServerClaims` and wraps the `user-data` coming from the Gardener OS extensions into
an Ignition `Secret`.
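
As an illustration of this translation, a `ServerClaim` referencing an Ignition `Secret` could look roughly like the following sketch (the field names and values are assumptions for illustration, not an excerpt from the `metal-operator` API reference):

```yaml
# Sketch only — field names and values are illustrative assumptions.
apiVersion: metal.ironcore.dev/v1alpha1
kind: ServerClaim
metadata:
  name: shoot-worker-0
  namespace: metal-system
spec:
  power: "On"
  ignitionSecretRef:
    name: shoot-worker-0-ignition  # wraps the user-data from the Gardener OS extension
```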
The [`gardener-extension-provider-ironcore-metal`](https://github.com/ironcore-dev/gardener-extension-provider-ironcore-metal)
is responsible for providing the necessary integration points for Gardener to manage bare metal clusters.

Those integration points include:

- Configure the [Cloud Controller Manager](/baremetal/kubernetes/cloud-controller-manager) to handle the `Node` lifecycle
and topology information.
- Configure the [metal-load-balancer-controller](/baremetal/kubernetes/metal-loadbalancer-controller) to handle `Services` of type `LoadBalancer`.
- Configure the [Machine Controller Manager (MCM)](#machine-controller-manager-mcm) to manage the creation of `Nodes` in the cluster.
**`docs/baremetal/kubernetes/metal-loadbalancer-controller.md`**

The [metal-loadbalancer-controller](https://github.com/ironcore-dev/metal-load-balancer-controller) is responsible
for managing the lifecycle of `Services` of type `LoadBalancer` in a Kubernetes cluster running on bare metal servers.
The project consists of two main components:

- **Controller**: The main component that watches for changes in `Service` resources and manages the lifecycle of load balancers.
- **Speaker**: Responsible for announcing the load balancer IP address to `metalbond`, which acts as a route reflector to the bare metal servers.

The `metal-loadbalancer-controller` is designed to work in an IPv6-only environment.

## Controller

The controller component has the following responsibilities:

- Watches for changes in `Service` resources of type `LoadBalancer`, takes the `ClusterIP` of a `Service`, and patches the `LoadBalancer` status using this `ClusterIP`.
- Sets the `PodCIDRs` on the `Node` resources to ensure that the load balancer can route traffic to the pods. Here it takes the main `Node` IP address and the configured `node-cidr-mask-size` and patches the `Node.spec.podCIDRs` field.
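
As a sketch of that second responsibility (addresses and mask size are illustrative assumptions), a `Node` patched by the controller could end up looking like:

```yaml
# Illustrative values only — the actual prefix is derived from the
# main Node IP address and the configured node-cidr-mask-size.
apiVersion: v1
kind: Node
metadata:
  name: worker-0
spec:
  podCIDRs:
    - 2001:db8:1::/80
```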
* Create a branch from `main` using the `git checkout` command.

!!! note
    If needed, rebase to the current `main` branch before submitting your pull request. If it doesn't merge properly
    with `main` you may be asked to rebase your changes.

```shell
git checkout -b my_feature
# rebase if necessary
git fetch upstream main
git rebase upstream/main
```

* Commits should be as small as possible, while ensuring that each commit is correct independently

* Commit your changes to your feature branch and push it to your fork.

```shell
git add .
```

for a review in the pull request or a comment.

## Issues and Planning

We use GitHub issues to track bugs and enhancement requests. Please provide as much context as possible when you open an issue. The information you provide must be comprehensive enough for the assignee to understand and reproduce the behavior and to find related reports of the issue.
Therefore, contributors may use but aren't restricted to the issue template provided by the IronCore maintainers.
**`docs/iaas/architecture/networking.md`**

## Overview

IronCore's virtual networking architecture provides an end-to-end virtual networking solution for provisioned `Machine`s running in data centers, regardless of whether they are bare metal machines or virtual machines. It is designed to enable a robust, flexible, and performant networking control plane and data plane.

- **Robust**: IronCore's virtual networking control plane is mainly implemented using the Kubernetes controller model. Thus, it is able to survive component failures and recover the running state by retrieving the desired networking configuration.
- **Flexible**: Thanks to the modular and layered architecture design, IronCore's virtual networking solution allows developers to implement and interchange components, from the top-level data center management system built upon the defined IronCore APIs down to the lowest-level packet processing engines, depending on the hardware used.
IronCore's virtual networking architecture is illustrated with the following figure:

The main elements involved in IronCore's networking are:

- [**ironcore**](https://github.com/ironcore-dev/ironcore): Core networking component that manages network resources and configurations.
- [**ironcore-net**](https://github.com/ironcore-dev/ironcore-net): Global coordination service that manages network resources in an IronCore instance.
- [**metalnet**](https://github.com/ironcore-dev/metalnet): A service that provides cluster-level networking capabilities for `Machines`.
- [**dpservice**](https://github.com/ironcore-dev/dpservice): A service that manages data plane operations, including network traffic routing and policies.
- [**metalbond**](https://github.com/ironcore-dev/metalbond): A component that handles route announcements in an IronCore instance, ensuring that networking routes are correctly propagated across the IronCore installation.

## `ironcore` and `ironcore-net`

`ironcore-net` is a global coordination service within an IronCore installation. Therefore, it is a single instance and
the place where all network-related decisions, like the reservation of unique IP addresses or the allocation of unique network IDs, are made.

Apart from its [own API](https://github.com/ironcore-dev/ironcore-net/tree/main/api/core/v1alpha1), `ironcore-net` has two main components:

- **apinetlet**: This component is responsible for translating the user-facing API objects from the `networking` resource group into the internal representation used by `ironcore-net`.
- **metalnetlet**: This component interfaces with the `metalnet` API to manage cluster-level networking resources like `NetworkInterface`, which are requested globally in the `ironcore-net` API but are implemented by `metalnet` on a hypervisor level.
### Example `apinetlet` flow

The `apinetlet` will reconcile this `VirtualIP` by performing the following steps:

1. Create an `IP` object in the `ironcore-net` API, which reserves a unique IP address.
2. Update the `VirtualIP` status with the allocated IP address.
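
For orientation, a minimal `VirtualIP` manifest of the kind being reconciled might look like this sketch (the API group/version and field values are assumptions and should be checked against the ironcore API reference):

```yaml
# Sketch — verify group/version and fields against the ironcore API.
apiVersion: networking.ironcore.dev/v1alpha1
kind: VirtualIP
metadata:
  name: my-virtual-ip
spec:
  type: Public    # request a globally unique, publicly routable IP
  ipFamily: IPv4
```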

The IronCore API server is agnostic of how the underlying global IP address is allocated and delegates this responsibility
to `ironcore-net`.

A similar flow happens for `Network`, `LoadBalancer` and `NatGateway` resources, where the `apinetlet` is responsible
for translating and allocating the necessary resources in `ironcore-net` to ensure that the networking requirements are met.
**`docs/iaas/architecture/runtime-interface.md`**

# IronCore Runtime Interface (IRI)

The IronCore Runtime Interface (IRI) is a key concept in the IronCore architecture, designed to provide a consistent
and unified interface for interacting with various compute and storage providers. The IRI abstracts the underlying
complexities of different providers.
There are three main runtime interfaces in IronCore:

Implementations of these interfaces are done by provider-specific components. More information about the providers can
be found in the [provider concept documentation](/iaas/architecture/providers/).

The definition of the runtime interfaces can be found in IronCore's [`iri` package](https://github.com/ironcore-dev/ironcore/tree/main/iri/).

## MachineRuntime Interface

```
service MachineRuntime {
  // … (method definitions elided in this excerpt)
}
```

The general idea is that a `machinepoollet` ensures that the API-level dependencies are met. For example, a `Machine`'s `Volume` that is used as a root disk must be in the state `Available`. If those prerequisites are met, the `poollet` will call the corresponding `CreateMachine` method of the `RuntimeInterface` to create the `Machine` resource.

The `ListMachines` and `Status` methods are used to retrieve a list of all `Machine` instances managed by the provider.
The result of those methods is then used to propagate `Machine` state changes. Those methods are periodically called by
the `machinepoollet`, which also invokes methods to attach volumes or network interfaces to a `Machine` if a change in the
desired state requires it.
## VolumeRuntime Interface

Similar to the `MachineRuntime`, the `VolumeRuntime` interface is responsible for managing block storage resources in IronCore.
Here the `volumepoollet` takes a similar role as the `machinepoollet` for the `MachineRuntime` and invokes `CreateVolume`,
`DeleteVolume`, `ExpandVolume`, and other methods to manage `Volume` resources.
**`docs/iaas/kubernetes/cloud-controller-manager.md`**

Below is the detailed explanation of how the APIs are implemented by the IronCore `cloud-provider`:

`InstanceMetadata` returns metadata of a node instance, which includes:

- `ProviderID`: A combination that includes the provider name (which is set to `IronCore`).
- `InstanceType`: Set to the name of the `MachineClass` referenced by the instance.
- `NodeAddresses`: Node addresses are calculated from the IP information available from the `NetworkInterfaces` of the machine.
- `Zone`: Set to the name of the referenced `MachinePool`.

## Load balancing for Services of type LoadBalancer
… explanation of how the APIs are implemented in the IronCore cloud provider.

### Ensure LoadBalancer

- `EnsureLoadBalancer` gets the LoadBalancer name based on the service name.
- Checks if an IronCore `LoadBalancer` object already exists. If not, it gets the `port`, `protocol`, and `ipFamily` information from the service and creates a new `LoadBalancer` object in IronCore.
- The newly created `LoadBalancer` will be associated with the `Network` reference provided in the cloud configuration.
- Then a `LoadBalancerRouting` object is created with the destination IP information retrieved from the nodes (note: `LoadBalancerRouting` is an object internal to IronCore). Later, this information is used at the IronCore API level to describe the explicit targets in a pool that traffic is routed to.
- IronCore supports two types of `LoadBalancer`: `Public` and `Internal`. If the LoadBalancer has to be of type `Internal`, the `service.beta.kubernetes.io/ironcore-load-balancer-internal` annotation needs to be set to `true`; otherwise it will be considered of type `Public`.
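
Putting the annotation into context, an internal load balancer can be requested on a regular Kubernetes `Service`, for example (name, selector, and ports are placeholder values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # set to "true" for an Internal LoadBalancer; omit for Public
    service.beta.kubernetes.io/ironcore-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```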