Kubernetes Cluster API Provider GCP
Kubernetes-native declarative infrastructure for GCP.
What is the Cluster API Provider GCP?
The Cluster API brings declarative Kubernetes-style APIs to cluster creation, configuration and management. The API itself is shared across multiple cloud providers allowing for true Google Cloud hybrid deployments of Kubernetes.
Documentation
Please see our book for in-depth documentation.
Quick Start
Check out our Cluster API Quick Start to create your first Kubernetes cluster on Google Cloud Platform using Cluster API.
Support Policy
This provider’s versions are compatible with the following versions of Cluster API:
| | Cluster API v1alpha3 (v0.3.x) | Cluster API v1alpha4 (v0.4.x) | Cluster API v1beta1 (v1.0.x) |
|---|---|---|---|
| Google Cloud Provider v0.3.x | ✓ | | |
| Google Cloud Provider v0.4.x | | ✓ | |
| Google Cloud Provider v1.0.x | | | ✓ |
This provider’s versions are able to install and manage the following versions of Kubernetes:
| | Google Cloud Provider v0.3.x | Google Cloud Provider v0.4.x | Google Cloud Provider v1.0.x |
|---|---|---|---|
| Kubernetes 1.15 | | | |
| Kubernetes 1.16 | ✓ | | |
| Kubernetes 1.17 | ✓ | ✓ | |
| Kubernetes 1.18 | ✓ | ✓ | ✓ |
| Kubernetes 1.19 | ✓ | ✓ | ✓ |
| Kubernetes 1.20 | ✓ | ✓ | ✓ |
| Kubernetes 1.21 | | ✓ | ✓ |
| Kubernetes 1.22 | | | ✓ |
Each version of Cluster API for Google Cloud will attempt to support at least two versions of Kubernetes, e.g., Cluster API Provider GCP v0.1 may support Kubernetes 1.13 and 1.14.
NOTE: As the versioning for this project is tied to the versioning of Cluster API, future modifications to this policy may be made to more closely align with other providers in the Cluster API ecosystem.
Getting Involved and Contributing
Are you interested in contributing to cluster-api-provider-gcp? We, the maintainers and the community, would love your suggestions, support, and contributions! The maintainers of the project can be contacted anytime to learn how to get involved.
Before starting with a contribution, please go through the prerequisites of the project.
To set up the development environment, check out the development guide.
In the interest of getting new people involved, we have issues marked as good first issue. Although these issues have a smaller scope, they are very helpful in getting acquainted with the codebase.
For more, see the issue tracker. If you’re unsure where to start, feel free to reach out to discuss.
See also: Our own contributor guide and the Kubernetes community page.
We also encourage ALL active community participants to act as if they are maintainers, even if you don’t have ‘official’ written permissions. This is a community effort and we are here to serve the Kubernetes community. If you have an active interest and you want to get involved, you have real power!
Office hours
- Join the SIG Cluster Lifecycle Google Group for access to documents and calendars.
- Participate in the conversations on Kubernetes Discuss
- Provider implementers office hours (CAPI)
- Weekly on Wednesdays @ 10:00 am PT (Pacific Time) on Zoom
- Previous meetings: [ notes | recordings ]
- Cluster API Provider GCP office hours (CAPG)
- Monthly on first Thursday @ 09:00 am PT (Pacific Time) on Zoom
- Previous meetings: [ notes|recordings ]
Other ways to communicate with the contributors
Please check in with us in the #cluster-api-gcp channel on Slack.
GitHub Issues
Bugs
If you think you have found a bug, please follow the instructions below.
- Please spend a small amount of time doing due diligence on the issue tracker. Your issue might be a duplicate.
- Collect the logs from the custom controllers and paste them in the issue (see the example command after this list).
- Open a bug report.
- Remember users might be searching for the issue in the future, so please make sure to give it a meaningful title to help others.
- Feel free to reach out to the community on Slack.
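For example, a minimal way to collect the controller logs, sketched under the assumption of a default clusterctl installation (where the provider controllers typically run as the capg-controller-manager deployment in the capg-system namespace and capi-controller-manager in capi-system):
# CAPG controller logs (namespace/deployment names assume a default install)
kubectl logs -n capg-system deployment/capg-controller-manager --tail=500
# Core Cluster API controller logs
kubectl logs -n capi-system deployment/capi-controller-manager --tail=500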
Tracking new features
We also have an issue tracker to track features. If you have a feature idea that could make Cluster API Provider GCP even more awesome, then follow these steps.
- Open a feature request.
- Remember users might be searching for the issue in the future, so please make sure to give it a meaningful title to help others.
- Clearly define the use case with concrete examples, e.g.: I type this and cluster-api-provider-gcp does that.
- Some of our larger features will require some design. If you would like to include a technical design in your feature request, please go ahead.
- After the new feature is well understood and the design is agreed upon, we can start coding the feature. We would love for you to code it. So please open up a WIP (work in progress) PR and happy coding!
Code of conduct
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.
Quick Start
In this tutorial we’ll cover the basics of how to use Cluster API to create one or more Kubernetes clusters.
Installation
There are two major quickstart paths: Using clusterctl or the Cluster API Operator.
This article describes a path that uses the clusterctl CLI tool to handle the lifecycle of a Cluster API management cluster.
The clusterctl command line interface is specifically designed for providing a simple “day 1 experience” and a quick start with Cluster API. It automates fetching the YAML files defining provider components and installing them.
Additionally, it encodes a set of best practices in managing providers that helps users avoid misconfigurations and manage day 2 operations such as upgrades.
The Cluster API Operator is a Kubernetes Operator built on top of clusterctl and designed to empower cluster administrators to handle the lifecycle of Cluster API providers within a management cluster using a declarative approach. It aims to improve user experience in deploying and managing Cluster API, making it easier to handle day-to-day tasks and automate workflows with GitOps. Visit the CAPI Operator quickstart if you want to experiment with this tool.
Common Prerequisites
Install and/or configure a Kubernetes cluster
Cluster API requires an existing Kubernetes cluster accessible via kubectl. During the installation process the Kubernetes cluster will be transformed into a management cluster by installing the Cluster API provider components, so it is recommended to keep it separated from any application workload.
It is a common practice to create a temporary, local bootstrap cluster which is then used to provision a target management cluster on the selected infrastructure provider.
Choose one of the options below:
- Existing Management Cluster
For production use-cases a “real” Kubernetes cluster should be used with appropriate backup and disaster recovery policies and procedures in place. The Kubernetes cluster must be at least v1.20.0.
export KUBECONFIG=<...>
OR
- Kind
kind can be used for creating a local Kubernetes cluster for development environments or for the creation of a temporary bootstrap cluster used to provision a target management cluster on the selected infrastructure provider.
The installation procedure depends on the version of kind; if you are planning to use the Docker infrastructure provider, please follow the additional instructions in the dedicated tab:
Create the kind cluster:
kind create cluster
Test to ensure the local kind cluster is ready:
kubectl cluster-info
Run the following command to create a kind config file for allowing the Docker provider to access Docker on the host:
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
Then follow the instructions for your kind version and create the management cluster using the above file:
kind create cluster --config kind-cluster-with-extramounts.yaml
Create the Kind Cluster
KubeVirt is a cloud native virtualization solution. The virtual machines we're going to create and use for the workload cluster's nodes are actually running within pods in the management cluster. In order to communicate with the workload cluster's API server, we'll need to expose it. We are using kind, which is a limited environment. The easiest way to expose the workload cluster's API server (a pod within a node running in a VM that is itself running within a pod in the management cluster, which is running inside a Docker container) is to use a LoadBalancer service.
To allow using a LoadBalancer service, we can't use kind's default CNI (kindnet); we'll need to install another CNI, like Calico. In order to do that, we first need to create the kind cluster with two modifications:
- Disable the default CNI
- Add the Docker credentials to the cluster, to avoid Docker Hub's pull rate limit when pulling the Calico images; read more about it in the Docker documentation and in the kind documentation.
Create a configuration file for kind. Please note the Docker config file path and adjust it to your local setup:
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # the default CNI will not be installed
  disableDefaultCNI: true
nodes:
- role: control-plane
  extraMounts:
   - containerPath: /var/lib/kubelet/config.json
     hostPath: <YOUR DOCKER CONFIG FILE PATH>
EOF
Now, create the kind cluster with the configuration file:
kind create cluster --config=kind-config.yaml
Test to ensure the local kind cluster is ready:
kubectl cluster-info
Install the Calico CNI
Now we'll need to install a CNI. In this example, we're using Calico, but other CNIs should work as well. Please see the Calico installation guide for more details (use the "Manifest" tab). Below is an example of how to install Calico version v3.24.4.
Use the Calico manifest to create the required resources; e.g.:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml
Install clusterctl
The clusterctl CLI tool handles the lifecycle of a Cluster API management cluster.
Install clusterctl binary with curl on Linux
If you are unsure, you can determine your computer's architecture by running uname -a
Download for AMD64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-linux-amd64 -o clusterctl
Download for ARM64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-linux-arm64 -o clusterctl
Download for PPC64LE:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-linux-ppc64le -o clusterctl
Install clusterctl:
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
Install clusterctl binary with curl on macOS
If you are unsure, you can determine your computer's architecture by running uname -a
Download for AMD64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-darwin-amd64 -o clusterctl
Download for M1 CPU (”Apple Silicon”) / ARM64:
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-darwin-arm64 -o clusterctl
Make the clusterctl binary executable.
chmod +x ./clusterctl
Move the binary into your PATH.
sudo mv ./clusterctl /usr/local/bin/clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
Install clusterctl with homebrew on macOS and Linux
Install the latest release using homebrew:
brew install clusterctl
Test to ensure the version you installed is up-to-date:
clusterctl version
Install clusterctl binary with curl on Windows using PowerShell
Go to the working directory where you want clusterctl downloaded.
Download the latest release; on Windows, type:
curl.exe -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.6.3/clusterctl-windows-amd64.exe -o clusterctl.exe
Append or prepend the path of that directory to the PATH environment variable.
Test to ensure the version you installed is up-to-date:
clusterctl.exe version
Initialize the management cluster
Now that we've got clusterctl installed and all the prerequisites in place, let's transform the Kubernetes cluster into a management cluster by using clusterctl init.
The command accepts as input a list of providers to install; when executed for the first time, clusterctl init automatically adds the cluster-api core provider to the list, and if unspecified, it also adds the kubeadm bootstrap and kubeadm control-plane providers.
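As a minimal illustration (the provider-specific prerequisites for each infrastructure are covered below), initializing with the GCP infrastructure provider would look like this, with the core, kubeadm bootstrap, and kubeadm control-plane providers added automatically:
clusterctl init --infrastructure gcp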
Enabling Feature Gates
Feature gates can be enabled by exporting environment variables before executing clusterctl init.
For example, the ClusterTopology feature, which is required to enable support for managed topologies and ClusterClass, can be enabled via:
export CLUSTER_TOPOLOGY=true
Additional documentation about experimental features can be found in Experimental Features.
Initialization for common providers
Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied before getting started with Cluster API. See below for the expected settings for common providers.
Download the latest binary of clusterawsadm from the AWS provider releases. The clusterawsadm command line utility assists with identity and access management (IAM) for Cluster API Provider AWS.
Download the latest release; on Linux, type:
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v2.4.1/clusterawsadm-linux-amd64 -o clusterawsadm
Make it executable
chmod +x clusterawsadm
Move the binary to a directory present in your PATH
sudo mv clusterawsadm /usr/local/bin
Check version to confirm installation
clusterawsadm version
Example Usage
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
Download the latest release; on macOS, type:
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v2.4.1/clusterawsadm-darwin-amd64 -o clusterawsadm
Or if your Mac has an M1 CPU (”Apple Silicon”):
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v2.4.1/clusterawsadm-darwin-arm64 -o clusterawsadm
Make it executable
chmod +x clusterawsadm
Move the binary to a directory present in your PATH
sudo mv clusterawsadm /usr/local/bin
Check version to confirm installation
clusterawsadm version
Example Usage
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
Install the latest release using homebrew:
brew install clusterawsadm
Check version to confirm installation
clusterawsadm version
Example Usage
export AWS_REGION=us-east-1 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
Download the latest release; on Windows, type:
curl.exe -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v2.4.1/clusterawsadm-windows-amd64.exe -o clusterawsadm.exe
Append or prepend the path of that directory to the PATH environment variable.
Check version to confirm installation
clusterawsadm.exe version
Example Usage in Powershell
$Env:AWS_REGION="us-east-1" # This is used to help encode your environment variables
$Env:AWS_ACCESS_KEY_ID="<your-access-key>"
$Env:AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
$Env:AWS_SESSION_TOKEN="<session-token>" # If you are using Multi-Factor Auth.
# The clusterawsadm utility takes the credentials that you set as environment
# variables and uses them to create a CloudFormation stack in your AWS account
# with the correct IAM resources.
clusterawsadm bootstrap iam create-cloudformation-stack
# Create the base64 encoded credentials using clusterawsadm.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
$Env:AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
# Finally, initialize the management cluster
clusterctl init --infrastructure aws
See the AWS provider prerequisites document for more details.
For more information about authorization, AAD, or requirements for Azure, visit the Azure provider prerequisites document.
export AZURE_SUBSCRIPTION_ID="<SubscriptionId>"
# Create an Azure Service Principal and paste the output here
export AZURE_TENANT_ID="<Tenant>"
export AZURE_CLIENT_ID="<AppId>"
export AZURE_CLIENT_SECRET="<Password>"
# Base64 encode the variables
export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "$AZURE_SUBSCRIPTION_ID" | base64 | tr -d '\n')"
export AZURE_TENANT_ID_B64="$(echo -n "$AZURE_TENANT_ID" | base64 | tr -d '\n')"
export AZURE_CLIENT_ID_B64="$(echo -n "$AZURE_CLIENT_ID" | base64 | tr -d '\n')"
export AZURE_CLIENT_SECRET_B64="$(echo -n "$AZURE_CLIENT_SECRET" | base64 | tr -d '\n')"
# Settings needed for AzureClusterIdentity used by the AzureCluster
export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
export CLUSTER_IDENTITY_NAME="cluster-identity"
export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"
# Create a secret to include the password of the Service Principal identity created in Azure
# This secret will be referenced by the AzureClusterIdentity used by the AzureCluster
kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}" --namespace "${AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE}"
# Finally, initialize the management cluster
clusterctl init --infrastructure azure
Create a file named cloud-config in the repo’s root directory, substituting in your own environment’s values
[Global]
api-url = <cloudstackApiUrl>
api-key = <cloudstackApiKey>
secret-key = <cloudstackSecretKey>
Create the base64 encoded credentials by catting your credentials file. This command uses your environment variables and encodes them in a value to be stored in a Kubernetes Secret.
export CLOUDSTACK_B64ENCODED_SECRET=`cat cloud-config | base64 | tr -d '\n'`
Finally, initialize the management cluster
clusterctl init --infrastructure cloudstack
export DIGITALOCEAN_ACCESS_TOKEN=<your-access-token>
export DO_B64ENCODED_CREDENTIALS="$(echo -n "${DIGITALOCEAN_ACCESS_TOKEN}" | base64 | tr -d '\n')"
# Initialize the management cluster
clusterctl init --infrastructure digitalocean
The Docker provider requires the ClusterTopology and MachinePool features to deploy ClusterClass-based clusters.
We are only supporting ClusterClass-based cluster templates in this quickstart, as ClusterClass makes it possible to adapt configuration based on the Kubernetes version. This is required to install Kubernetes clusters < v1.24 and for the upgrade from v1.23 to v1.24, as we have to use different cgroupDrivers depending on the Kubernetes version.
# Enable the experimental Cluster topology feature.
export CLUSTER_TOPOLOGY=true
# Enable the experimental Machine Pool feature
export EXP_MACHINE_POOL=true
# Initialize the management cluster
clusterctl init --infrastructure docker
In order to initialize the Equinix Metal Provider (formerly Packet), you have to set the environment variable PACKET_API_KEY. This variable is used to authorize the infrastructure provider manager against the Equinix Metal API. You can retrieve your token directly from the Equinix Metal Console.
export PACKET_API_KEY="34ts3g4s5g45gd45dhdh"
clusterctl init --infrastructure packet
# Create the base64 encoded credentials by catting your credentials json.
# This command uses your environment variables and encodes
# them in a value to be stored in a Kubernetes Secret.
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
# Finally, initialize the management cluster
clusterctl init --infrastructure gcp
Please visit the Hetzner project.
Please visit the Hivelocity project.
In order to initialize the IBM Cloud Provider, you have to set the environment variable IBMCLOUD_API_KEY. This variable is used to authorize the infrastructure provider manager against the IBM Cloud API. To create one from the UI, refer here.
export IBMCLOUD_API_KEY=<your_api_key>
# Finally, initialize the management cluster
clusterctl init --infrastructure ibmcloud
# Initialize the management cluster
clusterctl init --infrastructure k0sproject-k0smotron
# Initialize the management cluster
clusterctl init --infrastructure kubekey
Please visit the KubeVirt project for more information.
As described above, we want to use a LoadBalancer service in order to expose the workload cluster's API server. In the example below, we will use the MetalLB solution to implement load balancing for our kind cluster. Other solutions should work as well.
Install MetalLB for load balancing
Install MetalLB, as described here; for example:
METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m
Now, we'll create the IPAddressPool and the L2Advertisement custom resources. The script below creates the CRs with the right addresses, matching the kind cluster addresses:
GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
cat <<EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: capi-ip-pool
namespace: metallb-system
spec:
addresses:
- 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: empty
namespace: metallb-system
EOF
Install KubeVirt on the kind cluster
# get KubeVirt version
KV_VER=$(curl "https://api.github.com/repos/kubevirt/kubevirt/releases/latest" | jq -r ".tag_name")
# deploy required CRDs
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-operator.yaml"
# deploy the KubeVirt custom resource
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-cr.yaml"
kubectl wait -n kubevirt kv kubevirt --for=condition=Available --timeout=10m
Initialize the management cluster with the KubeVirt Provider
clusterctl init --infrastructure kubevirt
Please visit the Metal3 project.
Please follow the Cluster API Provider for Nutanix Getting Started Guide
Please follow the Cluster API Provider for Oracle Cloud Infrastructure (OCI) Getting Started Guide
# Initialize the management cluster
clusterctl init --infrastructure openstack
export OSC_SECRET_KEY=<your-secret-key>
export OSC_ACCESS_KEY=<your-access-key>
export OSC_REGION=<your-region>
# Create namespace
kubectl create namespace cluster-api-provider-outscale-system
# Create secret
kubectl create secret generic cluster-api-provider-outscale --from-literal=access_key=${OSC_ACCESS_KEY} --from-literal=secret_key=${OSC_SECRET_KEY} --from-literal=region=${OSC_REGION} -n cluster-api-provider-outscale-system
# Initialize the management cluster
clusterctl init --infrastructure outscale
First, we need to add the IPAM provider to your clusterctl config file ($XDG_CONFIG_HOME/cluster-api/clusterctl.yaml):
providers:
- name: in-cluster
url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
type: IPAMProvider
# The host for the Proxmox cluster
export PROXMOX_URL="https://pve.example:8006"
# The Proxmox token ID to access the remote Proxmox endpoint
export PROXMOX_TOKEN='root@pam!capi'
# The secret associated with the token ID
# You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
# bash history
export PROXMOX_SECRET="1234-1234-1234-1234"
# Finally, initialize the management cluster
clusterctl init --infrastructure proxmox --ipam in-cluster
For more information about the CAPI provider for Proxmox, see the Proxmox project.
Please follow the Cluster API Provider for Cloud Director Getting Started Guide
EXP_CLUSTER_RESOURCE_SET: "true"
# Initialize the management cluster
clusterctl init --infrastructure vcd
clusterctl init --infrastructure vcluster
Please follow the Cluster API Provider for vcluster Quick Start Guide
# Initialize the management cluster
clusterctl init --infrastructure virtink
# The username used to access the remote vSphere endpoint
export VSPHERE_USERNAME="vi-admin@vsphere.local"
# The password used to access the remote vSphere endpoint
# You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
# bash history
export VSPHERE_PASSWORD="admin!23"
# Finally, initialize the management cluster
clusterctl init --infrastructure vsphere
For more information about prerequisites, credentials management, or permissions for vSphere, see the vSphere project.
The output of clusterctl init is similar to this:
Fetching providers
Installing cert-manager Version="v1.11.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.0.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v1.0.0" TargetNamespace="capd-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
Create your first workload cluster
Once the management cluster is ready, you can create your first workload cluster.
Preparing the workload cluster configuration
The clusterctl generate cluster command returns a YAML template for creating a workload cluster.
Required configuration for common providers
Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied before configuring a cluster with Cluster API. Instructions are provided for common providers below.
Otherwise, you can look at the clusterctl generate cluster command documentation for details about how to discover the list of variables required by a cluster template.
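For example, a hedged illustration using the GCP provider (substitute your own infrastructure provider and, if needed, a --flavor):
clusterctl generate cluster capi-quickstart --infrastructure gcp --list-variables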
export AWS_REGION=us-east-1
export AWS_SSH_KEY_NAME=default
# Select instance types
export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
export AWS_NODE_MACHINE_TYPE=t3.large
See the AWS provider prerequisites document for more details.
# Name of the Azure datacenter location. Change this value to your desired location.
export AZURE_LOCATION="centralus"
# Select VM types.
export AZURE_CONTROL_PLANE_MACHINE_TYPE="Standard_D2s_v3"
export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"
# [Optional] Select resource group. The default value is ${CLUSTER_NAME}.
export AZURE_RESOURCE_GROUP="<ResourceGroupName>"
A Cluster API compatible image must be available in your CloudStack installation. For instructions on how to build a compatible image see image-builder (CloudStack)
Prebuilt images can be found here
To see all required CloudStack environment variables execute:
clusterctl generate cluster --infrastructure cloudstack --list-variables capi-quickstart
In addition, the following CloudStack environment variables are required.
# Set this to the name of the zone in which to deploy the cluster
export CLOUDSTACK_ZONE_NAME=<zone name>
# The name of the network on which the VMs will reside
export CLOUDSTACK_NETWORK_NAME=<network name>
# The endpoint of the workload cluster
export CLUSTER_ENDPOINT_IP=<cluster endpoint address>
export CLUSTER_ENDPOINT_PORT=<cluster endpoint port>
# The service offering of the control plane nodes
export CLOUDSTACK_CONTROL_PLANE_MACHINE_OFFERING=<control plane service offering name>
# The service offering of the worker nodes
export CLOUDSTACK_WORKER_MACHINE_OFFERING=<worker node service offering name>
# The capi compatible template to use
export CLOUDSTACK_TEMPLATE_NAME=<template name>
# The ssh key to use to log into the nodes
export CLOUDSTACK_SSH_KEY_NAME=<ssh key name>
A full configuration reference can be found in configuration.md.
A ClusterAPI compatible image must be available in your DigitalOcean account. For instructions on how to build a compatible image see image-builder.
export DO_REGION=nyc1
export DO_SSH_KEY_FINGERPRINT=<your-ssh-key-fingerprint>
export DO_CONTROL_PLANE_MACHINE_TYPE=s-2vcpu-2gb
export DO_CONTROL_PLANE_MACHINE_IMAGE=<your-capi-image-id>
export DO_NODE_MACHINE_TYPE=s-2vcpu-2gb
export DO_NODE_MACHINE_IMAGE=<your-capi-image-id>
The Docker provider does not require additional configurations for cluster templates.
However, if you require special network settings you can set the following environment variables:
# The list of service CIDR, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.96.0.0/12"]
# The list of pod CIDR, default ["192.168.0.0/16"]
export POD_CIDR=["192.168.0.0/16"]
# The service domain, default "cluster.local"
export SERVICE_DOMAIN="k8s.test"
It is also possible, but not recommended, to disable the Pod Security Standard that is enabled by default:
export POD_SECURITY_STANDARD_ENABLED="false"
There are several required variables you need to set to create a cluster. There are also a few optional tunables if you’d like to change the OS or CIDRs used.
# Required (made up examples shown)
# The project where your cluster will be placed to.
# You have to get one from the Equinix Metal Console if you don't have one already.
export PROJECT_ID="2b59569f-10d1-49a6-a000-c2fb95a959a1"
# This can help to take advantage of automated, interconnected bare metal across our global metros.
export METRO="da"
# What plan to use for your control plane nodes
export CONTROLPLANE_NODE_TYPE="m3.small.x86"
# What plan to use for your worker nodes
export WORKER_NODE_TYPE="m3.small.x86"
# The ssh key you would like to use to access the nodes
export SSH_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvMgVEubPLztrvVKgNPnRe9sZSjAqaYj9nmCkgr4PdK username@computer"
export CLUSTER_NAME="my-cluster"
# Optional (defaults shown)
export NODE_OS="ubuntu_18_04"
export POD_CIDR="192.168.0.0/16"
export SERVICE_CIDR="172.26.0.0/16"
# Only relevant if using the kube-vip flavor
export KUBE_VIP_VERSION="v0.5.0"
# Name of the GCP datacenter location. Change this value to your desired location
export GCP_REGION="<GCP_REGION>"
export GCP_PROJECT="<GCP_PROJECT>"
# Make sure to use the same Kubernetes version here as was used to build the GCE image
export KUBERNETES_VERSION=1.23.3
# This is the image you built. See https://github.com/kubernetes-sigs/image-builder
export IMAGE_ID=projects/$GCP_PROJECT/global/images/<built image>
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=<GCP_NETWORK_NAME or default>
export CLUSTER_NAME="<CLUSTER_NAME>"
See the GCP provider for more information.
# Required environment variables for VPC
# VPC region
export IBMVPC_REGION=us-south
# VPC zone within the region
export IBMVPC_ZONE=us-south-1
# ID of the resource group in which the VPC will be created
export IBMVPC_RESOURCEGROUP=<your-resource-group-id>
# Name of the VPC
export IBMVPC_NAME=ibm-vpc-0
export IBMVPC_IMAGE_ID=<your-image-id>
# Profile for the virtual server instances
export IBMVPC_PROFILE=bx2-4x16
export IBMVPC_SSHKEY_ID=<your-sshkey-id>
# Required environment variables for PowerVS
export IBMPOWERVS_SSHKEY_NAME=<your-ssh-key>
# Internal and external IP of the network
export IBMPOWERVS_VIP=<internal-ip>
export IBMPOWERVS_VIP_EXTERNAL=<external-ip>
export IBMPOWERVS_VIP_CIDR=29
export IBMPOWERVS_IMAGE_NAME=<your-capi-image-name>
# ID of the PowerVS service instance
export IBMPOWERVS_SERVICE_INSTANCE_ID=<service-instance-id>
export IBMPOWERVS_NETWORK_NAME=<your-capi-network-name>
Please visit the IBM Cloud provider for more information.
Please visit the K0smotron provider for more information.
# Required environment variables
# The KKZONE is used to specify where to download the binaries. (e.g. "", "cn")
export KKZONE=""
# The ssh user name of the Linux user on all instances. (e.g. root, ubuntu)
export USER_NAME=<your-linux-user>
# The ssh password of the Linux user on all instances.
export PASSWORD=<your-linux-user-password>
# The ssh IP addresses of all instances. (e.g. "[{address: 192.168.100.3}, {address: 192.168.100.4}]")
export INSTANCES=<your-linux-ip-address>
# The cluster control plane VIP. (e.g. "192.168.100.100")
export CONTROL_PLANE_ENDPOINT_IP=<your-control-plane-virtual-ip>
Please visit the KubeKey provider for more information.
export CAPK_GUEST_K8S_VERSION="v1.23.10"
export CRI_PATH="/var/run/containerd/containerd.sock"
export NODE_VM_IMAGE_TEMPLATE="quay.io/capk/ubuntu-2004-container-disk:${CAPK_GUEST_K8S_VERSION}"
Please visit the KubeVirt project for more information.
Note: If you are running CAPM3 release prior to v0.5.0, make sure to export the following environment variables. However, you don’t need them to be exported if you use CAPM3 release v0.5.0 or higher.
# The URL of the kernel to deploy.
export DEPLOY_KERNEL_URL="http://172.22.0.1:6180/images/ironic-python-agent.kernel"
# The URL of the ramdisk to deploy.
export DEPLOY_RAMDISK_URL="http://172.22.0.1:6180/images/ironic-python-agent.initramfs"
# The URL of the Ironic endpoint.
export IRONIC_URL="http://172.22.0.1:6385/v1/"
# The URL of the Ironic inspector endpoint.
export IRONIC_INSPECTOR_URL="http://172.22.0.1:5050/v1/"
# Do not use a dedicated CA certificate for Ironic API. Any value provided in this variable disables additional CA certificate validation.
# To provide a CA certificate, leave this variable unset. If unset, then IRONIC_CA_CERT_B64 must be set.
export IRONIC_NO_CA_CERT=true
# Disables basic authentication for Ironic API. Any value provided in this variable disables authentication.
# To enable authentication, leave this variable unset. If unset, then IRONIC_USERNAME and IRONIC_PASSWORD must be set.
export IRONIC_NO_BASIC_AUTH=true
# Disables basic authentication for Ironic inspector API. Any value provided in this variable disables authentication.
# To enable authentication, leave this variable unset. If unset, then IRONIC_INSPECTOR_USERNAME and IRONIC_INSPECTOR_PASSWORD must be set.
export IRONIC_INSPECTOR_NO_BASIC_AUTH=true
Please visit the Metal3 getting started guide for more details.
A ClusterAPI compatible image must be available in your Nutanix image library. For instructions on how to build a compatible image see image-builder.
To see all required Nutanix environment variables execute:
clusterctl generate cluster --infrastructure nutanix --list-variables capi-quickstart
A ClusterAPI compatible image must be available in your OpenStack. For instructions on how to build a compatible image see image-builder. Depending on your OpenStack and underlying hypervisor the following options might be of interest:
To see all required OpenStack environment variables execute:
clusterctl generate cluster --infrastructure openstack --list-variables capi-quickstart
The following script can be used to export some of them:
wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
source /tmp/env.rc <path/to/clouds.yaml> <cloud>
Apart from the script, the following OpenStack environment variables are required.
# The list of nameservers for the OpenStack Subnet being created.
# Set this value when you need to create a new network/subnet and access through DNS is required.
export OPENSTACK_DNS_NAMESERVERS=<dns nameserver>
# FailureDomain is the failure domain the machine will be created in.
export OPENSTACK_FAILURE_DOMAIN=<availability zone name>
# The flavor for your control plane server instances.
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=<flavor>
# The flavor for your worker node server instances.
export OPENSTACK_NODE_MACHINE_FLAVOR=<flavor>
# The name of the image to use for your server instance. If the RootVolume is specified, this will be ignored and use rootVolume directly.
export OPENSTACK_IMAGE_NAME=<image name>
# The SSH key pair name
export OPENSTACK_SSH_KEY_NAME=<ssh key pair name>
# The external network
export OPENSTACK_EXTERNAL_NETWORK_ID=<external network ID>
A full configuration reference can be found in configuration.md.
A ClusterAPI compatible image must be available in your Outscale account. For instructions on how to build a compatible image see image-builder.
# The outscale root disk iops
export OSC_IOPS="<IOPS>"
# The outscale root disk size
export OSC_VOLUME_SIZE="<VOLUME_SIZE>"
# The outscale root disk volumeType
export OSC_VOLUME_TYPE="<VOLUME_TYPE>"
# The outscale key pair
export OSC_KEYPAIR_NAME="<KEYPAIR_NAME>"
# The outscale subregion name
export OSC_SUBREGION_NAME="<SUBREGION_NAME>"
# The outscale vm type
export OSC_VM_TYPE="<VM_TYPE>"
# The outscale image name
export OSC_IMAGE_NAME="<IMAGE_NAME>"
A ClusterAPI compatible image must be available in your Proxmox cluster. For instructions on how to build a compatible VM template see image-builder.
# The node that hosts the VM template to be used to provision VMs
export PROXMOX_SOURCENODE="pve"
# The template VM ID used for cloning VMs
export TEMPLATE_VMID=100
# The ssh authorized keys used to ssh to the machines.
export VM_SSH_KEYS="ssh-ed25519 ..., ssh-ed25519 ..."
# The IP address used for the control plane endpoint
export CONTROL_PLANE_ENDPOINT_IP=10.10.10.4
# The IP ranges for Cluster nodes
export NODE_IP_RANGES="[10.10.10.5-10.10.10.50, 10.10.10.55-10.10.10.70]"
# The gateway for the machines network-config.
export GATEWAY="10.10.10.1"
# Subnet Mask in CIDR notation for your node IP ranges
export IP_PREFIX=24
# The Proxmox network device for VMs
export BRIDGE="vmbr1"
# The dns nameservers for the machines network-config.
export DNS_SERVERS="[8.8.8.8,8.8.4.4]"
# The Proxmox nodes used for VM deployments
export ALLOWED_NODES="[pve1,pve2,pve3]"
For more information about prerequisites and advanced setups for Proxmox, see the Proxmox getting started guide.
A ClusterAPI compatible image must be available in your VCD catalog. For instructions on how to build and upload a compatible image see CAPVCD
To see all required VCD environment variables execute:
clusterctl generate cluster --infrastructure vcd --list-variables capi-quickstart
export CLUSTER_NAME=kind
export CLUSTER_NAMESPACE=vcluster
export KUBERNETES_VERSION=1.23.4
export HELM_VALUES="service:\n type: NodePort"
Please see the vcluster installation instructions for more details.
To see all required Virtink environment variables execute:
clusterctl generate cluster --infrastructure virtink --list-variables capi-quickstart
See the Virtink provider document for more details.
It is required to use official CAPV machine images for your vSphere VM templates. See uploading CAPV machine images for instructions on how to do this.
# The vCenter server IP or FQDN
export VSPHERE_SERVER="10.0.0.1"
# The vSphere datacenter to deploy the management cluster on
export VSPHERE_DATACENTER="SDDC-Datacenter"
# The vSphere datastore to deploy the management cluster on
export VSPHERE_DATASTORE="vsanDatastore"
# The VM network to deploy the management cluster on
export VSPHERE_NETWORK="VM Network"
# The vSphere resource pool for your VMs
export VSPHERE_RESOURCE_POOL="*/Resources"
# The VM folder for your VMs. Set to "" to use the root vSphere folder
export VSPHERE_FOLDER="vm"
# The VM template to use for your VMs
export VSPHERE_TEMPLATE="ubuntu-1804-kube-v1.17.3"
# The public ssh authorized key on all machines
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
# The certificate thumbprint for the vCenter server
export VSPHERE_TLS_THUMBPRINT="97:48:03:8D:78:A9..."
# The storage policy to be used (optional). Set to "" if not required
export VSPHERE_STORAGE_POLICY="policy-one"
# The IP address used for the control plane endpoint
export CONTROL_PLANE_ENDPOINT_IP="1.2.3.4"
For more information about prerequisites, credentials management, or permissions for vSphere, see the vSphere getting started guide.
Generating the cluster configuration
For the purpose of this tutorial, we’ll name our cluster capi-quickstart.
clusterctl generate cluster capi-quickstart --flavor development \
--kubernetes-version v1.29.2 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> capi-quickstart.yaml
export CLUSTER_NAME=kind
export CLUSTER_NAMESPACE=vcluster
export KUBERNETES_VERSION=1.28.0
export HELM_VALUES="service:\n type: NodePort"
kubectl create namespace ${CLUSTER_NAMESPACE}
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
As we described above, in this tutorial we will use a LoadBalancer service in order to expose the API server of the workload cluster, so we want to use the load balancer (lb) template (rather than the default one). We'll use clusterctl's --flavor flag for that:
clusterctl generate cluster capi-quickstart \
--infrastructure="kubevirt" \
--flavor lb \
--kubernetes-version ${CAPK_GUEST_K8S_VERSION} \
--control-plane-machine-count=1 \
--worker-machine-count=1 \
> capi-quickstart.yaml
clusterctl generate cluster capi-quickstart \
--kubernetes-version v1.29.2 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> capi-quickstart.yaml
This creates a YAML file named capi-quickstart.yaml with a predefined list of Cluster API objects: Cluster, Machines, Machine Deployments, etc.
The file can then be modified using your editor of choice.
See clusterctl generate cluster for more details.
Apply the workload cluster
When ready, run the following command to apply the cluster manifest.
kubectl apply -f capi-quickstart.yaml
The output is similar to this:
cluster.cluster.x-k8s.io/capi-quickstart created
dockercluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created
Accessing the workload cluster
The cluster will now start provisioning. You can check status with:
kubectl get cluster
You can also get an "at a glance" view of the cluster and its resources by running:
clusterctl describe cluster capi-quickstart
and see an output similar to this:
NAME PHASE AGE VERSION
capi-quickstart Provisioned 8s v1.29.2
To verify the first control plane is up:
kubectl get kubeadmcontrolplane
You should see output similar to this:
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
capi-quickstart-g2trk capi-quickstart true 3 3 3 4m7s v1.29.2
After the first control plane node is up and running, we can retrieve the workload cluster Kubeconfig.
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
kind get kubeconfig --name capi-quickstart > capi-quickstart.kubeconfig
Install a Cloud Provider
The Kubernetes in-tree cloud provider implementations are being removed in favor of external cloud providers (also referred to as “out-of-tree”). This requires deploying a new component called the cloud-controller-manager which is responsible for running all the cloud specific controllers that were previously run in the kube-controller-manager. To learn more, see this blog post.
Install the official cloud-provider-azure Helm chart on the workload cluster:
helm install --kubeconfig=./capi-quickstart.kubeconfig --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=capi-quickstart --set cloudControllerManager.clusterCIDR="192.168.0.0/16"
For more information, see the CAPZ book.
Before deploying the OpenStack external cloud provider, configure the cloud.conf file for integration with your OpenStack environment:
cat > cloud.conf <<EOF
[Global]
auth-url=<your_auth_url>
application-credential-id=<your_credential_id>
application-credential-secret=<your_credential_secret>
region=<your_region>
domain-name=<your_domain_name>
EOF
For more detailed information on configuring the cloud.conf file, see the OpenStack Cloud Controller Manager documentation.
Next, create a Kubernetes secret using this configuration to securely store your cloud environment details. You can create this secret for example with:
kubectl -n kube-system create secret generic cloud-config --from-file=cloud.conf
Now, you are ready to deploy the external cloud provider!
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
Alternatively, refer to the helm chart.
Deploy a CNI solution
Calico is used here as an example.
Install the official Calico Helm chart on the workload cluster:
helm repo add projectcalico https://docs.tigera.io/calico/charts --kubeconfig=./capi-quickstart.kubeconfig && \
helm install calico projectcalico/tigera-operator --kubeconfig=./capi-quickstart.kubeconfig -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico/values.yaml --namespace tigera-operator --create-namespace
After a short while, our nodes should be running and in the Ready state. Let's check the status using kubectl get nodes:
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
Calico is not required for vcluster.
Before deploying the Calico CNI, make sure the VMs are running:
kubectl get vm
If our new VMs are running, we should see a response similar to this:
NAME AGE STATUS READY
capi-quickstart-control-plane-7s945 167m Running True
capi-quickstart-md-0-zht5j 164m Running True
We can also read the virtual machine instances:
kubectl get vmi
The output will be similar to:
NAME AGE PHASE IP NODENAME READY
capi-quickstart-control-plane-7s945 167m Running 10.244.82.16 kind-control-plane True
capi-quickstart-md-0-zht5j 164m Running 10.244.82.17 kind-control-plane True
Since our workload cluster is running within the kind cluster, we need to prevent conflicts between the kind (management) cluster's CNI and the workload cluster's CNI. The following modifications to the default Calico settings are enough for these two CNIs to work in (effectively) the same environment.
- Change the CIDR to a non-conflicting range
- Change the value of the CLUSTER_TYPE environment variable to k8s
- Change the value of the CALICO_IPV4POOL_IPIP environment variable to Never
- Change the value of the CALICO_IPV4POOL_VXLAN environment variable to Always
- Add the FELIX_VXLANPORT environment variable with the value of a non-conflicting port, e.g. "6789".
The following script downloads the Calico manifest and modifies the required fields. The CIDR and the port values are examples.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml -o calico-workload.yaml
sed -i -E 's|^( +)# (- name: CALICO_IPV4POOL_CIDR)$|\1\2|g;'\
's|^( +)# ( value: )"192.168.0.0/16"|\1\2"10.243.0.0/16"|g;'\
'/- name: CLUSTER_TYPE/{ n; s/( +value: ").+/\1k8s"/g };'\
'/- name: CALICO_IPV4POOL_IPIP/{ n; s/value: "Always"/value: "Never"/ };'\
'/- name: CALICO_IPV4POOL_VXLAN/{ n; s/value: "Never"/value: "Always"/};'\
'/# Set Felix endpoint to host default action to ACCEPT./a\ - name: FELIX_VXLANPORT\n value: "6789"' \
calico-workload.yaml
Now, deploy the Calico CNI on the workload cluster:
kubectl --kubeconfig=./capi-quickstart.kubeconfig create -f calico-workload.yaml
After a short while, our nodes should be running and in the Ready state. Let's check the status using kubectl get nodes:
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
kubectl --kubeconfig=./capi-quickstart.kubeconfig \
apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
After a short while, our nodes should be running and in the Ready state. Let's check the status using kubectl get nodes:
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
capi-quickstart-vs89t-gmbld Ready control-plane 5m33s v1.29.2
capi-quickstart-vs89t-kf9l5 Ready control-plane 6m20s v1.29.2
capi-quickstart-vs89t-t8cfn Ready control-plane 7m10s v1.29.2
capi-quickstart-md-0-55x6t-5649968bd7-8tq9v Ready <none> 6m5s v1.29.2
capi-quickstart-md-0-55x6t-5649968bd7-glnjd Ready <none> 6m9s v1.29.2
capi-quickstart-md-0-55x6t-5649968bd7-sfzp6 Ready <none> 6m9s v1.29.2
Clean Up
Delete workload cluster.
kubectl delete cluster capi-quickstart
Delete management cluster
kind delete cluster
Next steps
- Create a second workload cluster. Simply follow the steps outlined above, but remember to provide a different name for your second workload cluster.
- Deploy applications to your workload cluster. Use the CNI deployment steps for pointers.
- See the clusterctl documentation for more detail about clusterctl supported actions.
This section contains information about the main CAPG features and how to use them.
Prerequisites
Requirements
- Linux or macOS (Windows isn't supported at the moment).
- A Google Cloud account.
- Packer and Ansible to build images (see the example install command after this list).
- Make to use Makefile targets.
- coreutils (for timeout) on macOS.
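As an illustrative sketch of installing the build tooling (assuming Homebrew on macOS; package names and taps may differ on your platform):
# Ansible and coreutils are available directly from Homebrew
brew install ansible coreutils
# Packer is distributed via HashiCorp's tap
brew install hashicorp/tap/packer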
Setup environment variables
export GCP_REGION="<GCP_REGION>"
export GCP_PROJECT="<GCP_PROJECT>"
# Make sure to use the same Kubernetes version here as was used to build the GCE image
export KUBERNETES_VERSION=1.22.3
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=<GCP_NETWORK_NAME or default>
export CLUSTER_NAME="<CLUSTER_NAME>"
Setup a Network and Cloud NAT
Google Cloud accounts come with a default network, which can be found under VPC Networks.
If you prefer to create a new network, follow these instructions.
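For example, a custom network could be created with gcloud (an illustrative sketch; the network name is whatever you later export as GCP_NETWORK_NAME):
gcloud compute networks create "${GCP_NETWORK_NAME}" --project="${GCP_PROJECT}" --subnet-mode=auto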
Cloud NAT
This infrastructure provider sets up Kubernetes clusters using a Global Load Balancer with a public IP address.
Kubernetes nodes, in order to communicate with the control plane and pull container images from registries (e.g. gcr.io or Docker Hub), need NAT access or a public IP. By default, the provider creates Machines without a public IP.
To make sure your cluster can communicate with the outside world and the load balancer, you can create a Cloud NAT in the region you'd like your Kubernetes cluster to live in by following these instructions.
NB: The following commands need to be run if ${GCP_NETWORK_NAME} is set to default.
# Ensure the network list contains the default network
gcloud compute networks list --project="${GCP_PROJECT}"
gcloud compute networks describe "${GCP_NETWORK_NAME}" --project="${GCP_PROJECT}"
# Ensure firewall rules are enabled
gcloud compute firewall-rules list --project "$GCP_PROJECT"
# Create routers
gcloud compute routers create "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" --region="${GCP_REGION}" --network="default"
# Create NAT
gcloud compute routers nats create "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" --router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" \
  --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
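As an optional check (not part of the original guide), you can verify the NAT configuration afterwards:
gcloud compute routers nats describe "${CLUSTER_NAME}-mynat" --router="${CLUSTER_NAME}-myrouter" --router-region="${GCP_REGION}" --project="${GCP_PROJECT}"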
Create a Service Account
To create and manage clusters, this infrastructure provider uses a service account to authenticate with GCP’s APIs.
From your cloud console, follow these instructions to create a new service account with Editor permissions.
If you plan to use GKE, the service account will also need the iam.serviceAccountTokenCreator role.
Afterwards, generate a JSON Key and store it somewhere safe.
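A minimal sketch of doing the same with gcloud instead of the console (the service account name capg-sa is an arbitrary example; the Editor role is broad, so scope it down where your organization requires it):
# Create the service account (the name is an arbitrary example)
gcloud iam service-accounts create capg-sa --project="${GCP_PROJECT}"
# Grant Editor permissions (add roles/iam.serviceAccountTokenCreator if you plan to use GKE)
gcloud projects add-iam-policy-binding "${GCP_PROJECT}" \
  --member="serviceAccount:capg-sa@${GCP_PROJECT}.iam.gserviceaccount.com" --role="roles/editor"
# Generate and download a JSON key
gcloud iam service-accounts keys create /path/to/serviceaccount-key.json \
  --iam-account="capg-sa@${GCP_PROJECT}.iam.gserviceaccount.com"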
Building images
NB: The following commands should not be run as the root user.
# Export the GCP project id you want to build images in.
export GCP_PROJECT_ID=<project-id>
# Export the path to the service account credentials created in the step above.
export GOOGLE_APPLICATION_CREDENTIALS=</path/to/serviceaccount-key.json>
# Clone the image builder repository if you haven't already.
git clone https://github.com/kubernetes-sigs/image-builder.git image-builder
# Change directory to images/capi within the image builder repository
cd image-builder/images/capi
# Run the Make target to generate GCE images.
make build-gce-ubuntu-2004
# Check that you can see the published images.
gcloud compute images list --project ${GCP_PROJECT_ID} --no-standard-images --filter="family:capi-ubuntu-2004-k8s"
# Export the IMAGE_ID from the above
export IMAGE_ID="projects/${GCP_PROJECT_ID}/global/images/<image-name>"
Clean-up
Delete the NAT gateway
gcloud compute routers nats delete "${CLUSTER_NAME}-mynat" --project="${GCP_PROJECT}" \
--router-region="${GCP_REGION}" --router="${CLUSTER_NAME}-myrouter" --quiet || true
Delete the router
gcloud compute routers delete "${CLUSTER_NAME}-myrouter" --project="${GCP_PROJECT}" \
--region="${GCP_REGION}" --quiet || true
Running Conformance tests
Required environment variables
- Set the GCP region
export GCP_REGION=us-east4
- Set the GCP project to use
export GCP_PROJECT=your-project-id
- Set the path to the service account
export GOOGLE_APPLICATION_CREDENTIALS=path/to/your/service-account.json
Optional environment variables
- Set a specific name for your cluster
export CLUSTER_NAME=test1
- Set a specific name for your network
export NETWORK_NAME=test1-mynetwork
- Skip cleaning up the project resources
export SKIP_CLEANUP=1
Running the conformance tests
scripts/ci-conformance.sh
GKE Support in the GCP Provider
- Feature status: Experimental
- Feature gate (required): GKE=true
Overview
The GCP provider supports creating GKE-based clusters. Currently, the following features are supported:
- Provisioning/managing a GCP GKE Cluster
- Upgrading the Kubernetes version of the GKE Cluster
- Creating a managed node pool and attaching it to the GKE cluster
The implementation introduces the following CRD kinds:
- GCPManagedCluster - presents the properties needed to provision and manage the general GCP operating infrastructure for the cluster (i.e. project, networking, IAM)
- GCPManagedControlPlane - specifies the GKE cluster in GCP and is used by the Cluster API GCP managed control plane
- GCPManagedMachinePool - defines the managed node pool for the cluster
And a new template is available in the templates folder for creating a managed workload cluster.
Creating a GKE cluster
New “gke” cluster templates have been created that you can use with clusterctl
to create a GKE cluster.
To create a GKE cluster with a managed node group (a.k.a. a managed machine pool):
clusterctl generate cluster capi-gke-quickstart --flavor gke --worker-machine-count=3 > capi-gke-quickstart.yaml
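As with other Cluster API flavors, the generated manifest is then applied to the management cluster, for example:
kubectl apply -f capi-gke-quickstart.yaml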
Kubeconfig
When creating a GKE cluster, two kubeconfigs are generated and stored as secrets in the management cluster.
User kubeconfig
This should be used by users that want to connect to the newly created GKE cluster. The name of the secret that contains the kubeconfig will be [cluster-name]-user-kubeconfig
where you need to replace [cluster-name] with the name of your cluster. The -user-kubeconfig suffix in the name indicates that the kubeconfig is intended for user access.
To get the user kubeconfig for a cluster named managed-test
you can run a command similar to:
kubectl --namespace=default get secret managed-test-user-kubeconfig \
-o jsonpath={.data.value} | base64 --decode \
> managed-test.kubeconfig
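You can then point kubectl at the workload cluster with that file, for example:
kubectl --kubeconfig=managed-test.kubeconfig get nodes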
Cluster API (CAPI) kubeconfig
This kubeconfig is used internally by CAPI and shouldn’t be used outside of the management server. It is used by CAPI to perform operations, such as draining a node. The name of the secret that contains the kubeconfig will be [cluster-name]-kubeconfig
where you need to replace [cluster-name] with the name of your cluster. Note that there is NO -user
in the name.
The kubeconfig is regenerated every sync-period
as the token that is embedded in the kubeconfig is only valid for a short period of time.
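If you ever need to inspect it (for debugging only), it can be retrieved in the same way as the user kubeconfig, again assuming a cluster named managed-test:
kubectl --namespace=default get secret managed-test-kubeconfig \
  -o jsonpath={.data.value} | base64 --decode \
  > managed-test-capi.kubeconfig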
GKE Cluster Upgrades
Control Plane Upgrade
Upgrading the Kubernetes version of the control plane is supported by the provider. To perform an upgrade you need to update the controlPlaneVersion
in the spec of the GCPManagedControlPlane
. Once the version has changed the provider will handle the upgrade for you.
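As a sketch (the name and version shown are placeholders; other fields are omitted), the relevant part of a GCPManagedControlPlane might look like this after bumping the version:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPManagedControlPlane
metadata:
  name: capi-gke-quickstart-control-plane
spec:
  # [...] other fields unchanged
  controlPlaneVersion: v1.27.3   # bump this value to trigger the upgrade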
Enabling GKE Support
Enabling GKE support is done via the GKE feature flag by setting it to true. This can be done before running clusterctl init
by using the EXP_CAPG_GKE environment variable:
export EXP_CAPG_GKE=true
clusterctl init --infrastructure gcp
IMPORTANT: To use GKE the service account used for CAPG will need the
iam.serviceAccountTokenCreator
role assigned.
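For example, the role can be granted to the existing service account with gcloud (the service account email shown is a placeholder):
gcloud projects add-iam-policy-binding "${GCP_PROJECT}" \
  --member="serviceAccount:capg-sa@${GCP_PROJECT}.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"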
Disabling GKE Support
Support for GKE is disabled by default when you use the GCP infrastructure provider.
Machine Locations
This document describes how to configure the location of a CAPG cluster’s compute resources. By default, CAPG requires the user to specify a GCP region for the cluster’s machines by setting the GCP_REGION
environment variable as outlined in the CAPI quickstart guide. The provider then picks a zone to deploy the control plane and worker nodes in and generates the corresponding portions of the cluster’s YAML manifests.
It is possible to override this default behaviour and exercise more fine-grained control over machine locations as outlined in the rest of this document.
Control Plane Machine Location
Before deploying the cluster, add a failureDomains
field to the spec
of your GCPCluster
definition, containing a list of allowed zones:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: GCPCluster
metadata:
name: capi-quickstart
spec:
network:
name: default
project: cyberscan2
region: europe-west3
+ failureDomains:
+ - europe-west3-b
In this example configuration, only a single zone has been added, ensuring the control plane is provisioned in europe-west3-b.
Node Pool Location
Similar to the above, you can override the auto-generated GCP zone for your MachineDeployment
, by changing the value of the failureDomain
field at spec.template.spec.failureDomain
:
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
name: capi-quickstart-md-0
spec:
clusterName: capi-quickstart
# [...]
template:
spec:
# [...]
clusterName: capi-quickstart
- failureDomain: europe-west3-a
+ failureDomain: europe-west3-b
When combined like this, the above configuration effectively instructs CAPG to deploy the CAPI equivalent of a zonal GKE cluster.
Preemptible Virtual Machines
GCP Preemptible Virtual Machines allow users to run a VM instance at a much lower price compared to normal VM instances.
Compute Engine might stop (preempt) these instances if it requires access to those resources for other tasks. Preemptible instances will always stop after 24 hours.
When do I use Preemptible Virtual Machines?
A Preemptible VM works best for applications or systems that distribute processes across multiple instances in a cluster. While a shutdown would be disruptive for common enterprise applications, such as databases, it’s hardly noticeable in distributed systems that run across clusters of machines and are designed to tolerate failures.
How do I use Preemptible Virtual Machines?
To enable a machine to be backed by a Preemptible Virtual Machine, add the preemptible option to the GCPMachineTemplate spec and set it to true.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: capg-md-0
spec:
  template:
    spec:
      instanceType: n1-standard-2
      preemptible: true
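For reference, this corresponds to the same preemptible scheduling option you would set when creating a GCE instance manually, e.g. (instance name and zone are placeholders):
gcloud compute instances create my-preemptible-vm --zone=us-west1-a --preemptible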
Flannel
This document describes how to use Flannel as your CNI solution. By default, no CNI plugin is installed for self-managed clusters, so you have to install your own (e.g. Calico with VXLAN).
Modify the Cluster resources
Before deploying the cluster, change the KubeadmControlPlane
value at spec.kubeadmConfigSpec.clusterConfiguration.controllerManager.extraArgs.allocate-node-cidrs
to "true"
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
spec:
kubeadmConfigSpec:
clusterConfiguration:
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
Modify Flannel Config
NOTE: This is based on the instructions at deploying-flannel-manually.
You need to make an adjustment to the default flannel configuration so that the CIDR inside your CAPG cluster matches the Flannel Network CIDR.
View your capi-cluster.yaml and make note of the Cluster Network CIDR Block. For example:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
Download the file at https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
and modify the kube-flannel-cfg
ConfigMap. Set the value at data.net-conf.json.Network to match your Cluster Network CIDR Block.
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Edit kube-flannel.yml and change this section so that the Network value matches your Cluster CIDR:
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
data:
net-conf.json: |
{
"Network": "192.168.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
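If you prefer to script that edit, a minimal sed one-liner works, assuming the upstream manifest still ships flannel’s default 10.244.0.0/16 network and your cluster CIDR is 192.168.0.0/16:
# GNU sed shown; on macOS use `sed -i '' ...` instead.
sed -i 's#10.244.0.0/16#192.168.0.0/16#g' kube-flannel.yml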
Apply kube-flannel.yml
kubectl apply -f kube-flannel.yml
Everything you need to know about contributing to CAPG.
If you are new to the project and want to help but don’t know where to start, you can refer to the Cluster API contributing guide.
Developing Cluster API Provider GCP
Setting up
Base requirements
- Install go
- Get the latest patch version for go v1.18.
- Install jq
  - brew install jq on macOS.
  - sudo apt install jq on Windows + WSL2.
  - sudo apt install jq on Ubuntu Linux.
- Install gettext package
  - brew install gettext && brew link --force gettext on macOS.
  - sudo apt install gettext on Windows + WSL2.
  - sudo apt install gettext on Ubuntu Linux.
- Install KIND
  - GO111MODULE="on" go get sigs.k8s.io/kind@v0.14.0
- Install Kustomize
  - brew install kustomize on macOS.
  - Install instructions for Windows + WSL2, Linux and macOS.
- Install Python 3.x, if it is not already installed.
- Install make.
  - brew install make on macOS.
  - sudo apt install make on Windows + WSL2.
  - sudo apt install make on Linux.
- Install timeout
  - brew install coreutils on macOS.
When developing on Windows, it is suggested to set up the project on Windows + WSL2 and to check out the repository on the WSL file system for better results.
Get the source
git clone https://github.com/kubernetes-sigs/cluster-api-provider-gcp
cd cluster-api-provider-gcp
Get familiar with basic concepts
This provider is modeled after the upstream Cluster API project. To get familiar with Cluster API resources, concepts and conventions (such as CAPI and CAPG), refer to the Cluster API Book.
Dev manifest files
Part of running cluster-api-provider-gcp is generating manifests to run. Generating dev manifests allows you to test dev images instead of the default releases.
Dev images
Container registry
Any public container registry can be leveraged for storing cluster-api-provider-gcp container images.
CAPG Node images
In order to deploy a workload cluster you will need to build the node images. For that, you can reference the image-builder project and read the image-builder book.
Please refer to the image-builder documentation in order to get the latest requirements to build the node images.
To build the node images for GCP: https://image-builder.sigs.k8s.io/capi/providers/gcp.html
Developing
Change some code!
Modules and Dependencies
This repository uses Go Modules to track vendor dependencies.
To pin a new dependency:
- Run go get <repository>@<version>
- (Optional) Add a replace statement in go.mod
Makefile targets and scripts are offered to work with go modules:
- make verify-modules checks whether go modules are out of date.
- make modules runs go mod tidy to ensure proper vendoring.
- hack/ensure-go.sh checks that the Go version and environment variables are properly set.
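For example, a typical sequence when pinning a new dependency could look like this (the module path and version are placeholders):
# Pin the dependency at a specific version (placeholder module path/version).
go get github.com/example/somelib@v1.2.3
# Tidy go.mod and go.sum.
make modules
# Confirm everything is consistent before committing.
make verify-modules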
Setting up the environment
Your environment must have GCP credentials configured; see Authentication Getting Started.
Tilt Requirements
Install Tilt:
- brew install tilt-dev/tap/tilt on macOS or Linux
- scoop bucket add tilt-dev https://github.com/tilt-dev/scoop-bucket & scoop install tilt on Windows
After the installation is done, verify that you have installed it correctly with: tilt version
Install Helm:
- brew install helm on macOS
- choco install kubernetes-helm on Windows
- Install instructions for Linux
As the project lacks a number of features on Windows, it is suggested to follow the above steps on Windows + WSL2 rather than native Windows.
Using Tilt
Both of the Tilt setups below will get you started developing CAPG in a local kind cluster. The main difference is the number of components you will build from source and the scope of the changes you’d like to make. If you only want to make changes in CAPG, then follow CAPG instructions. This will save you from having to build all of the images for CAPI, which can take a while. If the scope of your development will span both CAPG and CAPI, then follow the CAPI and CAPG instructions.
Tilt for dev in CAPG
If you want to develop in CAPG and get a local development cluster working quickly, this is the path for you.
From the root of the CAPG repository, run the following to generate a tilt-settings.json
file with your GCP
service account credentials:
$ cat <<EOF > tilt-settings.json
{
"kustomize_substitutions": {
"GCP_B64ENCODED_CREDENTIALS": "$(cat PATH_FOR_GCP_CREDENTIALS_JSON | base64 -w0)"
}
}
EOF
Set the following environment variables with the appropriate values for your environment:
export GCP_REGION="<GCP_REGION>"
export GCP_PROJECT="<GCP_PROJECT>"
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
# Make sure to use the same Kubernetes version here as when building the GCE image
export KUBERNETES_VERSION=1.23.3
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=<GCP_NETWORK_NAME or default>
export CLUSTER_NAME="<CLUSTER_NAME>"
To build a kind cluster and start Tilt, just run:
make tilt-up
Alternatively, you can also run:
./scripts/setup-dev-enviroment.sh
This will also set up the network. If you have already set up the network, you can skip that step by running:
./scripts/setup-dev-enviroment.sh --skip-init-network
By default, the Cluster API components deployed by Tilt have experimental features turned off.
If you would like to enable these features, add extra_args
as specified in The Cluster API Book.
Once your kind management cluster is up and running, you can deploy a workload cluster.
To tear down the kind cluster built by the command above, just run:
make kind-reset
And if you need to clean up the network setup, you can run:
./scripts/setup-dev-enviroment.sh --clean-network
Tilt for dev in both CAPG and CAPI
If you want to develop in both CAPI and CAPG at the same time, then this is the path for you.
To use Tilt for a simplified development workflow, follow the instructions in the cluster-api repo. The instructions will walk you through cloning the Cluster API (CAPI) repository and configuring Tilt to use kind
to deploy the cluster api management components.
You may wish to check out the version of CAPI that matches the version used in CAPG.
Note that tilt up
will be run from the cluster-api repository
directory and the tilt-settings.json
file will point back to the cluster-api-provider-gcp
repository directory. Any changes you make to the source code in cluster-api
or cluster-api-provider-gcp
repositories will automatically be redeployed to the kind
cluster.
After you have cloned both repositories, your folder structure should look like:
|-- src/cluster-api-provider-gcp
|-- src/cluster-api (run `tilt up` here)
After configuring the environment variables, run the following to generate your tilt-settings.json
file:
cat <<EOF > tilt-settings.json
{
"default_registry": "${REGISTRY}",
"provider_repos": ["../cluster-api-provider-gcp"],
"enable_providers": ["gcp", "docker", "kubeadm-bootstrap", "kubeadm-control-plane"],
"kustomize_substitutions": {
"GCP_B64ENCODED_CREDENTIALS": "$(cat PATH_FOR_GCP_CREDENTIALS_JSON | base64 -w0)"
}
}
EOF
$REGISTRY should be in the format docker.io/<dockerhub-username>
The cluster-api management components that are deployed are configured at the /config
folder of each repository respectively. Making changes to those files will trigger a redeploy of the management cluster components.
Debugging
If you would like to debug CAPG you can run the provider with delve, a Go debugger tool. This will then allow you to attach to delve and troubleshoot the processes.
To do this you need to use the debug configuration in tilt-settings.json. Full details of the options can be seen here.
An example tilt-settings.json:
{
"default_registry": "gcr.io/your-project-name-her",
"provider_repos": ["../cluster-api-provider-gcp"],
"enable_providers": ["gcp", "kubeadm-bootstrap", "kubeadm-control-plane"],
"debug": {
"gcp": {
"continue": true,
"port": 30000,
"profiler_port": 40000,
"metrics_port": 40001
}
},
"kustomize_substitutions": {
"GCP_B64ENCODED_CREDENTIALS": "$(cat PATH_FOR_GCP_CREDENTIALS_JSON | base64 -w0)"
}
}
Once you have run tilt (see section below) you will be able to connect to the running instance of delve.
For VS Code, you can use a launch configuration like this:
{
"version": "0.2.0",
"configurations": [
{
"name": "Core CAPI Controller GCP",
"type": "go",
"request": "attach",
"mode": "remote",
"remotePath": "",
"port": 30000,
"host": "127.0.0.1",
"showLog": true,
"trace": "log",
"logOutput": "rpc"
}
]
}
Create a new configuration and add it to the “Debug” menu to configure debugging in GoLand/IntelliJ following these instructions.
Alternatively, you may use delve straight from the CLI by executing a command like this:
dlv connect 127.0.0.1:30000
Deploying a workload cluster
After your kind management cluster is up and running with Tilt, ensure you have all the environment variables set as described in Tilt for dev in CAPG, and deploy a workload cluster with the following:
make create-workload-cluster
To delete the cluster:
make delete-workload-cluster
Submitting PRs and testing
Pull requests and issues are highly encouraged! If you’re interested in submitting PRs to the project, please be sure to run some initial checks prior to submission:
Do make sure to set the GOOGLE_APPLICATION_CREDENTIALS
environment variable with the path to your JSON key file. Check out this doc to generate the credentials.
make lint # Runs a suite of quick scripts to check code structure
make test # Runs tests on the Go code
Executing unit tests
make test
executes the project’s unit tests. These tests do not stand up a
Kubernetes cluster, nor do they have external dependencies.
Creating cluster without clusterctl
This document describes how to create a management cluster and workload cluster without using clusterctl. For creating a cluster with clusterctl, check out our Cluster API Quick Start.
For creating a Management cluster
- Build required images by using the following commands:
docker build --tag=gcr.io/k8s-staging-cluster-api-gcp/cluster-api-gcp-controller:e2e .
make docker-build-all
- Set the required environment variables. For example:
export GCP_REGION=us-east4
export GCP_PROJECT=k8s-staging-cluster-api-gcp
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
export KUBERNETES_VERSION=1.21.6
export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
export GCP_NODE_MACHINE_TYPE=n1-standard-2
export GCP_NETWORK_NAME=default
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp_credentials.json | base64 | tr -d '\n' )
export CLUSTER_NAME="capg-test"
export IMAGE_ID=projects/k8s-staging-cluster-api-gcp/global/images/cluster-api-ubuntu-2204-v1-27-3-nightly
You can check for other images to set the IMAGE_ID
of your choice.
- Run make create-management-cluster from the root directory.
Jobs
This document provides an overview of our jobs running via Prow and Github actions.
Builds and tests running on the default branch
Legend
🟢 REQUIRED - Jobs that have to run successfully to get the PR merged.
Presubmits
Prow Presubmits:
- 🟢 pull-cluster-api-provider-gcp-test: ./scripts/ci-test.sh
- 🟢 pull-cluster-api-provider-gcp-build: ../scripts/ci-build.sh
- 🟢 pull-cluster-api-provider-gcp-make: runner.sh ./scripts/ci-make.sh
- 🟢 pull-cluster-api-provider-gcp-e2e-test: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-e2e.sh
- pull-cluster-api-provider-gcp-conformance-ci-artifacts: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-conformance.sh --use-ci-artifacts
- pull-cluster-api-provider-gcp-conformance: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-conformance.sh
- pull-cluster-api-provider-gcp-capi-e2e: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" GINKGO_FOCUS="Cluster API E2E tests" ./scripts/ci-e2e.sh
- pull-cluster-api-provider-gcp-test-release-0-4: ./scripts/ci-test.sh
- pull-cluster-api-provider-gcp-build-release-0-4: ./scripts/ci-build.sh
- pull-cluster-api-provider-gcp-make-release-0-4: runner.sh ./scripts/ci-make.sh
- pull-cluster-api-provider-gcp-e2e-test-release-0-4: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-e2e.sh
- pull-cluster-api-provider-gcp-make-conformance-release-0-4: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-conformance.sh --use-ci-artifacts
Github Presubmits Workflows:
- Markdown-link-check: find . -name \*.md | xargs -I{} markdown-link-check -c .markdownlinkcheck.json {}
- 🟢 Lint-check: make lint
Postsubmits
Github Postsubmit Workflows:
- Code-coverage-check: make test-cover
Periodics
Prow Periodics:
- periodic-cluster-api-provider-gcp-build: runner.sh ./scripts/ci-build.sh
- periodic-cluster-api-provider-gcp-test: runner.sh ./scripts/ci-test.sh
- periodic-cluster-api-provider-gcp-make-conformance-v1alpha4: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-conformance.sh
- periodic-cluster-api-provider-gcp-make-conformance-v1alpha4-k8s-ci-artifacts: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-conformance.sh --use-ci-artifacts
- periodic-cluster-api-provider-gcp-conformance-v1alpha4: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-conformance.sh
- periodic-cluster-api-provider-gcp-conformance-v1alpha4-k8s-ci-artifacts: "BOSKOS_HOST"="boskos.test-pods.svc.cluster.local" ./scripts/ci-conformance.sh --use-ci-artifacts
Release Process
Change milestone
- Create a new GitHub milestone for the next release
- Change milestone applier so new changes can be applied to the appropriate release
- Open a PR in https://github.com/kubernetes/test-infra to change this line
- Example PR: https://github.com/kubernetes/test-infra/pull/16827
Prepare branch, tag and release notes
- Update the metadata.yaml file if it is a major or minor release
- Submit a PR for the metadata.yaml update if needed, wait for it to be merged before continuing, and pull any changes prior to continuing.
- Create the tag with git:
  - export RELEASE_TAG=v0.4.6 (the tag of the release to be cut)
  - git tag -s ${RELEASE_TAG} -m "${RELEASE_TAG}" (-s creates a signed tag; you must have a GPG key added to your GitHub account)
  - git push upstream ${RELEASE_TAG}
- Run make release from the repo; this will create the release artifacts in the out/ folder
- Install the release-notes tool according to instructions
- Export GITHUB_TOKEN
- Run the release-notes tool with the appropriate commits. Commits range from the first commit after the previous release to the new release commit.
release-notes --org kubernetes-sigs --repo cluster-api-provider-gcp \
--start-sha 1cf1ec4a1effd9340fe7370ab45b173a4979dc8f \
--end-sha e843409f896981185ca31d6b4a4c939f27d975de \
--branch <RELEASE_BRANCH_OR_MAIN_BRANCH>
- Manually format and categorize the release notes
Promote image to prod repo
Promote image
- Images are built by the post push images job
- Create a PR in https://github.com/kubernetes/k8s.io to add the image and tag
- Example PR: https://github.com/kubernetes/k8s.io/pull/1462
- Location of image: https://console.cloud.google.com/gcr/images/k8s-staging-cluster-api-gcp/GLOBAL/cluster-api-gcp-controller?rImageListsize=30
To promote the image you should use a tool called cip-mm; please refer to https://github.com/kubernetes-sigs/promo-tools/tree/main/cmd/cip-mm
For example, to promote the v0.3.1 release, run the following command:
$ cip-mm --base_dir=$GOPATH/src/k8s.io/k8s.io/k8s.gcr.io --staging_repo=gcr.io/k8s-staging-cluster-api-gcp --filter_tag=v0.3.1
Release in GitHub
Create the GitHub release in the UI
- Create a draft release in GitHub and associate it with the tag that was created
- Copy paste the release notes
- Upload artifacts from the out/ folder
- Announce the release
Versioning
cluster-api-provider-gcp follows the semantic versioning specification.
Example versions:
- Pre-release: v0.1.1-alpha.1
- Minor release: v0.1.0
- Patch release: v0.1.1
- Major release: v1.0.0
Expected artifacts
- A release yaml file infrastructure-components.yaml containing the resources needed to deploy to Kubernetes
- A cluster-templates.yaml for each supported flavor
- A metadata.yaml which maps release series to cluster-api contract version
- Release notes
Communication
Patch Releases
- Announce the release in Kubernetes Slack on the #cluster-api-gcp channel.
Minor/Major Releases
- Follow the communications process for pre-releases
- An announcement email is sent to kubernetes-sig-cluster-lifecycle@googlegroups.com with the subject [ANNOUNCE] cluster-api-provider-gcp <version> has been released
Cluster API GCP roadmap
This roadmap is a constant work in progress, subject to frequent revision. Dates are approximations. Features are listed in no particular order.
v0.4 (v1alpha4)
Description | Issue/Proposal/PR |
---|---|
v1beta1/v1
Proposal awaits.
Lifecycle frozen
Items within this category have been identified as potential candidates for the project and can be moved up into a milestone if there is enough interest.
Description | Issue/Proposal/PR |
---|---|
Enabling GPU enabled clusters | #289 |
Publish images in GCP | #152 |
Proper bootstrap of manually deleted worker VMs | #173 |
Correct URI for subnetwork setup | #278 |
Workload identity support | #311 |
Implement GCPMachinePool using MIGs | #297 |