
Terraform module with GitHub Actions Runner Controller (ARC) deployed via Helm chart. The module deploys the GitHub Actions Runner Controller and a flexible number of GitHub Actions Runners, managed by a horizontal runner autoscaler in the Kubernetes cluster.

Depending on the value of `controller.sync_period` (default is 300 seconds), ARC polls GitHub Actions workflows for new events. If there are new jobs in the queue, ARC assigns a runner (according to the runner label) to process them. A runner can process only one job at a time, so when more jobs pile up in the queue, the autoscaler adds runner instances to work through it. The autoscaler's replica range is controlled by the `github_runners[*].min_replicas` and `github_runners[*].max_replicas` variables.
Autoscaling works according to the following conditions:

```hcl
scaleUpThreshold   = "0.5"
scaleDownThreshold = "0.3"
scaleUpFactor      = "1.7"
scaleDownFactor    = "0.7"
type               = "PercentageRunnersBusy"
```
Scaling down takes place if there are no new events in the queue during the `controller.sync_period` plus 10 seconds after the last sync attempt.
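As a rough illustration of how `PercentageRunnersBusy` scaling plays out (the replica counts below are hypothetical, not module defaults):

```hcl
# Hypothetical runner group used only for this illustration
github_runners = [
  {
    repo         = "repo-owner/repo-name"
    min_replicas = 2
    max_replicas = 10
  },
]

# With 4 replicas running and 3 of them busy, the busy ratio is 0.75,
# which is above scaleUpThreshold (0.5), so ARC scales up by scaleUpFactor:
# roughly 4 * 1.7 ≈ 7 replicas, capped at max_replicas (10).
# Once the busy ratio drops below scaleDownThreshold (0.3) and stays there,
# ARC scales down by scaleDownFactor: roughly 7 * 0.7 ≈ 5 replicas,
# never going below min_replicas (2).
```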
Once you have a Corewide Solutions Portal account, this one-time action will use your browser session to retrieve credentials:
```shell
terraform login solutions.corewide.com
```
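If you need non-interactive authentication (for example in CI, where `terraform login` cannot open a browser), the same token can be supplied via a Terraform CLI configuration file; the token value below is a placeholder you have to replace with your own:

```hcl
# ~/.terraformrc (terraform.rc on Windows)
credentials "solutions.corewide.com" {
  token = "REPLACE_WITH_YOUR_PORTAL_TOKEN"
}
```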
Initialize mandatory providers:
Copy and paste into your Terraform configuration and insert the variables:
hclmodule "tf_k8s_github_actions_runner" {
source = "solutions.corewide.com/kubernetes/tf-k8s-github-actions-runner/helm"
version = "~> 2.0.0"
# specify module inputs here or try one of the examples below
...
}
Initialize the setup:
```shell
terraform init
```
Corewide DevOps team strictly follows the Semantic Versioning Specification to provide our clients with products that have predictable upgrades between versions. We recommend pinning patch versions of our modules using the pessimistic constraint operator (`~>`) to prevent breaking changes during upgrades.

To get new features during upgrades (without breaking compatibility), use `~> 2.0` and run `terraform init -upgrade`. For the safest setup, use strict pinning with `version = "2.0.0"`.
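For instance, the different pinning styles look like this in a module block (only the `version` argument differs; other inputs are omitted):

```hcl
module "runner_controller" {
  source = "solutions.corewide.com/kubernetes/tf-k8s-github-actions-runner/helm"

  # version = "~> 2.0.0"   # pessimistic pin on the patch level: only 2.0.x upgrades
  # version = "~> 2.0"     # pessimistic pin on the minor level: new 2.x features, no breaking changes
  version = "2.0.0"        # strict pin to an exact release

  # ... module inputs here ...
}
```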
All notable changes to this project are documented here.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
v4.x

- BREAKING CHANGE: GitHub Runner and runner autoscaler Kubernetes manifests are now managed by the `tf-k8s-crd` module, which is incompatible with previous versions. Upgrade from an older version is possible with manual changes, see the Upgrade Notes section
- Added `tf-k8s-crd` module dependency

v3.x

- BREAKING CHANGE: all `kubernetes` provider resources now use versioned resources, which aren't compatible with the previous version

v2.x

- BREAKING CHANGE: all Kubernetes manifest resources are now managed by the `kubectl` provider instead of `kubernetes`, which isn't compatible with the previous version
- Added `kubectl` provider dependency
- Replaced `kubernetes_manifest` resource with `kubectl_manifest`
- Added `github_runners[*].organization` and `github_runners[*].repo` variables

v1.x

- First stable version
v1.x to v2.x
Now all Kubernetes manifest resources are managed by the `kubectl` provider instead of `kubernetes`. This approach avoids failures with Kubernetes manifests that occur when the module and the Kubernetes cluster are deployed at the same time. The simplest, non-destructive way to upgrade is to remove the old `kubernetes` provider resource from the state and import it as a `kubectl` provider resource, like so:
```bash
# Re-import runner manifest
terraform state rm module.github_runner.kubernetes_manifest.runner[0]
terraform import module.github_runner.kubectl_manifest.runner[0] actions.summerwind.dev/v1alpha1//RunnerDeployment//k8s-runners-0//github-runner-controller

# Re-import autoscaler manifest
terraform state rm module.github_runner.kubernetes_manifest.autoscaler[0]
terraform import module.github_runner.kubectl_manifest.autoscaler[0] actions.summerwind.dev/v1alpha1//HorizontalRunnerAutoscaler//runner-deployment-autoscaler-0//github-runner-controller
```
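If you run Terraform 1.5 or newer, the same migration can also be expressed declaratively with `import` blocks instead of the CLI `terraform import` commands (a sketch; the module instance name `github_runner` and the resource addresses mirror the example above, and the old `kubernetes_manifest` addresses still need to be removed from the state first):

```hcl
import {
  to = module.github_runner.kubectl_manifest.runner[0]
  id = "actions.summerwind.dev/v1alpha1//RunnerDeployment//k8s-runners-0//github-runner-controller"
}

import {
  to = module.github_runner.kubectl_manifest.autoscaler[0]
  id = "actions.summerwind.dev/v1alpha1//HorizontalRunnerAutoscaler//runner-deployment-autoscaler-0//github-runner-controller"
}
```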
v2.x to v3.x
Now all `kubernetes` provider resources use versioned resources. According to the `kubernetes` provider's suggestions, the simplest, non-destructive way to upgrade is to remove the old resource from the state and import it as a versioned one, like so:
```bash
# If the Kubernetes namespace was managed by the module, it must be re-imported
terraform state rm module.github_runner.kubernetes_namespace.github_runner[0]
terraform import module.github_runner.kubernetes_namespace_v1.github_runner[0] github-runner-controller
```
v3.x to v4.x
The module from v4.0 has switched from "plain" `kubectl_manifest` resources for GitHub Runner and runner autoscaler management to a dedicated module for CRD (Custom Resource Definition) management. This results in a resource reference mismatch. The CRD module uses the `alekc/kubectl` provider instead of `gavinbunney/kubectl`. It means that the provider for all existing resources within your state must be updated and the following steps must be performed:
1. Update the `required_providers` sections in your main code to reflect the usage of `alekc/kubectl` (see the sketch after this list)
2. Replace the provider for the existing resources in the state:

   ```bash
   terraform state replace-provider gavinbunney/kubectl alekc/kubectl
   ```

3. Re-initialize the providers:

   ```bash
   terraform init
   ```
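A minimal sketch of such a `required_providers` entry (the version constraint is illustrative, adjust it to the provider release you actually use):

```hcl
terraform {
  required_providers {
    kubectl = {
      source  = "alekc/kubectl"
      version = "~> 2.0"   # illustrative constraint, not a module requirement
    }
  }
}
```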
Then, TF state references of GitHub Runner and runner autoscaler can be updated like so:

```bash
terraform state mv module.runner_controller.kubectl_manifest.runner[0] module.runner_controller.module.runner[0].kubectl_manifest.crd
terraform state mv module.runner_controller.kubectl_manifest.autoscaler[0] module.runner_controller.module.runner[0].data.kubernetes_resource.crd
```
Deploy complete stack with mandatory values only:
hclmodule "runner_controller" {
source = "solutions.corewide.com/kubernetes/tf-k8s-github-actions-runner/helm"
version = "~> 2.0"
github_token = "github_token"
controller = {}
github_runners = [
{
repo = "repo-owner/repo-name"
},
]
}
Deploy full stack with custom controller version, node selectors and labels, GitHub runner instance with custom repositories, labels and autoscaling conditions:
hclmodule "runner_controller" {
source = "solutions.corewide.com/kubernetes/tf-k8s-github-actions-runner/helm"
version = "~> 2.0"
github_token = "github_token"
controller = {
app_version = "v0.25.2"
chart_version = "0.20.2"
sync_period = "300"
labels = {
"app\\.kubernetes\\.io/instance" = "github-runner"
}
node_selector = {
"node\\.kubernetes\\.io/instance-type" = "m5.large"
}
}
github_runners = [
{
min_replicas = 1
max_replicas = 5
custom_labels = ["actions-runner"]
repo = "repo-owner/repo-name"
},
]
}
Deploy full stack for organization level with a custom synchronization period to reduce the time between polls:
hclmodule "runner_controller" {
source = "solutions.corewide.com/kubernetes/tf-k8s-github-actions-runner/helm"
version = "~> 2.0"
github_token = "github_token"
controller = {
sync_period = "60"
node_selector = {
"node\\.kubernetes\\.io/instance-type" = "m5.large"
}
}
github_runners = [
{
min_replicas = 1
max_replicas = 10
custom_labels = ["actions-runner"]
organization = "your-organization"
},
]
}
Pipeline example:
```yaml
---
name: pipeline

on:
  push:
    branches: [ my-branch ]

jobs:
  build:
    name: Build Docker
    runs-on: my-awesome-custom-label
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Build Docker
        run: |
          docker build -t my-docker-image .
```
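Note that `runs-on` must match one of the runner's labels; the pairing below is illustrative only and shows how `custom_labels` from the module inputs would line up with the pipeline above:

```hcl
github_runners = [
  {
    repo          = "repo-owner/repo-name"
    custom_labels = ["my-awesome-custom-label"]   # matches runs-on in the pipeline above
  },
]
```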
| Variable | Description | Type | Default | Required | Sensitive |
|---|---|---|---|---|---|
| `controller` | Set of parameters for the GitHub ARC | `object` | | yes | no |
| `github_runners` | Set of parameters for the GitHub Runners | `list(object)` | | yes | no |
| `github_token` | GitHub personal access token (PAT classic) suitable for personal API. PAT should be granted with full access to `repo` and `admin` scopes | `string` | | yes | no |
| `name_prefix` | Naming prefix for all the resources created by the module | `string` | | yes | no |
| `controller.app_version` | ARC application version | `string` | `v0.26.0` | no | no |
| `controller.chart_version` | ARC Helm chart version | `string` | `0.21.0` | no | no |
| `controller.create_secret` | Whether the ARC auth secret should be created | `bool` | `true` | no | no |
| `controller.custom_values` | Custom Helm chart values in key-value format | `map(string)` | `{}` | no | no |
| `controller.high_availability` | Toggle extra ARC replica for redundancy | `bool` | `false` | no | no |
| `controller.labels` | Custom Kubernetes labels for ARC resources | `map(string)` | `{}` | no | no |
| `controller.node_selector` | Selector of node group to place ARC resources on | `map(string)` | `{}` | no | no |
| `controller.runner_image` | ARC image name | `string` | `summerwind/actions-runner-controller` | no | no |
| `controller.sync_period` | The sync period in seconds between the polls | `number` | `300` | no | no |
| `create_namespace` | Indicates creation of a dedicated namespace for GitHub ARC resources | `bool` | `true` | no | no |
| `github_runners[*].custom_labels` | Custom labels to be assigned to the GitHub Runner. Use these labels as a value for the `runs-on` key in your pipelines | `list(string)` | `['self-hosted']` | no | no |
| `github_runners[*].max_replicas` | Max number of GitHub Runner replicas to be configured in the autoscaler resource | `number` | `1` | no | no |
| `github_runners[*].min_replicas` | Min number of GitHub Runner replicas to be configured in the autoscaler resource | `number` | `1` | no | no |
| `github_runners[*].organization` | GitHub organization for the runner to attach to | `string` | | no | no |
| `github_runners[*].repo` | GitHub repository for the runner to attach to | `string` | | no | no |
| `namespace` | The namespace to install the GitHub ARC into | `string` | `github-runner-controller` | no | no |
| Dependency | Version | Kind |
|---|---|---|
| `terraform` | `>= 1.3` | CLI |
| `gavinbunney/kubectl` | `~> 1.13` | provider |
| `hashicorp/helm` | `~> 2.5` | provider |
| `hashicorp/kubernetes` | `~> 2.9` | provider |