
The module creates a managed Kubernetes cluster (DOKS) in DigitalOcean.
Supported Kubernetes versions are 1.29 and newer; see DigitalOcean Kubernetes Supported Releases for the current list.
Once you have a Corewide Solutions Portal account, this one-time action will use your browser session to retrieve credentials:
```shell
terraform login solutions.corewide.com
```
Initialize mandatory providers: copy and paste the following into your Terraform configuration and insert the variables:
```hcl
module "tf_do_k8s_doks" {
  source  = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"
  version = "~> 3.1.0"

  # specify module inputs here or try one of the examples below
  ...
}
```
Initialize the setup:
```shell
terraform init
```
Corewide DevOps team strictly follows the Semantic Versioning Specification to provide our clients with products that have predictable upgrades between versions. We recommend pinning patch versions of our modules using the pessimistic constraint operator (`~>`) to prevent breaking changes during upgrades. To get new features during upgrades (without breaking compatibility), use `~> 3.1` and run `terraform init -upgrade`. For the safest setup, use strict pinning with `version = "3.1.0"`.
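For illustration, the three pinning strategies above map onto the module block like this (only one `version` argument may be present at a time):

```hcl
module "tf_do_k8s_doks" {
  source = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"

  # pick exactly one constraint style:
  version = "~> 3.1.0" # pessimistic patch pin: allows 3.1.x patch releases only
  # version = "~> 3.1"   # minor pin: picks up new 3.x features via `terraform init -upgrade`
  # version = "3.1.0"    # strict pin: never changes until the constraint is edited
}
```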
All notable changes to this project are documented here.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
Kubernetes 1.32:
- BREAKING CHANGE: the node pools management has been rearranged by unconditionally creating the default maintenance node pool and adding a random suffix to the extra node pools
- `default_node_pool` variable
- `metrics_server` variable to allow Node Selector configuration

Kubernetes 1.29:
- `metrics_server` variable parameters
- `vpc` is a mandatory input now

Kubernetes 1.24:
- BREAKING CHANGE: all node pools now have corresponding names in the state instead of abstract indexes, which aren't compatible with the new version
- `metrics-server` deployment as a Helm release
- `cluster_ha` variable renamed to `cluster_ha_enabled`
- `for_each` meta-argument instead of `count`, in order to store and reference Node Pool resources by their names instead of abstract indexes
- `cluster_maintenance.surge_upgrade` variable enables the surge cluster upgrade, which speeds up the upgrade and reduces disruption to workloads by using up to 10 additional nodes
- `cluster_ha` variable to enable the new high-availability (multi-master) control plane
- `create_before_destroy` meta-argument in the Cluster and Node Pool resources, to prevent failures during cluster and node pool recreation
- `cluster_version` - K8s 1.23 and newer are allowed

Kubernetes 1.23:
- `cluster` output containing sensitive data is now marked as sensitive

Module version 4.0.0 rearranged the node pools management by unconditionally creating the default maintenance node pool and adding a random suffix to the extra node pools; the suffix changes when a node pool is recreated.
After the module version is upgraded, you must update the node selectors of your Kubernetes resources to match the new custom node pool labels:
from:
```yaml
doks.digitalocean.com/node-pool: <node-pool-name>
```
to:
```yaml
<cluster_name>/node-pool-name: <node-pool-name>
```
Then apply the new module version.
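For context, here is a minimal sketch of that change inside a Deployment manifest, assuming a hypothetical cluster named `foo` and a node pool named `node-pool-application`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      nodeSelector:
        # before: doks.digitalocean.com/node-pool: node-pool-application
        # after (the module's custom label):
        foo/node-pool-name: node-pool-application
      containers:
        - name: app
          image: nginx:1.27
```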
v4.0.x to v4.1.x
The module from v4.1 has changed the minimal supported Kubernetes version to 1.32. You can skip this chapter if you already use K8s version 1.32 or higher.
The cluster version upgrade itself should pass without downtime. However, since the outcome also depends on the apps and services hosted in the cluster, please review the following documentation to make sure the upgrade goes smoothly:
* DigitalOcean K8s cluster upgrade
* K8s Deprecated API Migration Guide
* K8s deprecation policy
The upgrade should roll one minor version at a time: from v1.29 to v1.30, then from v1.30 to v1.31, then from v1.31 to v1.32, and so on.
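In practice this means bumping the module's `cluster_version` input one minor version per `terraform apply`; a minimal sketch, assuming a module block named `doks`:

```hcl
module "doks" {
  # ... all other inputs unchanged ...

  # bump one minor version per `terraform apply`,
  # waiting for each upgrade to complete before the next:
  cluster_version = "1.30" # then "1.31", then "1.32"
}
```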
Create a Kubernetes cluster with a single default node pool without auto-scaling:
```hcl
provider "helm" {
  kubernetes {
    host                   = module.doks.cluster.kube_config[0].host
    cluster_ca_certificate = base64decode(module.doks.cluster.kube_config[0].cluster_ca_certificate)
    token                  = module.doks.cluster.kube_config[0].token
  }
}

module "doks" {
  source  = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"
  version = "~> 3.1"

  name_prefix = "foo"
  region      = "fra1"
  vpc         = "506f78a4-e098-11e5-ad9f-000f53306ae1"

  node_pools = [
    {
      name     = "node-pool-application"
      min_size = 3
    },
  ]

  cluster_tags = [
    "production",
    "app",
  ]
}
```
Create a Kubernetes cluster with multiple auto-scaling node pools and automatic upgrades:
```hcl
provider "helm" {
  kubernetes {
    host                   = module.doks.cluster.kube_config[0].host
    cluster_ca_certificate = base64decode(module.doks.cluster.kube_config[0].cluster_ca_certificate)
    token                  = module.doks.cluster.kube_config[0].token
  }
}

module "doks" {
  source  = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"
  version = "~> 3.1"

  name_prefix = "foo"
  region      = "fra1"
  vpc         = "506f78a4-e098-11e5-ad9f-000f53306ae1"

  cluster_maintenance = {
    auto_upgrade  = true
    surge_upgrade = true
  }

  cluster_tags = [
    "production",
    "app",
  ]

  node_pools = [
    {
      name     = "frontend"
      min_size = 2
      max_size = 3
    },
    {
      name     = "backend"
      min_size = 2
      max_size = 5
    },
  ]
}
```
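Both examples feed the cluster credentials into the `helm` provider through the sensitive `cluster` output; the same output can also export a kubeconfig for CLI use. A sketch, assuming the standard `kube_config` attributes exposed by the DigitalOcean provider's cluster resource:

```hcl
# Raw kubeconfig for kubectl access; sensitive because it embeds credentials
output "kubeconfig" {
  value     = module.doks.cluster.kube_config[0].raw_config
  sensitive = true
}
```

Retrieve it with `terraform output -raw kubeconfig > kubeconfig.yaml`.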
| Variable | Description | Type | Default | Required | Sensitive |
|---|---|---|---|---|---|
| `cluster_tags` | A list of tag names to be applied to the Kubernetes cluster | `list(string)` | | yes | no |
| `name_prefix` | Naming prefix for all the resources created by the module | `string` | | yes | no |
| `node_pools` | List of node parameters to create node pools | `list(object)` | | yes | no |
| `region` | The region where the DigitalOcean Kubernetes cluster should be created | `string` | | yes | no |
| `vpc` | The ID of the VPC where the Kubernetes cluster will be located | `string` | | yes | no |
| `cluster_ha_enabled` | Whether High Availability (multi-master) should be enabled for the cluster | `bool` | `false` | no | no |
| `cluster_maintenance` | A set of parameters that defines automatic upgrade of the Kubernetes cluster version | `object` | | no | no |
| `cluster_maintenance.auto_upgrade` | Indicates whether the cluster will be automatically upgraded to new patch releases during its maintenance window | `bool` | `false` | no | no |
| `cluster_maintenance.maintenance_window_day` | The day of the maintenance window policy | `string` | `sunday` | no | no |
| `cluster_maintenance.maintenance_window_start_time` | The start time of the maintenance window policy, in UTC (24-hour clock format) | `string` | `01:00` | no | no |
| `cluster_maintenance.surge_upgrade` | Indicates whether the cluster will create duplicate upgraded nodes to prevent downtime while upgrading the cluster to new patch releases during its maintenance window | `bool` | `false` | no | no |
| `cluster_version` | Prefix of the major version of Kubernetes used for the cluster | `string` | `1.29` | no | no |
| `metrics_server` | Map of metrics server parameters | `object` | | no | no |
| `metrics_server.app_version` | Application version of metrics server to be installed | `string` | `v0.6.2` | no | no |
| `metrics_server.chart_version` | Chart version to create metrics server from | `string` | `3.8.3` | no | no |
| `metrics_server.custom_values` | Map of custom values to apply to metrics server | `map(any)` | `{}` | no | no |
| `node_pools[*].labels` | A map of key/value pairs to apply to nodes in the pool | `map(string)` | `{}` | no | no |
| `node_pools[*].max_size` | If auto-scaling is enabled, the maximum number of nodes the node pool can be scaled up to | `number` | `1` | no | no |
| `node_pools[*].min_size` | If auto-scaling is enabled, the minimum number of nodes the node pool can be scaled down to | `number` | `1` | no | no |
| `node_pools[*].name` | A name for the node pool | `string` | | yes | no |
| `node_pools[*].node_size` | The type of Droplet to be used as workers in the node pool | `string` | `s-2vcpu-2gb` | no | no |
| `node_pools[*].tags` | A list of tag names to be applied to the node pool | `list(string)` | `[]` | no | no |
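As an example, the `metrics_server` input can pin the deployment and pass chart values through `custom_values`. A sketch; the `replicas` value below is a hypothetical illustration, consult the metrics-server Helm chart for the available keys:

```hcl
module "doks" {
  # ... other inputs ...

  metrics_server = {
    app_version   = "v0.6.2" # metrics-server application version
    chart_version = "3.8.3"  # Helm chart version to deploy from

    # hypothetical chart values passed through to the Helm release
    custom_values = {
      replicas = 2
    }
  }
}
```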
| Output | Description | Type | Sensitive |
|---|---|---|---|
| `cluster` | Contains attributes of the Kubernetes cluster | resource | yes |
| `node_pools` | Contains attributes of all node pools | computed | no |
| Dependency | Version | Kind |
|---|---|---|
| `terraform` | `>= 1.3` | CLI |
| `digitalocean/digitalocean` | `~> 2.16` | provider |
| `hashicorp/helm` | `~> 2.5` | provider |
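Expressed as a `terraform` settings block, the pinned dependencies above would look like this:

```hcl
terraform {
  required_version = ">= 1.3"

  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.16"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.5"
    }
  }
}
```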