The module creates a managed Kubernetes cluster (DOKS) in DigitalOcean.
Supported Kubernetes versions are 1.29 and newer (see DigitalOcean Kubernetes Supported Releases).

The additional node count for cluster surge upgrades cannot be set. When the droplet limit (10 nodes) is reached, the upgrade continues without surging.

DigitalOcean Kubernetes Cluster

DigitalOcean Kubernetes Cluster management
$1,000
Log in to Corewide IaC registry

Once you have a Corewide Solutions Portal account, this one-time action will use your browser session to retrieve credentials:

```shell
terraform login solutions.corewide.com
```
Provision instructions

Initialize mandatory providers:

Copy and paste into your Terraform configuration and insert the variables:

```hcl
module "tf_do_k8s_doks" {
  source  = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"
  version = "~> 3.1.0"

  # specify module inputs here or try one of the examples below
  ...
}
```

Initialize the setup:

```shell
terraform init
```
Define update strategy

The Corewide DevOps team strictly follows the Semantic Versioning Specification to provide our clients with products that have predictable upgrades between versions. We recommend pinning patch versions of our modules with the pessimistic constraint operator (`~>`) to prevent breaking changes during upgrades.

To get new features during upgrades (without breaking compatibility), use `~> 3.1` and run `terraform init -upgrade`.

For the safest setup, use strict pinning with `version = "3.1.0"`.
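The two strategies differ only in the version line; a strict pin looks like this (module source as above):

```hcl
module "doks" {
  source  = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"

  # Exact pin: the module is upgraded only when this value is edited by hand
  version = "3.1.0"
}
```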

v3.1.0 released 6 months, 3 weeks ago
New version approx. every 13 weeks

NOTE: HPA and VPA work only when the Kubernetes control plane can collect pod performance metrics. These metrics are provided by metrics-server, which is not preinstalled in DOKS, so the module deploys it via the helm provider; authentication settings for the managed cluster must therefore also be supplied in the module setup.

Create Kubernetes cluster with a single default node pool without auto-scaling:

```hcl
provider "helm" {
  kubernetes {
    host                   = module.doks.cluster.kube_config[0].host
    cluster_ca_certificate = base64decode(module.doks.cluster.kube_config[0].cluster_ca_certificate)
    token                  = module.doks.cluster.kube_config[0].token
  }
}

module "doks" {
  source  = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"
  version = "~> 3.1"

  name_prefix = "foo"
  region      = "fra1"
  vpc         = "506f78a4-e098-11e5-ad9f-000f53306ae1"

  node_pools = [
    {
      name     = "node-pool-application"
      min_size = 3
    },
  ]

  cluster_tags = [
    "production",
    "app",
  ]
}
```


WARNING: When using the module with multiple node pools, avoid changing the order of entries in the node_pools input. The module always treats the first node pool in the list as the one assigned directly to the DOKS cluster resource, so reordering may cause the DOKS cluster to be recreated.
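For instance, to add a pool safely, append it after the existing entries so the first (default) pool keeps its position; the pool names here are illustrative:

```hcl
  node_pools = [
    {
      name     = "frontend" # first entry: assigned directly to the cluster resource; keep it first
      min_size = 2
      max_size = 3
    },
    {
      name     = "backend"
      min_size = 2
      max_size = 5
    },
    {
      name     = "batch"    # new pool: appended at the end, existing order untouched
      min_size = 1
      max_size = 2
    },
  ]
```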

Create Kubernetes cluster with multiple node pools with auto-scaling and auto upgrade:

```hcl
provider "helm" {
  kubernetes {
    host                   = module.doks.cluster.kube_config[0].host
    cluster_ca_certificate = base64decode(module.doks.cluster.kube_config[0].cluster_ca_certificate)
    token                  = module.doks.cluster.kube_config[0].token
  }
}

module "doks" {
  source  = "solutions.corewide.com/digitalocean/tf-do-k8s-doks/digitalocean"
  version = "~> 3.1"

  name_prefix = "foo"
  region      = "fra1"
  vpc         = "506f78a4-e098-11e5-ad9f-000f53306ae1"

  cluster_maintenance = {
    auto_upgrade  = true
    surge_upgrade = true
  }

  cluster_tags = [
    "production",
    "app",
  ]

  node_pools = [
    {
      name     = "frontend"
      min_size = 2
      max_size = 3
    },
    {
      name     = "backend"
      min_size = 2
      max_size = 5
    },
  ]
}
```
| Variable | Description | Type | Default | Required | Sensitive |
|---|---|---|---|---|---|
| `cluster_tags` | A list of tag names to be applied to the Kubernetes cluster | `list(string)` | | yes | no |
| `name_prefix` | Naming prefix for all the resources created by the module | `string` | | yes | no |
| `node_pools` | List of node parameters to create node pools | `list(object)` | | yes | no |
| `region` | The region where the DigitalOcean Kubernetes cluster should be created in | `string` | | yes | no |
| `vpc` | The ID of the VPC where the Kubernetes cluster will be located | `string` | | yes | no |
| `cluster_ha_enabled` | Whether High Availability (multi-master) should be enabled for the cluster | `bool` | `false` | no | no |
| `cluster_maintenance` | A set of parameters that defines automatic upgrade of the Kubernetes cluster version | `object` | | no | no |
| `cluster_maintenance.auto_upgrade` | Indicates whether the cluster will be automatically upgraded to new patch releases during its maintenance window | `bool` | `false` | no | no |
| `cluster_maintenance.maintenance_window_day` | The day of the maintenance window policy | `string` | `sunday` | no | no |
| `cluster_maintenance.maintenance_window_start_time` | The start time in UTC of the maintenance window policy in 24-hour clock format | `string` | `01:00` | no | no |
| `cluster_maintenance.surge_upgrade` | Indicates whether the cluster will create duplicate upgraded nodes to prevent downtime while upgrading the cluster to new patch releases during its maintenance window | `bool` | `false` | no | no |
| `cluster_version` | Prefix of the major version of Kubernetes used for the cluster | `string` | `1.29` | no | no |
| `metrics_server` | Map of metrics server parameters | `object` | | no | no |
| `metrics_server.app_version` | Application version of metrics server to be installed | `string` | `v0.6.2` | no | no |
| `metrics_server.chart_version` | Chart version to create metrics server from | `string` | `3.8.3` | no | no |
| `metrics_server.custom_values` | Map of custom values to apply to metrics server | `map(any)` | `{}` | no | no |
| `node_pools[*].labels` | A map of key/value pairs to apply to nodes in the pool | `map(string)` | `{}` | no | no |
| `node_pools[*].max_size` | If auto-scaling is enabled, this represents the maximum number of nodes that the node pool can be scaled up to | `number` | `1` | no | no |
| `node_pools[*].min_size` | If auto-scaling is enabled, this represents the minimum number of nodes that the node pool can be scaled down to | `number` | `1` | no | no |
| `node_pools[*].name` | A name for the node pool | `string` | | yes | no |
| `node_pools[*].node_size` | The type of Droplet to be used as workers in the node pool | `string` | `s-2vcpu-2gb` | no | no |
| `node_pools[*].tags` | A list of tag names to be applied to the node pool | `list(string)` | `[]` | no | no |
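As an illustration of the optional metrics_server inputs above (defaults taken from the table; the custom value key is only an example of the `map(any)` shape, consult the metrics-server chart for real value names):

```hcl
  metrics_server = {
    app_version   = "v0.6.2"
    chart_version = "3.8.3"
    custom_values = {
      # example chart value, shown only to illustrate the map(any) shape
      replicas = 2
    }
  }
```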
| Output | Description | Type | Sensitive |
|---|---|---|---|
| `cluster` | Contains attributes of Kubernetes cluster resource | | yes |
| `node_pools` | Contains attributes of all node pools | computed | no |
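The cluster output is consumed in the helm provider blocks above; it can be re-exported the same way. Since the output is marked sensitive, Terraform requires the `sensitive` flag on anything that exposes it:

```hcl
output "doks_api_host" {
  # attribute path taken from the helm provider examples above
  value     = module.doks.cluster.kube_config[0].host
  sensitive = true
}
```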
| Dependency | Version | Kind |
|---|---|---|
| terraform | >= 1.3 | CLI |
| digitalocean/digitalocean | ~> 2.16 | provider |
| hashicorp/helm | ~> 2.5 | provider |
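Pinned per the table above, the matching requirements block in the root module would be:

```hcl
terraform {
  required_version = ">= 1.3"

  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.16"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.5"
    }
  }
}
```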
