The module creates a managed Kubernetes cluster (GKE) in GCP.
Supported Kubernetes versions are 1.27 and newer.

NOTE: This module is meant to be used with an already created VPC.

NOTE: By default, the cluster is configured to work in a single zone of a region, but a multi-zonal cluster is also possible. See the location parameter reference.
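
The module expects an existing VPC, as noted above. A minimal network definition that could be passed to the module might look like this (resource and CIDR values are illustrative, not prescribed by the module):

```hcl
# Hypothetical pre-existing network; the module only needs its self_link
resource "google_compute_network" "main" {
  name                    = "main"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "main" {
  name          = "main"
  region        = "europe-west1"
  network       = google_compute_network.main.self_link
  ip_cidr_range = "10.0.0.0/16"
}
```

The `self_link` of the network is then passed to the module's `vpc` input, as shown in the examples on this page.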

Log in to Corewide IaC registry

Once you have a Corewide Solutions Portal account, this one-time action will use your browser session to retrieve credentials:

```shell
terraform login solutions.corewide.com
```

Provision instructions

Initialize mandatory providers:

Copy and paste into your Terraform configuration and insert the variables:

```hcl
module "tf_gcp_k8s_gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 2.1.2"

  # specify module inputs here or try one of the examples below
  ...
}
```

Initialize the setup:

```shell
terraform init
```

Define update strategy

The Corewide DevOps team strictly follows the Semantic Versioning Specification to provide our clients with products that have predictable upgrades between versions. We recommend pinning the patch versions of our modules with the pessimistic constraint operator (`~>`) to prevent breaking changes during upgrades.

To get new features during upgrades (without breaking compatibility), use `~> 2.1` and run `terraform init -upgrade`.

For the safest setup, use strict pinning with `version = "2.1.2"`.

v2.1.2 released 8 months, 2 weeks ago
New version approx. every 9 weeks

GKE with two node pools: the first with autoscaling enabled, the second with autoscaling disabled:

```hcl
module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 2.1"

  name_prefix     = "foo"
  vpc             = google_compute_network.main.self_link
  release_channel = "STABLE"
  cluster_version = "1.27"

  node_pools = [
    {
      name        = "application"
      min_size    = 2
      max_size    = 5
      preemptible = true

      tags = [
        "application",
      ]
    },
    {
      name      = "maintenance"
      node_size = "e2-standard-4"

      tags = [
        "fixed",
        "maintenance",
      ]
    },
  ]
}
```

GKE cluster with workload identity enabled, one node pool with autoscaling enabled, workload identity pool created:

```hcl
module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 2.1"

  name_prefix                   = "foo"
  vpc                           = google_compute_network.main.self_link
  release_channel               = "STABLE"
  cluster_version               = "1.27"
  workload_identity_enabled     = true
  create_workload_identity_pool = true

  node_pools = [
    {
      name        = "application"
      min_size    = 2
      max_size    = 5
      preemptible = true
      image       = "cos_containerd"
      tags        = ["application"]
    },
  ]
}
```
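
With workload identity enabled, pods can impersonate a Google service account once a Kubernetes service account is bound to it. A hedged sketch of that binding (the service account, namespace, and `var.project_id` are assumptions for illustration, not module inputs or outputs):

```hcl
# Hypothetical Google service account for the workload
resource "google_service_account" "app" {
  account_id = "foo-app"
}

# Allow the Kubernetes SA "app" in namespace "default" to impersonate the GSA;
# the member format is "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA]"
resource "google_service_account_iam_member" "workload_identity" {
  service_account_id = google_service_account.app.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${var.project_id}.svc.id.goog[default/app]"
}
```

The Kubernetes service account would then be annotated with `iam.gke.io/gcp-service-account` pointing at the Google service account's email.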

GKE with IP aliasing enabled and restricted connection to cluster API:

```hcl
module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 2.1"

  name_prefix     = "foo"
  vpc             = google_compute_network.main.self_link
  release_channel = "STABLE"
  cluster_version = "1.27"

  allowed_mgmt_networks = {
    office = "104.22.0.0/24"
  }

  vpc_native = true

  node_pools = [
    {
      name = "application"
      tags = ["application"]
    },
  ]
}
```
| Variable | Description | Type | Default | Required | Sensitive |
|---|---|---|---|---|---|
| `name_prefix` | Name prefix for the Google service account and GKE cluster | `string` | | yes | no |
| `node_pools` | List of node pools to create | `list(object)` | | yes | no |
| `vpc` | VPC network `self_link` which will be attached to the Kubernetes cluster | `string` | | yes | no |
| `allowed_mgmt_networks` | Map of CIDR blocks allowed to connect to the cluster API | `map(string)` | | no | no |
| `cluster_master_cidr` | CIDR block to be used for control plane components | `string` | `172.16.0.0/28` | no | no |
| `cluster_private_nodes_enabled` | Indicates whether cluster private nodes should be enabled | `bool` | `false` | no | no |
| `cluster_version` | Kubernetes version (Major.Minor) | `string` | `1.27` | no | no |
| `create_workload_identity_pool` | Indicates whether to create a GKE workload identity pool or use the existing one (one pool per project) | `bool` | `true` | no | no |
| `node_pools[*].disk_size` | Disk size of a node | `number` | `20` | no | no |
| `node_pools[*].image` | Image type of node pools | `string` | `COS_CONTAINERD` | no | no |
| `node_pools[*].max_size` | Maximum number of nodes in the pool | `number` | | no | no |
| `node_pools[*].min_size` | Minimum number of nodes in the pool | `number` | `1` | no | no |
| `node_pools[*].name` | Name of the node pool | `string` | | yes | no |
| `node_pools[*].node_size` | Instance type to use for node creation | `string` | `e2-standard-2` | no | no |
| `node_pools[*].preemptible` | Whether the nodes should be preemptible | `bool` | `false` | no | no |
| `node_pools[*].tags` | The list of instance tags to identify valid sources or targets for network firewalls (when not set, the default rule set is applied) | `list(string)` | `[]` | no | no |
| `region` | A specific zone within the region, or a single region | `string` | | no | no |
| `release_channel` | Configuration options for the Release channel feature | `string` | `UNSPECIFIED` | no | no |
| `subnet_id` | The name or `self_link` of the Google Compute Engine subnetwork in which the cluster's instances are launched | `string` | | no | no |
| `vpc_native` | Indicates whether IP aliasing should be enabled | `bool` | `false` | no | no |
| `workload_identity_enabled` | Indicates whether workload identity is enabled and whether nodes should store their metadata on the GKE metadata server | `bool` | `false` | no | no |
| Output | Description | Type | Sensitive |
|---|---|---|---|
| `cluster` | GKE cluster resource | resource | no |
| `node_pools` | List of created node pools | resource | no |
| `workload_identity_pool` | GKE workload identity pool data | computed | no |
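
Since the `cluster` output exposes the full GKE cluster resource, its attributes can feed a Kubernetes provider configuration. A sketch assuming the standard `google_container_cluster` attribute schema:

```hcl
data "google_client_config" "current" {}

provider "kubernetes" {
  host  = "https://${module.gke.cluster.endpoint}"
  token = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(
    module.gke.cluster.master_auth[0].cluster_ca_certificate
  )
}
```

If `allowed_mgmt_networks` is set, the machine running Terraform must be within one of the allowed CIDR blocks to reach the cluster API.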
| Dependency | Version | Kind |
|---|---|---|
| `terraform` | `>= 1.3` | CLI |
| `hashicorp/google` | `~> 4.13` | provider |
| `hashicorp/google-beta` | `~> 4.13` | provider |
