Diagram of product resources

The module creates a managed Kubernetes cluster (GKE) in GCP.
Supported Kubernetes versions are 1.32 and newer.

By default, a multi-zonal cluster is configured, but it is also possible to deploy the cluster into a single zone of a region; see the region parameter reference.

The default module configuration ensures the GKE cluster is created with enhanced security features enabled: private cluster nodes, shielded nodes with secure boot, network policies, and blocking of project-wide SSH keys on the nodes.
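
Each of these defaults is controlled by a dedicated variable (see the variable reference below), so individual features can be relaxed explicitly when a workload requires it. An illustrative fragment to place inside the module block:

  # all of the following default to true; disable only if you accept the reduced hardening
  network_policies_enabled      = false
  shielded_nodes_enabled        = false
  secure_boot_enabled           = false
  block_project_ssh_keys        = false
  cluster_private_nodes_enabled = false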

NOTE: To collect custom metrics with Google Cloud Managed Service for Prometheus (GMP), you must additionally configure Kubernetes PodMonitoring or ClusterPodMonitoring CRD(s).

NOTE: When secrets encryption at the application layer is enabled, secrets must be re-encrypted after each KMS key rotation.

NOTE: A maintenance exclusion window can only be specified together with a maintenance window.

Log in to Corewide IaC registry

Once you have a Corewide Solutions Portal account, this one-time action will use your browser session to retrieve credentials:

terraform login solutions.corewide.com

Provision instructions

Initialize mandatory providers:
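
A requirements block matching the CLI and provider constraints from the dependency table below could look like this (a sketch to adapt to your configuration):

terraform {
  required_version = ">= 1.3"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 6.27"
    }

    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 6.27"
    }
  }
}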

Copy and paste into your Terraform configuration and insert the variables:

module "tf_gcp_k8s_gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 5.1.1"

  # specify module inputs here or try one of the examples below
  ...
}

Initialize the setup:

terraform init

Define update strategy

The Corewide DevOps team strictly follows the Semantic Versioning Specification to provide our clients with products that have predictable upgrades between versions. We recommend pinning the patch version of our modules with the pessimistic constraint operator (~>) to prevent breaking changes during upgrades.

To get new features during upgrades (without breaking compatibility), use ~> 5.1 and run terraform init -upgrade.

For the safest setup, use strict pinning with version = "5.1.1".
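
Expressed as module version constraints, the options look like this (keep only one version argument active):

module "tf_gcp_k8s_gke" {
  source = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"

  # recommended: pin the patch level so only patch upgrades are allowed
  version = "~> 5.1.1"

  # version = "5.1.1"  # strict pin: the safest option, no upgrades until the constraint changes
  # version = "~> 5.1" # allow new 5.x minor releases, picked up with terraform init -upgrade

  # specify module inputs here or reuse one of the examples below
}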

Creates a GKE cluster with workload identity enabled, a dedicated workload identity pool created, secrets encryption configured, and an additional autoscaled node pool of preemptible nodes alongside the default node pool:

module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 5.1"

  name_prefix                   = "foo"
  vpc                           = google_compute_network.main.self_link
  release_channel               = "STABLE"
  workload_identity_enabled     = true
  create_workload_identity_pool = true

  secrets_encryption = {
    key_id = google_kms_crypto_key.gke.id
  }

  node_pools = [
    {
      name        = "application"
      min_size    = 2
      max_size    = 5
      preemptible = true
      image       = "cos_containerd"
      tags        = ["application"]
    },
  ]
}
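
The examples reference a VPC network (google_compute_network.main) and a Cloud KMS key (google_kms_crypto_key.gke) that are expected to exist elsewhere in your configuration. A minimal sketch of such prerequisites, where the resource names match the references in the examples and every other value is an illustrative assumption:

resource "google_compute_network" "main" {
  name                    = "foo-network"
  auto_create_subnetworks = true
}

resource "google_kms_key_ring" "gke" {
  name     = "foo-gke"
  location = "us-central1" # should match the cluster's region
}

resource "google_kms_crypto_key" "gke" {
  name            = "gke-secrets"
  key_ring        = google_kms_key_ring.gke.id
  rotation_period = "7776000s" # 90 days; see the note above about re-encrypting secrets after rotation
}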

Creates a GKE cluster with IP aliasing (VPC-native networking) and a custom pod IP range, cluster API access restricted to specific management networks, deletion protection disabled, the Gateway API enabled, a custom default node size, and observability configured:

module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 5.1"

  name_prefix                 = "foo"
  vpc                         = google_compute_network.main.self_link
  release_channel             = "STABLE"
  gateway_api_config_channel  = "CHANNEL_STANDARD"
  deletion_protection_enabled = false
  vpc_native                  = true
  cluster_ipv4_cidr_block     = "10.100.0.0/20"

  allowed_mgmt_networks = {
    office = "104.22.0.0/24"
  }

  secrets_encryption = {
    key_id = google_kms_crypto_key.gke.id
  }

  default_node_pool = {
    node_size = "e2-standard-4"
  }

  node_pools = [
    {
      name        = "application"
      min_size    = 2
      max_size    = 5
      preemptible = true

      tags = [
        "application",
      ]
    },
  ]

  observability = {
    enabled                    = true
    managed_prometheus_enabled = true

    components = [
      "SYSTEM_COMPONENTS",
      "APISERVER",
      "SCHEDULER",
      "CONTROLLER_MANAGER",
      "STORAGE",
      "HPA",
      "DEPLOYMENT",
    ]
  }
}
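
As noted above, Managed Service for Prometheus only scrapes custom application metrics if matching PodMonitoring (or ClusterPodMonitoring) resources exist in the cluster. A hedged sketch of creating one from the same configuration via the hashicorp/kubernetes provider (not a dependency of this module); it assumes the module's cluster output exposes the google_container_cluster resource and that a workload labelled app: my-app serves metrics on a port named metrics:

data "google_client_config" "current" {}

provider "kubernetes" {
  host                   = "https://${module.gke.cluster.endpoint}"
  token                  = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(module.gke.cluster.master_auth[0].cluster_ca_certificate)
}

# GMP managed-collection CRD (monitoring.googleapis.com/v1)
resource "kubernetes_manifest" "app_pod_monitoring" {
  manifest = {
    apiVersion = "monitoring.googleapis.com/v1"
    kind       = "PodMonitoring"

    metadata = {
      name      = "my-app" # hypothetical name
      namespace = "default"
    }

    spec = {
      selector = {
        matchLabels = {
          app = "my-app" # hypothetical workload label
        }
      }

      endpoints = [
        {
          port     = "metrics" # hypothetical container port name
          interval = "30s"
        },
      ]
    }
  }
}

Keep in mind that kubernetes_manifest needs the cluster API to be reachable at plan time, so such resources are often kept in a separate Terraform configuration or applied with kubectl instead.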

Creates a GKE cluster with auto-upgrade enabled, a custom maintenance window from 11:00 PM to 5:00 AM UTC recurring weekly on Friday and Saturday (two 6-hour windows per week, roughly 52 hours per month, which satisfies the 48-hour monthly minimum), and two maintenance exclusion windows:

module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 5.1"

  name_prefix                   = "foo"
  vpc                           = google_compute_network.main.self_link
  release_channel               = "STABLE"
  workload_identity_enabled     = true
  create_workload_identity_pool = true
  auto_upgrade                  = true

  maintenance_window = {
    start_time = "2006-01-02T23:00:00Z"
    end_time   = "2006-01-03T05:00:00Z"

    days = [
      "Fr",
      "Sa",
    ]
  }

  maintenance_exclusion = [
    {
      name       = "foo"
      start_time = "2025-12-20T20:00:00Z"
      end_time   = "2025-12-25T05:00:00Z"
      scope      = "NO_UPGRADES"
    },
    {
      name       = "bar"
      start_time = "2025-12-31T12:00:00Z"
      end_time   = "2026-01-01T05:00:00Z"
      scope      = "NO_MINOR_UPGRADES"
    },
  ]

  secrets_encryption = {
    key_id = google_kms_crypto_key.gke.id
  }

  node_pools = [
    {
      name        = "application"
      image       = "cos_containerd"
      min_size    = 2
      max_size    = 5
      preemptible = true
      tags        = ["application"]
    },
  ]
}
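
The window above recurs weekly; per the variable reference, setting maintenance_window.weekly_recurrence to false makes the window recur daily instead. An illustrative fragment for the module block above (times are assumptions):

  # a 4-hour daily window (~120 hours a month) comfortably exceeds the 48-hour monthly minimum
  maintenance_window = {
    start_time        = "2006-01-02T01:00:00Z"
    end_time          = "2006-01-02T05:00:00Z"
    weekly_recurrence = false
  }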

Creates a GKE cluster with binary authorization enabled on top of the default advanced security features, which already include network policies:

module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 5.1"

  name_prefix         = "foo"
  vpc                 = google_compute_network.main.self_link
  release_channel     = "STABLE"
  binary_auth_enabled = true

  secrets_encryption = {
    key_id = google_kms_crypto_key.main.id
  }
}
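
Binary authorization enforcement relies on a project-level policy that is managed outside this module. A hedged sketch of such a policy using the google provider, where the evaluation mode and whitelist pattern are assumptions to adapt:

resource "google_binary_authorization_policy" "policy" {
  # keep Google-maintained system images admissible
  global_policy_evaluation_mode = "ENABLE"

  admission_whitelist_patterns {
    name_pattern = "gcr.io/my-project/*" # hypothetical trusted registry path
  }

  default_admission_rule {
    evaluation_mode  = "ALWAYS_ALLOW" # switch to REQUIRE_ATTESTATION once attestors are in place
    enforcement_mode = "ENFORCED_BLOCK_AND_AUDIT_LOG"
  }
}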

GKE cluster with a single node pool: the default node pool is created unconditionally under the maintenance pool name, and its parameters, such as the node size, can be customized via the default_node_pool variable (see the fragment after the example):

module "gke" {
  source  = "solutions.corewide.com/google-cloud/tf-gcp-k8s-gke/google"
  version = "~> 5.1"

  name_prefix     = "foo"
  vpc             = google_compute_network.main.self_link
  release_channel = "STABLE"

  secrets_encryption = {
    key_id = google_kms_crypto_key.gke.id
  }
}
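
To customize that pool, add a default_node_pool block to the module call above; the keys below come from the variable reference, while the values are purely illustrative:

  default_node_pool = {
    node_size = "e2-standard-4"
    disk_size = 50
    min_size  = 1
    max_size  = 3
  }
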
Variable Description Type Default Required Sensitive
cluster_ipv4_cidr_block The IP address range for the cluster pod IPs. Set to blank to have a range chosen with the default size (random /14 network) string yes no
name_prefix Name prefix for Google service account and GKE cluster string yes no
vpc VPC network self_link which will be attached to the Kubernetes cluster string yes no
allowed_mgmt_networks Map of CIDR blocks allowed to connect to cluster API map(string) no no
auto_upgrade Enables GKE cluster auto-upgrade. Can be disabled only if release_channel is UNSPECIFIED bool true no no
binary_auth_enabled Whether to enable binary authorization for GKE cluster bool false no no
block_project_ssh_keys Whether to prevent nodes from accepting SSH keys stored in project metadata bool true no no
cluster_master_cidr CIDR block to be used for control plane components string 172.16.0.0/28 no no
cluster_private_nodes_enabled Indicates whether cluster private nodes should be enabled. Must be set to true to have an option to disable control plane public endpoint bool true no no
cluster_version Kubernetes version (Major.Minor) string 1.32 no no
create_workload_identity_pool Indicates whether to create a GKE workload identity pool or use the existing one (one pool per project) bool true no no
default_node_pool Configuration of the maintenance node pool, which is created unconditionally object {} no no
default_node_pool.disk_size Disk size of a node number 20 no no
default_node_pool.max_size Maximum number of nodes in the pool number no no
default_node_pool.min_size Minimum number of nodes in the pool number 1 no no
default_node_pool.node_size Instance type to use for node creation string e2-standard-2 no no
deletion_protection_enabled Prevent cluster deletion by Terraform bool true no no
dns_endpoint_enabled Indicates whether control plane DNS endpoint is enabled. Can be used to access a private cluster control plane if the public endpoint is disabled bool false no no
gateway_api_config_channel Configuration options for the Gateway API config feature string CHANNEL_DISABLED no no
maintenance_exclusion Exceptions to the maintenance window. Non-emergency maintenance should not occur in these windows list(object) no no
maintenance_exclusion[*].end_time Maintenance exclusion interval window end date/time in UTC (RFC3339 Zulu) datetime format, e.g. 2025-09-02T02:00:00Z string 2026-01-02T10:00:00Z no no
maintenance_exclusion[*].name Defines the name of the maintenance exclusion interval string Maintenance Exclusion no no
maintenance_exclusion[*].scope The scope of automatic upgrades to restrict in the exclusion window. Possible values are: NO_UPGRADES, NO_MINOR_UPGRADES, NO_MINOR_OR_NODE_UPGRADES string NO_MINOR_UPGRADES no no
maintenance_exclusion[*].start_time Maintenance exclusion interval window start date/time in UTC (RFC3339 Zulu YYYY-MM-DDThh:mm:ssZ) datetime format, e.g. 2025-09-01T22:00:00Z string 2025-12-31T12:00:00Z no no
maintenance_window GKE maintenance window parameters. In total, the maintenance window must be at least 48 hours per month object no no
maintenance_window.days List of weekdays in 2-letter format, on which the maintenance window will be applied. Possible values are: Mo, Tu, We, Th, Fr, Sa, Su list(string) ['Fr', 'Sa'] no no
maintenance_window.end_time Maintenance window end date/time in UTC (RFC3339 Zulu) datetime format, e.g. 2025-09-02T02:00:00Z. Duration = end_time - start_time. If the window crosses midnight, set the date of end_time to the next day (or further) to show the actual duration string 2006-01-02T06:00:00Z no no
maintenance_window.start_time Maintenance window start date/time in UTC (RFC3339 Zulu YYYY-MM-DDThh:mm:ssZ) datetime format, e.g. 2025-09-01T22:00:00Z. The date acts only as an anchor; the time of day defines when the window starts on each recurrence string 2006-01-02T00:00:00Z no no
maintenance_window.weekly_recurrence Defines whether the maintenance window repeats weekly (set true) or daily (set false) bool true no no
network_policies_enabled Whether network policies support is enabled in the cluster bool true no no
node_pools List of node pools to create list(object) [] no no
node_pools[*].disk_size Disk size of a node number 20 no no
node_pools[*].image Image type of node pools string COS_CONTAINERD no no
node_pools[*].max_size Maximum number of nodes in the pool number no no
node_pools[*].min_size Minimum number of nodes in the pool number 1 no no
node_pools[*].name Name of the node pool string yes no
node_pools[*].node_size Instance type to use for node creation string e2-standard-2 no no
node_pools[*].preemptible Whether the nodes should be preemptible bool false no no
node_pools[*].tags The list of instance tags to identify valid sources or targets for network firewalls (when not set, the default rule set is applied) list(string) [] no no
observability Cluster observability configuration object {} no no
observability.components List of Kubernetes components exposing metrics to monitor list(string) ['SYSTEM_COMPONENTS', 'APISERVER', 'SCHEDULER', 'CONTROLLER_MANAGER', 'STORAGE', 'HPA', 'POD', 'DAEMONSET', 'DEPLOYMENT', 'STATEFULSET', 'KUBELET', 'CADVISOR', 'DCGM'] no no
observability.enabled Indicates whether cluster observability is enabled bool false no no
observability.managed_prometheus_enabled Indicates whether Google Cloud Managed Service for Prometheus (GMP) should be deployed bool true no no
public_endpoint_enabled Indicates whether control plane public endpoint is enabled. Can be disabled only if var.cluster_private_nodes_enabled is set to true bool true no no
region A single region (for a multi-zonal cluster) or a specific zone within the region (for a single-zone cluster) string no no
release_channel Configuration options for the Release channel feature string UNSPECIFIED no no
secrets_encryption Secrets encryption at the application level configuration object {} no no
secrets_encryption.enabled Indicates whether secrets encryption is enabled bool true no no
secrets_encryption.key_id Cloud KMS key ID to use for the secrets encryption in etcd string no no
secure_boot_enabled Whether to enable secure boot feature for GKE nodes bool true no no
shielded_nodes_enabled Whether to enable shielded GKE nodes feature bool true no no
subnet_id The name or self_link of the Google Compute Engine subnetwork in which the cluster's instances are launched string no no
vpc_native Indicates whether IP aliasing should be enabled bool true no no
workload_identity_enabled Indicates whether workload identity is enabled and whether nodes should store their metadata on the GKE metadata server bool false no no
Output Description Type Sensitive
cluster GKE cluster resource resource no
node_pools List of created node pools resource no
workload_identity_pool GKE workload identity pool data computed no
Dependency Version Kind
terraform >= 1.3 CLI
hashicorp/google ~> 6.27 provider
hashicorp/google-beta ~> 6.27 provider
