Terraform module to create a managed Kubernetes cluster (AKS) in MS Azure and a User Assigned Identity for it (can be used to access Azure resources from Kubernetes).
Supported Kubernetes versions are 1.32 and newer, in line with Microsoft Azure supported Kubernetes releases.
Once you have a Corewide Solutions Portal account, this one-time action will use your browser session to retrieve credentials:
```shell
terraform login solutions.corewide.com
```
Initialize mandatory providers:
Copy and paste into your Terraform configuration and insert the variables:
```hcl
module "tf_azure_k8s_aks" {
  source  = "solutions.corewide.com/azure/tf-azure-k8s-aks/azurerm"
  version = "~> 5.2.0"

  # specify module inputs here or try one of the examples below
  ...
}
```
Initialize the setup:
```shell
terraform init
```
Corewide DevOps team strictly follows the Semantic Versioning Specification to provide our clients with products that have predictable upgrades between versions. We recommend pinning patch versions of our modules using the pessimistic constraint operator (`~>`) to prevent breaking changes during upgrades.

To get new features during upgrades (without breaking compatibility), use `~> 5.2` and run `terraform init -upgrade`. For the safest setup, use strict pinning with `version = "5.2.0"`.
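For reference, the pinning styles differ as follows (only one `version` argument would be active at a time):

```hcl
module "tf_azure_k8s_aks" {
  source = "solutions.corewide.com/azure/tf-azure-k8s-aks/azurerm"

  # strict pinning: installs exactly 5.2.0, all upgrades are manual
  #version = "5.2.0"

  # pessimistic patch pinning: allows 5.2.x patch releases, blocks 5.3.0+
  version = "~> 5.2.0"

  # pessimistic minor pinning: allows new 5.x features, blocks 6.0.0+
  #version = "~> 5.2"
}
```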
All notable changes to this project are documented here.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
v5.2.0

- Add `network_policies_enabled` variable to toggle the network policies support in the cluster (disabled by default)

v5.1.0

- Change the minimal supported Kubernetes version to 1.32

v5.0.0

- BREAKING CHANGE: the default node pool is now configured separately, and the names of extra node pools get a randomly generated suffix. Upgrading to this version will result in a complete re-creation of the node pools
- Add `maintenance` cluster default node pool, configured in `var.default_node_pool`; used together with the Terraform `create_before_destroy` meta-argument to provide a fallback when recreating node pools
- Add `<cluster-name>/node-pool-name` label, which does not contain the generated suffix, to correctly reference the node pools
- Decouple `var.node_pools` from cluster defaults
- Bump `cluster_version` to 1.29

v4.0.0

- Bump `azurerm` provider version to 4.0

v3.x (last versions compatible with Terraform AzureRM v3)

- Add `cluster_workload_identity_enabled` parameter
- Add `cluster_network_outbound_type` parameter to choose the outbound routing method
- BREAKING CHANGE (v3.0): all managed resources now follow Azure naming conventions, so resource names aren't compatible with previous module versions. Upgrading to this version will result in a complete re-creation of the AKS resources
- Bump `cluster_version` to 1.25
- Rename `name_prefix` variable to `name_suffix`

v2.x

- Bump `cluster_version` to 1.24
- Move the deprecated `api_server_authorized_ip_ranges` argument of the `azurerm_kubernetes_cluster` resource to `authorized_ip_ranges` within the `api_server_access_profile` block
- Bump `azurerm` provider version to 3.39
- BREAKING CHANGE (v2.0): all node pools now have corresponding names in the state instead of abstract indexes, which aren't compatible with the previous version
- Add `node_pools` input that requires at least one node pool creation
- Change `cluster_version` validation: K8s 1.23 and newer are allowed
- Use the `for_each` meta-argument instead of `count` to store and reference node pool resources by name instead of abstract indexes
- Rename `rg_name` variable to `resource_group_name`
- The default name of the first element in the `node_pools` output is now dynamic, based on the name of the first node pool in the `node_pools` module input

v1.x

- Bump `cluster_version` to 1.23
- Add management networks allow-list (`allowed_mgmt_networks`) variable
- Mark `cluster` output as sensitive

v1.x to v2.x

The module from v2.0 has changed its input principles and switched the handling of node pool resource copies from the `count` meta-argument to `for_each`, which isn't compatible with the old version. First, update the module declaration according to the requirements and examples to match the designed AKS configuration, then re-init the module.
v2.x to v3.x

Since v3.0, the module changes the naming model of managed resources in accordance with Azure naming conventions, so all resources of the module will be re-created. The `name_prefix` variable has also been renamed to `name_suffix` and must be updated in the module's declaration.
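For example (the value is a placeholder), only the argument name changes in the module declaration:

```hcl
module "aks" {
  source  = "solutions.corewide.com/azure/tf-azure-k8s-aks/azurerm"
  version = "~> 3.0"

  # name_prefix = "bar"   # v2.x input, removed in v3.0
  name_suffix = "bar"     # v3.x replacement

  # ... remaining inputs unchanged
}
```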
v3.x to v4.x

The module from v4.0 has changed the Azure provider version, which isn't compatible with the old version. After the module version is upgraded, re-init the module to upgrade the Azure provider version.
Upgrade the Azure provider version at the project level to `~> 4.0`:
```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}
```
Upgrade project dependencies:
```bash
terraform init -upgrade
```
v4.x to v5.x

The module from v5.x uses a maintenance node pool as the default pool and introduces generated node pool suffixes. If you already have a maintenance node pool configured, remove it from `var.node_pools` and use the `var.default_node_pool` variable instead. Please note that due to these changes, all resources will be recreated.
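As an illustrative sketch (pool sizes are placeholders), a v4.x declaration that carried a maintenance pool in `node_pools` would migrate like this:

```hcl
# v4.x: maintenance pool declared as a regular node pool
# node_pools = [
#   {
#     name     = "maintenance"
#     min_size = 1
#     max_size = 1
#   },
# ]

# v5.x: the maintenance pool moves to the dedicated variable
default_node_pool = {
  node_size = "Standard_D2_v2"
  min_size  = 1
  max_size  = 1
}

# node_pools now contains only the extra pools
node_pools = []
```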
After the module version is upgraded, you must update the node selectors of your Kubernetes resources to match the new custom node pool labels:
from:
```yaml
kubernetes.azure.com/agentpool: <node-pool-name>
```
to:
```yaml
<cluster_name>/node-pool-name: <node-pool-name>
```
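For example, a hypothetical Deployment that pins pods to a pool named `foo` would change its `nodeSelector` like this (the cluster name stays a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      nodeSelector:
        # was: kubernetes.azure.com/agentpool: foo
        <cluster_name>/node-pool-name: foo
      containers:
        - name: example
          image: nginx
```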
v5.0.x to v5.1.x

The module from v5.1 has changed the minimal supported Kubernetes version to 1.32. You can skip this chapter if you already use K8s version 1.32 or higher.
The cluster version upgrade itself should pass without downtime. However, the outcome also depends on the apps and services hosted in the cluster, so to make sure the cluster version upgrade goes well, review the documentation:
* AKS cluster upgrade
* K8s Deprecated API Migration Guide
* K8s deprecation policy
The upgrade should roll one version at a time: from v1.29 to v1.30, then from v1.30 to v1.31, and then from v1.31 to v1.32, and so on.
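With this module, that means raising `cluster_version` one minor version at a time and applying between steps; a sketch of the sequence (other inputs omitted):

```hcl
module "aks" {
  source  = "solutions.corewide.com/azure/tf-azure-k8s-aks/azurerm"
  version = "~> 5.2"

  # step 1: 1.29 -> 1.30; run `terraform apply` and wait for the upgrade to finish
  cluster_version = "1.30"

  # step 2 and 3: repeat the same way with the next minor versions
  # cluster_version = "1.31"
  # cluster_version = "1.32"

  # ... remaining inputs unchanged
}
```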
Create an AKS cluster with customized maintenance and foo node pools, a custom network outbound type, and workload identity and network policies enabled:
```hcl
resource "azurerm_resource_group" "foo" {
  name     = "foo"
  location = "eastus"
}

module "aks" {
  source  = "solutions.corewide.com/azure/tf-azure-k8s-aks/azurerm"
  version = "~> 5.2"

  name_suffix         = "bar"
  resource_group_name = azurerm_resource_group.foo.name
  region              = azurerm_resource_group.foo.location
  dns_prefix          = "baz"
  subnet_id           = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Network/virtualNetworks/myvnet1/subnets/mysubnet1"

  cluster_network_outbound_type     = "userDefinedRouting"
  cluster_workload_identity_enabled = true
  network_policies_enabled          = true

  default_node_pool = {
    node_size = "Standard_D2_v2"
    min_size  = 1
    max_size  = 1
  }

  node_pools = [
    {
      name     = "foo"
      min_size = 1
      max_size = 3
    },
  ]

  tags = {
    Layer = "Computing"
  }
}
```
Basic AKS cluster configuration with required parameters only (default maintenance node pool is managed unconditionally):
```hcl
resource "azurerm_resource_group" "foo" {
  name     = "foo"
  location = "eastus"
}

module "aks" {
  source  = "solutions.corewide.com/azure/tf-azure-k8s-aks/azurerm"
  version = "~> 5.2"

  name_suffix         = "bar"
  resource_group_name = azurerm_resource_group.foo.name
  region              = azurerm_resource_group.foo.location
  dns_prefix          = "baz"
  subnet_id           = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Network/virtualNetworks/myvnet1/subnets/mysubnet1"
}
```
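The module's `cluster` output (the AKS cluster resource, marked sensitive) can then feed downstream configuration. A minimal sketch, assuming the output exposes the standard `azurerm_kubernetes_cluster` attributes such as `kube_config`:

```hcl
# wire up the kubernetes provider from the module's cluster output;
# kube_config fields are base64-encoded by the azurerm provider
provider "kubernetes" {
  host                   = module.aks.cluster.kube_config[0].host
  client_certificate     = base64decode(module.aks.cluster.kube_config[0].client_certificate)
  client_key             = base64decode(module.aks.cluster.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(module.aks.cluster.kube_config[0].cluster_ca_certificate)
}
```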
| Variable | Description | Type | Default | Required | Sensitive |
|---|---|---|---|---|---|
| `dns_prefix` | DNS prefix specified when creating the managed Kubernetes cluster | `string` | | yes | no |
| `name_suffix` | Naming suffix for the AKS cluster and Managed Identity managed by the module | `string` | | yes | no |
| `region` | Resource Group location where the managed Kubernetes cluster will be created | `string` | | yes | no |
| `resource_group_name` | The Resource Group name where the managed Kubernetes cluster will exist | `string` | | yes | no |
| `subnet_id` | The ID of the Subnet the node pools will be placed into | `string` | | yes | no |
| `allowed_mgmt_networks` | CIDR blocks allowed to access the K8s API | `list(string)` | | no | no |
| `cluster_dns_ip` | IP address within the Kubernetes service address range that will be used by cluster service discovery | `string` | | no | no |
| `cluster_network_outbound_type` | The outbound routing method which will be used for the managed Kubernetes cluster (`loadBalancer` or `userDefinedRouting`). Changing this parameter forces an AKS cluster recreation | `string` | `loadBalancer` | no | no |
| `cluster_service_cidr` | The network range used by the AKS. Defaults to `10.0.0.0/16` and should not overlap with `subnet_id` (see IP addressing planning) | `string` | | no | no |
| `cluster_version` | Version of Kubernetes used for the cluster | `string` | `1.32` | no | no |
| `cluster_workload_identity_enabled` | Indicates whether Azure AD Workload Identity should be enabled for the AKS cluster | `bool` | `false` | no | no |
| `default_node_pool` | Default node pool parameters | `object` | `{}` | no | no |
| `default_node_pool.labels` | A label map to apply to nodes in the maintenance pool | `map(string)` | `{}` | no | no |
| `default_node_pool.max_size` | The maximum number of nodes that the maintenance node pool can be scaled up to | `number` | `1` | no | no |
| `default_node_pool.min_size` | The minimum number of nodes that the maintenance node pool can be scaled down to | `number` | `1` | no | no |
| `default_node_pool.node_size` | The Azure VM size to use for workers in the maintenance node pool | `string` | `Standard_D2_v2` | no | no |
| `default_node_pool.os_type` | OS of the Kubernetes maintenance node group | `string` | `Linux` | no | no |
| `network_policies_enabled` | Whether network policies support is enabled in the cluster | `bool` | `false` | no | no |
| `node_pools` | List of node groups to create | `list(object)` | `[]` | no | no |
| `node_pools[*].labels` | A label map to apply to nodes in the pool | `map(string)` | `{}` | no | no |
| `node_pools[*].max_size` | The maximum number of nodes that the node pool can be scaled up to | `number` | `1` | no | no |
| `node_pools[*].min_size` | The minimum number of nodes that the node pool can be scaled down to | `number` | `1` | no | no |
| `node_pools[*].name` | A name for the node pool (lowercase alphanumeric characters only, 1-12 characters long) | `string` | | yes | no |
| `node_pools[*].node_size` | The Azure VM size to use for workers in the node pool | `string` | `Standard_D2_v2` | no | no |
| `node_pools[*].os_type` | OS of the Kubernetes node group | `string` | `Linux` | no | no |
| `tags` | Tags to attach to cluster resources | `map(string)` | `{}` | no | no |
| Output | Description | Type | Sensitive |
|---|---|---|---|
| `cluster` | AKS cluster resource | resource | yes |
| `node_pools` | Attributes of all node pools | computed | no |
| `user_assigned_identity` | Attributes of Azure User Assigned Identity resource | resource | no |
| Dependency | Version | Kind |
|---|---|---|
| `terraform` | `>= 1.3` | CLI |
| `hashicorp/azurerm` | `~> 4.0` | provider |