Whether you are an everyday or occasional Terraform user, you are in for a treat: HashiCorp released the Beta 2 preview of Terraform 0.13 on 17th June, with General Availability (GA) pencilled in for 15th July (although it could slip by a couple of weeks, depending on what bugs show up during the beta).

Let’s have a quick look at what is in the upcoming 0.13 release and what to look forward to.
Upcoming flagship features
This release brings major improvements that lots of people have been waiting for. Two of the most noticeable upcoming features are:
- Module expansion with count and for_each: similar to the arguments of the same name in resource and data blocks, these create multiple instances of a module from a single module block.
- Module dependencies with depends_on: modules can now use the depends_on argument to ensure that all module resource changes are applied after any changes to the depends_on targets have been applied.
Let’s look at an example of how we can use these new features in code. In this example I want to define two modules. The first module deploys multiple S3 buckets, based on the bucket_names variable and count. A second module then spins up multiple Kubernetes clusters based on defined locals, using for_each to iterate over the locals' keys and values. This way we can use data structures to deploy multiple resources with modules without duplicating our code. To demonstrate explicit module dependency, the example uses the new depends_on argument to ensure that all S3 buckets are deployed before the Kubernetes clusters that rely on them are created:
variable "bucket_names" {
type = type("string")
default = ["prod", "qa", "dev"]
}
module "bucket_deploy" {
source = "terraform-aws-modules/s3-bucket/aws"
count = length(var.bucket_names)
region = var.region
bucket = var.bucket_names[count.index]
}
locals {
  resources = {
    wg-prod = "prod-eks"
    wg-qa   = "qa-eks"
    wg-dev  = "dev-eks"
  }
}
module "my-cluster" {
source = "terraform-aws-modules/eks/aws"
for_each = local.resources
cluster_name = each.value
cluster_version = "1.16"
subnets = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
vpc_id = "vpc-1234556abcdef"
worker_groups = [
{
name = each.key
instance_type = "m4.large"
asg_max_size = 3
}
]
depends_on = [module.bucket_deploy]
}
What’s important to note here is that, in order to access module resources (for example as outputs) created using count or for_each, you need to use either tuple syntax (you can also use splat syntax) or map syntax respectively. Examples:
output "bucket-dev" {
value = module.bucket_deploy[2].this_s3_bucket_id
}
output "k8s_cluster-prod" {
value = module.my-cluster["wg-prod"].cluster_id
}
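Since splat syntax is mentioned above, here is a minimal sketch of that form too; the output name all_bucket_ids is illustrative:
output "all_bucket_ids" {
  # The splat expression returns this attribute from every instance of the module
  value = module.bucket_deploy[*].this_s3_bucket_id
}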
Another headline and very important improvement in the 0.13 release is provider source. This allows automatic installation of providers from outside the HashiCorp namespace. Following the announcement about moving default providers to the public Terraform Registry on 31st January, you can now specify a custom provider source in the extended required_providers block. This means you can also host and publish providers to the Registry from your own public Git repositories.
Note: only one required_providers block is allowed per module!
You do not need to declare a source if you are using one of HashiCorp’s providers; Terraform will continue to download them automatically from the appropriate source. Instead, this feature enables you to declare a custom provider source, which will then be downloaded automatically as part of the terraform init step.
This will simplify provider use and offer more streamlined access to partner and community providers, while also providing clear links to the ownership of all providers.
Let’s examine these changes in a Terraform configuration block using both the existing and new syntax:
terraform {
  required_providers {
    # This is the current syntax, which is still supported
    # (in practice you would use only one of these two forms)
    random = ">= 2.7.0"

    # This is the new syntax. "source" and "version" are both
    # optional, though in the future "source" will be required for
    # any provider that isn't maintained by HashiCorp.
    random = {
      source  = "registry.terraform.io/hashicorp/random"
      version = "2.1.0"
    }
  }
}
Starting from Terraform v0.12.20, you can already use the new required_providers block syntax, although any source attribute will be silently ignored until you switch to v0.13.
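As a hedged sketch, a partner or community provider is declared in the same way; the namespace and provider name below are purely illustrative rather than a real published provider:
terraform {
  required_providers {
    # Hypothetical provider published under a non-HashiCorp namespace in the
    # public Terraform Registry; the short "namespace/name" form defaults to
    # registry.terraform.io.
    example = {
      source  = "examplecorp/example"
      version = ">= 1.0.0"
    }
  }
}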
Other enhancements
Coming as a stable feature in v0.13, you can now also set custom validation rules for input variables. A new validation block type inside variable blocks allows module authors to define validation rules at the public interface into a module, so that errors in the calling configuration can be reported in the caller’s context rather than inside the implementation details of the module.
For example, you can use a validation check to fail early upon detecting an invalid AWS AMI ID:
variable "image_id" {
type = string
description = "The id of the machine image (AMI) to use for the server."
validation {
# regex(...) fails if it cannot find a match
# can(...) returns false if the code it contains produces an error
condition = can(regex("^ami-", var.image_id))
error_message = "Must be an AMI id, starting with \"ami-\"."
}
}
This feature was introduced as experimental in a v0.12 minor release, where it required an explicit opt-in to the variable_validation experiment. As of v0.13 it is considered a stable feature, so the following opt-in block is no longer needed:
terraform {
  experiments = [variable_validation]
}
Here are a few other interesting enhancements I would also like to mention:
- The Terraform Cloud authentication process has been streamlined, and the v0.13 CLI supports logging in and automatically saving credentials.
- The Terraform CLI now supports TLS 1.3 and supports Ed25519 certificates when making outgoing connections to remote TLS servers. Both of these changes should be backwards compatible and only affect Terraform CLI itself, for example connecting to module registries or backends. Provider plugins have separate TLS implementations that will gain these features later on.
- A new subcommand, terraform providers mirror, can automatically construct or update a local filesystem mirror directory containing the providers required for the current configuration (see the sketch below).
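As a minimal sketch of how such a mirror might then be consumed, the CLI configuration file (for example ~/.terraformrc, not your Terraform configuration itself) can point provider installation at the mirrored directory; the path below is an example placeholder:
provider_installation {
  # Look for providers in the directory built by "terraform providers mirror"...
  filesystem_mirror {
    path = "/usr/local/share/terraform/providers"
  }

  # ...and fall back to downloading any other providers directly from their
  # origin registries.
  direct {}
}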
Breaking changes
Before wrapping up, let’s have a quick look at the potential breaking changes to consider when planning your upgrade. For full details, please check the official CHANGELOG, as these are just a sample of the full list, which may in any case change before the final GA release. Some of the more noticeable items are:
- As part of the new decentralized namespace for providers, Terraform now requires an explicit source specification for any provider that is not in the hashicorp namespace in the main public registry.
- Locking was improved, and changes to the TableStore schema now require a primary key named LockID of type String.
- The official macOS builds of Terraform CLI are no longer compatible with macOS 10.10 Yosemite; Terraform now requires at least macOS 10.11 El Capitan. Terraform 0.13 is the last major release that will support 10.11 El Capitan, so if you are upgrading your OS we recommend upgrading to macOS 10.12 Sierra or later.
- The official FreeBSD builds of Terraform CLI are no longer compatible with FreeBSD 10.x, which has reached end-of-life. Terraform now requires FreeBSD 11.2 or later.
How to get started
If you want to give it a go before GA, you can already download and install the appropriate binary from releases.hashicorp.com! There is already a draft upgrade guide with some initial information about upgrading. In a similar way to the upgrade from v0.11 to v0.12, you can run terraform 0.13upgrade locally to check and rewrite your code for v0.13. This should highlight any inconsistencies before you perform the actual Terraform upgrade.
To help improve the upcoming release, it is also important to give feedback. Please use the dedicated thread in the community discussion forum, or report bugs via GitHub.