How to Create Your First Terraform Module: A Step-by-Step Guide

September 21, 2021 (Last Updated: November 21, 2024)

Before the advent of cloud and DevOps, most companies managed and deployed their infrastructure manually. This was risky: not only was it error-prone, it also slowed down the entire infrastructure cycle. The good news is that most companies are no longer deploying infrastructure manually and instead use tools like Terraform. In this blog, we are going to cover Terraform and the use of Terraform modules.

Introduction to Infrastructure as Code (IaC)

The key idea behind Infrastructure as Code (IaC) is to manage almost ‘everything’ as code, where everything includes your servers, network devices, databases, application configuration, automated tests, deployment process, etc. This covers every stage of your infrastructure lifecycle: defining, deploying, updating, and destroying. The advantage of defining every resource as IaC is that you can now version control it, reuse it, validate it, and build a self-service model in your organization.

Intro to Terraform and how it fits into the IaC space

Terraform is an open-source tool written in Go and created by HashiCorp, used to provision and manage infrastructure as code. It supports multiple providers like AWS, Google Cloud, Azure, OpenStack, etc. For the complete list of providers, check this link: https://www.terraform.io/docs/language/providers/index.html

Now that you have a brief idea about Terraform, let's understand how Terraform fits into the IaC space and how it's different from other tools (Chef, Puppet, Ansible, CloudFormation) in its space. Some of the key differences are:

  • Ansible, Chef, and Puppet are configuration management tools (used to push/pull configuration changes), whereas Terraform is used to provision infrastructure. Conversely, you can use configuration management to build infrastructure and Terraform to run configuration scripts, but that is not ideal. The better approach is to use them in conjunction, for example, Terraform to build infrastructure and then run Puppet on the newly built infrastructure to configure it.
  • The next significant difference is mutable vs. immutable infrastructure. Terraform creates immutable infrastructure, which means every time you push changes via Terraform, it builds an entirely new resource. On the other hand, if changes are pushed via Puppet, it will update the existing software version, leading to configuration drift in the long run.
  • Another difference is open source vs. proprietary; Terraform is an open-source tool and works with almost all the major providers, as we discussed above, whereas tools like CloudFormation are specific to AWS and are proprietary.

Terraform Installation

Installing Terraform is straightforward as it ships as a single binary; choose the binary for your platform from this link: https://www.terraform.io/downloads.html

  • Download the binary (macOS in this example)
    wget https://releases.hashicorp.com/terraform/1.0.6/terraform_1.0.6_darwin_amd64.zip
  • Unzip the package
    unzip terraform_1.0.6_darwin_amd64.zip
  • Copy the binary to a directory on your operating system PATH
    echo $PATH
    /opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

    sudo cp terraform /opt/homebrew/bin
  • Log out and back in to the terminal and verify the Terraform installation (a Homebrew alternative is sketched after this list)
    terraform version
    Terraform v1.0.6
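
If you use Homebrew on macOS, an alternative is to install Terraform from HashiCorp's tap; a sketch, assuming Homebrew is already installed (the tap and formula names are HashiCorp's published ones):

brew tap hashicorp/tap                  # add HashiCorp's Homebrew tap
brew install hashicorp/tap/terraform    # install the terraform formula from that tap
terraform version                       # confirm the installation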

How Terraform Works

Terraform works by making API calls on your behalf to the provider (AWS, GCP, Azure, etc.) you define. To make an API call, it first needs to authenticate, and that is done with the help of API keys (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). To create an IAM user and its corresponding keys, please check this doc: https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html

How much permission the user has is defined with the help of an IAM policy. To attach an existing policy to the user, check this doc: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console

To use these keys, you can export them as environment variables:

$ export AWS_ACCESS_KEY_ID="abcxxx"
$ export AWS_SECRET_ACCESS_KEY="xyzasdd"

There are other ways to configure these credentials; check this doc for more info: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication
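
As a quick sketch of one such alternative: the AWS provider also reads the shared credentials file at ~/.aws/credentials, so you can keep the keys there instead of exporting environment variables (the values below are the same placeholders as above):

# ~/.aws/credentials
[default]
aws_access_key_id     = abcxxx
aws_secret_access_key = xyzasdd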

The next question is: how does Terraform know which API to call? This is where you define the code in a Terraform configuration file (which typically ends with .tf). These configuration files are the code in Infrastructure as Code (IaC).

How Terraform helps in creating immutable infrastructure using state files

Every time you run Terraform, it records information about your infrastructure in a Terraform state file (terraform.tfstate). This file stores information in JSON format and contains a mapping between the resources in your configuration files and the real-world resources in the AWS cloud. When you run a Terraform command, it fetches the resources' latest status, compares it with the tfstate file, and determines what changes need to be applied. If Terraform sees a drift, it will re-create or modify the resource.

Note: As you can see, this file is critically important. It is always good to store this file in remote storage, for example, S3. That way, every team member has access to the same state file. Also, to avoid race conditions, i.e., two team members running Terraform simultaneously and updating the state file, it's a good idea to apply locking, for example, via DynamoDB. For more information on how to do that, please check this doc: https://www.terraform.io/docs/language/settings/backends/s3.html
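
A minimal sketch of such a remote backend configuration is shown below. The S3 bucket and DynamoDB table names are hypothetical, and both resources must already exist before you run terraform init:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"      # hypothetical, pre-created S3 bucket
    key            = "ec2-instance/terraform.tfstate" # path of the state file inside the bucket
    region         = "us-west-2"
    dynamodb_table = "terraform-state-lock"           # hypothetical table used for state locking
    encrypt        = true
  }
}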

Introduction to Terraform module

A Terraform module is a set of Terraform configuration files (*.tf) in a directory. The advantage of using modules is reusability: you can use modules available in the Terraform Registry or share the modules you create with your team members.

Writing your first terraform code

With all the prerequisites in place (AWS credentials configured with access and secret keys), it’s time to write our first Terraform code. Before we start, let's see how we are going to organize the files:

  • main.tf: This is our main configuration file where we are going to define our resources.
  • variables.tf: This is the file where we are going to define our variables.
  • outputs.tf: This file contains output definitions for our resources.

NOTE: File names don’t have any special meaning for Terraform as long as they end with the .tf extension, but this is the standard naming convention followed in the Terraform community.

Let's first start with main.tf:

provider "aws" {
  region = "us-west-2"
}
Figure 1: Terraform provider
  • This tells Terraform that we will use AWS as the provider and that we want to deploy our infrastructure in the us-west-2 (Oregon) region.
  • AWS has data centers all over the world, which are grouped into regions and availability zones. A region is a separate geographic area (Oregon, Virginia, Sydney), and each region has multiple isolated data centers (us-west-2a, us-west-2b, ...). For more info about regions and availability zones, please refer to this doc: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
  • The next step is to define the resource we want to create; in this example, we will build an EC2 instance. The general syntax for creating a resource in Terraform looks like this:
resource "<PROVIDER_TYPE>" "NAME" {
    [CONFIG...]
}
Figure 2: Terraform resource general syntax
  • PROVIDER is the name of the provider, ‘aws’ in this case
  • TYPE is the type of resource we want to create, for example, instance
  • NAME is the identifier we are going to use throughout our Terraform code to refer to this resource
  • CONFIG consists of one or more arguments that are specific to the particular resource

Now that you understand the syntax for creating a resource, it’s time to write our first terraform code.

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "ec2-instance" {
  ami                    = "ami-0c2d06d50ce30b442"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.mysg.id]
}

resource "aws_security_group" "mysg" {
  name        = "allow-ssh"
  description = "Allow ssh traffic"
  vpc_id      = "vpc-07142bf09e3b0cf4b"

  ingress {
    description = "Allow inbound ssh traffic"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ssh"
  }
}
Figure 3: First terraform code to create an ec2 instance
  • Here we are using the aws_instance resource to create an EC2 instance. ec2-instance is the identifier we will use in the rest of the code. For more information about the aws_instance resource, please check this link: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
  • ami: AMI stands for Amazon Machine Image and is the image used to launch an EC2 instance. In this case, we are using an Amazon Linux AMI, but please feel free to use an AMI based on your requirements. For more information about AMIs, please check this link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
  • instance_type: AWS provides different instance types based on workload requirements. For example, the t2.micro instance provides 1 GB of memory and 1 virtual CPU. For more information about instance types, please check this link: https://aws.amazon.com/ec2/instance-types/
  • vpc_security_group_ids: Here, we refer back to the security group by referencing the aws_security_group resource and its mysg identifier:
aws_security_group.mysg.id

In the next section, we create a security group using the aws_security_group resource that allows inbound traffic on port 22.

  • name: This is the name of the security group. If you omit it, Terraform will assign a random unique name.
  • description: This is a description of the security group. If you don’t assign any value, the default value of “Managed by Terraform” is set.
  • vpc_id: This is the ID of the Virtual Private Cloud in your AWS account where you want to create this security group.
  • ingress: In this block, you define which ports you want to allow for incoming connections.
  • from_port: This is the start of the port range
  • to_port: This is the end of the port range
  • protocol: The protocol for the port range
  • cidr_blocks: List of CIDR blocks from which you want to allow traffic
  • tags: Tags assigned to the resource. Tags are a great way to identify resources.

As you can see in the above code, we are hardcoding the values of the AMI ID, instance type, port, and VPC ID. Later on, if we need to change these values, we must modify our main configuration file, main.tf. It is much better to store these values in a separate file, and that is what we are going to do in the next step by moving all these variables and their definitions to a separate file, variables.tf. The syntax of a Terraform variable looks like this:

variable "NAME" {
    [CONFIG...]
}

So if you need to define a variable for the AMI ID, it looks like this:

variable "ami_id" {
  default = "ami-0c2d06d50ce30b442"
}
Figure 4: Terraform variable definition
  • name: ami_id is the name of the variable, and it can be any name
  • default: There are several ways to pass a value to the variable, for example, via an environment variable or the -var option (see the examples after this list). If no value is specified, the default value is used.
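
For illustration, any of the following would override the default value of instance_type (the values and file name here are just examples):

# via the -var option
terraform apply -var="instance_type=t2.small"

# via an environment variable (TF_VAR_<name> maps to the variable <name>)
export TF_VAR_instance_type=t2.small
terraform apply

# via a variable definitions file, e.g. prod.tfvars containing: instance_type = "t2.small"
terraform apply -var-file="prod.tfvars"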

Our variables.tf after modifying these values will look like this:

variable "ami_id" {
  default = "ami-0c2d06d50ce30b442"
}

variable "instance_type" {
  default = "t2.micro"
}

variable "vpc_id" {
  default = "vpc-bc102dc4"
}

variable "port" {
  default = 22
}

variable "cidr_block" {
  default = "0.0.0.0/0"
}
Figure 5: Modified variable definition file

To reference these values in main.tf, we just need to prefix the variable name with var. For example:

ami = var.ami_id

The final main.tf file will look like this:

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "ec2-instance" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.mysg.id]
}

resource "aws_security_group" "mysg" {
  name        = "allow-ssh"
  description = "Allow ssh traffic"
  vpc_id      = var.vpc_id

  ingress {
    description = "Allow inbound ssh traffic"
    from_port   = var.port
    to_port     = var.port
    protocol    = "tcp"
    cidr_blocks = [var.cidr_block]
  }

  tags = {
    Name = "allow_ssh"
  }
}
Figure 6: Modified ec2 instance creation file with variable reference

The last file we are going to check is outputs.tf, and its syntax looks like this:

output "<NAME>" {
    value = <VALUE>
}

Here NAME is the name of the output variable, and VALUE can be any Terraform expression that we would like to output.

Now the question is, why do we need it? Let's look at a simple example: when we create this EC2 instance, we don't want to go back to the AWS console just to grab its public IP. Instead, we can expose the IP address as an output variable.

output "instance_id" {
  value = aws_instance.ec2-instance.public_ip
}
Figure 7: Output definition for instance id

In this example, we refer to the aws_instance resource, the ec2-instance identifier, and the public_ip attribute. To get more information about the exported attributes, please check this link: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#public_ip

Similarly, to get the ID of the security group:

output "security_group" {
  value = aws_security_group.mysg.id
}
Figure 8: Output definition for security group
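
After terraform apply (covered next) has created the resources, you can read these values back at any time with the terraform output command, for example:

terraform output                # list every output defined in outputs.tf
terraform output instance_id    # print just the public IP exported in Figure 7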

Now that our Terraform code is ready, these are the commands to execute (a combined run is sketched after this list):

  • terraform init: Downloads the code for the provider (aws) that we will use.
  • terraform fmt: This command is optional but recommended. It rewrites Terraform configuration files into a canonical format.
  • terraform plan: Shows what Terraform will do before making any changes.
    1: (+ sign): Resource going to be created
    2: (- sign): Resource going to be deleted
    3: (~ sign): Resource going to be modified
  • terraform apply: Applies the changes.
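
Putting it together, a typical run looks something like the sketch below; saving the plan to a file is optional, but it guarantees that apply executes exactly what was reviewed:

terraform init                # download the AWS provider plugin
terraform fmt                 # normalize formatting of the *.tf files
terraform plan -out=tfplan    # review the +, -, ~ changes and save the plan
terraform apply tfplan        # apply exactly the saved plan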

Terraform reads the code and translates it into API calls to the provider (aws in this case). Go to your AWS console https://us-west-2.console.aws.amazon.com/ec2/ and you will see your instance being created.

Figure 9: EC2 instance via AWS Console

NOTE: If you are executing these commands in a test environment and want to save cost, run the terraform destroy command to clean up the infrastructure.

Converting terraform code into a module with the help of AWS EC2

In the above example, we created our first Terraform code; now let's convert this code into a module. The syntax of a module looks like this:

module "<NAME>" {
    source = "<SOURCE>"
    [CONFIG...]
}
  • NAME: The name of the identifier that you can use throughout your terraform code to refer to this module.
  • SOURCE: This is the path where the module code can be found
  • CONFIG: It consists of one or more arguments that are specific to that module.

Let's understand this with the help of an example. Create a directory ec2-instance and move all the *.tf files (main.tf, variables.tf, and outputs.tf) inside it.

mkdir ec2-instance
mv *.tf ec2-instance

Now, in the main (root) directory, create a file main.tf, so your directory structure will look like this:

touch main.tf

ls -ltr
drwxr-xr-x  5 plakhera  staff  160 Sep 12 18:55 ec2-instance
-rw-r--r--  1 plakhera  staff    0 Sep 12 18:57 main.tf

Our module code in the root main.tf will look like this:

provider "aws" {
  region = "us-west-2"
}

module "ec2-instance" {
  source        = "./ec2-instance"
  ami_id        = "ami-0c2d06d50ce30b442"
  instance_type = "t2.micro"
  vpc_id        = "vpc-bc102dc4"
  port          = "22"
  cidr_block    = "0.0.0.0/0"
}
Figure 10: Terraform Module for EC2 instance
  • ec2-instance is the name of the module, and as its source it references the directory we created earlier (where we moved all the *.tf files)
  • Next, we pass values for all the variables we defined inside variables.tf. This is the advantage of using modules: we no longer need to go inside variables.tf to modify a value, and we have one single place where we can set and change it (one side effect of this move is sketched right after this list).
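
One side effect of moving the code into ./ec2-instance is that the outputs defined in its outputs.tf now belong to the module, not to the root configuration. If you still want terraform apply at the root to print the public IP, re-export it from the root main.tf; a sketch (the root output name instance_public_ip is just a choice made here):

output "instance_public_ip" {
  # forwards the module's instance_id output (the public IP from Figure 7) to the root module
  value = module.ec2-instance.instance_id
}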

The module's variables.tf, after the change, will look like this:

variable "ami_id" {
}

variable "instance_type" {
}

variable "vpc_id" {
}

variable "port" {
}

variable "cidr_block" {
}
Figure 11: Modified variable definition file for module definition
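
Empty variable blocks work, but in practice it is a good habit to give each variable a type and a description so the module documents itself and catches bad input early. A sketch for two of them (the descriptions are just examples):

variable "ami_id" {
  type        = string
  description = "AMI ID used to launch the EC2 instance"
}

variable "port" {
  type        = number
  description = "Port opened for inbound SSH traffic"
}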

How to version your modules and how it helps in creating separate environments (Production vs. Staging)

In the previous example, when we created the module, we gave it a location on our local filesystem as the source. But in a real production environment, we can refer to it from a remote location, for example GitHub, where we can even version control it:

source = "github.com/abc/modules//ec2-instance"

If you check the previous example again, we used the instance type t2.micro, which is good for test or development environments but may not be suitable for production. To overcome this problem, you can tag your module. For example, all the odd tags are for the development environment, and all the even tags are for the production environment.

For development:

$ git tag -a "v0.0.1" -m "Creating ec2-instance module for development environment"
$ git push --follow-tags

For Production:

$ git tag -a "v0.0.2" -m "Creating ec2-instance module for production environment"
$ git push --follow-tags

This is how your module code will look for the production environment, with changes made to source and instance_type (see the note after Figure 12 on pinning the module to a specific tag).

module "ec2-instance" {
  source        = "github.com/abc/modules/ec2-instance"
  ami_id        = "ami-0c2d06d50ce30b442"
  instance_type = "c4.4xlarge"
  vpc_id        = "vpc-bc102dc4"
  port          = "22"
  cidr_block    = "0.0.0.0/0"
}
Figure 12: Module definition for production environment
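
To make Terraform actually check out one of the Git tags created above, generic GitHub/Git sources accept a ref query string. A sketch, using the same placeholder repository (v0.0.2 being the production tag in the odd/even scheme):

module "ec2-instance" {
  # ?ref= pins the module to the v0.0.2 tag instead of the default branch
  source        = "github.com/abc/modules//ec2-instance?ref=v0.0.2"
  ami_id        = "ami-0c2d06d50ce30b442"
  instance_type = "c4.4xlarge"
  vpc_id        = "vpc-bc102dc4"
  port          = "22"
  cidr_block    = "0.0.0.0/0"
}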

Terraform Registry

In the previous step, we created our own module. If someone else in the company needs to bring up an EC2 instance, they shouldn’t have to write the same Terraform code from scratch. Software development encourages code reuse, and to reuse code, most programming languages encourage developers to push it to a centralized registry. For example, in Python we have pip, and in Node.js we have npm. In the case of Terraform, the centralized registry is called the Terraform Registry, which acts as a central repository for module sharing and makes modules easier to reuse and discover: https://registry.terraform.io/
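
Consuming a registry module only requires a registry-style source address plus a version constraint. A sketch using the community terraform-aws-modules EC2 module; the input names shown are assumptions to verify on the module's registry page:

module "ec2_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 3.0"   # pin a major version; check the registry for the latest

  # inputs are defined by the module author; these two are assumed from its docs
  name          = "my-registry-instance"
  instance_type = "t2.micro"
}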

Read more: How to Deploy Multiple EC2 Instances using Terraform

Conclusion

As you have learned, creating modules in Terraform requires minimal effort. By creating modules, we not only build reusable components in the form of IaC, we can also version control them. Before pushing to production, each module change can go through code review and an automated pipeline. You can create a separate module version for each environment and safely roll back in case of any issue.

Written By: Squadcast Community
