Simplify Your Development on AWS with Terraform

When I wrote my data lake demo series (part 1, part 2 and part 3) recently, I used Aurora PostgreSQL, MSK and EMR clusters. All of them were deployed to private subnets, and dedicated infrastructure was created using CloudFormation. Using an infrastructure as code (IaC) tool helped a lot, but it resulted in 7 CloudFormation stacks, which became a bit hard to manage in the end. I then looked into how to simplify building infrastructure and managing resources on AWS and decided to use Terraform instead. I find it has useful constructs (e.g. meta-arguments) that make it simpler to create and manage resources. It also has a wide range of useful modules that facilitate development significantly. In this post, we’ll build an infrastructure for development on AWS with Terraform. A VPN server will also be included in order to improve developer experience by allowing access to resources in private subnets from developer machines.


The infrastructure that we’ll discuss in this post is shown below. The database is deployed in a private subnet, so it is not possible to access it from a developer machine directly. We can construct a PC-to-PC VPN with SoftEther VPN. The VPN server runs in a public subnet and is managed by an autoscaling group that maintains only a single instance. An elastic IP address is associated by a bootstrap script so that the public IP doesn’t change even if the EC2 instance is recreated. We can add users with the server manager program, and they can access the server with the client program. Access from the VPN server to the database is allowed by adding an inbound rule whose source security group ID is set to the VPN server’s security group ID. Note that another option is AWS Client VPN, but it is way more expensive. We’ll create 2 private subnets and it’ll cost $0.30/hour for endpoint association in the Sydney region. It also charges $0.05/hour for each connection, so the minimum charge will be $0.35/hour. On the other hand, the SoftEther VPN server runs on a t3.nano instance, which costs only $0.0066/hour.

Even developing against a single database can result in a stack of resources, and Terraform can be of great help in creating and managing them. A VPN can also improve developer experience significantly, as it allows those resources to be reached from developer machines. In this post, this is illustrated with access to a database, but access to other resources such as MSK, EMR, ECS and EKS can be set up in the same way.


Terraform can be installed in multiple ways, and the CLI has intuitive commands to manage AWS infrastructure. Its key commands are:


  • init – It is used to initialize a working directory containing Terraform configuration files.

  • plan – It creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

  • apply – It executes the actions proposed in a Terraform plan.

  • destroy – It is a convenient way to destroy all remote objects managed by a particular Terraform configuration.


The GitHub repository for this post has the following directory structure. Terraform resources are grouped into 4 files and they’ll be discussed further below. The remaining files are supporting elements and their details can be found in the language reference.


$ tree
.
├── scripts
│   └──
...

1 directory, 10 files
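The configuration also references a few shared values, such as var.class_b, var.aws_region and local.resource_prefix, that live in the supporting files. A minimal sketch of what they might look like follows; the names of the variables are taken from the code below, but the defaults are assumptions, not the repository’s exact contents.

```hcl
# hypothetical sketch of the shared variables/locals used throughout this post
variable "aws_region" {
  description = "AWS region to deploy to"
  default     = "ap-southeast-2" # Sydney, as in the cost comparison above
}

variable "class_b" {
  description = "Class B octet of the VPC CIDR block (10.XXX.0.0/16)"
  default     = "10"
}

locals {
  resource_prefix = "dev"   # prefix shared by all resource names
  database_name   = "devdb" # referenced later by the Aurora module
}
```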


We can use the AWS VPC module to construct a VPC. A Terraform module is a container for multiple resources, and it makes it easier to manage related resources. A VPC with 2 availability zones is defined, and private/public subnets are configured in each of them. Optionally a NAT gateway is added to a single availability zone only.



module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "${local.resource_prefix}-vpc"
  cidr = "10.${var.class_b}.0.0/16"

  azs             = ["${var.aws_region}a", "${var.aws_region}b"]
  private_subnets = ["10.${var.class_b}.0.0/19", "10.${var.class_b}.32.0/19"]
  public_subnets  = ["10.${var.class_b}.64.0/19", "10.${var.class_b}.96.0/19"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false
}

Key Pair

An optional key pair is created. It can be used to access an EC2 instance via SSH. The PEM file will be saved to the key-pair folder once created.



resource "tls_private_key" "pk" {
  count     = var.key_pair_create ? 1 : 0
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "key_pair" {
  count      = var.key_pair_create ? 1 : 0
  key_name   = "${local.resource_prefix}-key"
  public_key =[0].public_key_openssh
}

resource "local_file" "pem_file" {
  count             = var.key_pair_create ? 1 : 0
  filename          = pathexpand("${path.module}/key-pair/${local.resource_prefix}-key.pem")
  file_permission   = "0400"
  sensitive_content =[0].private_key_pem
}


The AWS Auto Scaling Group (ASG) module is used to manage the SoftEther VPN server. The ASG maintains a single EC2 instance in one of the public subnets. The user data script ( is configured to run at launch, and it’ll be discussed below. Note that there are other resources that are necessary to make the VPN server work correctly, and those can be found in the GitHub repository. Also note that the VPN resource requires a number of configuration values. While most of them have default values or are determined automatically, the IPsec Pre-Shared Key (vpn_psk) and administrator password (admin_password) do not have default values. They need to be specified when running the plan, apply and destroy commands. Finally, if the variable vpn_limit_ingress is set to true, the inbound rules of the VPN security group are limited to the running machine’s IP address.
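Since vpn_psk and admin_password have no defaults, they can be passed with -var flags on every command, or kept in a git-ignored *.auto.tfvars file that Terraform loads automatically. The file name and values below are only illustrative:

```hcl
# secrets.auto.tfvars (hypothetical; keep out of version control)
vpn_psk        = "replace-with-a-strong-pre-shared-key"
admin_password = "replace-with-a-strong-password"
```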


variable "vpn_create" {
  description = "Whether to create a VPN instance"
  default     = true
}

variable "vpn_limit_ingress" {
  description = "Whether to limit the CIDR block of VPN security group inbound rules."
  default     = true
}

variable "vpn_use_spot" {
  description = "Whether to use spot or on-demand EC2 instance"
  default     = false
}

variable "vpn_psk" {
  description = "The IPsec Pre-Shared Key"
  type        = string
  sensitive   = true
}

variable "admin_password" {
  description = "SoftEther VPN admin / database master password"
  type        = string
  sensitive   = true
}

locals {
  local_ip_address  = "${chomp(data.http.local_ip_address.body)}/32"
  vpn_ingress_cidr  = var.vpn_limit_ingress ? local.local_ip_address : ""
  vpn_spot_override = [
    { instance_type : "t3.nano" },
    { instance_type : "t3a.nano" },
  ]
}
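The locals rely on a data.http.local_ip_address data source that looks up the public IP of the machine running Terraform. A sketch using the hashicorp/http provider might look like the following; the lookup service URL is an assumption:

```hcl
# returns the public IP of the machine running Terraform
data "http" "local_ip_address" {
  url = ""
}
```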

module "vpn" {
  source = "terraform-aws-modules/autoscaling/aws"
  count  = var.vpn_create ? 1 : 0

  name = "${local.resource_prefix}-vpn-asg"

  key_name            = var.key_pair_create ? aws_key_pair.key_pair[0].key_name : null
  vpc_zone_identifier = module.vpc.public_subnets
  min_size            = 1
  max_size            = 1
  desired_capacity    = 1

  image_id                 = # assumes an aws_ami data source (e.g. Amazon Linux 2), defined elsewhere in the repository
  instance_type            = element([for s in local.vpn_spot_override : s.instance_type], 0)
  security_groups          = [aws_security_group.vpn[0].id]
  iam_instance_profile_arn = aws_iam_instance_profile.vpn[0].arn

  # Launch template
  create_lt              = true
  update_default_version = true

  user_data_base64 = base64encode(join("\n", [
    "#cloud-config",
    yamlencode({
      write_files : [
        {
          path : "/opt/vpn/",
          content : templatefile("${path.module}/scripts/", {
            aws_region     = var.aws_region,
            allocation_id  = aws_eip.vpn[0].allocation_id,
            vpn_psk        = var.vpn_psk,
            admin_password = var.admin_password
          }),
          permissions : "0755",
        }
      ],
      runcmd : [
        "/opt/vpn/"
      ],
    })
  ]))

  # Mixed instances
  use_mixed_instances_policy = true
  mixed_instances_policy = {
    instances_distribution = {
      on_demand_base_capacity                  = var.vpn_use_spot ? 0 : 1
      on_demand_percentage_above_base_capacity = var.vpn_use_spot ? 0 : 100
      spot_allocation_strategy                 = "capacity-optimized"
    }
    override = local.vpn_spot_override
  }

  tags_as_map = {
    "Name" = "${local.resource_prefix}-vpn-asg"
  }
}

resource "aws_eip" "vpn" {
  count = var.vpn_create ? 1 : 0
  tags = {
    "Name" = "${local.resource_prefix}-vpn-eip"
  }
}
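The aws_security_group.vpn resource referenced by the ASG module is among the supporting resources kept in the repository. A hedged sketch is shown below, opening a subset of the ports the SoftEther container publishes to local.vpn_ingress_cidr; the exact rules in the repository may differ:

```hcl
# hypothetical sketch of the VPN server security group
resource "aws_security_group" "vpn" {
  count  = var.vpn_create ? 1 : 0
  name   = "${local.resource_prefix}-vpn-security-group"
  vpc_id = module.vpc.vpc_id

  ingress {
    description = "IPsec IKE"
    from_port   = 500
    to_port     = 500
    protocol    = "udp"
    cidr_blocks = [local.vpn_ingress_cidr]
  }

  ingress {
    description = "IPsec NAT traversal"
    from_port   = 4500
    to_port     = 4500
    protocol    = "udp"
    cidr_blocks = [local.vpn_ingress_cidr]
  }

  ingress {
    description = "SoftEther management / HTTPS VPN"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [local.vpn_ingress_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [""]
  }
}
```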


The bootstrap script associates the elastic IP address and then starts the SoftEther VPN server in a Docker container. It accepts the Pre-Shared Key (vpn_psk) and administrator password (admin_password) as environment variables. The Virtual Hub name is set to DEFAULT.


# scripts/

#!/bin/bash -ex

## Associate elastic IP and disable source/destination checks
TOKEN=$(curl --silent --max-time 60 -X PUT "" -H "X-aws-ec2-metadata-token-ttl-seconds: 30")
INSTANCEID=$(curl --silent --max-time 60 -H "X-aws-ec2-metadata-token: $TOKEN" "")
aws --region ${aws_region} ec2 associate-address --instance-id $INSTANCEID --allocation-id ${allocation_id}
aws --region ${aws_region} ec2 modify-instance-attribute --instance-id $INSTANCEID --source-dest-check "{\"Value\": false}"

## Start SoftEther VPN server
yum update -y && yum install docker -y
systemctl enable docker.service && systemctl start docker.service

docker pull siomiz/softethervpn:debian
docker run -d \
  --cap-add NET_ADMIN \
  --name softethervpn \
  --restart unless-stopped \
  -p 500:500/udp -p 4500:4500/udp -p 1701:1701/tcp -p 1194:1194/udp -p 5555:5555/tcp -p 443:443/tcp \
  -e PSK=${vpn_psk} \
  -e SPW=${admin_password} \
  siomiz/softethervpn:debian
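The bootstrap script calls ec2 associate-address and ec2 modify-instance-attribute, so the instance profile attached to the ASG (aws_iam_instance_profile.vpn) needs the matching permissions. A sketch of the role and policy follows; the resource names and broad Resource = "*" scoping are assumptions, not the repository’s exact configuration:

```hcl
# hypothetical sketch of the instance role required by
resource "aws_iam_role" "vpn" {
  count = var.vpn_create ? 1 : 0
  name  = "${local.resource_prefix}-vpn-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "" }
    }]
  })
}

resource "aws_iam_role_policy" "vpn" {
  count = var.vpn_create ? 1 : 0
  name  = "${local.resource_prefix}-vpn-policy"
  role  = aws_iam_role.vpn[0].id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ec2:AssociateAddress", "ec2:ModifyInstanceAttribute"]
      Resource = "*"
    }]
  })
}

resource "aws_iam_instance_profile" "vpn" {
  count = var.vpn_create ? 1 : 0
  name  = "${local.resource_prefix}-vpn-instance-profile"
  role  = aws_iam_role.vpn[0].name
}
```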


An Aurora PostgreSQL cluster is created using the AWS RDS Aurora module. It is set to have only a single instance and is deployed to a private subnet. Note that a security group (vpn_access) is created that allows access from the VPN server and it is added to vpc_security_group_ids.


module "aurora" {
  source = "terraform-aws-modules/rds-aurora/aws"

  name                       = "${local.resource_prefix}-db-cluster"
  engine                     = "aurora-postgresql"
  engine_version             = "13"
  auto_minor_version_upgrade = false

  instances = {
    1 = {
      instance_class = "db.t3.medium"
    }
  }

  vpc_id                 = module.vpc.vpc_id
  db_subnet_group_name   =
  create_db_subnet_group = false
  create_security_group  = true
  vpc_security_group_ids = []

  iam_database_authentication_enabled = false
  create_random_password              = false
  master_password                     = var.admin_password
  database_name                       = local.database_name

  apply_immediately   = true
  skip_final_snapshot = true

  db_cluster_parameter_group_name = "default.aurora-postgresql13" # assumed default parameter group
  enabled_cloudwatch_logs_exports = ["postgresql"]

  tags = {
    Name = "${local.resource_prefix}-db-cluster"
  }
}

resource "aws_db_subnet_group" "aurora" {
  name       = "${local.resource_prefix}-db-subnet-group"
  subnet_ids = module.vpc.private_subnets

  tags = {
    Name = "${local.resource_prefix}-db-subnet-group"
  }
}

resource "aws_security_group" "vpn_access" {
  name   = "${local.resource_prefix}-db-security-group"
  vpc_id = module.vpc.vpc_id

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "aurora_vpn_inbound" {
  count                    = var.vpn_create ? 1 : 0
  type                     = "ingress"
  description              = "VPN access"
  security_group_id        =
  protocol                 = "tcp"
  from_port                = "5432"
  to_port                  = "5432"
  source_security_group_id = aws_security_group.vpn[0].id
}
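To make the connection details easy to find after terraform apply, the cluster endpoint and the VPN elastic IP can be exposed as outputs. A small sketch is shown below; the output names are arbitrary, while cluster_endpoint is an output of the rds-aurora module:

```hcl
output "db_endpoint" {
  description = "Writer endpoint of the Aurora cluster"
  value       = module.aurora.cluster_endpoint
}

output "vpn_public_ip" {
  description = "Elastic IP associated with the VPN server"
  value       = var.vpn_create ? aws_eip.vpn[0].public_ip : null
}
```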

VPN Configuration

Both the VPN Server Manager and Client can be obtained from the download centre. The server and client configurations are illustrated below.

VPN Server

We can begin with adding a new setting.

We need to fill in the input fields in the red boxes below. It’s possible to use the elastic IP address as the host name, and the administrator password should match what was provided to Terraform.

Then we can make a connection to the server by clicking the connect button.

If it’s the first attempt, we’ll see the following pop-up message, and we can click yes to set up IPsec.

In the dialog, we just need to enter the IPsec Pre-Shared key and click ok.

Once a connection is made successfully, we can manage the Virtual Hub by clicking the manage virtual hub button. Note that we created a Virtual Hub named DEFAULT and the session will be established on that Virtual Hub.

We can create a new user by clicking the manage users button.

And clicking the new button.

For simplicity, we can use Password Authentication as the auth type and enter the username and password.

A new user is created and we can use the credentials on the client program to make a connection to the server.

VPN Client

We can add a VPN connection by clicking the menu shown below.

We’ll need to create a Virtual Network Adapter, so we should click the yes button.

In the new dialog, we can add the adapter name and hit ok. Note that administrator privileges are required to create a new adapter.

Then a new dialog box will be shown. We can add a connection by entering the input fields in the red boxes below. The VPN server details should match what is created by Terraform, and the user credentials created in the previous section can be used.

Once a connection is added, we can make a connection to the VPN server by right-clicking the item and clicking the connect menu.

We can see that the status changes to connected.

Once the VPN server is connected, we can access the database that is deployed in the private subnet. Testing with a database client confirms that the connection is successful.


In this post, we discussed how to set up a development infrastructure on AWS with Terraform. Terraform was used as an effective way of managing resources on AWS. An Aurora PostgreSQL cluster was created in a private subnet, and SoftEther VPN was configured so that the database can be accessed from a developer machine.