Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be un-published in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible after the AMI has been un-published.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
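
    If you do decide to disable automatic updates, one possible approach (shown here only as a sketch; see the update strategies documentation for the supported options) is to stop and mask the update engine on a running machine:

    # Not recommended: turns off automatic updates on this machine.
    sudo systemctl stop update-engine.service
    sudo systemctl mask update-engine.service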

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4186.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0ee0f9520ac8b76f7 Launch Stack
    HVM (arm64) ami-08d23d94337209230 Launch Stack
    ap-east-1 HVM (amd64) ami-03ac28f684c94ef0a Launch Stack
    HVM (arm64) ami-012bba41aba16e9a5 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0303c079dc48a30c0 Launch Stack
    HVM (arm64) ami-0890bf6381002f563 Launch Stack
    ap-northeast-2 HVM (amd64) ami-08d6600f617ad917e Launch Stack
    HVM (arm64) ami-063cc0def18a841ef Launch Stack
    ap-south-1 HVM (amd64) ami-080597a6983b05e80 Launch Stack
    HVM (arm64) ami-0fe8244eba1541054 Launch Stack
    ap-southeast-1 HVM (amd64) ami-09981540106bc31f3 Launch Stack
    HVM (arm64) ami-023a4283d370beb7c Launch Stack
    ap-southeast-2 HVM (amd64) ami-0bcf0c91e591a09a4 Launch Stack
    HVM (arm64) ami-08455af36a1dd9103 Launch Stack
    ap-southeast-3 HVM (amd64) ami-036912298c92738d4 Launch Stack
    HVM (arm64) ami-01d31475df225977f Launch Stack
    ca-central-1 HVM (amd64) ami-0b744641d5a181d6d Launch Stack
    HVM (arm64) ami-0886e613c542d8f3b Launch Stack
    eu-central-1 HVM (amd64) ami-0b3f7edbb2fef4d29 Launch Stack
    HVM (arm64) ami-08a08effc6b3fef3f Launch Stack
    eu-north-1 HVM (amd64) ami-06761622ec3dde8c4 Launch Stack
    HVM (arm64) ami-05a9d51455a81e58b Launch Stack
    eu-south-1 HVM (amd64) ami-0ee6ebb6404324c5a Launch Stack
    HVM (arm64) ami-00a15b371ce6c6eec Launch Stack
    eu-west-1 HVM (amd64) ami-0a10281c94cafacd6 Launch Stack
    HVM (arm64) ami-02c34ced13cc71356 Launch Stack
    eu-west-2 HVM (amd64) ami-0f47ade5f5934cef6 Launch Stack
    HVM (arm64) ami-0db133acb27669202 Launch Stack
    eu-west-3 HVM (amd64) ami-041cb5792c3b7b0bc Launch Stack
    HVM (arm64) ami-0d0aa1e2e11a9afe9 Launch Stack
    me-south-1 HVM (amd64) ami-0b817f51a992fc068 Launch Stack
    HVM (arm64) ami-0d64a18832a7b540c Launch Stack
    sa-east-1 HVM (amd64) ami-071e0eefd4271ff32 Launch Stack
    HVM (arm64) ami-0ec16c7247afc4645 Launch Stack
    us-east-1 HVM (amd64) ami-0a15b36822c8a6666 Launch Stack
    HVM (arm64) ami-02813a5044ff79ba6 Launch Stack
    us-east-2 HVM (amd64) ami-09c639896caaa0cd8 Launch Stack
    HVM (arm64) ami-0a997e0baafc8b465 Launch Stack
    us-west-1 HVM (amd64) ami-02ed46bfe6de306e6 Launch Stack
    HVM (arm64) ami-011749a1c0c3f1c79 Launch Stack
    us-west-2 HVM (amd64) ami-08cfcc4dcc503d1c0 Launch Stack
    HVM (arm64) ami-0cd693790822d08b7 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4152.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0beb22a59e0d1c23f Launch Stack
    HVM (arm64) ami-08f97fcbcf604935d Launch Stack
    ap-east-1 HVM (amd64) ami-0980928f73dda3999 Launch Stack
    HVM (arm64) ami-0e2268f4f3cbe6001 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0c11de1d8b868849f Launch Stack
    HVM (arm64) ami-07f286b1c79dc6d00 Launch Stack
    ap-northeast-2 HVM (amd64) ami-00f5f42983be739d0 Launch Stack
    HVM (arm64) ami-085af537953b2c91c Launch Stack
    ap-south-1 HVM (amd64) ami-093ca4b1ea8469702 Launch Stack
    HVM (arm64) ami-050360f5c5c20dc94 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0594b48c8797376ad Launch Stack
    HVM (arm64) ami-0d9412677a3cf253a Launch Stack
    ap-southeast-2 HVM (amd64) ami-0037d73ebd0d0f11e Launch Stack
    HVM (arm64) ami-0a54fcc835204140d Launch Stack
    ap-southeast-3 HVM (amd64) ami-0f9aa577cfba51795 Launch Stack
    HVM (arm64) ami-005a76e54d0741801 Launch Stack
    ca-central-1 HVM (amd64) ami-0bac318c9e7632a7d Launch Stack
    HVM (arm64) ami-0cad74e92d5acd624 Launch Stack
    eu-central-1 HVM (amd64) ami-072d438fccf7de032 Launch Stack
    HVM (arm64) ami-0fed253deb666c7ea Launch Stack
    eu-north-1 HVM (amd64) ami-07c25dfbb95e10ad5 Launch Stack
    HVM (arm64) ami-0cfd6a71590a2fd05 Launch Stack
    eu-south-1 HVM (amd64) ami-09d4be573a7bb279a Launch Stack
    HVM (arm64) ami-023abeb93018c4fe5 Launch Stack
    eu-west-1 HVM (amd64) ami-042162dcceed4b7d2 Launch Stack
    HVM (arm64) ami-0330a490f02fde6a7 Launch Stack
    eu-west-2 HVM (amd64) ami-00f714bf598d075e9 Launch Stack
    HVM (arm64) ami-06f14796c8559b8df Launch Stack
    eu-west-3 HVM (amd64) ami-084b59dc35250c821 Launch Stack
    HVM (arm64) ami-0a5507799cea7cfa0 Launch Stack
    me-south-1 HVM (amd64) ami-05a4cc8a84acc671f Launch Stack
    HVM (arm64) ami-097ef58b41a5fb8ab Launch Stack
    sa-east-1 HVM (amd64) ami-09cc78d2fef7f13f3 Launch Stack
    HVM (arm64) ami-0cf0d1b47fe6dc59c Launch Stack
    us-east-1 HVM (amd64) ami-0be6c6aff58f5387c Launch Stack
    HVM (arm64) ami-073fab43155436770 Launch Stack
    us-east-2 HVM (amd64) ami-0be03550266171440 Launch Stack
    HVM (arm64) ami-0bf48c8fa7e65f7c7 Launch Stack
    us-west-1 HVM (amd64) ami-0fecc3771f683be9e Launch Stack
    HVM (arm64) ami-0a7f16b02251ed94c Launch Stack
    us-west-2 HVM (amd64) ami-0ccf6a5e29fa4f1d6 Launch Stack
    HVM (arm64) ami-0d3151f6e81a4d97c Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4081.2.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0c3ae1318f0173548 Launch Stack
    HVM (arm64) ami-0d55d78896b039359 Launch Stack
    ap-east-1 HVM (amd64) ami-0ddac50375e1b5851 Launch Stack
    HVM (arm64) ami-048206fd9e39dc4c7 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0f7d97cea17e68080 Launch Stack
    HVM (arm64) ami-0fc0a3816c6238713 Launch Stack
    ap-northeast-2 HVM (amd64) ami-09f67e9d160952c16 Launch Stack
    HVM (arm64) ami-09e3c666aa9099b3d Launch Stack
    ap-south-1 HVM (amd64) ami-0e8b82a8b0de6989f Launch Stack
    HVM (arm64) ami-01a5f69700c47a9bb Launch Stack
    ap-southeast-1 HVM (amd64) ami-0bf6dda556325ad5f Launch Stack
    HVM (arm64) ami-088642a7dcbe2918a Launch Stack
    ap-southeast-2 HVM (amd64) ami-0966222b5c1faa384 Launch Stack
    HVM (arm64) ami-0ea739775a646411f Launch Stack
    ap-southeast-3 HVM (amd64) ami-0f8670ad3cc735f6a Launch Stack
    HVM (arm64) ami-0486b4281aa7fd42d Launch Stack
    ca-central-1 HVM (amd64) ami-0a865d14da95bbfa2 Launch Stack
    HVM (arm64) ami-0d864e1b9c77711ac Launch Stack
    eu-central-1 HVM (amd64) ami-0e16254399de615f5 Launch Stack
    HVM (arm64) ami-0858fe39d16d89edf Launch Stack
    eu-north-1 HVM (amd64) ami-092741ececcd701c1 Launch Stack
    HVM (arm64) ami-085f51d7b37fcffd6 Launch Stack
    eu-south-1 HVM (amd64) ami-0e5d8e6a102399f32 Launch Stack
    HVM (arm64) ami-04d41fdacb5a15637 Launch Stack
    eu-west-1 HVM (amd64) ami-0e51c0cef0871e92a Launch Stack
    HVM (arm64) ami-0f45ded0e29423862 Launch Stack
    eu-west-2 HVM (amd64) ami-005ac31828b1e7e52 Launch Stack
    HVM (arm64) ami-068c958314af62b48 Launch Stack
    eu-west-3 HVM (amd64) ami-0d6c8b62174e4fa4c Launch Stack
    HVM (arm64) ami-0f92b53a9da75d379 Launch Stack
    me-south-1 HVM (amd64) ami-074af5614e4d006cd Launch Stack
    HVM (arm64) ami-049897c43d5f69f45 Launch Stack
    sa-east-1 HVM (amd64) ami-041030ef8c2525c26 Launch Stack
    HVM (arm64) ami-0e6031802d6d564a2 Launch Stack
    us-east-1 HVM (amd64) ami-0bee750fbb686de1d Launch Stack
    HVM (arm64) ami-010b2ffd514b49533 Launch Stack
    us-east-2 HVM (amd64) ami-087364d1d496b483b Launch Stack
    HVM (arm64) ami-0fe99530c1e52e125 Launch Stack
    us-west-1 HVM (amd64) ami-09f138d45cd9df6ce Launch Stack
    HVM (arm64) ami-064fb3a611d433f93 Launch Stack
    us-west-2 HVM (amd64) ami-00ee437ceefdb671e Launch Stack
    HVM (arm64) ami-0756f9f5596ef4256 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
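
    With the resulting ignition.json you can launch an instance directly from the EC2 API, for example with the AWS CLI. This is only a sketch: the key pair name and security group ID are placeholders, and the AMI ID is the current us-east-1 Alpha amd64 image from the table above.

    aws ec2 run-instances --region us-east-1 \
      --image-id ami-0a15b36822c8a6666 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-group-ids sg-0123456789abcdef0 \
      --user-data file://ignition.json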
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console or add keys/passwords via your Butane Config in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0a15b36822c8a6666 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
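
    The same security group can also be created with the AWS CLI. The following is a sketch that assumes the default VPC in us-east-1; adjust the region as needed.

    # Create the group.
    aws ec2 create-security-group --region us-east-1 \
      --group-name flatcar-testing --description "Flatcar Container Linux instances"

    # Allow SSH from anywhere.
    aws ec2 authorize-security-group-ingress --region us-east-1 \
      --group-name flatcar-testing --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Allow the etcd ports between members of the group itself.
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --region us-east-1 \
        --group-name flatcar-testing --protocol tcp --port "$port" \
        --source-group flatcar-testing
    done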

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0a15b36822c8a6666 (amd64), Beta ami-0be6c6aff58f5387c (amd64), or Stable ami-0bee750fbb686de1d (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys set via your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
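
    The equivalent launch with the AWS CLI would look roughly like this. It is only a sketch that assumes the default VPC (so the security group can be referenced by name); my-key is a placeholder for your key pair, the AMI ID is the us-east-1 Alpha amd64 image, and ignition.json is the transpiled config from above.

    aws ec2 run-instances --region us-east-1 \
      --image-id ami-0a15b36822c8a6666 \
      --count 3 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-groups flatcar-testing \
      --user-data file://ignition.json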

    Installation from a VMDK image

    One possible installation method is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
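
    A sketch of those steps with the AWS CLI follows; the bucket name my-flatcar-import is a placeholder, and the import requires the vmimport service role described in the VM Import/Export documentation.

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-flatcar-import/flatcar.vmdk

    aws ec2 import-snapshot --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-flatcar-import,S3Key=flatcar.vmdk}"

    # Poll until the task completes and note the resulting snapshot ID.
    aws ec2 describe-import-snapshot-tasks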

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
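
    On the command line the same step would be aws ec2 register-image; the sketch below uses a placeholder image name and snapshot ID (use the one returned by the import task).

    aws ec2 register-image --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"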

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
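
    A hypothetical session, using the ip-addresses output defined above (203.0.113.10 is a placeholder address; the exact output formatting depends on the Terraform version):

    terraform output ip-addresses
    # e.g. { "mycluster-mynode" = "203.0.113.10" }
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@203.0.113.10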

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.