Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are un-published in regular garbage-collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) is no longer possible once the AMI has been un-published.
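
    If you pin an AMI (for example in a launch template or autoscaling group), you can check whether it is still published with the AWS CLI. A minimal sketch, using an Alpha AMI ID from the tables below as a placeholder; if the AMI has been un-published, the command fails or returns an empty list:

    aws ec2 describe-images \
      --region us-east-1 \
      --image-ids ami-0a1cf7b6397ea3779 \
      --query 'Images[].{Name:Name,State:State,CreationDate:CreationDate}'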

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
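
    If you do decide to disable automatic update checks, one way is to point the update client at a disabled server by writing /etc/flatcar/update.conf. A minimal Butane sketch; consult the update documentation linked above for reboot strategies and the full set of options:

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /etc/flatcar/update.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              SERVER=disabled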

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4230.0.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-085b8a13f346ce737 Launch Stack
    HVM (arm64) ami-0db2434c6cc4a57f1 Launch Stack
    ap-east-1 HVM (amd64) ami-006e8c95a80d26b45 Launch Stack
    HVM (arm64) ami-0cf61d851a467be46 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0639ff2cbfb9dc74f Launch Stack
    HVM (arm64) ami-0385ac011b0975eee Launch Stack
    ap-northeast-2 HVM (amd64) ami-09d92241c56e9b533 Launch Stack
    HVM (arm64) ami-0706452e76dad1fc8 Launch Stack
    ap-south-1 HVM (amd64) ami-03f757f89f79a7a34 Launch Stack
    HVM (arm64) ami-08f36cf34fe09e382 Launch Stack
    ap-southeast-1 HVM (amd64) ami-06da550884dad8cbc Launch Stack
    HVM (arm64) ami-05a23aad7cba71fe1 Launch Stack
    ap-southeast-2 HVM (amd64) ami-06b9f4e2bccab15bd Launch Stack
    HVM (arm64) ami-03471f55085cc7841 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0b83aec672911501e Launch Stack
    HVM (arm64) ami-02444a6df23dc886a Launch Stack
    ca-central-1 HVM (amd64) ami-0d13bb1a953771e54 Launch Stack
    HVM (arm64) ami-003c7e1f9ecf2ec44 Launch Stack
    eu-central-1 HVM (amd64) ami-0e0ca8d44478f93bf Launch Stack
    HVM (arm64) ami-0036ab132c0422820 Launch Stack
    eu-north-1 HVM (amd64) ami-0808144378a01f613 Launch Stack
    HVM (arm64) ami-0ce5314c79b20bdb1 Launch Stack
    eu-south-1 HVM (amd64) ami-08ce40afd58d26b1a Launch Stack
    HVM (arm64) ami-0a806723ca3da2df1 Launch Stack
    eu-west-1 HVM (amd64) ami-0f649bc390b9a608e Launch Stack
    HVM (arm64) ami-0bedffe56ca902e70 Launch Stack
    eu-west-2 HVM (amd64) ami-0a2f0ec94cc365e64 Launch Stack
    HVM (arm64) ami-0904422b6134541b4 Launch Stack
    eu-west-3 HVM (amd64) ami-07a304efcf9c0208d Launch Stack
    HVM (arm64) ami-0aeb2de3b5388a313 Launch Stack
    me-south-1 HVM (amd64) ami-055f34b3aee287aae Launch Stack
    HVM (arm64) ami-098f6b9d533112c46 Launch Stack
    sa-east-1 HVM (amd64) ami-020325201130cb471 Launch Stack
    HVM (arm64) ami-0cb8a2300c2fbe677 Launch Stack
    us-east-1 HVM (amd64) ami-0a1cf7b6397ea3779 Launch Stack
    HVM (arm64) ami-02cc941bdd158eb4b Launch Stack
    us-east-2 HVM (amd64) ami-0c73f98db87808452 Launch Stack
    HVM (arm64) ami-011d6d7c6055d382c Launch Stack
    us-west-1 HVM (amd64) ami-00c48494735061fbb Launch Stack
    HVM (arm64) ami-0e91221c986198a5d Launch Stack
    us-west-2 HVM (amd64) ami-00e2bd02f77ff99bf Launch Stack
    HVM (arm64) ami-0999fd4685a172235 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4186.1.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-033efc9ba6cf16a65 Launch Stack
    HVM (arm64) ami-08874965c121f6356 Launch Stack
    ap-east-1 HVM (amd64) ami-01349b40bb3ef90ea Launch Stack
    HVM (arm64) ami-05dfa2d2eeb4da7ce Launch Stack
    ap-northeast-1 HVM (amd64) ami-00988c3ede0ba61b8 Launch Stack
    HVM (arm64) ami-009eaad21a55d7eef Launch Stack
    ap-northeast-2 HVM (amd64) ami-0898bdcca8f3b81be Launch Stack
    HVM (arm64) ami-07c44ea71b8b84023 Launch Stack
    ap-south-1 HVM (amd64) ami-0f70e5d2831ab3d7d Launch Stack
    HVM (arm64) ami-07faf0144f1370df8 Launch Stack
    ap-southeast-1 HVM (amd64) ami-069115904bc83e952 Launch Stack
    HVM (arm64) ami-0b7601cb1d1769545 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0d1f165d75b10e8cf Launch Stack
    HVM (arm64) ami-071e5f9ea6e53535a Launch Stack
    ap-southeast-3 HVM (amd64) ami-00eb4e9e6497f0f16 Launch Stack
    HVM (arm64) ami-05433378462cf3f90 Launch Stack
    ca-central-1 HVM (amd64) ami-0bba7b0b6178b3216 Launch Stack
    HVM (arm64) ami-0a59b2eebbb50c14b Launch Stack
    eu-central-1 HVM (amd64) ami-01c94e7ddc42783ae Launch Stack
    HVM (arm64) ami-07e15c36d914ff774 Launch Stack
    eu-north-1 HVM (amd64) ami-08674782e9fb8c272 Launch Stack
    HVM (arm64) ami-084acb98c391fd8b3 Launch Stack
    eu-south-1 HVM (amd64) ami-0c29ee4a6831d6331 Launch Stack
    HVM (arm64) ami-0ceb7787155cd17fe Launch Stack
    eu-west-1 HVM (amd64) ami-08625f97cc75d940f Launch Stack
    HVM (arm64) ami-01ebe769a92995b30 Launch Stack
    eu-west-2 HVM (amd64) ami-0edb4357441227c79 Launch Stack
    HVM (arm64) ami-006fe2f12b76a9af5 Launch Stack
    eu-west-3 HVM (amd64) ami-0b48a0c2f98fc5a13 Launch Stack
    HVM (arm64) ami-071cb07b6e4aa994f Launch Stack
    me-south-1 HVM (amd64) ami-0a4fb64738fb9055f Launch Stack
    HVM (arm64) ami-0274e3122b3ed6d2d Launch Stack
    sa-east-1 HVM (amd64) ami-097e17f4bc8e3093d Launch Stack
    HVM (arm64) ami-0c71324983bba275a Launch Stack
    us-east-1 HVM (amd64) ami-01b2afef19fbf9d4a Launch Stack
    HVM (arm64) ami-012b019b982cbcb58 Launch Stack
    us-east-2 HVM (amd64) ami-00a8a7ffe5a2a6042 Launch Stack
    HVM (arm64) ami-0c93666bdaf836f99 Launch Stack
    us-west-1 HVM (amd64) ami-0dc01c03d2b7bc703 Launch Stack
    HVM (arm64) ami-0fa222103376577c4 Launch Stack
    us-west-2 HVM (amd64) ami-09e169c4110ccb050 Launch Stack
    HVM (arm64) ami-0ada7701ac143c083 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4152.2.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0fc868beedd439ff9 Launch Stack
    HVM (arm64) ami-0aa9d73ef23b4b056 Launch Stack
    ap-east-1 HVM (amd64) ami-0a69eaeb165f068b5 Launch Stack
    HVM (arm64) ami-06c3529a88e4b7652 Launch Stack
    ap-northeast-1 HVM (amd64) ami-001a6d1796eea928f Launch Stack
    HVM (arm64) ami-0226d3dde5a5be9e7 Launch Stack
    ap-northeast-2 HVM (amd64) ami-09f22c2cf864aa01e Launch Stack
    HVM (arm64) ami-0e47b563394f10713 Launch Stack
    ap-south-1 HVM (amd64) ami-02e0021b905747894 Launch Stack
    HVM (arm64) ami-06abb5620a2fe286f Launch Stack
    ap-southeast-1 HVM (amd64) ami-0bb6585eb5b1b8aa6 Launch Stack
    HVM (arm64) ami-06e206fc21fdc5691 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0c1d2997dc4666825 Launch Stack
    HVM (arm64) ami-0af6ccabefc9bdeb9 Launch Stack
    ap-southeast-3 HVM (amd64) ami-062ecc685e80cd42c Launch Stack
    HVM (arm64) ami-0e3b54f23d48976d9 Launch Stack
    ca-central-1 HVM (amd64) ami-0658783b03f22b2bb Launch Stack
    HVM (arm64) ami-037acda78371211bf Launch Stack
    eu-central-1 HVM (amd64) ami-0ed93013ce5ada70d Launch Stack
    HVM (arm64) ami-0d41dbbf311bb3a3e Launch Stack
    eu-north-1 HVM (amd64) ami-00805e785aa1cffd5 Launch Stack
    HVM (arm64) ami-08df81b2ee3e64687 Launch Stack
    eu-south-1 HVM (amd64) ami-0d269c0f754d521ac Launch Stack
    HVM (arm64) ami-03454c50328ac7172 Launch Stack
    eu-west-1 HVM (amd64) ami-025e61f01924c7350 Launch Stack
    HVM (arm64) ami-00a525190fc883cf9 Launch Stack
    eu-west-2 HVM (amd64) ami-0d97f0e29f3c9f95f Launch Stack
    HVM (arm64) ami-00a1b270f1a4ae610 Launch Stack
    eu-west-3 HVM (amd64) ami-0723a2606230396f9 Launch Stack
    HVM (arm64) ami-0d18c89210e9643da Launch Stack
    me-south-1 HVM (amd64) ami-0c5e083ad6d4acf88 Launch Stack
    HVM (arm64) ami-0db2f7936e78ea81c Launch Stack
    sa-east-1 HVM (amd64) ami-076dd58e3fdd8706f Launch Stack
    HVM (arm64) ami-0169d1165f888712a Launch Stack
    us-east-1 HVM (amd64) ami-07a3b4d9f157849c5 Launch Stack
    HVM (arm64) ami-0ab720d5330d10749 Launch Stack
    us-east-2 HVM (amd64) ami-01723a3ffcd5434f7 Launch Stack
    HVM (arm64) ami-0a84db3bc286f0834 Launch Stack
    us-west-1 HVM (amd64) ami-01c4fe82f791faef1 Launch Stack
    HVM (arm64) ami-05c401fcf79430439 Launch Stack
    us-west-2 HVM (amd64) ami-0003461193af90d92 Launch Stack
    HVM (arm64) ami-093a24ee325787e1b Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
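
    The resulting ignition.json can then be passed as user data when launching an instance, for example with the AWS CLI. A minimal sketch; the AMI ID, key pair name, and security group name are placeholders:

    aws ec2 run-instances \
      --region us-east-1 \
      --image-id ami-0a1cf7b6397ea3779 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-groups flatcar-testing \
      --user-data file://ignition.json
    # In a non-default VPC, use --security-group-ids and --subnet-id instead of --security-groups.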
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
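
    After the instance boots, you can verify that the filesystem was created and the unit mounted it; the IP address below is a placeholder:

    ssh core@<ip address> 'systemctl status media-ephemeral.mount'
    ssh core@<ip address> 'findmnt /media/ephemeral'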

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.
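
    For reference, a minimal Butane sketch that authorizes an additional SSH public key for the core user; the key shown is a placeholder:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... user@example.com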

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0a1cf7b6397ea3779 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and use the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below, followed by an equivalent AWS CLI sketch.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
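
    The same security group can also be created from the AWS CLI. A minimal sketch; the VPC ID is a placeholder, and the self-referencing rules mirror steps 4 and 5 above:

    GROUP_ID=$(aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --vpc-id vpc-0123456789abcdef0 \
      --query GroupId --output text)
    # SSH from anywhere
    aws ec2 authorize-security-group-ingress --group-id "$GROUP_ID" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # etcd ports, only from members of the same security group
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$GROUP_ID" \
        --protocol tcp --port "$port" --source-group "$GROUP_ID"
    done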

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0a1cf7b6397ea3779 (amd64), Beta ami-01b2afef19fbf9d4a (amd64), or Stable ami-07a3b4d9f157849c5 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to the keys set via your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
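
    Once the instances are running, you can look up their public IP addresses from the AWS CLI. A minimal sketch, filtering on the security group created above:

    aws ec2 describe-instances \
      --region us-east-1 \
      --filters Name=instance.group-name,Values=flatcar-testing Name=instance-state-name,Values=running \
      --query 'Reservations[].Instances[].PublicIpAddress'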

    Installation from a VMDK image

    One possible installation method is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.

    After the snapshot is imported, go to “Snapshots” in the EC2 dashboard and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and select “Hardware-assisted virtualization” as the “Virtualization type”.
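
    Both steps can also be done from the AWS CLI. A minimal sketch that uploads the uncompressed VMDK to an S3 bucket of yours and registers an AMI from the imported snapshot; the bucket name, image name, and snapshot ID are placeholders:

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/flatcar.vmdk
    aws ec2 import-snapshot \
      --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar.vmdk}"
    # Wait for the task shown by 'aws ec2 describe-import-snapshot-tasks' to complete, then:
    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"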

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... your-name@example.com"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a cl/machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.