Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux Matrix channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage-collection sweeps. Note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible after the AMI has been unpublished.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically, with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4230.2.3.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-071ad160208da202a Launch Stack
    HVM (arm64) ami-0bdafed3c32a019a5 Launch Stack
    ap-east-1 HVM (amd64) ami-03919fe54d81031e4 Launch Stack
    HVM (arm64) ami-0a64d3d71d4c66e10 Launch Stack
    ap-northeast-1 HVM (amd64) ami-05a558f0de608dcf2 Launch Stack
    HVM (arm64) ami-0a851bbe8d1342522 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0c30b2dd3be871880 Launch Stack
    HVM (arm64) ami-0a773f5de62d0f75d Launch Stack
    ap-south-1 HVM (amd64) ami-0fe51af5d8e22fa55 Launch Stack
    HVM (arm64) ami-0edff51e32d97cfab Launch Stack
    ap-southeast-1 HVM (amd64) ami-0056f14117bb0b836 Launch Stack
    HVM (arm64) ami-0d3ae0689f7270f7d Launch Stack
    ap-southeast-2 HVM (amd64) ami-0007392346f49f0c8 Launch Stack
    HVM (arm64) ami-0ff34287d58160e98 Launch Stack
    ap-southeast-3 HVM (amd64) ami-024a9564163d6daa1 Launch Stack
    HVM (arm64) ami-0684af82be548e9f7 Launch Stack
    ca-central-1 HVM (amd64) ami-005ec9e772ba58aa3 Launch Stack
    HVM (arm64) ami-05bb5673e7e851ded Launch Stack
    eu-central-1 HVM (amd64) ami-09be8ae7214eb032e Launch Stack
    HVM (arm64) ami-0144442b0c9a4d924 Launch Stack
    eu-north-1 HVM (amd64) ami-07daa5116a271ef83 Launch Stack
    HVM (arm64) ami-0a7d2f30a77dfe770 Launch Stack
    eu-south-1 HVM (amd64) ami-084d9101100edf692 Launch Stack
    HVM (arm64) ami-01204ade4ca83aa9a Launch Stack
    eu-west-1 HVM (amd64) ami-065e0c7335748ac93 Launch Stack
    HVM (arm64) ami-0aa84eb329412712f Launch Stack
    eu-west-2 HVM (amd64) ami-0f3e818489fbf52ad Launch Stack
    HVM (arm64) ami-0b2b53e8e1c24c70a Launch Stack
    eu-west-3 HVM (amd64) ami-0f8aaa06756a64939 Launch Stack
    HVM (arm64) ami-0a33c20c354dda659 Launch Stack
    me-south-1 HVM (amd64) ami-0f16d09a03ca089b7 Launch Stack
    HVM (arm64) ami-09aaf59d1cbf43b95 Launch Stack
    sa-east-1 HVM (amd64) ami-0ed24557f1bd5b540 Launch Stack
    HVM (arm64) ami-0138d3362c2fc4295 Launch Stack
    us-east-1 HVM (amd64) ami-05e772ae74c668445 Launch Stack
    HVM (arm64) ami-023739aee4cd789f0 Launch Stack
    us-east-2 HVM (amd64) ami-022fd3d8e12a31c1c Launch Stack
    HVM (arm64) ami-0adb350e248b2e6da Launch Stack
    us-west-1 HVM (amd64) ami-0161f85c8cbee64a7 Launch Stack
    HVM (arm64) ami-08e70725e7247e8ba Launch Stack
    us-west-2 HVM (amd64) ami-0b5d20242fb3c2dc6 Launch Stack
    HVM (arm64) ami-095505112c4e91967 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4426.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0a03dfc8a6fd1b6fb Launch Stack
    HVM (arm64) ami-0b4f00fb133dff2d9 Launch Stack
    ap-east-1 HVM (amd64) ami-08289b1032af5c581 Launch Stack
    HVM (arm64) ami-07b1c56fd4a302b75 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0262faa9fc8600c7b Launch Stack
    HVM (arm64) ami-0d6045f7ea5730145 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0e93f22a13af5dbae Launch Stack
    HVM (arm64) ami-0f5779acf512e8c95 Launch Stack
    ap-south-1 HVM (amd64) ami-08082fe19f6374420 Launch Stack
    HVM (arm64) ami-0f0363eb2168a629b Launch Stack
    ap-southeast-1 HVM (amd64) ami-0548cf9dc83502312 Launch Stack
    HVM (arm64) ami-091881850a84a73b2 Launch Stack
    ap-southeast-2 HVM (amd64) ami-033297f2751ad8424 Launch Stack
    HVM (arm64) ami-07b6917932aeb1fcc Launch Stack
    ap-southeast-3 HVM (amd64) ami-0943095a20ff57416 Launch Stack
    HVM (arm64) ami-013f3f02ac08a111d Launch Stack
    ca-central-1 HVM (amd64) ami-012521513edd6ce9d Launch Stack
    HVM (arm64) ami-0846e7d0140996570 Launch Stack
    eu-central-1 HVM (amd64) ami-02a774a9333241c86 Launch Stack
    HVM (arm64) ami-0734fd99833b7d570 Launch Stack
    eu-north-1 HVM (amd64) ami-0e54eb4adc25a0fce Launch Stack
    HVM (arm64) ami-04186af807f5eb1d9 Launch Stack
    eu-south-1 HVM (amd64) ami-0951cc2003d7e5382 Launch Stack
    HVM (arm64) ami-0ed3ee12ae503d104 Launch Stack
    eu-west-1 HVM (amd64) ami-05896e2e97d807edb Launch Stack
    HVM (arm64) ami-005998c4cfdc88dc7 Launch Stack
    eu-west-2 HVM (amd64) ami-0d6396c7f4a52eb3c Launch Stack
    HVM (arm64) ami-0cb3fccd511d83039 Launch Stack
    eu-west-3 HVM (amd64) ami-0d540702ab87f6932 Launch Stack
    HVM (arm64) ami-0ac45d12c15edc911 Launch Stack
    me-south-1 HVM (amd64) ami-0bddd05e17a976590 Launch Stack
    HVM (arm64) ami-0878ea5bbd98f648e Launch Stack
    sa-east-1 HVM (amd64) ami-00513f76c42b6e783 Launch Stack
    HVM (arm64) ami-04f023ef363f17f47 Launch Stack
    us-east-1 HVM (amd64) ami-00656462449cd6854 Launch Stack
    HVM (arm64) ami-0ebd5227c68cc10ac Launch Stack
    us-east-2 HVM (amd64) ami-0ea8d17134c936bb8 Launch Stack
    HVM (arm64) ami-0719897f1629cc224 Launch Stack
    us-west-1 HVM (amd64) ami-0d89b5e626e99e2e9 Launch Stack
    HVM (arm64) ami-07edc61a9aa422934 Launch Stack
    us-west-2 HVM (amd64) ami-0bd2e682164e6e18f Launch Stack
    HVM (arm64) ami-07457aa2eb56cddff Launch Stack

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4459.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0482bcddc1ccf697b Launch Stack
    HVM (arm64) ami-01f52504bfd3cb801 Launch Stack
    ap-east-1 HVM (amd64) ami-0d0e9cfc483a6846e Launch Stack
    HVM (arm64) ami-090b61d3b4a4864c6 Launch Stack
    ap-northeast-1 HVM (amd64) ami-05aab913ec9cf9152 Launch Stack
    HVM (arm64) ami-0de75bdc7206f82d9 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0dc708b0ff750fa1a Launch Stack
    HVM (arm64) ami-0be2801ab64f97f8b Launch Stack
    ap-south-1 HVM (amd64) ami-0ebacca62cbcec910 Launch Stack
    HVM (arm64) ami-0eb79f1ff59e62566 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0a648b9dd4073fd89 Launch Stack
    HVM (arm64) ami-0ef77798347a9cd03 Launch Stack
    ap-southeast-2 HVM (amd64) ami-027cb6ac5302b6e05 Launch Stack
    HVM (arm64) ami-048b5d8aeb15d78a0 Launch Stack
    ap-southeast-3 HVM (amd64) ami-003fdd6d6032af17f Launch Stack
    HVM (arm64) ami-0730963b06f79c460 Launch Stack
    ca-central-1 HVM (amd64) ami-043383c630a0ebe76 Launch Stack
    HVM (arm64) ami-0a6ec3800bcae5f85 Launch Stack
    eu-central-1 HVM (amd64) ami-0fb32333c14c38d72 Launch Stack
    HVM (arm64) ami-0275afc33ad5c5e53 Launch Stack
    eu-north-1 HVM (amd64) ami-01c528e1d7d65d82a Launch Stack
    HVM (arm64) ami-0cebe03eeab064e6d Launch Stack
    eu-south-1 HVM (amd64) ami-0672c3eedffaa80c8 Launch Stack
    HVM (arm64) ami-05b28af74427123d2 Launch Stack
    eu-west-1 HVM (amd64) ami-0876263b70995c554 Launch Stack
    HVM (arm64) ami-046db16966ba390b8 Launch Stack
    eu-west-2 HVM (amd64) ami-08f3752061d6ce0e1 Launch Stack
    HVM (arm64) ami-06425af4f64c6f1d4 Launch Stack
    eu-west-3 HVM (amd64) ami-0b54c74223704b2fe Launch Stack
    HVM (arm64) ami-00ebeb0f2372acd2b Launch Stack
    me-south-1 HVM (amd64) ami-0f56eda4a53544494 Launch Stack
    HVM (arm64) ami-02be2604fddef920d Launch Stack
    sa-east-1 HVM (amd64) ami-0386b423c8da24dba Launch Stack
    HVM (arm64) ami-04605154b465bd780 Launch Stack
    us-east-1 HVM (amd64) ami-00b50f4ce6ef4b45a Launch Stack
    HVM (arm64) ami-0c265d1885532a167 Launch Stack
    us-east-2 HVM (amd64) ami-0e2c814371c91258c Launch Stack
    HVM (arm64) ami-0d4af302288a626a3 Launch Stack
    us-west-1 HVM (amd64) ami-08628f8519b21cb89 Launch Stack
    HVM (arm64) ami-018535a246bc4d6ab Launch Stack
    us-west-2 HVM (amd64) ami-0b33c2277c80a28bc Launch Stack
    HVM (arm64) ami-07ee48a384d8f676f Launch Stack

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.6.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-065d742b53d039f10 Launch Stack
    HVM (arm64) ami-031e6aa017e3d66a4 Launch Stack
    ap-east-1 HVM (amd64) ami-05d861bfa50523be9 Launch Stack
    HVM (arm64) ami-00376960872d79ace Launch Stack
    ap-northeast-1 HVM (amd64) ami-05dd5c8176aae392e Launch Stack
    HVM (arm64) ami-0d187650ed489eb63 Launch Stack
    ap-northeast-2 HVM (amd64) ami-082997538fee72535 Launch Stack
    HVM (arm64) ami-03cc0c6cbfd15b96b Launch Stack
    ap-south-1 HVM (amd64) ami-05a8e27ad68c7c095 Launch Stack
    HVM (arm64) ami-0b2d1b5a81d288101 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0bbc11922d35e88f7 Launch Stack
    HVM (arm64) ami-019dbbc6398ee063e Launch Stack
    ap-southeast-2 HVM (amd64) ami-0453f031a5311e96c Launch Stack
    HVM (arm64) ami-09d8d953473bdd4bb Launch Stack
    ap-southeast-3 HVM (amd64) ami-06a63dc511c9781f3 Launch Stack
    HVM (arm64) ami-074bb47a98f1747b4 Launch Stack
    ca-central-1 HVM (amd64) ami-080a9e8c39c377a17 Launch Stack
    HVM (arm64) ami-05895f696017a8301 Launch Stack
    eu-central-1 HVM (amd64) ami-0099e069036c934fa Launch Stack
    HVM (arm64) ami-0c6adc94939c2f348 Launch Stack
    eu-north-1 HVM (amd64) ami-0eb12fd4cf77da266 Launch Stack
    HVM (arm64) ami-00c4b52eb4c77f737 Launch Stack
    eu-south-1 HVM (amd64) ami-06548dff7a06688c4 Launch Stack
    HVM (arm64) ami-00c72fd113bab908e Launch Stack
    eu-west-1 HVM (amd64) ami-01b7787bc0f8621e5 Launch Stack
    HVM (arm64) ami-03448c137612fac2a Launch Stack
    eu-west-2 HVM (amd64) ami-0061694a1f70ac69b Launch Stack
    HVM (arm64) ami-0e6da03e8bfc266bd Launch Stack
    eu-west-3 HVM (amd64) ami-028ac53f4abd50a0a Launch Stack
    HVM (arm64) ami-08ff956abf5f1b861 Launch Stack
    me-south-1 HVM (amd64) ami-0597951317c148292 Launch Stack
    HVM (arm64) ami-09584968f1259e17c Launch Stack
    sa-east-1 HVM (amd64) ami-0e79099b46011b2a7 Launch Stack
    HVM (arm64) ami-0a3e84660861b4e0f Launch Stack
    us-east-1 HVM (amd64) ami-08f4bc25055494068 Launch Stack
    HVM (arm64) ami-086c5cca4129f4102 Launch Stack
    us-east-2 HVM (amd64) ami-0da2ef08fd5010737 Launch Stack
    HVM (arm64) ami-02da50159337b6b16 Launch Stack
    us-west-1 HVM (amd64) ami-08befc8df1e62f5a9 Launch Stack
    HVM (arm64) ami-08292a8b7fd99dd25 Launch Stack
    us-west-2 HVM (amd64) ami-033de58d5bfead60e Launch Stack
    HVM (arm64) ami-008bca8970ab8471d Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    docker run --rm -i quay.io/coreos/butane:latest --pretty --strict < cl.yaml > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
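    On newer Nitro-based instance types (e.g. m5, t3), EBS volumes and instance-store disks appear as NVMe devices such as /dev/nvme1n1 rather than /dev/xvdb. A minimal sketch of the filesystem stanza adjusted for that naming (the device path is an assumption; verify it with lsblk on the instance):

```yaml
variant: flatcar
version: 1.0.0
storage:
  filesystems:
    - device: /dev/nvme1n1   # assumed NVMe device name; check with lsblk first
      format: ext4
      wipe_filesystem: true
      label: ephemeral
```

    The mount unit from the example above can stay unchanged, since it refers to the filesystem by label rather than by device path.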

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.
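    For example, this minimal Butane snippet adds a public key for the core user (the key shown is a placeholder to replace with your own):

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3... user@example
```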

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-00b50f4ce6ef4b45a (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
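    The same security group can also be created with the AWS CLI; a sketch assuming the default VPC (the group is referenced by name here rather than by the auto-completed sg-… ID):

```shell
aws ec2 create-security-group \
  --group-name flatcar-testing \
  --description "Flatcar Container Linux instances"

# SSH from anywhere
aws ec2 authorize-security-group-ingress \
  --group-name flatcar-testing --protocol tcp --port 22 --cidr 0.0.0.0/0

# etcd ports, reachable only from members of the same group
for port in 2379 2380 4001 7001; do
  aws ec2 authorize-security-group-ingress \
    --group-name flatcar-testing --protocol tcp --port "$port" \
    --source-group flatcar-testing
done
```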

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-00b50f4ce6ef4b45a (amd64), Beta ami-00656462449cd6854 (amd64), or Stable ami-05e772ae74c668445 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys set in your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
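    The wizard steps above can be condensed into a single AWS CLI call; a sketch where the key pair name is an assumption to replace with your own:

```shell
aws ec2 run-instances \
  --image-id ami-00b50f4ce6ef4b45a \
  --count 3 \
  --instance-type t3.medium \
  --key-name my-key \
  --security-groups flatcar-testing \
  --user-data file://ignition.json
```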

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file will be at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding. Note that verification requires the Flatcar image signing key to be present in your GPG keyring.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. For it to work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
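    The same AMI registration can also be sketched with the AWS CLI; the snapshot ID below is a placeholder for the one produced by the import:

```shell
aws ec2 register-image \
  --name flatcar-from-vmdk \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/sda2 \
  --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"
```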

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, a cl/machine-NAME.yaml.tmpl file with a matching name must exist.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already registered it as an instance attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
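    Note the two escaping levels in the last line: ${name} is substituted by Terraform when the template is rendered, while the doubled $${hostname} escapes Terraform interpolation so the shell receives a literal ${hostname}. A sketch of what the rendered script evaluates to (the hostname value is a stand-in):

```shell
# After rendering with name = "mynode", the template's ${name} is gone
# and $${hostname} has become the shell expression ${hostname}.
hostname="examplehost"   # stand-in for the real $(hostname) output
name="mynode"            # value substituted by Terraform at render time
echo "My name is ${name} and the hostname is ${hostname}"
# prints: My name is mynode and the hostname is examplehost
```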
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (you may want to add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.