Local ssh Config and Terraform


It was always painful to log in to newly created VMs: you had to find out the dynamically allocated IP address, remember which key to use, and so on. Here is a simple way to log in to instances created by Terraform.

I had a little project with a publicly accessible server (puppetmaster) and a server (postgresdb) in a private network.

NOTE: Yes, I know, a puppetserver should never be publicly accessible :-)
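
For context, here is a minimal sketch of the two instances the later snippets refer to. The AMI IDs, instance types and subnet references are placeholders; your real setup will differ, the rest of the post only needs the resource names and the key pair:

resource "aws_instance" "puppetmaster" {
  ami                         = "ami-00000000"       # placeholder AMI
  instance_type               = "t3.small"
  key_name                    = var.key_name
  subnet_id                   = aws_subnet.public.id # placeholder: a public subnet
  associate_public_ip_address = true
}

resource "aws_instance" "postgresdb" {
  ami           = "ami-00000000"        # placeholder AMI
  instance_type = "t3.small"
  key_name      = var.key_name
  subnet_id     = aws_subnet.private.id # placeholder: a private subnet
}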

Create a template file, let’s say ssh_config.tmpl:

Host puppetmaster
    HostName ${puppetmaster_public_ip}
    User ec2-user
    IdentityFile ~/.ssh/${key_name}.pem

Host database
    HostName ${database_private_ip}
    User ubuntu
    IdentityFile ~/.ssh/${key_name}.pem
    ProxyCommand ssh -W %h:%p puppetmaster

Substitute the placeholders with the IP addresses of the created instances:

data "template_file" "ssh_config" {
  template = file("templates/local/ssh_config.tmpl")
  vars = {
    database_private_ip    = aws_instance.postgresdb.private_ip
    puppetmaster_public_ip = aws_instance.puppetmaster.public_ip
    key_name               = var.key_name
  }
}
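
As a side note, on Terraform 0.12 and later the built-in templatefile() function can do the same job without the template provider; a roughly equivalent sketch:

locals {
  ssh_config = templatefile("templates/local/ssh_config.tmpl", {
    database_private_ip    = aws_instance.postgresdb.private_ip
    puppetmaster_public_ip = aws_instance.puppetmaster.public_ip
    key_name               = var.key_name
  })
}

In that case, reference local.ssh_config instead of data.template_file.ssh_config.rendered below.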

Write this out as an ssh config file:

variable "ssh_include_path" {}

resource "local_file" "ssh_config" {
  content  = data.template_file.ssh_config.rendered
  filename = "${var.ssh_include_path}/ssh_config.out"
}
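
The target directory can be supplied when applying; the path here is only an example:

terraform apply -var 'ssh_include_path=/home/me/.ssh/terraform'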

To make OpenSSH pick up the generated file, include it from your main SSH config. A minimal example, assuming ssh_include_path points at ~/.ssh/terraform (use whatever path you actually chose):

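# at the top of ~/.ssh/config, before any Host blocks
Include ~/.ssh/terraform/ssh_config.out

With that in place, you should now be able to log in with:
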
ssh puppetmaster

You can also log in to the database server in the private subnet, thanks to the ProxyCommand ssh -W %h:%p puppetmaster line in the ssh config: the connection is automatically proxied through the puppetmaster with the appropriate user and private key!

ssh database
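
Since this is ordinary SSH configuration, tools that read it, such as scp, get the proxying for free (the file name is just an example):

scp dump.sql database:/tmp/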