Or, what I provision to run this blog
I'm a big fan of Terraform. Terraform is an Infrastructure as Code tool that, when paired with a good source control tool like git, ensures that your project always perfectly describes the environment in which it operates. This is extremely useful with complex infrastructure, but also, as it turns out, with weekend projects.
Most of my side projects have bursts of weekend activity and then tend to languish for months, so when I come back to a project I've completely lost context on what environment I set up. Terraform helps me bridge that information gap: everything is captured in code, and I tend to have all my setup scripts hook into Terraform via its remote-exec provisioner, so my environment comes up with a single command.
In this post, I'll cover how I built out the f1-micro instance that serves this blog. In a follow-up post, I'll cover how I set up my provisioners to ensure that a single invocation of terraform apply is all I need to get up and running again.
Configuring your F1 Micro Instance
We need to decide on a few key properties for our instance: the base OS, SSH access, disk and instance size, and so on. We'll dive into each, piece by piece.
SSH Access
Google has a system called OS Login that grants SSH access to an instance based on your IAM role, but I instead provision a single SSH key and weave it into my instance's metadata. First, I have my Terraform script take the path to an SSH public key as an input variable.
variable "google-ssh-pub-key-path" {
  type        = string
  description = "Path to the ssh key to use when logging into your GCE instance"
}

locals {
  username         = "akshay"
  private-key-path = trimsuffix(var.google-ssh-pub-key-path, ".pub")
}
I then transform that input data into fields that may be useful down the line, like creating the private key path from the public key path. I store this massaged data in a locals block.
Note that I could parametrize the username as well. In fact, I could pull the username from the public key file via the file and split functions, but I'm generally just going to use akshay as the user account for any gcloud instance I have, and I'd rather this be explicit in my file when I come back to it six months later.
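If you did want to derive the username from the key itself, a sketch might look like the following. This assumes the public key file ends with a comment in the usual user@host form (e.g. "ssh-rsa AAAA... akshay@laptop"); keys without a comment would break the indexing.

```hcl
locals {
  // Hypothetical alternative: an OpenSSH public key is "<type> <key> <comment>",
  // so the comment is the third space-separated field.
  key-comment = split(" ", trimspace(file(var.google-ssh-pub-key-path)))[2]

  // Take everything before the "@" of a "user@host" comment as the username.
  username = split("@", local.key-comment)[0]
}
```

I prefer the hardcoded value, but this keeps the variable file as the single source of truth.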
We can use the local values later on down the line when we configure our instance.
Operating System and Disk Size
We can specify that we want an Ubuntu 18.04 OS with 10 gigabytes of disk space via the following HCL block:
boot_disk {
  initialize_params {
    image = "ubuntu-os-cloud/ubuntu-1804-lts"
    // size = 30
  }
}
Note that the size parameter is commented out. The configuration currently powering this blog did not set a size, which caused the disk to default to 10 gigabytes for an f1-micro instance. However, the Google Cloud Free Tier allows up to 30 gigabytes of free disk space, so there's no harm in bumping it all the way up to that value and still getting a free server.
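If you do want the full allowance, the block might look like this. Note the explicit disk type is my addition: as I understand the Free Tier terms, only standard persistent disks are covered, so pinning pd-standard avoids accidentally getting billed for an SSD.

```hcl
boot_disk {
  initialize_params {
    image = "ubuntu-os-cloud/ubuntu-1804-lts"
    size  = 30            // maximum disk size covered by the Free Tier
    type  = "pd-standard" // Free Tier covers standard persistent disks only
  }
}
```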
Network
network_interface {
  network = "default"
  access_config {}
}
This is a fairly vanilla network setup that provides an ephemeral public IP address for your instance. The ephemeral IP gets provisioned by the empty access_config block (weird, I know).
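One caveat with an ephemeral IP: it changes whenever the instance is recreated. If that matters, a sketch of the static alternative is below, using a hypothetical google_compute_address resource named "static"; the empty access_config instead gets a nat_ip pointing at the reserved address.

```hcl
// Reserve a static external IP that survives instance recreation.
resource "google_compute_address" "static" {
  name = "ghostblog-ip" // hypothetical name
}

network_interface {
  network = "default"
  access_config {
    nat_ip = google_compute_address.static.address
  }
}
```

For a blog fronted by DNS, a static address saves you from updating records after every rebuild.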
The default network provides SSH access to your instance, but nothing else. Naturally, this will be problematic if we're trying to spin up a webserver, so let's configure a network which also allows HTTP and HTTPS traffic.
resource "google_compute_network" "vpc_network" {
  name                    = "terraform-network"
  auto_create_subnetworks = "true"
}

resource "google_compute_firewall" "default" {
  name    = "ghostblog-firewall-rules"
  network = google_compute_network.vpc_network.id
  allow {
    protocol = "tcp"
    ports    = ["22", "80", "443"]
  }
}
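As a possible tightening (not part of the configuration running this blog): ports 80 and 443 need to be open to the world for a webserver, but port 22 doesn't. A separate rule could scope SSH to a single trusted address; the name and the source address below are placeholders.

```hcl
// Hypothetical: only allow SSH from one trusted IP instead of everywhere.
resource "google_compute_firewall" "ssh" {
  name    = "ghostblog-ssh" // placeholder name
  network = google_compute_network.vpc_network.id
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  source_ranges = ["203.0.113.7/32"] // placeholder; use your own IP
}
```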
We can then update our network configuration to point to our more open network. We link the newly created network resource to our instance dynamically, via google_compute_network.vpc_network.self_link.
network_interface {
  network = google_compute_network.vpc_network.self_link
  access_config {}
}
Putting It All Together
resource "google_compute_instance" "ghost" {
  name         = "ghostblog-akd"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
    }
  }

  metadata = {
    ssh-keys = "${local.username}:${file(var.google-ssh-pub-key-path)}"
  }

  network_interface {
    network = google_compute_network.vpc_network.self_link
    access_config {}
  }
}
To allow SSH to our instance using our SSH key, we set instance metadata that puts our public key into the instance's authorized_keys. We need the contents of the public key file to build that metadata value, so we use the file function provided by HCL to grab the data.
Note that the boot_disk and network_interface configurations are nested inside the ghost instance. Those configurations only have meaning within a google_compute_instance Terraform resource.
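One piece the snippets above leave implicit is the provider configuration. The instance block doesn't set a zone, so it falls back to the provider's default; a minimal sketch, with a placeholder project ID, might look like:

```hcl
provider "google" {
  project = "my-project-id" // hypothetical; use your own GCP project ID
  region  = "us-west1"
  zone    = "us-west1-a"    // default zone for resources that don't set one
}
```

Keeping the region in a Free Tier-eligible location (us-west1, us-central1, or us-east1) is what makes the f1-micro itself free.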
Accessing the Instance
To give us a nice copyable SSH command for accessing our instance, we can set an output variable in our Terraform script. We can also use our previously defined locals block to get the path of our private key file.
output "ssh-command" {
  value = "ssh -i ${local.private-key-path} ${local.username}@${google_compute_instance.ghost.network_interface.0.access_config.0.nat_ip}"
}
Once you apply, Terraform should print the SSH command as the script terminates:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
ssh-command = ssh -i /Users/akshay/.ssh/id_rsa_blog akshay@xxx.xxx.xxx.xxx
This should be all you need to get hacking. In a follow-up post, we'll cover how to use Terraform's provisioners to get one-step deploys of your infrastructure.