Just-in-Time Nomad: Configuring Nomad/Vault integration on HashiQube
Down the Rabbit Hole
As part of my dabblings with HashiQube, I recently found myself writing a jobspec that required me to pull secrets from Vault. I soon realized that while HashiQube bootstraps Nomad, Vault, and Consul (à la Vagrant) on my local machine, it does not configure Vault/Nomad integration.
No integration? No problem! This was the perfect opportunity for me to learn how to configure Vault/Nomad integration, and now, my dear readers, I wish to share my learnings with you!
Are you psyched?! Let’s do this!
Objective
In today’s tutorial, you will learn how to configure Vault/Nomad integration, so that you can pull Vault secrets from your Nomad jobspecs.
I will demonstrate this by using HashiQube to do this configuration on a Vagrant VM running Vault, Consul, and Nomad.
Assumptions
Before we move on, I am assuming that you have a basic understanding of:
- Nomad. If not, mosey on over to my Nomad intro post.
- HashiQube. If not, mosey on over to my HashiQube post.
Pre-Requisites
In order to run the example in this tutorial, you’ll need the following:
- Oracle VirtualBox (version 6.1.30 at the time of this writing)
- Vagrant (version 2.2.19 at the time of this writing)
Tutorial Repo
I will be using a modified HashiQube repo (a fork of servian/hashiqube) for today’s tutorial.
Vault/Nomad Integration Explained
Although HashiQube installs Vault and Nomad, it doesn’t configure them to talk to each other right out of the box. Not to worry, because I’ll explain everything step-by-step for all y’alls. All of the source files are also in the tutorial repo, which I reference throughout this post.
The Vault/Nomad integration magic happens in two files: nomad.sh and vault.sh.
Vault Configuration
First, let’s look at vault.sh:
That is one monster file! Not to worry. Let’s zero in on the stuff that we care about, which takes place in lines 171–190.
1- Set the VAULT_TOKEN environment variable
This happens on line 171. We pull the root token from /etc/vault/init.file, thanks to some fancy Linux footwork (fancy for me, anyway 😉):
export VAULT_TOKEN=$(cat /etc/vault/init.file | grep Root | rev | cut -d' ' -f1 | rev)
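In case that pipeline looks like line noise, here’s what each stage is doing:

# /etc/vault/init.file contains a line like: Initial Root Token: <token>
# grep Root                 -> isolate that line
# rev | cut -d' ' -f1 | rev -> grab the last space-delimited field (the token itself)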
2- Create Vault policies
First we create the nomad-server policy (line 175), which gives Nomad permission to access Vault. More specifically, we will be generating a token which Nomad will use for the express purpose of accessing Vault. This token will be granted the permissions in the nomad-server policy.
vault policy write nomad-server /vagrant/hashicorp/vault/config/nomad-server-policy.hcl
The nomad-server-policy.hcl file referenced above looks like this:
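Here it is (your copy of the repo should match):

# Allow creating tokens under the "nomad-cluster" token role
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}

# Allow looking up the "nomad-cluster" token role
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}

# Allow looking up the token passed to Nomad to validate its capabilities
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow looking up incoming tokens to validate their permissions
path "auth/token/lookup" {
  capabilities = ["update"]
}

# Allow revoking tokens that should no longer exist (e.g. for dead tasks)
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}

# Allow checking the capabilities of our own token (used at startup)
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# Allow our own token to be renewed
path "auth/token/renew-self" {
  capabilities = ["update"]
}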
In case you’re wondering how I came up with that policy file, I got it from the Hashi docs here.
We will also be creating app-specific policies (lines 178–179). For today’s example, let’s just look at the 2048-game policy, which is set on line 179:
vault policy write 2048-game /vagrant/hashicorp/vault/config/2048-policy.hcl
The 2048-policy.hcl file referenced above looks like this:
path "kv/data/2048-game/*" {
capabilities = ["read", "update", "create"]
}
The above policy states that Nomad has read, update, and create permissions on any secrets created under the path kv/2048-game in Vault. If you’re wondering why I said kv/2048-game and not kv/data/2048-game, it’s because the data portion indicates that we’re using secrets engine version 2 (more on that below). When you see the path in the Vault UI or reference it in the CLI, data is omitted from the path.
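To make that concrete, here’s a quick sketch of the two ways to address the same secret once it exists (we’ll actually create it later in this post). The kv CLI helper inserts data/ for you, while a raw read spells it out:

# the kv helper omits data/ from the path
vault kv get kv/2048-game/stuff
# a raw read against the v2 API includes data/
vault read kv/data/2048-game/stuff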
3- Create the Nomad token role
This happens on line 182. The Nomad token role determines which Vault policies tokens created for your Nomad jobs can carry:
vault write /auth/token/roles/nomad-cluster @/vagrant/hashicorp/vault/config/nomad-cluster-role.json
The nomad-cluster-role.json file referenced above looks like this:
{
  "disallowed_policies": "nomad-server",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true
}
The above role states that jobs running under Nomad have access to every policy except the nomad-server policy defined in Step 2. We do this because we don’t want to give our Nomad jobs super duper access to do (potentially damaging/unwanted) stuff on Vault. This also means that Nomad has access to the 2048-game policy defined in Step 2. (The token_period of 259200 is in seconds, which works out to 72 hours.)
FYI — I swiped the above policy JSON from the Hashi docs here.
4- Enable the Secrets Engine
This happens on line 185.
vault secrets enable -version=2 kv
The Vault Secrets Engine is used to store, generate, and encrypt data. It’s not enabled by default, which is why we need this step before we can use it. You may have noticed from the command above that we’re enabling version 2 of the kv Secrets Engine, which is the newer version.
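If you ever want to double-check that the engine is mounted (and which version), vault secrets list can tell you. Output trimmed here; yours will have more columns:

vault secrets list -detailed
# Path    Plugin    ...    Options
# ----    ------    ...    -------
# kv/     kv        ...    map[version:2]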
5- Create the token
With all this talk about tokens, you might be wondering where this magical token comes from. The token is created as follows:
vault token create -policy nomad-server -period 72h -orphan -format json
Which gives us an output that looks something like this:
Key               Value
---               -----
token             f02f01c2-c0d1-7cb7-6b88-8a14fada58c0
token_accessor    8cb7fcb3-9a4f-6fbf-0efc-83092bb0cb1c
token_duration    259200s
token_renewable   true
token_policies    [default nomad-server]
That’s all well and good, but it sucks for automation, which is why I used my mad Linux skillz in lines 188–189:
export VAULT_TOKEN_INFO=$(vault token create -policy nomad-server -period 72h -orphan -format json)
export VAULT_TOKEN=$(echo $VAULT_TOKEN_INFO | jq .auth.client_token | tr -d '"')
Translation: we capture the output of our vault token create command as a JSON string, and parse it using jq to grab the field we need, which is auth.client_token. Here’s the JSON version of the vault token create output for your reference:
{
  "request_id": "db79684a-3686-04ba-a1ca-cedc0986d7d4",
  "lease_id": "",
  "lease_duration": 0,
  "renewable": false,
  "data": null,
  "warnings": [
    "period of \"72h\" exceeded the effective max_ttl of \"10h\"; period value is capped accordingly"
  ],
  "auth": {
    "client_token": "f02f01c2-c0d1-7cb7-6b88-8a14fada58c0",
    "accessor": "8cb7fcb3-9a4f-6fbf-0efc-83092bb0cb1c",
    "policies": [
      "default",
      "nomad-server"
    ],
    "token_policies": [
      "default",
      "nomad-server"
    ],
    "identity_policies": null,
    "metadata": null,
    "orphan": true,
    "entity_id": "",
    "lease_duration": 36000,
    "renewable": true
  }
}
Interestingly enough, the JSON looks TOTALLY different from the tabular output.
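Side note: jq has a -r (raw output) flag that strips the surrounding quotes for you, so the tr -d dance above is optional:

export VAULT_TOKEN=$(echo $VAULT_TOKEN_INFO | jq -r .auth.client_token)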
Welp, that’s it for the Vault configuration. Now we need to make sure that Nomad can talk to Vault.
Nomad Configuration
Let’s take a looksie at nomad.sh:
There’s a lot of stuff happening there. Let’s zero in on what we need. This one’s actually pretty short and sweet. All we need to do is enable Vault connectivity from Nomad, which happens on lines 64–71:
vault {
  enabled          = true
  address          = "http://${VAGRANT_IP}:8200"
  task_token_ttl   = "1h"
  create_from_role = "nomad-cluster"
  token            = "${VAULT_TOKEN}"
  tls_skip_verify  = true
}
Highlights from the above snippet:
- Enable Vault integration
- Tell Nomad where to find Vault
- Tell Nomad which token role (Step 3 from the previous section) to use when creating tokens for jobs
- Provide the token that lets Nomad talk to Vault (Step 5 from the previous section)
And that’s it! Now let’s test everything!
Showtime! Running on HashiQube
Now that we understand the Nomad/Vault integration configurations, it’s time to put things into practice by standing up our environment using HashiQube. To make sure that everything works, we will be creating a secret in Vault, and then we’ll deploy an app that uses the secret. The app will be my time-honoured favourite app, the 2048-game.
I will be using a modified version of the HashiQube repo (a fork of servian/hashiqube) for today’s tutorial. If you’re curious, you can see what modifications I’ve made here.
1- Provision a Local Hashi Environment with HashiQube
Start HashiQube by following the detailed instructions here.
Note: Be sure to check out the “Gotchas” section if you get stuck.
Once everything is up and running (this will take several minutes, by the way), you’ll see the following at the tail end of the startup sequence, indicating that you’re good to go:
Final output of the Vagrant VM startup sequence
You can now access the services below:
- Vault: http://localhost:8200
- Nomad: http://localhost:4646
- Consul: http://localhost:8500
- Traefik: http://traefik.localhost
- Waypoint: https://192.168.56.192:9702
2- Install the Nomad and Vault CLIs on your host machine
If you’re on a Mac, you can install the Vault and Nomad CLIs via Homebrew like this:
brew tap hashicorp/tap
brew install hashicorp/tap/vault
brew install hashicorp/tap/nomad
If you’re not on a Mac, you can find your OS-specific instructions for Vault here and for Nomad here. Note that these are binary installs, and they also contain the CLIs.
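Either way, a quick sanity check confirms that both CLIs made it onto your PATH:

vault version
nomad version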
3- Add our secret to Vault
Before we can add our secret to Vault, we must set some environment variables on our host machine.
First, set the VAULT_ADDR environment variable:
export VAULT_ADDR=http://localhost:8200
Next, set the VAULT_TOKEN environment variable.
In case you’re wondering where the heck that value comes from, let’s recall the output we got when the Vagrant VM startup sequence was completed:
Just copy that Root Token value, and set your VAULT_TOKEN like this:
export VAULT_TOKEN="<initial_root_token>"
But what if you cleared your terminal after the startup sequence? (I do that rather obsessively, so I’m definitely in that category.) Then what? Never fear! You can still get that token value. It is located in /etc/vault/init.file on the guest machine.
Log in to your guest machine:
vagrant ssh
From the guest machine, get the root token value:
cat /etc/vault/init.file | grep Root | rev | cut -d' ' -f1 | rev > /vagrant/hashicorp/token.txt
The above snippet saves the token to token.txt (which is .gitignored). The file is accessible from both the guest machine (as /vagrant/hashicorp/token.txt) and the host machine (as hashicorp/token.txt, relative to the repo root).
Now switch over to the host machine. Run the following:
export VAULT_TOKEN=$(cat hashicorp/token.txt) && \
rm hashicorp/token.txt
Notice how we deleted hashicorp/token.txt…just to be safe. 😉
Note: In real life, you would never use the root token to set VAULT_TOKEN. But we’re on our own dev environment, so it’s not the end of the world.
NOW, we can add our secret like this:
vault kv put kv/2048-game/stuff greeting="Hello, I'm a secret!"
Result:
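(A sketch; your created_time will differ.)

Key                Value
---                -----
created_time       2022-01-01T00:00:00.000000Z
deletion_time      n/a
destroyed          false
version            1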
Let’s take a look at it in Vault! Go to http://localhost:8200 on your host machine. You’ll get this lovely screen:
Use that root token to log in — the one from the VAULT_TOKEN environment variable.
Note: In real life, you would never use the root token to log in to Vault. But we’re on our own dev environment, so it’s not the end of the world.
Once logged in, we can see this:
The key/value store (the kv folder) was created automagically for you during the bootstrapping process, when we enabled the secrets engine.
Now, click on kv. You’ll see something like this:
And then click on 2048-game followed by stuff to see the greeting secret we just created:
Or, if you prefer the command line:
vault kv get kv/2048-game/stuff
Which gives us something like this:
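(Again a sketch; your metadata values will differ.)

====== Metadata ======
Key              Value
---              -----
created_time     2022-01-01T00:00:00.000000Z
deletion_time    n/a
destroyed        false
version          1

====== Data ======
Key         Value
---         -----
greeting    Hello, I'm a secret!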
4- Deploy the Sample app
Now let’s deploy our 2048-game job so that we can see those secrets. Here’s the jobspec:
There are a few key items to take special note of.
First, lines 8–10 bind the job to a Vault policy. This means that the job only has access to the secrets in Vault granted by the 2048-game policy.
vault {
  policies = ["2048-game"]
}
Recall that our policy tells Nomad that it has read, update, and create permissions on any secrets created under the path kv/2048-game in Vault.
Second, lines 51–58 specify a template stanza in our Nomad jobspec, in which we pull our secret from Vault and save it to local/2048-game.txt in the container instance. The template stanza provides us with a way to pull configs from the likes of environment variables, Consul data, and Vault data. It’s kind of like when you configure volume mounts in your Kubernetes Deployments to access Secrets and ConfigMaps.
template {
  data = <<EOF
my secret: "{{ with secret "kv/data/2048-game/stuff" }}{{ .Data.data.greeting }}{{ end }}"
EOF
  destination = "local/2048-game.txt"
}
When we reference the secret, note that we say that it’s located at kv/data/2048-game/stuff. Again, using data in the path tells us that we’re using version 2 of the Secrets Engine. When we pull the value from the greeting field, note that we need to prefix it with .Data.data. Again, this is a Secrets Engine v2 thing.
The local/2048-game.txt file should contain the following once we deploy the jobspec:
my secret: "Hello, I'm a secret!"
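As an aside: if your app would rather consume the secret as an environment variable than read a file, the template stanza can do that too. Here’s a sketch (not part of today’s jobspec; the GREETING variable name and secrets/env.txt destination are just illustrative). Setting env = true tells Nomad to load the rendered key=value pairs into the task’s environment:

template {
  data = <<EOF
GREETING={{ with secret "kv/data/2048-game/stuff" }}{{ .Data.data.greeting }}{{ end }}
EOF
  destination = "secrets/env.txt"
  env         = true
}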
Okay…we’re ready to deploy! Open up a terminal window on your host machine, and run this:
nomad job run hashicorp/nomad/jobs/2048-game
You’ll see something like this:
Woo hoo! It’s deployed!
5- Test the secret
It’s all well and good that we deployed this app, but how do we know that it worked? Good question! Let’s find out!
Since all we did was store the secret to the container instance’s filesystem, we can’t really verify anything from pulling up the app (i.e. at http://2048-game.localhost). The only way to test this is to take a peek into the container instance.
Upon deployment, Nomad attempts to allocate (schedule) your job, and it assigns it an allocation ID. So to be able to peek into our container, we first need to get our job’s allocation ID:
export ALLOCATION_ID=$(nomad job allocs -json 2048-game | jq -r '.[0].ID')
Now we can peek into the container instance:
nomad alloc exec -i -t -task 2048 $ALLOCATION_ID /bin/sh
You’ll see this:
At the # prompt, type:
cat local/2048-game.txt
Remember that local/2048-game.txt is where we saved our secret to in the container instance, per line 57 of our jobspec. You should see the following output:
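my secret: "Hello, I'm a secret!"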
Ta-da! Congratulations! You have successfully configured Nomad/Vault integration and pulled a secret from Vault in your Nomad job!
Conclusion
Whew! We made it! Nomad/Vault integration wasn’t so bad, was it? Here’s a recap of what we learned about configuring Nomad/Vault integration:
- We created a policy to allow Nomad and Vault to talk to each other (defined in nomad-server-policy.hcl)
- We created a policy specific to our 2048 game (defined in 2048-policy.hcl)
- We created a Nomad token role to tell Vault what Nomad jobs can and can’t access
- We created a special token for Nomad to be able to talk to Vault
- We enabled the Secrets Engine (v2)
In our sample app:
- We limited where it could access its secrets from by binding the jobspec to a policy (defined in 2048-policy.hcl)
- We used the template stanza to pull our secret and save it to a text file in the container instance.
And now, I shall reward you with a picture of a yak against a lovely mountainous backdrop:
Peace, love, and code.