This blogpost is a follow-up to my previous post about setting up a cluster. If you haven't read the previous ones, I strongly suggest reading them first:

In this series of blogposts, I will explain how I configured my home servers as a Nomad cluster, with Consul as a DNS resolver for the cluster nodes and services.

As an example, I will show how to run Radicale, a FOSS CalDAV/CardDAV server, on a Nomad cluster. I use Radicale to sync my calendars and contacts across my devices.

A fully functional Nomad job looks like this:

job "radicale" {
  datacenters = ["<DATACENTER>"]
  type = "service"

  group "caldav" {
    count = 1

    task "radicale" {
      driver = "docker"

      config {
        image = "<DOCKER IMAGE>"
        port_map {
          http = 5232
        }
        volumes = [
          "local:/config"
        ]
      }

      resources {
        cpu = 300
        memory = 128
        network {
          port "http" {}
        }
      }

      env {
        "GIT_EMAIL" = "<GIT EMAIL>"
        "GIT_REPOSITORY" = "<GIT REPO>"
      }

      template {
        data = <<EOF
[server]
hosts = <IP>:<PORT>

[auth]
type = htpasswd
htpasswd_filename = /config/users
htpasswd_encryption = bcrypt
delay = 10

[storage]
filesystem_folder = /data/collections
hook = git add -A && (git diff --cached --quiet || git commit -m "Changes by "%(user)s) && git push origin master

[web]
type = internal
EOF

        destination = "local/config"
      }

      template {
        data = <<EOF
EOF
        destination = "local/users"
      }

      service {
        port = "http"

        tags = [
          "traefik.enable=true",
          "traefik.http.routers.radicale.rule=Host(<DOMAIN NAME>)",
        ]

        check {
          type = "http"
          path = "/"
          interval = "15s"
          timeout = "5s"
        }
      }
    }
  }
}
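The second template above renders the htpasswd file that the `[auth]` section points at; its contents are left out here. Assuming the `htpasswd` tool from Apache's apache2-utils package is available, a bcrypt entry can be generated like this (the username `alice` is just an example):

```shell
# -B selects bcrypt hashing, -c creates a new file (drop -c to append);
# htpasswd prompts for the password interactively.
htpasswd -B -c users alice
```

The resulting line can then be pasted into the second template's heredoc.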

Let’s go over every part, one-by-one.

### job

job "<NAME>" {
  datacenters = ["<DATACENTER>"]
  type = "service"
  ...
}

The job stanza is the top level of a job specification: it defines the scheduler type, the datacenters the job is allowed to run in, and so on. The documentation is available at: https://www.nomadproject.io/docs/job-specification/job

You can also define a priority for the job, which the scheduler uses to decide which jobs to place first.
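A job's priority is an integer between 1 and 100, with a default of 50; higher values are scheduled first. As a sketch (the value 70 is only an illustration):

```
job "radicale" {
  datacenters = ["<DATACENTER>"]
  type = "service"

  # 1-100, default 50; higher-priority jobs are scheduled first.
  priority = 70
  ...
}
```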

### group

group "<NAME>" {
  ...
}

The group stanza defines which group a task belongs to. All tasks in the same group are co-located on the same Nomad node. This allows you to configure how many instances have to run and to specify the network requirements. The documentation is available at: https://www.nomadproject.io/docs/job-specification/group
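For instance, the number of instances is controlled with count; a sketch using the group name from the Radicale example above:

```
group "caldav" {
  # Nomad schedules two allocations of this group across the cluster.
  count = 2
  ...
}
```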

### task

task "<NAME>" {
  driver = "docker"  # raw_exec, exec, java, etc.

  config {
    image = "<DOCKER IMAGE>"
    port_map {
      http = <PORT>  # Specify the service port from the Docker container
    }
    volumes = [  # Mount configuration into the Docker container
      "local:/config"
    ]
  }

  resources {  # Reserve resources on the cluster
    cpu = <CPU MHZ TO RESERVE>
    memory = <RAM MB TO RESERVE>
    network {  # Network ports to assign, 'http' is linked to <PORT> from above
      port "http" {}
    }
  }

  env {  # Environment variables
    "VARIABLE" = "VALUE"
  }

  template {
    data = <<EOF
<CONFIGURATION FILE TO INSERT>
EOF
    destination = "local/config"
  }
}

The task stanza defines a single task for a group. A task can be a Docker container, web application, etc. with a provided environment and mounted data volumes. The documentation is available at: https://www.nomadproject.io/docs/job-specification/task

### service

service {
  port = "http"

  tags = [
    "traefik.enable=true",
    "traefik.http.routers.radicale.rule=Host(caldav.dylanvanassche.be)",
  ]

  check {
    type = "http"
    path = "/"
    interval = "15s"
    timeout = "5s"
  }
}

The service stanza defines how Nomad registers the job with Consul. The tags section can be used to configure Traefik, for example, which we will do later on. The check section is used by Nomad to verify that the service is alive; if it is not, Nomad tries to restart it. The documentation is available at: https://www.nomadproject.io/docs/job-specification/service
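Since the service is registered with Consul, it also becomes resolvable through Consul's DNS interface. A quick way to verify the registration, assuming a local Consul agent on its default DNS port 8600 and a service named `radicale`:

```shell
# Ask Consul DNS for the service's address and port.
dig @127.0.0.1 -p 8600 radicale.service.consul SRV
```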

## Running the job

Nomad makes it really easy to run a job on a cluster. First, connect to one of the cluster nodes.

Run the following command as the nomad user:
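Assuming the job specification was saved as `radicale.nomad`, submitting it looks like this:

```shell
# Preview where Nomad would place the job (optional, but recommended).
nomad job plan radicale.nomad

# Submit the job to the cluster.
nomad job run radicale.nomad
```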