Gitlab Setup
I assume you’re already running GitLab somewhere. If you’re using GitLab as SaaS, your Atlantis endpoint must be reachable from the internet. Please bear in mind that the status GUI of Atlantis does not provide authentication by default; if you don’t protect that route, you might expose sensitive information about your repos to the world. (That was the case while writing this article, it might have changed by now.)
Create Access-Token Using a Service-Account / Bot
In the project or group settings of your GitLab repo, you need to create a new access token. You might need admin access to do this. Remember the name of the token, you will need it in a minute, and save the token itself in your password manager, we will need it to configure Atlantis.
After you have created the token, check the members settings of your repo, which lists everyone who has access. Enter the name of the token you created in the search bar and you will find a bot user with a name similar to your token. Save that username next to your token, we’ll need it for Atlantis.
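If you want to script this step, project access tokens can also be created via the GitLab API. A minimal sketch, assuming a hypothetical project ID and expiry date; the `api` scope is what Atlantis needs to comment on MRs and set commit statuses:

```bash
# Hypothetical project ID, expiry, and admin token; adjust to your setup.
curl --request POST \
  --header "PRIVATE-TOKEN: <token-of-a-maintainer-or-owner>" \
  --data "name=atlantis" \
  --data "scopes[]=api" \
  --data "expires_at=2026-01-01" \
  "https://gitlab.com/api/v4/projects/12345/access_tokens"
```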
Create Webhook Secrets
In your GitLab project, visit the webhook settings and create a new webhook.
You need to generate the secret yourself; it’s a shared secret between GitLab and Atlantis. It is configured per Atlantis node, so the secret needs to be the same for all repos served by that node.
You can generate a secret with this command, for example:

```bash
openssl rand -base64 63
```
The URL for Atlantis depends on how you plan to reach the node; you can use DNS or the IP, it doesn’t matter. Whatever your URL looks like, add `/events` to the path, so the webhook URL looks similar to `https://$URL/events`.
Check the boxes
- Push events
- Comments
- Merge Request events
Done.
[Official Atlantis Docs](https://www.runatlantis.io/docs/configuring-webhooks.html)
We need this so that Atlantis gets notified about newly pushed changes and can retrieve the code.
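If you prefer to script the webhook as well, here is a minimal sketch using the GitLab API, reusing the example hostname and webhook secret that show up again in the Atlantis config below (project ID and token are placeholders):

```bash
# Hypothetical project ID; reuse the same webhook secret you will put into atlantis.env.
GITLAB_PROJECT_ID=12345
WEBHOOK_SECRET="my-super-secret-webhook-secret"

curl --request POST \
  --header "PRIVATE-TOKEN: <token-of-a-maintainer-or-owner>" \
  --data-urlencode "url=https://atlantis.acme.com/events" \
  --data-urlencode "token=${WEBHOOK_SECRET}" \
  --data "push_events=true" \
  --data "merge_requests_events=true" \
  --data "note_events=true" \
  "https://gitlab.com/api/v4/projects/${GITLAB_PROJECT_ID}/hooks"
```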
Atlantis Setup
First of all: Atlantis is still in an early development phase at the time of writing (version 0.28.5), and as it is open source: support the team behind Atlantis if you’re enjoying it!
We deployed Atlantis in the same subnet as our GitLab server (yes, we’re hosting GitLab ourselves, it’s not that hard), which is also the subnet where we run our Vault cluster. Neither Vault nor Atlantis needs to be reachable via the internet in that case; we don’t expose nodes to the internet if it’s not necessary. (Obviously that doesn’t mean the risk factors are significantly lower, you still need to have your security measures in place.)
We are using Docker to deploy Atlantis, as k8s is overkill (almost always ;) ).
Docker Compose ftw
We created a Docker Compose file for easier deployment:
```yaml
services:
  atlantis:
    image: ghcr.io/runatlantis/atlantis
    restart: unless-stopped
    ports:
      - 4141:4141
    volumes:
      - ./data:/home/atlantis
    env_file:
      - path: ./atlantis.env
        required: true
    healthcheck:
      test: curl --fail http://localhost:4141/healthz || exit 1
      interval: 60s
      retries: 5
      start_period: 20s
      timeout: 10s
```
The deployment directory then looks like this:

```
├── atlantis.env
└── docker-compose.yaml
```
We tried to move as much config as possible into the environment file, as this will make a future deployment to AWS ECS with Secrets Manager easier. Our `atlantis.env` looks like this:
```ini
ATLANTIS_ENABLE_DIFF_MARKDOWN_FORMAT=true
ATLANTIS_ATLANTIS_URL=https://atlantis.acme.com
ATLANTIS_GITLAB_WEBHOOK_SECRET="my-super-secret-webhook-secret"
ATLANTIS_GITLAB_USER="@group_5_bot_8767867"
ATLANTIS_GITLAB_TOKEN="the-pat-for-the-gitlab-bot"
ATLANTIS_GITLAB_HOSTNAME="gitlab.com"
ATLANTIS_REPO_ALLOWLIST="gitlab.com/foo/*"
ATLANTIS_SLACK_TOKEN="some-slack-token"
ATLANTIS_WEB_BASIC_AUTH=true
ATLANTIS_WEB_USERNAME=basic-auth-user
ATLANTIS_WEB_PASSWORD=basic-auth-password
ATLANTIS_CONFIG=/home/atlantis/config.yaml
ATLANTIS_REPO_CONFIG=/home/atlantis/repos.yaml
ATLANTIS_EMOJI_REACTION=thumbsup
ATLANTIS_HIDE_PREV_PLAN_COMMENTS=true
ATLANTIS_HIDE_UNCHANGED_PLAN_COMMENTS=true
```
As you can see, we also wanted the Slack integration. For that, follow the official documentation.
More Config
You can also move config into Atlantis via the dedicated files referenced in the environment. In our case it looks like this:
```
├── config.yaml
└── repos.yaml
```
We wanted a PR/MR to be approved before any Atlantis actions happen, so we use this config:
```yaml
# repos.yaml
repos:
  - id: /.*/
    # The default will be approved.
    plan_requirements: [approved]
    apply_requirements: [approved]
    import_requirements: [approved]
    # But all repos can set their own using atlantis.yaml
    allowed_overrides:
      [plan_requirements, apply_requirements, import_requirements]
```
The Slack config needs to go in `config.yaml`.
```yaml
# config.yaml
webhooks:
  - event: apply
    workspace-regex: .*
    branch-regex: .*
    kind: slack
    channel: C07A4C9UPAL
```
```bash
docker compose up
```

and off we go.
Nginx as Reverse Proxy
We’re setting up nginx for proper SSL termination in front of Atlantis. I trust you know how to configure nginx; the only thing you need in your root location is

```nginx
proxy_pass http://localhost:4141/;
```
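For completeness, a minimal server block sketch, assuming the hostname `atlantis.acme.com` from the env file and certificates that already exist on the node (the certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name atlantis.acme.com;

    # Placeholder certificate paths; point these at wherever your certs live.
    ssl_certificate     /etc/letsencrypt/live/atlantis.acme.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/atlantis.acme.com/privkey.pem;

    location / {
        proxy_pass http://localhost:4141/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```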
Next up: Vault
Vault Setup
I assume you already have a working HashiCorp Vault cluster (this probably also works with OpenBao).
All secrets we’re using in Terraform (for databases, access keys, API keys used by our workloads, etc.) are stored in Vault and either moved to AWS Secrets Manager or just used to set up the initial infrastructure. Hence, for `terraform plan`/`apply` we need a valid token to retrieve these secrets from Vault.
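To illustrate what consuming such a secret looks like in Terraform, here is a sketch reading from a KV v2 mount; the mount and secret names are made up:

```hcl
# Hypothetical mount "secret" and secret path "db/prod"; adapt to your layout.
data "vault_kv_secret_v2" "db" {
  mount = "secret"
  name  = "db/prod"
}

# Example use: expose the value to other modules (marked sensitive).
output "db_password" {
  value     = data.vault_kv_secret_v2.db.data["password"]
  sensitive = true
}
```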
There are multiple ways to obtain such a token. Here is an example using a username & password auth method, which generates a token with a long lifetime.
```hcl
resource "vault_auth_backend" "user_pass" {
  type        = "userpass"
  path        = "userpass"
  description = "Username and Password Authentication"

  tune {
    max_lease_ttl      = "43800h"
    default_lease_ttl  = "43800h"
    listing_visibility = "unauth"
    token_type         = "service"
  }
}

data "vault_policy_document" "user_pass" {
  rule {
    path = "auth/token/create"
    capabilities = [
      "create",
      "update",
    ]
  }

  rule {
    path = "auth/token/lookup-self"
    capabilities = [
      "create",
      "update",
      "read",
      "list",
    ]
  }

  rule {
    path = "event*"
    capabilities = [
      "read",
      "list",
    ]
  }

  rule {
    path = "module*"
    capabilities = [
      "read",
      "list",
    ]
  }

  rule {
    path = "aws*"
    capabilities = [
      "read",
      "list",
    ]
  }
}

resource "vault_policy" "user_pass" {
  name   = "user-password-default"
  policy = data.vault_policy_document.user_pass.hcl
}

resource "random_password" "atlantis_password" {
  length           = 16
  special          = true
  override_special = "!#_"
}

resource "vault_generic_endpoint" "create_user_atlantis" {
  depends_on = [vault_auth_backend.user_pass]
  path       = "auth/userpass/users/atlantis"

  data_json = jsonencode({
    password = random_password.atlantis_password.result
    policies = [vault_policy.user_pass.name, "default"]
  })
}

resource "vault_mount" "vault_userbase" {
  path        = "vault-userbase"
  type        = "kv-v2"
  description = "Vault Userbase Storage"
}

resource "vault_generic_secret" "atlantis_user" {
  path = "${vault_mount.vault_userbase.path}/atlantis"

  data_json = <<EOT
{
  "user": "atlantis",
  "password": "${random_password.atlantis_password.result}"
}
EOT
}
```
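With that user in place, the Vault provider on the Atlantis node can log in via userpass instead of using a static token. A minimal sketch, assuming Vault is reachable at `vault.acme.com` and the password is passed in as a variable:

```hcl
variable "vault_atlantis_password" {
  type      = string
  sensitive = true
}

provider "vault" {
  # Placeholder address; point this at your own Vault cluster.
  address = "https://vault.acme.com:8200"

  # Log in with the "atlantis" userpass account created above; the resulting
  # token is then used for all Vault reads during plan/apply.
  auth_login {
    path = "auth/userpass/login/atlantis"

    parameters = {
      password = var.vault_atlantis_password
    }
  }
}
```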
Terraform Provider Setup & Credentials
Depending on the service you want to provision with Terraform, you need to make sure the authentication configuration in Terraform matches the one on the Atlantis node, because the credentials/authentication for your target service (AWS in my case) are handled by the Atlantis node.
In the AWS example, you can pass along a profile, which needs to be present on the Atlantis node, e.g.:
```hcl
provider "aws" {
  profile = "assumed-role-atlantis-dev"
  region  = "eu-central-1"
}
```
For AWS you have multiple options to configure the Atlantis node. I would recommend the assume-role variant using EC2 instance profiles (if you’re running on EC2); for Fargate, you can attach the policy directly to the task role. You could also create an `.aws/credentials` file in the Atlantis container, but this is not considered good practice.
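To tie this together: the profile referenced in the provider has to exist on the Atlantis node. A sketch of what `~/.aws/config` could look like when the node runs on EC2 with an instance profile (account ID and role name are made up):

```ini
# ~/.aws/config inside the Atlantis container's home directory.
# The instance profile credentials are used to assume the actual deployment role.
[profile assumed-role-atlantis-dev]
role_arn          = arn:aws:iam::123456789012:role/atlantis-dev
credential_source = Ec2InstanceMetadata
region            = eu-central-1
```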
Gitlab Pipeline
There is no need to configure anything here, as Atlantis takes over. Instead of creating a `.gitlab-ci.yml`, you create an `atlantis.yaml` file, which tells Atlantis what to run, when, and where. You can explicitly tell Atlantis which directory holds your Terraform code and disable autodiscovery:
```yaml
version: 3
automerge: true
autodiscover:
  mode: disabled
delete_source_branch_on_merge: true
abort_on_execution_order_fail: true
projects:
  - dir: ./demo/mycloud
    execution_order_group: 1
    workspace: default
    name: base-stuff
    autoplan:
      enabled: true
```
Renovate
Use Renovate to keep modules/dependencies up to date! You still have your `.gitlab-ci.yml` ;)
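If Renovate is not set up yet, a minimal `renovate.json` sketch that relies on Renovate’s built-in Terraform support (the preset name is the standard recommended one, adjust to taste):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}
```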