feat: revamp the entire documentation with some baseline
![Consul Badge](https://img.shields.io/badge/Consul-F24C53?logo=consul&logoColor=fff&style=for-the-badge)
![Vault Badge](https://img.shields.io/badge/Vault-FFEC6E?logo=vault&logoColor=000&style=for-the-badge)

**Hashistack-Ansible** provides a production-ready, repeatable, and manageable solution for deploying HashiCorp clusters, including [Nomad](https://www.hashicorp.com/products/nomad), [Consul](https://www.hashicorp.com/products/consul), and [Vault](https://www.hashicorp.com/products/vault). This project is designed to automate the deployment and maintenance of these services, ensuring a seamless and efficient cluster setup.

> [!NOTE]
> This documentation is continuously updated and may not always reflect the exact state of a specific release.

## Index

1. [Introduction](01-introduction)
2. [Quick-Start Guide](02-quick-start)
3. [General Information](10-general-informations)
4. [Architecture Guide](11-architecture-guide)
5. Service Configuration
   - [Globals](20-globals)
   - [Consul](21-consul-configuration)
   - [Nomad](22-nomad-configuration)
   - [Vault](23-vault-configuration)
   - [Extras](24-extra-configuration)
6. [TLS](31-tls-guide)

# Hashistack-Ansible

## 🛠️ General Information

### 📋 Inventory

#### 🗂️ Inventory File

Hashistack-Ansible relies on an Ansible inventory file to understand which host handles which role. You can find a sample inventory file under `playbooks/inventory/multinode` within the collection directory.

#### 🌐 Ingress

While Hashistack-Ansible is built for flexibility, it takes a specific stance on ingress:

- **No central ingress services** are deployed unless you also deploy a Nomad cluster.
- **Dedicated ingress nodes** can be costly on cloud platforms and might be underutilized unless they can handle other workloads.
- When needed, ingress nodes are deployed as a Nomad job, enabling resource reuse or isolation via [Nomad node pools](https://developer.hashicorp.com/nomad/docs/concepts/node-pools).

### 📁 Main Configuration Directory

Hashistack-Ansible uses a configuration directory to store all necessary files and artifacts. The directory is defined by the `hashistack_configuration_directory` variable. By default, it’s set to `{{ lookup('env', 'PWD') }}/etc/hashistack`, which equals `$(pwd)/etc/hashistack`.

In this directory, place your `globals.yml` file with your configurations. You can find a template for this file at `playbooks/inventory/group_vars/all/globals.yml` in the collection's directory. This file starts simple, but it’s highly extensible. Additional service configurations can be found in `consul.yml`, `vault.yml`, `nomad.yml`, `nomad_ingress.yml`, `cni.yml`, and `all.yml` within the same directory.

> **Pro Tip:** ✨ Instead of editing these service configuration files directly, extend them by adding your own values to your `globals.yml` file.
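For example, a service default such as `consul_log_level` could be overridden by appending it to your `globals.yml` (the value here is illustrative):

```yaml
# etc/hashistack/globals.yml
# Override a service default here instead of editing consul.yml directly
# (value is illustrative)
consul_log_level: debug
```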
### 📂 Sub-Configuration Directories

You can fine-tune configurations by using sub-directories for group-specific or host-specific settings.

#### 👥 Group Configuration Directories

Each group in the inventory has its own sub-directory within the main configuration directory:

- The `nomad_servers` group will look in `{{ hashistack_configuration_directory }}/nomad_servers`
- The `vault_servers` group will look in `{{ hashistack_configuration_directory }}/vault_servers`
- The `consul_servers` group will look in `{{ hashistack_configuration_directory }}/consul_servers`

Inside each of these directories, you can place a `globals.yml` file that will override settings in the main `globals.yml`.
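As a sketch, the layout described above could be created like this (a plain-Python illustration; which group directories you actually populate depends on your inventory):

```python
from pathlib import Path

# Create the configuration tree described above, relative to the
# current directory ($PWD, matching the documented default)
root = Path("etc/hashistack")
for group in ["nomad_servers", "vault_servers", "consul_servers"]:
    (root / group).mkdir(parents=True, exist_ok=True)

(root / "globals.yml").touch()                    # deployment-wide settings
(root / "nomad_servers" / "globals.yml").touch()  # group-level overrides
print(sorted(str(p) for p in root.rglob("globals.yml")))
```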
**Example:**

If `etc/hashistack/globals.yml` looks like this:
```yaml
---
enable_vault: "no"
enable_nomad: "no"
```

And `etc/hashistack/nomad_servers/globals.yml` looks like this:
```yaml
---
enable_nomad: "yes"
```

Then the `nomad_servers` group will have this configuration:
```yaml
---
enable_vault: "no"
enable_nomad: "yes"
```

This approach lets you customize your deployment for your exact needs.
#### Host Configuration Directories 🖥️

For even more granularity, each group configuration directory can contain sub-directories for individual hosts, named after the hosts in your inventory. These host directories can include a `globals.yml` file to override both group and global settings.

**Example:**

If `etc/hashistack/globals.yml` looks like this:
```yaml
---
enable_vault: "no"
api_interface: "eth0"
```

And `etc/hashistack/nomad_servers/globals.yml` looks like this:
```yaml
---
enable_nomad: "yes"
api_interface: "eth1"
```

And `etc/hashistack/nomad_servers/nomad-master-01/globals.yml` looks like this:
```yaml
api_interface: "eth0.vlan40"
```

Then all servers in the `nomad_servers` group will have this configuration:
```yaml
---
enable_vault: "no"
enable_nomad: "yes"
api_interface: "eth1"
```

Except for `nomad-master-01`, which will have this configuration:
```yaml
---
enable_vault: "no"
api_interface: "eth0.vlan40"
```
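The layering above can be read as an ordered dictionary merge, where later layers win (a plain-Python sketch, not the actual Ansible implementation; values are taken from the example):

```python
# Precedence: deployment-wide < group < host
deployment = {"enable_vault": "no", "enable_nomad": "no", "api_interface": "eth0"}
group = {"enable_nomad": "yes", "api_interface": "eth1"}  # nomad_servers/globals.yml
host = {"api_interface": "eth0.vlan40"}                   # nomad_servers/nomad-master-01/globals.yml

def effective_config(*layers):
    """Merge layers in increasing order of precedence."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

print(effective_config(deployment, group))        # other nomad_servers hosts
print(effective_config(deployment, group, host))  # nomad-master-01
```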
This flexible approach lets you tailor the deployment to your exact needs, ensuring everything works just the way you want! 🎯

# 🏗️ Architecture Guide

Hashistack-Ansible offers flexibility in deploying various environments, whether for development, testing, or production. This guide will help you understand the different architectures you can deploy with Hashistack-Ansible.

## 🧪 Dev/Testing Deployment

If you're setting up a test environment, you can deploy each service on a single host. Here’s an example of a minimal inventory file:

```ini
[haproxy_servers]
test-server

[vault_servers]
test-server

[consul_agents]
```

In this setup, each service runs on a single host with no clustering and no redundancy. **This configuration is ONLY recommended for testing** as it provides no resiliency and will fail if any component goes down.

### 📝 Requirements

The only requirement for this setup is that the target host must have a network interface accessible via SSH from the deployment host.

### 🌱 Dev/Testing Architecture

The architecture for this test setup looks like this:
```mermaid
graph LR;
    client[Client] -->|http/s| server
    subgraph server[Dev/Test Server]
    direction LR
    vault[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/vault_500x500.png' width='40' height='40' /><span>] <--> consul
    nomad[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/nomad_500x500.png' width='40' height='40' /><span>] <--> consul
    consul[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_500x500.png' width='40' height='40' /><span>]
    end
```
## 🚀 Production Deployment

For production environments, it’s crucial to separate concerns and deploy services on different nodes. This ensures high availability and fault tolerance.

### 🛡️ Recommended Setup

- **Consul Servers:** An odd number (3 to 5) of nodes.
- **Vault Servers:** An odd number (3 to 5) of nodes.
- **Nomad Servers:** An odd number (3 to 5) of nodes.
- **HAProxy Servers:** Multiple nodes (2 to 3) for load balancing.

A production-ready inventory file might look like this:
```ini
[nomad_ingress]
nomadingress1
nomadingress2

[vault_servers]
vaultnode1
vaultnode2

...

nomadclient2
nomadclient3

[consul_agents]
...
```
### 🌐 Networking Considerations

- **Two network interfaces** should be available on the **Nomad**, **Vault**, and **Consul** servers, with one being reachable from the HAProxy nodes.

### 🖥️ Production Architecture Diagram

Here’s what the architecture for a production setup might look like:
```mermaid
graph TD
    subgraph consulservers[Consul Servers]
    direction LR
    consul1[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_500x500.png' width='40' height='40' /><span>] <--> consul2 & consul3 & consul4 & consul5
    consul2[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_500x500.png' width='40' height='40' /><span>] <--> consul3 & consul4 & consul5
    consul3[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_500x500.png' width='40' height='40' /><span>] <--> consul4 & consul5
    consul4[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_500x500.png' width='40' height='40' /><span>] <--> consul5
    consul5[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_500x500.png' width='40' height='40' /><span>]
    end

    subgraph vaultservers[Vault Servers]
    direction LR
    subgraph vaultnode1[ ]
    direction TB
    vault1[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/vault_500x500.png' width='40' height='40' /><span>] <--> consulvaultagent1
    consulvaultagent1([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    subgraph vaultnode2[ ]
    direction TB
    vault2[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/vault_500x500.png' width='40' height='40' /><span>] <--> consulvaultagent2
    consulvaultagent2([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    subgraph vaultnode3[ ]
    direction TB
    vault3[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/vault_500x500.png' width='40' height='40' /><span>] <--> consulvaultagent3
    consulvaultagent3([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    vaultnode1 <--> vaultnode2
    vaultnode2 <--> vaultnode3
    vaultnode3 <--> vaultnode1
    end

    vaultservers -->|Service registration| consulservers

    subgraph nomadservers[Nomad Servers]
    direction LR
    subgraph nomadservernode1[ ]
    direction TB
    nomadserver1[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/nomad_500x500.png' width='40' height='40' /><span>] <--> consulnomadserveragent1
    consulnomadserveragent1([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    subgraph nomadservernode2[ ]
    direction TB
    nomadserver2[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/nomad_500x500.png' width='40' height='40' /><span>] <--> consulnomadserveragent2
    consulnomadserveragent2([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    subgraph nomadservernode3[ ]
    direction TB
    nomadserver3[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/nomad_500x500.png' width='40' height='40' /><span>] <--> consulnomadserveragent3
    consulnomadserveragent3([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    nomadserver1 <--> nomadserver2
    nomadserver2 <--> nomadserver3
    nomadserver3 <--> nomadserver1
    end

    nomadservers -->|Service registration| consulservers

    client[Client] -->|https :443| keepalived
    keepalived[VIP] --> haproxy1[HAProxy] & haproxy2[HAProxy]
    subgraph nomadclients[Nomad Clients]
    direction LR
    subgraph nomadclientnode1[ ]
    direction LR
    nomadclient1[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/nomad_white_500x500.png' width='40' height='40' /><span>] <--> consulnomadclientagent1
    consulnomadclientagent1([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    subgraph nomadclientnode2[ ]
    direction LR
    nomadclient2[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/nomad_white_500x500.png' width='40' height='40' /><span>] <--> consulnomadclientagent2
    consulnomadclientagent2([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    subgraph nomadclientnode3[ ]
    direction LR
    nomadclient3[<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/nomad_white_500x500.png' width='40' height='40' /><span>] <--> consulnomadclientagent3
    consulnomadclientagent3([<span style='min-width: 40px; display: block;'><img src='assets/hashicorp/consul_white_500x500.png' width='40' height='40' /><span>])
    end
    end

    nomadclients -->|Service registration| consulservers
    nomadclients <--> nomadservers
```

> [!NOTE]
> You can omit the HAProxy part if you are using an external load-balancing solution, such as AWS ALB or any other LB technology, for connecting to your platform.

# 20-globals.md
### 🌐 Global Options

This section defines the overarching settings for the deployment:

```yaml
enable_ingress: "yes"
enable_vault: "yes"
enable_consul: "yes"
enable_nomad: "yes"

nomad_version: "1.8.1"
consul_version: "1.18.1"
vault_version: "1.16.2"

consul_fqdn: consul.ednz.lab
vault_fqdn: vault.ednz.lab
nomad_fqdn: nomad.ednz.lab
```

- **Service Enablement**: Flags to enable or disable Ingress, Vault, Consul, and Nomad.
- **Versions**: Defines the versions for Nomad, Consul, and Vault.
- **FQDNs**: Fully Qualified Domain Names for each service.

### 🔧 Network Configuration

This section handles network-related settings:

```yaml
api_interface: "eth0"
api_interface_address: "{{ ansible_facts[api_interface]['ipv4']['address'] }}"
```

- **`api_interface`**: The network interface used for the API.
- **`api_interface_address`**: Automatically derived IP address from the specified interface.
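The derivation of `api_interface_address` can be read as a plain dictionary lookup (a simplified stand-in for Ansible's gathered facts; the values below are illustrative):

```python
# Simplified stand-in for Ansible's gathered facts (illustrative values)
ansible_facts = {
    "eth0": {"ipv4": {"address": "192.168.10.5", "netmask": "255.255.255.0"}},
}

api_interface = "eth0"
# Equivalent of: {{ ansible_facts[api_interface]['ipv4']['address'] }}
api_interface_address = ansible_facts[api_interface]["ipv4"]["address"]
print(api_interface_address)
```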
### 📜 Logging Options

Configure logging across the stack:

```yaml
enable_log_to_file: true
```

- **`enable_log_to_file`**: Enable or disable logging to file for all services.

### 🔒 External TLS Options

Set up external TLS configurations:

```yaml
enable_tls_external: false
external_tls_externally_managed_certs: false
```

- **`enable_tls_external`**: Enable TLS for external communications.
- **`external_tls_externally_managed_certs`**: Determines if TLS certificates are managed externally.

### 🔐 Internal TLS Options

Configure internal TLS:

```yaml
enable_tls_internal: false
internal_tls_externally_managed_certs: false
```

- **`enable_tls_internal`**: Enable TLS for internal communications between services.
- **`internal_tls_externally_managed_certs`**: Determines if internal TLS certificates are managed externally.

### 🔹 Consul Configuration

Basic Consul settings:

```yaml
consul_domain: consul
consul_datacenter: dc1
consul_primary_datacenter: "{{ consul_datacenter }}"
consul_gossip_encryption_key: "{{ _credentials.consul.gossip_encryption_key }}"
consul_enable_script_checks: false

consul_extra_files_list: []
consul_extra_configuration: {}

consul_enable_tls: "{{ enable_tls_internal }}"

consul_log_level: info
```

- **`consul_domain`**: Domain for Consul services.
- **`consul_datacenter`**: Datacenter name for Consul.
- **`consul_gossip_encryption_key`**: Key for gossip encryption.
- **`consul_enable_tls`**: TLS setting for Consul.
- **`consul_log_level`**: Logging level for Consul.
### 🔐 Vault Configuration

Vault-specific settings:

```yaml
vault_cluster_name: vault
vault_bind_addr: "0.0.0.0"
vault_cluster_addr: "{{ api_interface_address }}"
vault_enable_ui: true
vault_disable_mlock: false
vault_disable_cache: false

vault_extra_files_list: []
vault_extra_configuration: {}

vault_enable_tls: "{{ enable_tls_internal }}"

vault_enable_service_registration: "{{ enable_consul | bool }}"

vault_enable_plugins: false

vault_log_level: info
```

- **`vault_cluster_name`**: Cluster name for Vault.
- **`vault_bind_addr`**: Address Vault listens on.
- **`vault_enable_ui`**: Enable or disable the Vault UI.
- **`vault_enable_service_registration`**: Register Vault with Consul if enabled.
- **`vault_enable_tls`**: TLS setting for Vault.
- **`vault_log_level`**: Logging level for Vault.

### 🗂️ Nomad Configuration

Nomad settings:

```yaml
nomad_region: global
nomad_datacenter: dc1

nomad_extra_files_list: []
nomad_extra_configuration: {}

nomad_autopilot_configuration: {}

nomad_driver_enable_docker: true
nomad_driver_enable_podman: false
nomad_driver_enable_raw_exec: false
nomad_driver_enable_java: false
nomad_driver_enable_qemu: false

nomad_driver_extra_configuration: {}

nomad_log_level: info

nomad_enable_tls: "{{ enable_tls_internal }}"
```

- **`nomad_region`**: Region for Nomad deployment.
- **`nomad_datacenter`**: Datacenter name for Nomad.
- **`nomad_driver_enable_*`**: Enable or disable various Nomad drivers.
- **`nomad_enable_tls`**: TLS setting for Nomad.
- **`nomad_log_level`**: Logging level for Nomad.

> [!NOTE]
> Currently, only the `docker` and `raw_exec` drivers are configured automatically; support for the other drivers will be added at a later date.
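For example, enabling the `raw_exec` driver alongside Docker would look like this in your `globals.yml`:

```yaml
nomad_driver_enable_docker: true
nomad_driver_enable_raw_exec: true
```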
### 🌟 Key Points

- **Defaults and Recommendations**: `globals.yml` provides default and recommended settings for a standard deployment.
- **Advanced Customizations**: For more granular or specific settings, refer to the advanced configuration files for each component (e.g., `consul.yml`, `vault.yml`, `nomad.yml`).

> [!WARNING]
> Even for advanced configuration, changes should still be applied to the `globals.yml` file, as it supersedes all other configuration files; changes made elsewhere might get ignored.

This configuration file sets up a solid foundation for your HashiCorp stack while allowing flexibility for advanced customizations and adjustments. Adjust values according to your environment's requirements and operational needs.

### Consul Configuration Documentation 🌍

This section outlines the customizable variables for deploying and managing a Consul cluster using the `hashistack-ansible` collection. Each configuration map should adhere to the structure documented in the official [Consul documentation](https://developer.hashicorp.com/consul/docs), as these maps will be merged into the final Consul configuration file.

---
### 🔧 Basics

To deploy a Consul cluster, start by enabling it:

```yaml
enable_consul: "yes"
```

Specify the version of Consul to install:

```yaml
consul_version: "1.18.1"
```

You can define a fully qualified domain name (FQDN) for Consul:

```yaml
consul_fqdn: consul.ednz.lab
```

---

### 🌍 General Settings

Configure general settings for your Consul deployment:

```yaml
consul_domain: consul
consul_datacenter: dc1
consul_primary_datacenter: "{{ consul_datacenter }}"
consul_gossip_encryption_key: "{{ _credentials.consul.gossip_encryption_key }}"
consul_enable_script_checks: false
```

These settings allow you to define the Consul domain, datacenter, and encryption key for secure communication.

---
### 🌐 TLS Configuration

Enable TLS for internal Consul communication:

```yaml
consul_enable_tls: "{{ enable_tls_internal }}"
```

Define TLS configuration parameters:

```yaml
consul_tls_configuration:
  defaults:
    ca_file: "/etc/ssl/certs/ca-certificates.crt"
    cert_file: "{{ consul_certs_dir }}/fullchain.crt"
    key_file: "{{ consul_certs_dir }}/cert.key"
    verify_incoming: false
    verify_outgoing: true
  internal_rpc:
    verify_server_hostname: true
```

---
### 📂 Directory Paths
|
||||
|
||||
Set the directory paths used by Consul for configuration, data, certificates, and logs:
|
||||
|
||||
```yaml
|
||||
consul_config_dir: "{{ hashistack_remote_config_dir }}/consul.d"
|
||||
consul_data_dir: "/opt/consul"
|
||||
consul_certs_dir: "{{ consul_config_dir }}/tls"
|
||||
consul_logs_dir: "{{ hashistack_remote_log_dir }}/consul"
|
||||
```
|
||||
|
||||
---

### 🔗 Join Configuration

Configure how Consul servers and agents join the cluster:

```yaml
consul_join_configuration:
  retry_join: |
    {{
      groups['consul_servers'] |
      map('extract', hostvars, ['consul_address_configuration', 'bind_addr']) |
      list |
      to_json |
      from_json
    }}
  retry_interval: 30s
  retry_max: 0
```

This setup helps ensure that Consul agents and servers can reliably join the cluster.

---
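
If the computed list does not fit your topology, the same variable can presumably be overridden with a static list of addresses. The values below are purely illustrative:

```yaml
consul_join_configuration:
  retry_join:            # hypothetical static server addresses
    - "192.0.2.11"
    - "192.0.2.12"
    - "192.0.2.13"
  retry_interval: 30s
  retry_max: 0
```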

### 🖥️ Server Configuration

Enable and configure Consul servers:

```yaml
consul_enable_server: "{{ 'consul_servers' in group_names }}"
consul_bootstrap_expect: "{{ (groups['consul_servers'] | length) }}"
```

---

### 🖥️ UI Configuration

Enable the Consul UI:

```yaml
consul_ui_configuration:
  enabled: "{{ consul_enable_server }}"
```

---

### 🛡️ ACL Configuration

ACLs are enabled by default in Consul. Customize ACL settings and token management:

```yaml
consul_acl_configuration:
  enabled: true
  default_policy: "deny"
  enable_token_persistence: true
  tokens:
    agent: "{{ _credentials.consul.tokens.agent.secret_id }}"
```

Define default agent policies to manage permissions:

```yaml
consul_default_agent_policy: |
  node_prefix "" {
    policy = "write"
  }
  service_prefix "" {
    policy = "read"
  }
```

---

### 🛠️ Extra Configuration

Add additional configuration files and environment variables:

```yaml
consul_extra_files_list: []
consul_extra_configuration: {}
consul_env_variables: {}
```

Use these settings to include any non-standard configurations or extra environment variables.

---
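
As an illustration, here is a hedged sketch of what these variables could look like when populated. The `src`/`dest` keys of the file entries and the chosen values are assumptions for illustration, not part of the collection's documented interface:

```yaml
consul_extra_files_list:
  - src: files/consul/agent-extra.hcl              # assumed entry layout
    dest: "{{ consul_config_dir }}/agent-extra.hcl"
consul_extra_configuration:
  limits:
    http_max_conns_per_client: 400                 # standard Consul option, merged into the final config
consul_env_variables:
  GOMAXPROCS: "4"
```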

### 🔗 Service Mesh Configuration

Enable and configure the service mesh:

```yaml
consul_mesh_configuration:
  enabled: true
```

This setting enables Consul's service mesh capabilities, allowing for secure and dynamic service-to-service communication.

---

### 🌐 DNS Configuration

Control DNS behavior within the Consul cluster:

```yaml
consul_dns_configuration:
  allow_stale: true
  enable_truncate: true
  only_passing: true
```

These options help optimize DNS responses and ensure that only healthy services are resolved.

---

### 📊 Telemetry Configuration

Enable Prometheus metrics and customize telemetry settings:

```yaml
consul_enable_prometheus_metrics: false
consul_prometheus_retention_time: 60s
consul_telemetry_configuration: {}
```

> **Note:** The `consul_telemetry_configuration` map should be structured according to the official Consul documentation, as it will be merged into the final configuration file.

---
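
For example, a sketch enabling Prometheus metrics with a small telemetry map — `disable_hostname` and `prefix_filter` are standard Consul telemetry options:

```yaml
consul_enable_prometheus_metrics: true
consul_prometheus_retention_time: 60s
consul_telemetry_configuration:
  disable_hostname: true                          # drop the hostname prefix from metric names
  prefix_filter:
    - "-consul.raft.replication.heartbeat"        # suppress a noisy metric family
```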

### 📝 Logging Configuration

Configure logging for Consul:

```yaml
consul_log_level: info
consul_enable_log_to_file: "{{ enable_log_to_file | bool }}"
consul_log_to_file_configuration:
  log_file: "{{ consul_logs_dir }}/consul.log"
  log_rotate_duration: 24h
  log_rotate_max_files: 30
```

This configuration manages the verbosity and output of Consul logs.

---

This documentation provides an overview of the key variables and settings for configuring a Consul cluster using `hashistack-ansible`. Remember to follow the official [Consul documentation](https://developer.hashicorp.com/consul/docs) for any specific configurations within each map to ensure proper integration into the final configuration file. Adjust the settings as needed to fit your environment and deployment requirements.

### Nomad Configuration Documentation 🌍

This section provides detailed documentation on the configurable variables for deploying and managing a Nomad cluster using the `hashistack-ansible` collection. Each configuration map should adhere to the structure documented in the official [Nomad documentation](https://developer.hashicorp.com/nomad/docs), since these maps will be merged into the final Nomad configuration file.

---

### 🔧 Basics

To deploy a Nomad cluster, enable it and specify the version:

```yaml
enable_nomad: "yes"
nomad_version: "1.8.1"
```

The version can be either `latest` or an exact `X.Y.Z`; for production deployments, pinning an exact `X.Y.Z` is recommended.

Define a fully qualified domain name (FQDN) for Nomad:

```yaml
nomad_fqdn: nomad.ednz.lab
```

---

### 🌍 General Settings

Specify the region and datacenter:

```yaml
nomad_region: global
nomad_datacenter: dc1
```

---

### 🌐 TLS Configuration

Enable TLS for internal Nomad communication:

```yaml
nomad_enable_tls: "{{ enable_tls_internal }}"
```

Define TLS settings:

```yaml
nomad_tls_configuration:
  http: true
  rpc: true
  ca_file: "/etc/ssl/certs/ca-certificates.crt"
  cert_file: "{{ nomad_certs_dir }}/fullchain.crt"
  key_file: "{{ nomad_certs_dir }}/cert.key"
  verify_server_hostname: true
```

---

### 📂 Directory Paths

Configure the paths used by Nomad for storing configuration, data, certificates, and logs:

```yaml
nomad_config_dir: "{{ hashistack_remote_config_dir }}/nomad.d"
nomad_data_dir: "/opt/nomad"
nomad_certs_dir: "{{ nomad_config_dir }}/tls"
nomad_logs_dir: "{{ hashistack_remote_log_dir }}/nomad"
```

---

### 🌍 Address Configuration

Set the addresses and ports for Nomad communication:

```yaml
nomad_bind_addr: "0.0.0.0"
nomad_advertise_addr: "{{ api_interface_address }}"
nomad_address_configuration:
  bind_addr: "{{ nomad_bind_addr }}"
  addresses:
    http: "{{ nomad_advertise_addr }}"
    rpc: "{{ nomad_advertise_addr }}"
    serf: "{{ nomad_advertise_addr }}"
  advertise:
    http: "{{ nomad_advertise_addr }}"
    rpc: "{{ nomad_advertise_addr }}"
    serf: "{{ nomad_advertise_addr }}"
  ports:
    http: 4646
    rpc: 4647
    serf: 4648
```

---

### 🖥️ Server Configuration

Enable and configure Nomad server nodes:

```yaml
nomad_enable_server: "{{ ('nomad_servers' in group_names) | bool }}"
nomad_server_bootstrap_expect: "{{ (groups['nomad_servers'] | length) }}"
nomad_server_configuration:
  enabled: "{{ nomad_enable_server }}"
  data_dir: "{{ nomad_data_dir }}/server"
  encrypt: "{{ _credentials.nomad.gossip_encryption_key }}"
```

---

### 🖥️ Client Configuration

Enable and configure Nomad client nodes:

```yaml
nomad_enable_client: "{{ ('nomad_clients' in group_names) | bool }}"
nomad_client_configuration:
  enabled: "{{ nomad_enable_client }}"
  state_dir: "{{ nomad_data_dir }}/client"
  cni_path: "{{ cni_plugins_install_path | default('/opt/cni/bin') }}"
  bridge_network_name: nomad
  bridge_network_subnet: "172.26.64.0/20"
  node_pool: >-
    {{
      'ingress' if 'nomad_ingress' in group_names else
      'controller' if 'nomad_servers' in group_names else
      omit
    }}
```

---

### 🖥️ UI Configuration

Enable the Nomad UI:

```yaml
nomad_ui_configuration:
  enabled: "{{ nomad_enable_server }}"
```

---

### 🛠️ Driver Configuration

Enable or disable specific Nomad task drivers:

```yaml
nomad_driver_enable_docker: true
nomad_driver_enable_podman: false
nomad_driver_enable_raw_exec: false
nomad_driver_enable_java: false
nomad_driver_enable_qemu: false

nomad_driver_configuration:
  raw_exec:
    enabled: false
```

---
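
As a sketch, assuming the `nomad_driver_configuration` maps are merged into the corresponding driver plugin blocks, the Docker driver could be tuned like this (`volumes` and `allow_privileged` are standard Docker driver options):

```yaml
nomad_driver_enable_docker: true
nomad_driver_configuration:
  docker:
    volumes:
      enabled: true          # allow Docker volume mounts in task definitions
    allow_privileged: true   # permit privileged containers (use with care)
```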

### 📝 Logging Configuration

Configure logging for Nomad:

```yaml
nomad_log_level: info
nomad_enable_log_to_file: "{{ enable_log_to_file | bool }}"
nomad_log_to_file_configuration:
  log_file: "{{ nomad_logs_dir }}/nomad.log"
  log_rotate_duration: 24h
  log_rotate_max_files: 30
```

---

### 🛡️ ACL Configuration

ACLs are enabled by default in Nomad. Customize ACL settings:

```yaml
nomad_acl_configuration:
  # ...
  role_ttl: 60s
```

---

### 🔧 Autopilot Configuration

Use Autopilot to automate the management of Nomad servers:

```yaml
nomad_autopilot_configuration: {}
```

---

### 📊 Telemetry Configuration

Enable telemetry and configure settings:

```yaml
nomad_telemetry_configuration:
  collection_interval: 10s
  disable_hostname: false
  use_node_name: false
  publish_allocation_metrics: false
  publish_node_metrics: false
  prefix_filter: []
  disable_dispatched_job_summary_metrics: false
  prometheus_metrics: false
```

---

### 🔗 Consul Integration

Enable integration with Consul:

```yaml
nomad_enable_consul_integration: "{{ enable_consul | bool }}"
nomad_consul_integration_configuration:
  # ...
  tags: []
```

Optionally, you can add tags to your Nomad services, or disable the Consul integration if you don't plan on using it.

Define TLS settings for the Consul integration:

```yaml
nomad_consul_integration_tls_configuration:
  ca_file: "/etc/ssl/certs/ca-certificates.crt"
```

Server and client policies for the Consul integration:

```yaml
nomad_consul_integration_server_policy: |
  agent_prefix "" {
    policy = "read"
  }
  node_prefix "" {
    policy = "read"
  }
  service_prefix "" {
    policy = "write"
  }
  acl = "write"
  mesh = "write"

nomad_consul_integration_client_policy: |
  agent_prefix "" {
    policy = "read"
  }
  node_prefix "" {
    policy = "read"
  }
  service_prefix "" {
    policy = "write"
  }
```

---

### 🔐 Vault Integration

Enable Vault integration with Nomad:

```yaml
nomad_enable_vault_integration: false
nomad_vault_integration_configuration: {}
```

This setting allows for seamless integration with HashiCorp Vault for secrets management. For configuration options, refer to the official [Vault integration documentation](https://developer.hashicorp.com/nomad/docs/configuration/vault).

---

This documentation provides an overview of the key variables and settings for configuring a Nomad cluster using `hashistack-ansible`. Remember to follow the official [Nomad documentation](https://developer.hashicorp.com/nomad/docs) for any specific configurations within each map to ensure proper integration into the final configuration file. Adjust the settings as needed to fit your environment and deployment requirements.

### Vault Configuration Documentation 🔐

This section provides comprehensive documentation for setting up HashiCorp Vault using the `hashistack-ansible` collection. It details the available configuration options, enabling you to customize and deploy Vault in your environment.

---

### 🔧 Basic Configuration

To deploy Vault, start by enabling it and specifying the version:

```yaml
enable_vault: "yes"
vault_version: "1.16.2"
```

The version can be either `latest` or an exact `X.Y.Z`; for production deployments, pinning an exact `X.Y.Z` is recommended.

Define the fully qualified domain name (FQDN) for Vault and specify the cluster name:

```yaml
vault_fqdn: vault.ednz.lab
vault_cluster_name: vault
```

Set the bind address and cluster address:

```yaml
vault_bind_addr: "0.0.0.0"
vault_cluster_addr: "{{ api_interface_address }}"
```

Enable or disable the Vault UI:

```yaml
vault_enable_ui: true
```

Control mlock (which protects memory from being swapped to disk) and cache:

```yaml
vault_disable_mlock: false
vault_disable_cache: false
```

---

### 🌐 TLS Configuration

Enable TLS for Vault's listener and other communications:

```yaml
vault_enable_tls: "{{ enable_tls_internal }}"
```

Specify the TLS settings for the listener:

```yaml
vault_tls_listener_configuration:
  - tcp:
      tls_disable: false
      tls_cert_file: "{{ vault_certs_dir }}/fullchain.crt"
      tls_key_file: "{{ vault_certs_dir }}/cert.key"
      tls_disable_client_certs: true
```

> **Note:** Set `tls_disable_client_certs` to `true` if you do not require client certificates for mutual TLS.

---

### 📂 Directory Paths

Configure paths for Vault's configuration, data, certificates, and logs:

```yaml
vault_config_dir: "{{ hashistack_remote_config_dir }}/vault.d"
vault_data_dir: "/opt/vault"
vault_certs_dir: "{{ vault_config_dir }}/tls"
vault_logs_dir: "{{ hashistack_remote_log_dir }}/vault"
```

---

### 🔐 Seal Configuration

Vault uses a seal/unseal mechanism to protect the master key. Configure the key shares and threshold:

```yaml
vault_seal_configuration:
  key_shares: 3
  key_threshold: 2
```

> **Note:** Adjust these values according to your security requirements.

---

### 🗃️ Storage Configuration

Configure Vault's storage backend, such as Raft, for integrated storage:

```yaml
vault_storage_configuration:
  # ...
```

> **Note:** The `retry_join` block is critical for cluster formation in Raft-based storage.

While this is the [recommended](https://developer.hashicorp.com/vault/docs/configuration/storage#integrated-storage-vs-external-storage) way to configure storage for Vault, you can edit this variable to enable any storage backend you want. Refer to the [Vault documentation](https://developer.hashicorp.com/vault/docs/configuration/storage) for compatibility and syntax details about this variable.

Example:

```yaml
vault_storage_configuration:
  # ...
  database: "vault"
```

---

### 📶 Listener Configuration

Configure Vault's listener to bind on a specific address and port:

```yaml
vault_listener_configuration:
  - tcp:
      address: "{{ vault_cluster_addr }}:8200"
      tls_disable: true
```

By default, Vault listens on port 8200; change the `tcp.address` property to adjust the address and port, and add your own listener preferences as needed.

---

### 🛠️ Service Registration

Enable and configure service registration with Consul:

```yaml
vault_enable_service_registration: "{{ enable_consul | bool }}"
vault_service_registration_configuration:
  consul:
    address: >-
      127.0.0.1:{{ hostvars[groups['consul_servers'][0]].consul_api_port[hostvars[groups['consul_servers'][0]].consul_api_scheme] }}
    scheme: "{{ hostvars[groups['consul_servers'][0]].consul_api_scheme }}"
    token: "{{ _credentials.consul.tokens.vault.secret_id }}"
```

Define the Consul service registration policy:

```yaml
vault_service_registration_policy: |
  service "vault" {
    policy = "write"
  }
```

---

### 🔌 Plugin Configuration

If plugins are required, specify the directory where plugins are stored:

```yaml
vault_plugins_directory: "{{ vault_config_dir }}/plugins"
```

> **Note:** Plugin management is disabled by default. Set `vault_enable_plugins` to `true` if needed, and refer to the [Vault documentation](https://developer.hashicorp.com/vault/docs/plugins/plugin-management) for details about enabling and using plugins.

---

### 📝 Logging Configuration

Configure Vault's logging level and log file settings:

```yaml
vault_log_level: info
vault_enable_log_to_file: "{{ enable_log_to_file | bool }}"
vault_log_to_file_configuration:
  log_file: "{{ vault_logs_dir }}/vault.log"
  log_rotate_duration: 24h
  log_rotate_max_files: 30
```

> **Note:** Logging to a file can be useful for auditing and troubleshooting.

---

### 🌐 Extra Configuration

In case additional configuration is required that isn't covered by the standard variables, you can use:

```yaml
vault_extra_configuration: {}
```

This allows you to extend Vault's configuration as needed.

---
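
For instance, a minimal sketch raising the default lease TTLs — both are standard top-level Vault configuration parameters:

```yaml
vault_extra_configuration:
  default_lease_ttl: "168h"   # one week
  max_lease_ttl: "720h"       # thirty days
```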

### 📁 Extra Files

If additional files need to be deployed to the Vault configuration directory, enable this option and provide a list of files:

```yaml
vault_extra_files: true
vault_extra_files_list: []
```

---
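
A hypothetical example shipping an extra file alongside the configuration — the `src`/`dest` entry layout is an assumption for illustration, not a documented interface:

```yaml
vault_extra_files: true
vault_extra_files_list:
  - src: files/vault/extra-config.hcl              # path on the deployment node (assumed)
    dest: "{{ vault_config_dir }}/extra-config.hcl"
```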

This documentation covers the key aspects of configuring Vault with `hashistack-ansible`. Adjust the settings to suit your specific environment and operational requirements, and refer to the official [Vault documentation](https://developer.hashicorp.com/vault/docs) for further details.

# TLS Guide

**Work in progress.**

TODO: create the certificate/CA directory.