feat: more structure to the wiki

Bertrand Lanson 2024-08-14 22:19:18 +02:00
parent e88f4a44f7
commit 134d9e4ca7
Signed by: lanson
SSH Key Fingerprint: SHA256:/nqc6HGqld/PS208F6FUOvZlUzTS0rGpNNwR5O2bQBw
8 changed files with 142 additions and 149 deletions

@ -16,3 +16,9 @@ Hashistack-Ansible's project aims at providing a production-ready, repeatable, a
1. [Introduction](01-introduction)
2. [General information](02-general-informations)
3. [Architecture Guide](03-architecture-guide)
4. Service Configuration
- [Consul](04-consul-configuration)
- [Nomad](04-nomad-configuration)
- [Vault](04-vault-configuration)
- [Extras](04-extra-configuration)
5. [TLS]()

@ -1,6 +1,29 @@
# General documentation
# Hashistack-Ansible
## Configuration directory
## General information
### Inventory
#### Inventory file
This project relies on an Ansible inventory file to know which host does what.
A sample inventory file can be found under `playbooks/inventory/multinode`, within the collection directory.
#### Ingress
The project, while built for maximum flexibility in the deployment and configuration, takes a rather opinionated approach when it comes to ingress.
This project will NOT deploy central ingress (reverse-proxy) for your cluster(s), unless you deploy a Nomad cluster.
The opinion on ingress nodes is as follows:
- dedicated ingress nodes are expensive if you are running these clusters on a cloud provider.
- better options exist when running on said cloud providers, or even on some baremetal private clouds (OpenStack, etc.)
- if there is a need to deploy ingress nodes (e.g. for baremetal deployments), these nodes are essentially wasted resources unless you can run other workloads on them.
For these reasons, central ingress services will only be deployed if you also deploy a Nomad cluster, and they will be deployed as a Nomad job on the cluster, on dedicated nodes.
This allows you to reuse those nodes for other workloads, or to isolate the ingress completely through [node pools](https://developer.hashicorp.com/nomad/docs/concepts/node-pools).
### Main configuration directory
@ -10,11 +33,20 @@ This directory is defined with the variable `hashistack_configuration_directory`
Under this directory, you are expected to place the `globals.yml` file, with your configuration.
The `globals.yml` file can be found under `playbooks/inventory/group_vars/all/globals.yml` in the collection's directory.
This file, by default, only includes a simple configuration, but is extensible. In the same `playbooks/inventory/group_vars/all/` directory where the `globals.yml` file is located, you can find the `consul.yml`, `vault.yml`, `nomad.yml`, `nomad_ingress.yml`, `cni.yml`, and `all.yml` files. These files hold the complete configuration for the services.
These files should not be edited, but rather superseded by appending your own values to your `globals.yml` file.
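For example, a minimal `globals.yml` might only pin a handful of values on top of those defaults (a sketch; the values are illustrative, and the variables are the ones documented in the service configuration pages):
```yaml
---
# etc/hashistack/globals.yml (assuming the default configuration directory)
enable_consul: "yes"
enable_vault: "yes"
enable_nomad: "yes"
nomad_version: "1.8.1"
vault_version: "1.16.2"
```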
### Sub configuration directories
On top of the `globals.yml` file, you can fine-tune variables per group or per host, depending on your setup.
#### Group configuration directories
Subdirectories can be used to tailor the configuration further.
Each group within the `inventory` will look at a directory named after itself:
@ -22,7 +54,7 @@ Each group within the `inventory` will look at a directory named after itself:
- vault_servers group will look for `{{ hashistack_configuration_directory }}/vault_servers`
- consul_servers group will look for `{{ hashistack_configuration_directory }}/consul_servers`
Within each of these directories, you can place an additional `globals.yml` file, which will supersede the file at the root of the configuration directory.
- **Example**:
@ -101,3 +133,5 @@ Additionally, within each `group configuration directory`, you can add `host con
enable_nomad: "yes"
api_interface: "eth0.vlan40"
```
This lets you tweak deployment parameters for each host, and allows for great flexibility.

@ -49,22 +49,42 @@ Ideally, you would need:
- an odd number (3 to 5) of nomad servers
- multiple (2 to 3) haproxy servers
A production-ready setup could look like:
```ini
[nomad_ingress]
nomadingress1
nomadingress2
[vault_servers]
vaultnode1
vaultnode2
vaultnode3
[consul_servers]
consulnode1
consulnode2
consulnode3
[nomad_servers]
nomadnode1
nomadnode2
nomadnode3
[nomad_clients]
nomadclient1
nomadclient2
nomadclient3
[consul_agents]
```
The **Nomad**, **Vault** and **Consul** servers should have **two network interfaces**, and one of them should be reachable from the HAProxy nodes.
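For instance, you could point each server's API and cluster traffic at the internal interface by setting `api_interface` in the relevant group or host configuration (a sketch; the interface name is illustrative, and `api_interface` is the variable used in the host configuration example on the General information page):
```yaml
# e.g. etc/hashistack/consul_servers/globals.yml (illustrative path)
api_interface: "eth1"
```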
The architecture for this infrastructure would look like:
```mermaid
graph TD
client[Client] -->|https :443| keepalived
keepalived[VIP] --> haproxy1[HAProxy] & haproxy2[HAProxy]
subgraph frontends
direction LR
haproxy1[HAProxy]
haproxy2[HAProxy]
end
haproxy1[HAProxy] & haproxy2[HAProxy] -->|http :8500| consul
subgraph consul
direction LR
consul1[Consul 01] <--> consul2[Consul 02] & consul3[Consul 03] & consul4[Consul 04] & consul5[Consul 05]
@ -74,8 +94,6 @@ graph TD
end
haproxy1[HAProxy] & haproxy2[HAProxy] -->|http :8200| vault
subgraph vault
direction LR
subgraph vaultnode1
@ -95,30 +113,66 @@ graph TD
vaultnode3 <--> vaultnode1
end
vault -->|Service registration| consul
haproxy1[HAProxy] & haproxy2[HAProxy] -->|http :4646| nomad
subgraph nomad
subgraph nomadservers
direction LR
subgraph nomadnode1
subgraph nomadservernode1
direction TB
nomad1[Nomad 01] <--> consulnomadagent1([Consul agent])
nomadserver1[Nomad server 01] <--> consulnomadserveragent1([Consul agent])
end
subgraph nomadnode2
subgraph nomadservernode2
direction TB
nomad2[Nomad 02] <--> consulnomadagent2([Consul agent])
nomadserver2[Nomad server 02] <--> consulnomadserveragent2([Consul agent])
end
subgraph nomadnode3
subgraph nomadservernode3
direction TB
nomad3[Nomad 03] <--> consulnomadagent3([Consul agent])
nomadserver3[Nomad server 03] <--> consulnomadserveragent3([Consul agent])
end
nomadnode1 <--> nomadnode2
nomadnode2 <--> nomadnode3
nomadnode3 <--> nomadnode1
nomadserver1 <--> nomadserver2
nomadserver2 <--> nomadserver3
nomadserver3 <--> nomadserver1
end
nomad -->|Service registration| consul
nomadservers -->|Service registration| consul
subgraph nomadingress
direction LR
subgraph nomadingressnode1
direction LR
nomadingress1[Nomad ingress 01] <--> consulnomadingressagent1([Consul agent])
nomadingress1[Nomad ingress 01] <--> haproxy1[HAProxy]
end
subgraph nomadingressnode2
direction LR
nomadingress2[Nomad ingress 02] <--> consulnomadingressagent2([Consul agent])
nomadingress2[Nomad ingress 02] <--> haproxy2[HAProxy]
end
end
nomadingress <--> nomadservers
subgraph nomadclients
direction LR
subgraph nomadclientnode1
direction LR
nomadclient1[Nomad client 01] <--> consulnomadclientagent1([Consul agent])
end
subgraph nomadclientnode2
direction LR
nomadclient2[Nomad client 02] <--> consulnomadclientagent2([Consul agent])
end
subgraph nomadclientnode3
direction LR
nomadclient3[Nomad client 03] <--> consulnomadclientagent3([Consul agent])
end
end
nomadclients -->|Service registration| consul
nomadclients <--> nomadservers
```
> [!NOTE]: you can leave out the HAProxy part if you are using an external load-balancing solution, like an AWS ALB or any other LB technology, for connecting to your platform.

@ -19,7 +19,7 @@ enable_nomad: "yes"
Selecting the nomad version to install is done with the `nomad_version` variable.
```yaml
nomad_version: latest
nomad_version: "1.8.1"
```
The nomad version can either be `latest` or `X.Y.Z`.
@ -31,8 +31,8 @@ For production deployment, it is recommended to use the `X.Y.Z` syntax.
First, you can change some general settings for nomad, like the dc and region options.
```yaml
nomad_datacenter: dc1
nomad_region: global
nomad_datacenter: dc1
```
### ACLs settings
@ -55,10 +55,12 @@ By default, if consul is also enabled, nomad will use it to register itself as a
```yaml
nomad_enable_consul_integration: "{{ enable_consul | bool }}"
nomad_consul_integration_configuration:
address: "127.0.0.1:{{ hashicorp_consul_configuration.ports.https if consul_enable_tls else hashicorp_consul_configuration.ports.http }}"
address: >-
127.0.0.1:{{ consul_api_port[consul_api_scheme] }}
auto_advertise: true
ssl: "{{ consul_enable_tls | bool }}"
token: "{{ _credentials.consul.tokens.nomad.server.secret_id if nomad_enable_server else _credentials.consul.tokens.nomad.client.secret_id}}"
token: >-
{{ _credentials.consul.tokens.nomad.server.secret_id if nomad_enable_server else _credentials.consul.tokens.nomad.client.secret_id }}
tags: []
```
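If you need to opt out of this behaviour, a minimal sketch of a `globals.yml` override could be (assuming the variable accepts a plain boolean, as the default expression above suggests):
```yaml
# illustrative override in globals.yml
nomad_enable_consul_integration: false
```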

@ -19,7 +19,7 @@ enable_vault: "yes"
Selecting the vault version to install is done with the `vault_version` variable.
```yaml
vault_version: latest
vault_version: "1.16.2"
```
The vault version can either be `latest` or `X.Y.Z`.
@ -32,10 +32,11 @@ First, you can change some general settings for vault.
```yaml
vault_cluster_name: vault
vault_bind_addr: "0.0.0.0"
vault_cluster_addr: "{{ api_interface_address }}"
vault_enable_ui: true
vault_seal_configuration:
key_shares: 3
key_threshold: 2
vault_disable_mlock: false
vault_disable_cache: false
```
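As with the other services, these defaults are meant to be superseded from `globals.yml` rather than edited in place. For example, a sketch raising the Shamir seal parameters (values illustrative):
```yaml
# illustrative override in globals.yml
vault_seal_configuration:
  key_shares: 5
  key_threshold: 3
```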
### Storage settings
@ -45,13 +46,13 @@ The storage configuration for vault can be edited as well. By default, vault wil
```yaml
vault_storage_configuration:
raft:
path: "{{ hashicorp_vault_data_dir }}/data"
path: "{{ vault_data_dir }}"
node_id: "{{ ansible_hostname }}"
retry_join: |
retry_join: >-
[
{% for host in groups['vault_servers'] %}
{
'leader_api_addr': 'http://{{ hostvars[host].api_interface_address }}:8200'
'leader_api_addr': '{{ "https" if vault_enable_tls else "http"}}://{{ hostvars[host].api_interface_address }}:8200'
}{% if not loop.last %},{% endif %}
{% endfor %}
]
@ -81,8 +82,8 @@ The listener configuration settings can be modified in `vault_listener_configura
```yaml
vault_listener_configuration:
tcp:
address: "0.0.0.0:8200"
- tcp:
address: "{{ vault_cluster_addr }}:8200"
tls_disable: true
```
By default, vault will listen on all interfaces, on port 8200. You can change this by modifying the `tcp.address` property and adding your own listener preferences.
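For instance, a sketch restricting the listener to a single address from `globals.yml` (the address is illustrative, and the layout follows the default shown above):
```yaml
# illustrative override in globals.yml
vault_listener_configuration:
  - tcp:
      address: "192.168.121.10:8200"
      tls_disable: true
```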
@ -91,7 +92,7 @@ By default, vault will listen on all interfaces, on port 8200. you can change it
In order to enable TLS for Vault, you simply need to set the `vault_enable_tls` variable to `true`.
At the moment, Hashistack-Ansible does nothing to help you generate the certificates and renew them. All it does is look inside the `etc/hashistack/vault_servers/tls` directory on the deployment node, and copy the files to the destination hosts in `/etc/vault.d/config/tls`. The listener expects **2 files** by default: a `cert.pem` and a `key.pem` file.
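A minimal sketch of enabling it, assuming the default configuration directory layout described above:
```yaml
# etc/hashistack/globals.yml
vault_enable_tls: true
# expected on the deployment node (per the paragraph above):
#   etc/hashistack/vault_servers/tls/cert.pem
#   etc/hashistack/vault_servers/tls/key.pem
```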
Please refer to the [vault documentation](https://developer.hashicorp.com/vault/docs/configuration/listener/tcp) for details about enabling TLS on vault listeners.

@ -1,104 +0,0 @@
# Deploying HAProxy frontends
This documentation explains each step necessary to successfully deploy HAProxy frontends for your deployment, using the ednz_cloud.hashistack Ansible collection.
## Prerequisites
You should, before attempting any deployment, have read through the [Quick Start Guide](./quick_start.md). These steps are necessary in order to ensure smooth operations going forward.
## Variables
### Basics
First, in order to deploy the HAProxy frontends, you need to enable the deployment.
```yaml
enable_haproxy: "yes"
```
You can also configure the version of haproxy you would like to use. This has very little impact, and should most likely be left at whatever the collection defaults to (which is the version it is tested against).
```yaml
haproxy_version: "2.8"
```
This version can either be `latest`, or any `X`, `X.Y`, `X.Y.Z` tag of the [haproxytech/haproxy-debian](https://hub.docker.com/r/haproxytech/haproxy-debian) docker image.
For production deployment, it is recommended to use the `X.Y.Z` syntax.
The `deployment_method` variable will define how to install haproxy on the nodes.
By default, it runs haproxy inside a docker container, but this can be changed to `host` to install haproxy from the package manager.
Note that not all versions of haproxy are available as a package on all supported distributions. Please refer to the documentation of [ednz_cloud.deploy_haproxy](https://github.com/ednz-cloud/deploy_haproxy) for details about supported versions when installing from the package manager.
```yaml
deployment_method: "docker"
```
### General settings
There aren't many settings that you can configure to deploy the HAProxy frontends. First, you'll need to configure a Virtual IP and pass it in the `globals.yml` configuration file.
```yaml
hashistack_external_vip_interface: "eth0"
hashistack_external_vip_addr: "192.168.121.100"
```
This is used to configure keepalived to automatically assign this VIP to one of the frontends, and to handle failover.
You also need to configure the names under which your different applications (consul, nomad, vault) will be reachable. These names should resolve to your Virtual IP, and will be used to handle host-based routing on haproxy.
```yaml
consul_fqdn: consul.ednz.lab
vault_fqdn: vault.ednz.lab
nomad_fqdn: nomad.ednz.lab
```
With this configuration, querying `http://consul.ednz.lab` will give you the consul UI and API, through haproxy.
> Note: subpaths are not yet supported, so you cannot set the fqdn to `generic.domain.tld/consul`. This feature will be added in a future release.
### Enabling external TLS
To enable external TLS for your APIs and UIs, you will need to set the following variable.
```yaml
enable_tls_external: true
```
This will enable the https listener for haproxy and configure the http listener to be an https redirect only.
## Managing external TLS certificates
### Generating certificates with hashistack-ansible
If you don't care about having trusted certificates (e.g. for development or testing purposes), you can generate some self-signed certificates for your applications using the `generate_certs.yml` playbook.
```bash
ansible-playbook -i multinode.ini ednz_cloud.hashistack.generate_certs.yml
```
This will generate self-signed certificates for each application that has been enabled in your `globals.yml`, and for their respective fqdn (also configured in `globals.yml`).
These certificates are going to be placed in `etc/hashistack/certificates/external/`, and will be named after each fqdn. These files should be encrypted using something like ansible-vault, as they are sensitive.
### Managing your own TLS certificates
Similarly, you can manage your own TLS certificates, signed by your own CA. Your certificates should be placed in the `etc/hashistack/certificates/external/` directory, similar to the self-signed ones, and be named like:
`<your_fqdn>.pem` and `<your_fqdn>.pem.key`, for each application.
At the moment, bundling all certificates in a single file is not supported, but support will be added in a later release.
These certificates will be copied over to the `haproxy_servers` hosts, in `/var/lib/haproxy/certs/`.
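With the example FQDNs above, the external certificates directory on the deployment node would end up looking something like this (a sketch; adapt to your own FQDNs):
```yaml
# etc/hashistack/certificates/external/ (illustrative listing)
#   consul.ednz.lab.pem
#   consul.ednz.lab.pem.key
#   vault.ednz.lab.pem
#   vault.ednz.lab.pem.key
#   nomad.ednz.lab.pem
#   nomad.ednz.lab.pem.key
```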
### Managing certificates externally
In case you already have systems in place to deploy and renew your certificates, you can simply enable the option in `globals.yml` so that certificates are not managed directly by hashistack-ansible.
```yaml
external_tls_externally_managed_certs: true
```
Enabling this option will prevent the playbooks from trying to copy certificates over, but the HAProxy nodes will still expect them to be present. It is up to you to copy them over.