[arvados] created: 2.6.0-177-g7c048d952
git repository hosting
git at public.arvados.org
Thu May 18 20:44:02 UTC 2023
at 7c048d95219e1c7033ca5f0b50076d29b17f246d (commit)
commit 7c048d95219e1c7033ca5f0b50076d29b17f246d
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date: Thu May 18 17:36:10 2023 -0300
20482: Updates installer's documentation to reflect latest changes.
Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>
diff --git a/doc/_includes/_terraform_datastorage_tfvars.liquid b/doc/_includes/_terraform_datastorage_tfvars.liquid
new file mode 120000
index 000000000..bcb8f5a87
--- /dev/null
+++ b/doc/_includes/_terraform_datastorage_tfvars.liquid
@@ -0,0 +1 @@
+../../tools/salt-install/terraform/aws/data-storage/terraform.tfvars
\ No newline at end of file
diff --git a/doc/_includes/_terraform_services_tfvars.liquid b/doc/_includes/_terraform_services_tfvars.liquid
new file mode 120000
index 000000000..ff53a8599
--- /dev/null
+++ b/doc/_includes/_terraform_services_tfvars.liquid
@@ -0,0 +1 @@
+../../tools/salt-install/terraform/aws/services/terraform.tfvars
\ No newline at end of file
diff --git a/doc/_includes/_terraform_vpc_tfvars.liquid b/doc/_includes/_terraform_vpc_tfvars.liquid
new file mode 120000
index 000000000..96d67c385
--- /dev/null
+++ b/doc/_includes/_terraform_vpc_tfvars.liquid
@@ -0,0 +1 @@
+../../tools/salt-install/terraform/aws/vpc/terraform.tfvars
\ No newline at end of file
diff --git a/doc/architecture/federation.html.textile.liquid b/doc/architecture/federation.html.textile.liquid
index 698e355da..d98f3b287 100644
--- a/doc/architecture/federation.html.textile.liquid
+++ b/doc/architecture/federation.html.textile.liquid
@@ -22,7 +22,7 @@ Clusters are identified by a five-digit alphanumeric id (numbers and lowercase l
* For automated test purposes, use "z****"
* For experimental/local-only/private clusters that won't ever be visible on the public Internet, use "x****"
-* For long-lived clusters, we recommend reserving a cluster id. Contact "info@curii.com":mailto:info@curii.com for more information.
+* For public long-lived clusters, we recommend reserving a cluster id. Contact "info@curii.com":mailto:info@curii.com for more information.
Cluster identifiers are mapped to API server hosts in one of two ways:
diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index b840b585a..22e06eb7f 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -44,7 +44,7 @@ Choose a 5-character cluster identifier that will represent the cluster. Here a
Determine the base domain for the cluster. This will be referred to as @${DOMAIN}@.
-For example, if CLUSTER is @xarv1@ and DOMAIN is @example.com@, then @controller.${CLUSTER}.${DOMAIN}@ means @controller.xarv1.example.com@.
+For example, if DOMAIN is @xarv1.example.com@, then @controller.${DOMAIN}@ means @controller.xarv1.example.com@.
h3(#DNS). DNS hostnames for each service
@@ -52,19 +52,19 @@ You will need a DNS entry for each service. When using the "Terraform script":#
In the default configuration these are:
-# @controller.${CLUSTER}.${DOMAIN}@
-# @ws.${CLUSTER}.${DOMAIN}@
-# @keep0.${CLUSTER}.${DOMAIN}@
-# @keep1.${CLUSTER}.${DOMAIN}@
-# @keep.${CLUSTER}.${DOMAIN}@
-# @download.${CLUSTER}.${DOMAIN}@
-# @*.collections.${CLUSTER}.${DOMAIN}@ -- important note, this must be a wildcard DNS, resolving to the @keepweb@ service
-# @workbench.${CLUSTER}.${DOMAIN}@
-# @workbench2.${CLUSTER}.${DOMAIN}@
-# @webshell.${CLUSTER}.${DOMAIN}@
-# @shell.${CLUSTER}.${DOMAIN}@
-# @prometheus.${CLUSTER}.${DOMAIN}@
-# @grafana.${CLUSTER}.${DOMAIN}@
+# @controller.${DOMAIN}@
+# @ws.${DOMAIN}@
+# @keep0.${DOMAIN}@
+# @keep1.${DOMAIN}@
+# @keep.${DOMAIN}@
+# @download.${DOMAIN}@
+# @*.collections.${DOMAIN}@ -- important note, this must be a wildcard DNS, resolving to the @keepweb@ service
+# @workbench.${DOMAIN}@
+# @workbench2.${DOMAIN}@
+# @webshell.${DOMAIN}@
+# @shell.${DOMAIN}@
+# @prometheus.${DOMAIN}@
+# @grafana.${DOMAIN}@
For more information, see "DNS entries and TLS certificates":install-manual-prerequisites.html#dnstls.
@@ -98,27 +98,24 @@ The Terraform state files (that keep crucial infrastructure information from the
h4. Terraform code configuration
-Each section described above contain a @terraform.tfvars@ file with some configuration values that you should set before applying each configuration. You should set the cluster prefix and domain name in @terraform/vpc/terraform.tfvars@:
+Each section described above contains a @terraform.tfvars@ file with some configuration values that you should set before applying each configuration. You should at least set the AWS region, cluster prefix and domain name in @terraform/vpc/terraform.tfvars@:
-<pre><code>region_name = "us-east-1"
-# cluster_name = "xarv1"
-# domain_name = "xarv1.example.com"
+<pre><code>{% include 'terraform_vpc_tfvars' %}</code></pre>
-# Uncomment this to create an non-publicly accessible Arvados cluster
-# private_only = true</code></pre>
+If you don't set the main configuration variables in the @vpc/terraform.tfvars@ file, you will be asked to re-enter these parameters every time you run Terraform.
-If you don't set the variables @vpc/terraform.tfvars@ file, you will be asked to re-enter these parameters every time you run Terraform.
+The @data-storage/terraform.tfvars@ and @services/terraform.tfvars@ files let you configure additional details, including the SSH public key for deployment, instance and volume sizes, etc. All of these settings come with sensible defaults:
-The @data-storage/terraform.tfvars@ and @services/terraform.tfvars@ let you configure the location of your ssh public key (default @~/.ssh/id_rsa.pub@) and the instance type to use (default @m5a.large@).
+<pre><code>{% include 'terraform_datastorage_tfvars' %}</code></pre>
+
+<pre><code>{% include 'terraform_services_tfvars' %}</code></pre>
h4. Set credentials
You will need an AWS access key and secret key to create the infrastructure.
-<pre><code>
-$ export AWS_ACCESS_KEY_ID="anaccesskey"
-$ export AWS_SECRET_ACCESS_KEY="asecretkey"
-</code></pre>
+<pre><code>$ export AWS_ACCESS_KEY_ID="anaccesskey"
+$ export AWS_SECRET_ACCESS_KEY="asecretkey"</code></pre>
h4. Create the infrastructure
@@ -132,10 +129,11 @@ Outputs:
arvados_sg_id = "sg-02f999a99973999d7"
arvados_subnet_id = "subnet-01234567abc"
+cluster_int_cidr = "10.1.0.0/16"
cluster_name = "xarv1"
compute_subnet_id = "subnet-abcdef12345"
deploy_user = "admin"
-domain_name = "example.com"
+domain_name = "xarv1.example.com"
letsencrypt_iam_access_key_id = "AKAA43MAAAWAKAADAASD"
private_ip = {
"controller" = "10.1.1.1"
@@ -160,7 +158,7 @@ route53_dns_ns = tolist([
"ns-437.awsdns-54.com",
"ns-809.awsdns-37.net",
])
-vpc_cidr = "10.1.0.0/16"
+ssl_password_secret_name = "xarv1-arvados-ssl-privkey-password"
vpc_id = "vpc-0999994998399923a"
letsencrypt_iam_secret_access_key = "XXXXXSECRETACCESSKEYXXXX"
</code></pre>
@@ -172,7 +170,7 @@ Once Terraform has completed, the infrastructure for your Arvados cluster is up
The domain names for your cluster (e.g.: controller.xarv1.example.com) are managed via "Route 53":https://aws.amazon.com/route53/ and the TLS certificates will be issued using "Let's Encrypt":https://letsencrypt.org/ .
-You need to configure the parent domain to delegate to the newly created zone. In other words, you need to configure @${DOMAIN}@ (e.g. "example.com") to delegate the subdomain @${CLUSTER}.${DOMAIN}@ (e.g. "xarv1.example.com") to the nameservers for the Arvados hostname records created by Terraform. You do this by creating a @NS@ record on the parent domain that refers to the name servers listed in the Terraform output parameter @route53_dns_ns@.
+You need to configure the parent domain to delegate to the newly created zone. For example, you need to configure "example.com" to delegate the subdomain "xarv1.example.com" to the nameservers for the Arvados hostname records created by Terraform. You do this by creating an @NS@ record on the parent domain that refers to the name servers listed in the Terraform output parameter @route53_dns_ns@.
If your parent domain is also controlled by Route 53, the process will be like this:
@@ -190,7 +188,7 @@ h4. Other important output parameters
The certificates will be requested from Let's Encrypt when you run the installer.
-* @vpc_cidr@ will be used to set @CLUSTER_INT_CIDR@
+* @cluster_int_cidr@ will be used to set @CLUSTER_INT_CIDR@
* You'll also need @compute_subnet_id@ and @arvados_sg_id@ to set @DriverParameters.SubnetID@ and @DriverParameters.SecurityGroupIDs@ in @local_config_dir/pillars/arvados.sls@ and when you "create a compute image":#create_a_compute_image.
@@ -229,21 +227,21 @@ The installer will set up the Arvados services on your machines. Here is the de
# API node
## postgresql server
## arvados api server
-## arvados controller (recommendend hostname @controller.${CLUSTER}.${DOMAIN}@)
-## arvados websocket (recommendend hostname @ws.${CLUSTER}.${DOMAIN}@)
+## arvados controller (recommended hostname @controller.${DOMAIN}@)
+## arvados websocket (recommended hostname @ws.${DOMAIN}@)
## arvados cloud dispatcher
## arvados keepbalance
-# KEEPSTORE nodes (at least 2)
-## arvados keepstore (recommendend hostnames @keep0.${CLUSTER}.${DOMAIN}@ and @keep1.${CLUSTER}.${DOMAIN}@)
+# KEEPSTORE nodes (at least 1 if using S3 as a Keep backend, else 2)
+## arvados keepstore (recommended hostnames @keep0.${DOMAIN}@ and @keep1.${DOMAIN}@)
# KEEPPROXY node
-## arvados keepproxy (recommendend hostname @keep.${CLUSTER}.${DOMAIN}@)
-## arvados keepweb (recommendend hostname @download.${CLUSTER}.${DOMAIN}@ and @*.collections.${CLUSTER}.${DOMAIN}@)
+## arvados keepproxy (recommended hostname @keep.${DOMAIN}@)
+## arvados keepweb (recommended hostname @download.${DOMAIN}@ and @*.collections.${DOMAIN}@)
# WORKBENCH node
-## arvados workbench (recommendend hostname @workbench.${CLUSTER}.${DOMAIN}@)
-## arvados workbench2 (recommendend hostname @workbench2.${CLUSTER}.${DOMAIN}@)
-## arvados webshell (recommendend hostname @webshell.${CLUSTER}.${DOMAIN}@)
+## arvados workbench (recommended hostname @workbench.${DOMAIN}@)
+## arvados workbench2 (recommended hostname @workbench2.${DOMAIN}@)
+## arvados webshell (recommended hostname @webshell.${DOMAIN}@)
# SHELL node (optional)
-## arvados shell (recommended hostname @shell.${CLUSTER}.${DOMAIN}@)
+## arvados shell (recommended hostname @shell.${DOMAIN}@)
When using the database installed by Arvados (and not an "external database":#ext-database), the database is stored under @/var/lib/postgresql@. Arvados logs are also kept in @/var/log@ and @/var/www/arvados-api/shared/log@. Accordingly, you should ensure that the disk partition containing @/var@ has adequate storage for your planned usage. We suggest starting with 50GiB of free space on the database host.
@@ -266,9 +264,9 @@ h2(#localparams). Edit @local.params@
This can be found wherever you choose to initialize the install files (@~/setup-arvados-xarv1@ in these examples).
# Set @CLUSTER@ to the 5-character cluster identifier (e.g "xarv1")
-# Set @DOMAIN@ to the base DNS domain of the environment, e.g. "example.com"
+# Set @DOMAIN@ to the base DNS domain of the environment, e.g. "xarv1.example.com"
# Set the @*_INT_IP@ variables with the internal (private) IP addresses of each host. Since services share hosts, some hosts are the same. See "note about /etc/hosts":#etchosts
-# Edit @CLUSTER_INT_CIDR@, this should be the CIDR of the private network that Arvados is running on, e.g. the VPC. If you used terraform, this is emitted as @vpc_cidr@.
+# Edit @CLUSTER_INT_CIDR@; this should be the CIDR of the private network that Arvados is running on, e.g. the VPC. If you used Terraform, this is emitted as @cluster_int_cidr@.
_CIDR stands for "Classless Inter-Domain Routing" and describes which portion of the IP address refers to the network. For example, 192.168.3.0/24 means that the first 24 bits are the network (192.168.3) and the last 8 bits are a specific host on that network._
_AWS Specific: Go to the AWS console and into the VPC service; the table view of VPCs has a column (IPv4 CIDR) that gives the CIDR for each VPC._
# Set @INITIAL_USER_EMAIL@ to your email address, as you will be the first admin user of the system.
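As an aside on the CIDR notation explained above, a minimal sketch using only Python's stdlib @ipaddress@ module confirms the arithmetic (illustrative only, not part of the installer):

```python
import ipaddress

# The example from the text: in 192.168.3.0/24 the first 24 bits
# name the network and the remaining 8 bits name a host on it.
net = ipaddress.ip_network("192.168.3.0/24")
print(net.network_address)   # 192.168.3.0
print(net.prefixlen)         # 24
print(net.num_addresses)     # 2**(32 - 24) == 256

# A VPC-style CIDR such as the cluster_int_cidr Terraform output
# (e.g. 10.1.0.0/16) contains the services' private IPs.
vpc = ipaddress.ip_network("10.1.0.0/16")
print(ipaddress.ip_address("10.1.1.1") in vpc)  # True
```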
commit 37ef3bcd2364ab7dbb3640e07983438439503752
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date: Thu May 18 17:43:06 2023 -0300
20482: Adds doc include files to the licenseignore file.
Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>
diff --git a/.licenseignore b/.licenseignore
index 85e38ad81..650aeaa52 100644
--- a/.licenseignore
+++ b/.licenseignore
@@ -16,6 +16,7 @@ doc/fonts/*
doc/_includes/_config_default_yml.liquid
doc/user/cwl/federated/*
doc/_includes/_federated_cwl.liquid
+doc/_includes/_terraform_*_tfvars.liquid
*/docker_image
docker/jobs/apt.arvados.org*.list
docker/jobs/1078ECD7.key
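The pattern added to @.licenseignore@ can be checked with a quick Python sketch, assuming the ignore rules use shell-style (fnmatch) globbing:

```python
from fnmatch import fnmatch

# The glob added in this commit, and a few candidate paths.
pattern = "doc/_includes/_terraform_*_tfvars.liquid"
paths = [
    "doc/_includes/_terraform_vpc_tfvars.liquid",         # matches
    "doc/_includes/_terraform_datastorage_tfvars.liquid",  # matches
    "doc/_includes/_terraform_services_tfvars.liquid",     # matches
    "doc/_includes/_federated_cwl.liquid",                 # does not match
]
for path in paths:
    print(path, fnmatch(path, pattern))
```

The three new symlinked include files match the single pattern; unrelated includes do not.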
commit d824d91c45c4ef7d79ec9dca2369df21ed45a088
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date: Thu May 18 17:35:17 2023 -0300
20482: Code cleanup for readability.
Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>
diff --git a/tools/salt-install/terraform/aws/services/locals.tf b/tools/salt-install/terraform/aws/services/locals.tf
index 9b95ebdbc..618da3a51 100644
--- a/tools/salt-install/terraform/aws/services/locals.tf
+++ b/tools/salt-install/terraform/aws/services/locals.tf
@@ -24,4 +24,9 @@ locals {
keep0 = aws_iam_instance_profile.keepstore_instance_profile
keep1 = aws_iam_instance_profile.keepstore_instance_profile
}
+ private_subnet_id = data.terraform_remote_state.vpc.outputs.private_subnet_id
+ public_subnet_id = data.terraform_remote_state.vpc.outputs.public_subnet_id
+ arvados_sg_id = data.terraform_remote_state.vpc.outputs.arvados_sg_id
+ eip_id = data.terraform_remote_state.vpc.outputs.eip_id
+ keepstore_iam_role_name = data.terraform_remote_state.data-storage.outputs.keepstore_iam_role_name
}
diff --git a/tools/salt-install/terraform/aws/services/main.tf b/tools/salt-install/terraform/aws/services/main.tf
index dde52705a..bdb2bdcc3 100644
--- a/tools/salt-install/terraform/aws/services/main.tf
+++ b/tools/salt-install/terraform/aws/services/main.tf
@@ -57,8 +57,8 @@ resource "aws_instance" "arvados_service" {
"ssh_pubkey": file(local.pubkey_path)
})
private_ip = local.private_ip[each.value]
- subnet_id = contains(local.user_facing_hosts, each.value) ? data.terraform_remote_state.vpc.outputs.public_subnet_id : data.terraform_remote_state.vpc.outputs.private_subnet_id
- vpc_security_group_ids = [ data.terraform_remote_state.vpc.outputs.arvados_sg_id ]
+ subnet_id = contains(local.user_facing_hosts, each.value) ? local.public_subnet_id : local.private_subnet_id
+ vpc_security_group_ids = [ local.arvados_sg_id ]
iam_instance_profile = try(local.instance_profile[each.value], local.instance_profile.default).name
tags = {
Name = "${local.cluster_name}_arvados_service_${each.value}"
@@ -148,7 +148,7 @@ resource "aws_iam_policy_attachment" "cloud_dispatcher_ec2_access_attachment" {
resource "aws_eip_association" "eip_assoc" {
for_each = local.private_only ? [] : toset(local.public_hosts)
instance_id = aws_instance.arvados_service[each.value].id
- allocation_id = data.terraform_remote_state.vpc.outputs.eip_id[each.value]
+ allocation_id = local.eip_id[each.value]
}
resource "aws_iam_role" "default_iam_role" {
@@ -175,7 +175,7 @@ resource "aws_iam_policy_attachment" "ssl_privkey_password_access_attachment" {
roles = [
aws_iam_role.cloud_dispatcher_iam_role.name,
aws_iam_role.default_iam_role.name,
- data.terraform_remote_state.data-storage.outputs.keepstore_iam_role_name,
+ local.keepstore_iam_role_name,
]
policy_arn = aws_iam_policy.ssl_privkey_password_access.arn
}
diff --git a/tools/salt-install/terraform/aws/services/terraform.tfvars b/tools/salt-install/terraform/aws/services/terraform.tfvars
index 3a2bf1d8e..965153756 100644
--- a/tools/salt-install/terraform/aws/services/terraform.tfvars
+++ b/tools/salt-install/terraform/aws/services/terraform.tfvars
@@ -2,7 +2,8 @@
#
# SPDX-License-Identifier: CC-BY-SA-3.0
-# Set to a specific SSH public key path. Default: ~/.ssh/id_rsa.pub
+# SSH public key path used by the installer script. It will be installed in
+# the home directory of the 'deploy_user'. Default: ~/.ssh/id_rsa.pub
# pubkey_path = "/path/to/pub.key"
# Set the instance type for your nodes. Default: m5a.large
commit 8a80644b400fb52d6c14cc94ea7d1fad2dabd7ca
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date: Thu May 18 17:31:37 2023 -0300
20482: Exports public subnet's CIDR.
Previously exported as 'vpc_cidr' and removed when preexisting VPC usage
was added. This config data is used in local.params, but its name was not
correct.
Now it's exported as 'cluster_int_cidr' and its value is requested from AWS
so that we get the correct one whether the subnet was just created or a
preexisting one is being used.
Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>
diff --git a/tools/salt-install/terraform/aws/services/data.tf b/tools/salt-install/terraform/aws/services/data.tf
index 3587f2526..85e6aba7d 100644
--- a/tools/salt-install/terraform/aws/services/data.tf
+++ b/tools/salt-install/terraform/aws/services/data.tf
@@ -28,4 +28,8 @@ data "aws_ami" "debian-11" {
name = "virtualization-type"
values = ["hvm"]
}
+}
+
+data "aws_subnet" "cluster_public_subnet" {
+ id = data.terraform_remote_state.vpc.outputs.public_subnet_id
}
\ No newline at end of file
diff --git a/tools/salt-install/terraform/aws/services/outputs.tf b/tools/salt-install/terraform/aws/services/outputs.tf
index 7ac42a783..5492f1133 100644
--- a/tools/salt-install/terraform/aws/services/outputs.tf
+++ b/tools/salt-install/terraform/aws/services/outputs.tf
@@ -5,7 +5,9 @@
output "vpc_id" {
value = data.terraform_remote_state.vpc.outputs.arvados_vpc_id
}
-
+output "cluster_int_cidr" {
+ value = data.aws_subnet.cluster_public_subnet.cidr_block
+}
output "arvados_subnet_id" {
value = data.terraform_remote_state.vpc.outputs.public_subnet_id
}
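The lookup performed by the new @aws_subnet@ data source can be sketched outside Terraform; @subnet_cidr@ below is a hypothetical helper that extracts @CidrBlock@ from a @describe_subnets@-shaped response (the sample data is abbreviated and invented for illustration):

```python
def subnet_cidr(response: dict, subnet_id: str) -> str:
    """Return the CIDR block of the given subnet from a
    describe_subnets-style response dict (hypothetical helper)."""
    for subnet in response["Subnets"]:
        if subnet["SubnetId"] == subnet_id:
            return subnet["CidrBlock"]
    raise KeyError(subnet_id)

# Abbreviated sample shaped like an EC2 DescribeSubnets response.
sample = {
    "Subnets": [
        {"SubnetId": "subnet-01234567abc", "CidrBlock": "10.1.0.0/16"},
    ]
}
print(subnet_cidr(sample, "subnet-01234567abc"))  # 10.1.0.0/16
```

In the actual deployment this value arrives via the @cluster_int_cidr@ Terraform output, so it is correct whether the subnet was just created or a preexisting one is in use.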
commit f349b5f3ec25656ebc079069d965f314e695af87
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date: Thu May 18 11:22:10 2023 -0300
20482: Allows the cluster operator to use an arbitrary domain.
Instead of making domains like cluster_prefix.domain mandatory, let the site
admin select whichever domain they need for the deployment.
Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>
diff --git a/tools/salt-install/config_examples/multi_host/aws/certs/README.md b/tools/salt-install/config_examples/multi_host/aws/certs/README.md
index dc9043217..3597fff5b 100644
--- a/tools/salt-install/config_examples/multi_host/aws/certs/README.md
+++ b/tools/salt-install/config_examples/multi_host/aws/certs/README.md
@@ -5,14 +5,18 @@ Add the certificates for your hosts in this directory.
The nodes requiring certificates are:
-* CLUSTER.DOMAIN
-* collections.CLUSTER.DOMAIN
-* \*.collections.CLUSTER.DOMAIN
-* download.CLUSTER.DOMAIN
-* keep.CLUSTER.DOMAIN
-* workbench.CLUSTER.DOMAIN
-* workbench2.CLUSTER.DOMAIN
-* ws.CLUSTER.DOMAIN
+* DOMAIN
+* collections.DOMAIN
+* controller.DOMAIN
+* \*.collections.DOMAIN
+* grafana.DOMAIN
+* download.DOMAIN
+* keep.DOMAIN
+* prometheus.DOMAIN
+* shell.DOMAIN
+* workbench.DOMAIN
+* workbench2.DOMAIN
+* ws.DOMAIN
They can be individual certificates or a wildcard certificate for all of them.
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls
index f181c874d..ef5a91b27 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls
@@ -84,7 +84,7 @@ arvados:
resources:
virtual_machines:
shell:
- name: shell.__CLUSTER__.__DOMAIN__
+ name: shell.__DOMAIN__
backend: __SHELL_INT_IP__
port: 4200
@@ -158,7 +158,7 @@ arvados:
Services:
Controller:
- ExternalURL: 'https://__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
+ ExternalURL: 'https://__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__'
InternalURLs:
'http://localhost:8003': {}
DispatchCloud:
@@ -168,7 +168,7 @@ arvados:
InternalURLs:
'http://__CONTROLLER_INT_IP__:9005': {}
Keepproxy:
- ExternalURL: 'https://keep.__CLUSTER__.__DOMAIN__:__KEEP_EXT_SSL_PORT__'
+ ExternalURL: 'https://keep.__DOMAIN__:__KEEP_EXT_SSL_PORT__'
InternalURLs:
'http://localhost:25107': {}
Keepstore:
@@ -178,21 +178,21 @@ arvados:
InternalURLs:
'http://localhost:8004': {}
WebDAV:
- ExternalURL: 'https://*.collections.__CLUSTER__.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__/'
+ ExternalURL: 'https://*.collections.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__/'
InternalURLs:
'http://__KEEPWEB_INT_IP__:9002': {}
WebDAVDownload:
- ExternalURL: 'https://download.__CLUSTER__.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__'
+ ExternalURL: 'https://download.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__'
WebShell:
- ExternalURL: 'https://webshell.__CLUSTER__.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__'
+ ExternalURL: 'https://webshell.__DOMAIN__:__KEEPWEB_EXT_SSL_PORT__'
Websocket:
- ExternalURL: 'wss://ws.__CLUSTER__.__DOMAIN__/websocket'
+ ExternalURL: 'wss://ws.__DOMAIN__/websocket'
InternalURLs:
'http://localhost:8005': {}
Workbench1:
- ExternalURL: 'https://workbench.__CLUSTER__.__DOMAIN__:__WORKBENCH1_EXT_SSL_PORT__'
+ ExternalURL: 'https://workbench.__DOMAIN__:__WORKBENCH1_EXT_SSL_PORT__'
Workbench2:
- ExternalURL: 'https://workbench2.__CLUSTER__.__DOMAIN__:__WORKBENCH2_EXT_SSL_PORT__'
+ ExternalURL: 'https://workbench2.__DOMAIN__:__WORKBENCH2_EXT_SSL_PORT__'
InstanceTypes:
t3small:
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/grafana.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/grafana.sls
index 1cdff39a6..b46615609 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/grafana.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/grafana.sls
@@ -17,7 +17,7 @@ grafana:
- pkg: grafana
config:
default:
- instance_name: __CLUSTER__.__DOMAIN__
+ instance_name: __DOMAIN__
security:
admin_user: {{ "__MONITORING_USERNAME__" | yaml_dquote }}
admin_password: {{ "__MONITORING_PASSWORD__" | yaml_dquote }}
@@ -26,5 +26,5 @@ grafana:
protocol: http
http_addr: 127.0.0.1
http_port: 3000
- domain: grafana.__CLUSTER__.__DOMAIN__
- root_url: https://grafana.__CLUSTER__.__DOMAIN__
+ domain: grafana.__DOMAIN__
+ root_url: https://grafana.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_controller_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_controller_configuration.sls
index 1f088a8a7..d0ecb54df 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_controller_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_controller_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- controller.__CLUSTER__.__DOMAIN__:
- - __CLUSTER__.__DOMAIN__
+ controller.__DOMAIN__:
+ - __DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_grafana_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_grafana_configuration.sls
index 60a4c315d..c92a962be 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_grafana_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_grafana_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- grafana.__CLUSTER__.__DOMAIN__:
- - grafana.__CLUSTER__.__DOMAIN__
+ grafana.__DOMAIN__:
+ - grafana.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepproxy_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepproxy_configuration.sls
index b2945e611..c174386a5 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepproxy_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepproxy_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- keepproxy.__CLUSTER__.__DOMAIN__:
- - keep.__CLUSTER__.__DOMAIN__
+ keepproxy.__DOMAIN__:
+ - keep.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepweb_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepweb_configuration.sls
index f95d7e619..f77d17c87 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepweb_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_keepweb_configuration.sls
@@ -6,8 +6,8 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- download.__CLUSTER__.__DOMAIN__:
- - download.__CLUSTER__.__DOMAIN__
- collections.__CLUSTER__.__DOMAIN__:
- - collections.__CLUSTER__.__DOMAIN__
- - '*.collections.__CLUSTER__.__DOMAIN__'
+ download.__DOMAIN__:
+ - download.__DOMAIN__
+ collections.__DOMAIN__:
+ - collections.__DOMAIN__
+ - '*.collections.__DOMAIN__'
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_prometheus_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_prometheus_configuration.sls
index 7b1165d6d..a352bc213 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_prometheus_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_prometheus_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- prometheus.__CLUSTER__.__DOMAIN__:
- - prometheus.__CLUSTER__.__DOMAIN__
+ prometheus.__DOMAIN__:
+ - prometheus.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_webshell_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_webshell_configuration.sls
index 17e6422f4..538719f7f 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_webshell_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_webshell_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- webshell.__CLUSTER__.__DOMAIN__:
- - webshell.__CLUSTER__.__DOMAIN__
+ webshell.__DOMAIN__:
+ - webshell.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_websocket_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_websocket_configuration.sls
index 6515b3bd0..f4d222761 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_websocket_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_websocket_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- websocket.__CLUSTER__.__DOMAIN__:
- - ws.__CLUSTER__.__DOMAIN__
+ websocket.__DOMAIN__:
+ - ws.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench2_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench2_configuration.sls
index 2bcf2b784..0ea0179a2 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench2_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench2_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- workbench2.__CLUSTER__.__DOMAIN__:
- - workbench2.__CLUSTER__.__DOMAIN__
+ workbench2.__DOMAIN__:
+ - workbench2.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench_configuration.sls
index 9ef348719..cfff3ea8f 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt_workbench_configuration.sls
@@ -6,5 +6,5 @@
### LETSENCRYPT
letsencrypt:
domainsets:
- workbench.__CLUSTER__.__DOMAIN__:
- - workbench.__CLUSTER__.__DOMAIN__
+ workbench.__DOMAIN__:
+ - workbench.__DOMAIN__
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_api_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_api_configuration.sls
index 9fbf90dd2..bfe0386e9 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_api_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_api_configuration.sls
@@ -22,7 +22,7 @@ nginx:
- server_name: api
- root: /var/www/arvados-api/current/public
- index: index.html index.htm
- - access_log: /var/log/nginx/api.__CLUSTER__.__DOMAIN__-upstream.access.log combined
- - error_log: /var/log/nginx/api.__CLUSTER__.__DOMAIN__-upstream.error.log
+ - access_log: /var/log/nginx/api.__DOMAIN__-upstream.access.log combined
+ - error_log: /var/log/nginx/api.__DOMAIN__-upstream.error.log
- passenger_enabled: 'on'
- client_max_body_size: 128m
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_collections_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_collections_configuration.sls
index b349ded32..1c10847f7 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_collections_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_collections_configuration.sls
@@ -15,7 +15,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: '~^(.*\.)?collections\.__CLUSTER__\.__DOMAIN__'
+ - server_name: '~^(.*\.)?collections\.__DOMAIN__'
- listen:
- 80
- location /:
@@ -29,7 +29,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: '~^(.*\.)?collections\.__CLUSTER__\.__DOMAIN__'
+ - server_name: '~^(.*\.)?collections\.__DOMAIN__'
- listen:
- __KEEPWEB_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
@@ -52,5 +52,5 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/collections.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/collections.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_controller_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_controller_configuration.sls
index a48810e83..d0fd6a131 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_controller_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_controller_configuration.sls
@@ -28,7 +28,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: __CLUSTER__.__DOMAIN__
+ - server_name: __DOMAIN__
- listen:
- 80 default
- location /.well-known:
@@ -43,7 +43,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: __CLUSTER__.__DOMAIN__
+ - server_name: __DOMAIN__
- listen:
- __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
@@ -69,6 +69,6 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/controller.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/controller.__DOMAIN__.error.log
- client_max_body_size: 128m
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_download_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_download_configuration.sls
index a183475a4..4470a388a 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_download_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_download_configuration.sls
@@ -15,7 +15,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: download.__CLUSTER__.__DOMAIN__
+ - server_name: download.__DOMAIN__
- listen:
- 80
- location /:
@@ -29,7 +29,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: download.__CLUSTER__.__DOMAIN__
+ - server_name: download.__DOMAIN__
- listen:
- __KEEPWEB_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
@@ -52,5 +52,5 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/download.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/download.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_grafana_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_grafana_configuration.sls
index e306dbd0c..9e1d72615 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_grafana_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_grafana_configuration.sls
@@ -24,7 +24,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: grafana.__CLUSTER__.__DOMAIN__
+ - server_name: grafana.__DOMAIN__
- listen:
- 80
- location /.well-known:
@@ -39,7 +39,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: grafana.__CLUSTER__.__DOMAIN__
+ - server_name: grafana.__DOMAIN__
- listen:
- 443 http2 ssl
- index: index.html index.htm
@@ -58,5 +58,5 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/grafana.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/grafana.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/grafana.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/grafana.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_keepproxy_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_keepproxy_configuration.sls
index c8deaebe9..63c318fc2 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_keepproxy_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_keepproxy_configuration.sls
@@ -23,7 +23,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: keep.__CLUSTER__.__DOMAIN__
+ - server_name: keep.__DOMAIN__
- listen:
- 80
- location /:
@@ -36,7 +36,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: keep.__CLUSTER__.__DOMAIN__
+ - server_name: keep.__DOMAIN__
- listen:
- __KEEP_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
@@ -60,5 +60,5 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/keepproxy.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/keepproxy.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_prometheus_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_prometheus_configuration.sls
index d654d6ed0..5e82a9a4b 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_prometheus_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_prometheus_configuration.sls
@@ -24,7 +24,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: prometheus.__CLUSTER__.__DOMAIN__
+ - server_name: prometheus.__DOMAIN__
- listen:
- 80
- location /.well-known:
@@ -39,7 +39,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: prometheus.__CLUSTER__.__DOMAIN__
+ - server_name: prometheus.__DOMAIN__
- listen:
- 443 http2 ssl
- index: index.html index.htm
@@ -60,5 +60,5 @@ nginx:
{%- endif %}
- auth_basic: '"Restricted Area"'
- auth_basic_user_file: htpasswd
- - access_log: /var/log/nginx/prometheus.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/prometheus.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/prometheus.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/prometheus.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_webshell_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_webshell_configuration.sls
index 3a0a23d95..41471ab7a 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_webshell_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_webshell_configuration.sls
@@ -14,7 +14,7 @@ nginx:
### STREAMS
http:
upstream webshell_upstream:
- - server: 'shell.__CLUSTER__.__DOMAIN__:4200 fail_timeout=10s'
+ - server: 'shell.__DOMAIN__:4200 fail_timeout=10s'
### SITES
servers:
@@ -24,7 +24,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: webshell.__CLUSTER__.__DOMAIN__
+ - server_name: webshell.__DOMAIN__
- listen:
- 80
- location /:
@@ -37,11 +37,11 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: webshell.__CLUSTER__.__DOMAIN__
+ - server_name: webshell.__DOMAIN__
- listen:
- __WEBSHELL_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- - location /shell.__CLUSTER__.__DOMAIN__:
+ - location /shell.__DOMAIN__:
- proxy_pass: 'http://webshell_upstream'
- proxy_read_timeout: 90
- proxy_connect_timeout: 90
@@ -76,6 +76,6 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/webshell.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/webshell.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_websocket_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_websocket_configuration.sls
index 36246d751..f80eeb96b 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_websocket_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_websocket_configuration.sls
@@ -23,7 +23,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: ws.__CLUSTER__.__DOMAIN__
+ - server_name: ws.__DOMAIN__
- listen:
- 80
- location /:
@@ -36,7 +36,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: ws.__CLUSTER__.__DOMAIN__
+ - server_name: ws.__DOMAIN__
- listen:
- __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
@@ -61,5 +61,5 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/ws.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/ws.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench2_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench2_configuration.sls
index 47eafeeec..629910eb8 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench2_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench2_configuration.sls
@@ -21,7 +21,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: workbench2.__CLUSTER__.__DOMAIN__
+ - server_name: workbench2.__DOMAIN__
- listen:
- 80
- location /:
@@ -34,7 +34,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: workbench2.__CLUSTER__.__DOMAIN__
+ - server_name: workbench2.__DOMAIN__
- listen:
- __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
@@ -44,12 +44,12 @@ nginx:
- 'if (-f $document_root/maintenance.html)':
- return: 503
- location /config.json:
- - return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"}' ~ "'" }}
+ - return: {{ "200 '" ~ '{"API_HOST":"__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"}' ~ "'" }}
- include: snippets/ssl_hardening_default.conf
- ssl_certificate: __CERT_PEM__
- ssl_certificate_key: __CERT_KEY__
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/workbench2.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/workbench2.__DOMAIN__.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench_configuration.sls
index 82fd24756..013be704c 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench_configuration.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench_configuration.sls
@@ -30,7 +30,7 @@ nginx:
overwrite: true
config:
- server:
- - server_name: workbench.__CLUSTER__.__DOMAIN__
+ - server_name: workbench.__DOMAIN__
- listen:
- 80
- location /:
@@ -43,7 +43,7 @@ nginx:
__CERT_REQUIRES__
config:
- server:
- - server_name: workbench.__CLUSTER__.__DOMAIN__
+ - server_name: workbench.__DOMAIN__
- listen:
- __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
@@ -62,8 +62,8 @@ nginx:
{%- if ssl_key_encrypted_pillar.ssl_key_encrypted.enabled %}
- ssl_password_file: {{ '/run/arvados/' | path_join(ssl_key_encrypted_pillar.ssl_key_encrypted.privkey_password_filename) }}
{%- endif %}
- - access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.error.log
+ - access_log: /var/log/nginx/workbench.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/workbench.__DOMAIN__.error.log
arvados_workbench_upstream:
enabled: true
@@ -76,5 +76,5 @@ nginx:
- index: index.html index.htm
- passenger_enabled: 'on'
# yamllint disable-line rule:line-length
- - access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__-upstream.access.log combined
- - error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__-upstream.error.log
+ - access_log: /var/log/nginx/workbench.__DOMAIN__-upstream.access.log combined
+ - error_log: /var/log/nginx/workbench.__DOMAIN__-upstream.error.log
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/prometheus_server.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/prometheus_server.sls
index 7b4a09f50..bbf997b7b 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/prometheus_server.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/prometheus_server.sls
@@ -36,7 +36,7 @@ prometheus:
bearer_token: __MANAGEMENT_TOKEN__
scheme: https
static_configs:
- - targets: ['ws.__CLUSTER__.__DOMAIN__:443']
+ - targets: ['ws.__DOMAIN__:443']
labels:
instance: ws.__CLUSTER__
cluster: __CLUSTER__
@@ -44,7 +44,7 @@ prometheus:
bearer_token: __MANAGEMENT_TOKEN__
scheme: https
static_configs:
- - targets: ['__CLUSTER__.__DOMAIN__:443']
+ - targets: ['__DOMAIN__:443']
labels:
instance: controller.__CLUSTER__
cluster: __CLUSTER__
@@ -52,7 +52,7 @@ prometheus:
bearer_token: __MANAGEMENT_TOKEN__
scheme: https
static_configs:
- - targets: ['keep.__CLUSTER__.__DOMAIN__:443']
+ - targets: ['keep.__DOMAIN__:443']
labels:
instance: keep-web.__CLUSTER__
cluster: __CLUSTER__
@@ -98,7 +98,7 @@ prometheus:
'workbench',
'shell',
] %}
- - targets: [ "{{ node }}.__CLUSTER__.__DOMAIN__:9100" ]
+ - targets: [ "{{ node }}.__DOMAIN__:9100" ]
labels:
instance: "{{ node }}.__CLUSTER__"
cluster: __CLUSTER__
diff --git a/tools/salt-install/installer.sh b/tools/salt-install/installer.sh
index 5a55e337d..104ce3a60 100755
--- a/tools/salt-install/installer.sh
+++ b/tools/salt-install/installer.sh
@@ -338,7 +338,7 @@ case "$subcmd" in
exit 1
fi
- export ARVADOS_API_HOST="${CLUSTER}.${DOMAIN}:${CONTROLLER_EXT_SSL_PORT}"
+ export ARVADOS_API_HOST="${DOMAIN}:${CONTROLLER_EXT_SSL_PORT}"
export ARVADOS_API_TOKEN="$SYSTEM_ROOT_TOKEN"
arvados-client diagnostics $LOCATION
diff --git a/tools/salt-install/local.params.example.multiple_hosts b/tools/salt-install/local.params.example.multiple_hosts
index fd1919e0c..463ee4c10 100644
--- a/tools/salt-install/local.params.example.multiple_hosts
+++ b/tools/salt-install/local.params.example.multiple_hosts
@@ -8,8 +8,8 @@
# The Arvados cluster ID, needs to be 5 lowercase alphanumeric characters.
CLUSTER="cluster_fixme_or_this_wont_work"
-# The domain name you want to give to your cluster's hosts
-# the end result hostnames will be $SERVICE.$CLUSTER.$DOMAIN
+# The domain name you want to give to your cluster's hosts;
+# the end result hostnames will be $SERVICE.$DOMAIN
DOMAIN="domain_fixme_or_this_wont_work"
# For multi-node installs, the ssh log in for each node
@@ -19,7 +19,7 @@ DEPLOY_USER=admin
INITIAL_USER=admin
# If not specified, the initial user email will be composed as
-# INITIAL_USER at CLUSTER.DOMAIN
+# INITIAL_USER at DOMAIN
INITIAL_USER_EMAIL="admin at cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
INITIAL_USER_PASSWORD="fixmepassword"
@@ -27,7 +27,7 @@ INITIAL_USER_PASSWORD="fixmepassword"
# installer from the outside of the cluster's local network and still reach
# the internal servers for configuration deployment.
# Comment out to disable.
-USE_SSH_JUMPHOST="controller.${CLUSTER}.${DOMAIN}"
+USE_SSH_JUMPHOST="controller.${DOMAIN}"
# YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
BLOB_SIGNING_KEY=fixmeblobsigningkeymushaveatleast32characters
@@ -96,10 +96,10 @@ MONITORING_EMAIL=${INITIAL_USER_EMAIL}
# installer.sh will log in to each of these nodes and then provision
# it for the specified roles.
NODES=(
- [controller.${CLUSTER}.${DOMAIN}]=database,api,controller,websocket,dispatcher,keepbalance
- [workbench.${CLUSTER}.${DOMAIN}]=monitoring,workbench,workbench2,webshell,keepproxy,keepweb
- [keep0.${CLUSTER}.${DOMAIN}]=keepstore
- [shell.${CLUSTER}.${DOMAIN}]=shell
+ [controller.${DOMAIN}]=database,api,controller,websocket,dispatcher,keepbalance
+ [workbench.${DOMAIN}]=monitoring,workbench,workbench2,webshell,keepproxy,keepweb
+ [keep0.${DOMAIN}]=keepstore
+ [shell.${DOMAIN}]=shell
)
# Host SSL port where you want to point your browser to access Arvados
diff --git a/tools/salt-install/provision.sh b/tools/salt-install/provision.sh
index 1c9f1c7df..f90386652 100755
--- a/tools/salt-install/provision.sh
+++ b/tools/salt-install/provision.sh
@@ -287,7 +287,7 @@ else
USE_SINGLE_HOSTNAME="no"
# We set this variable, anyway, so sed lines do not fail and we don't need to add more
# conditionals
- HOSTNAME_EXT="${CLUSTER}.${DOMAIN}"
+ HOSTNAME_EXT="${DOMAIN}"
fi
if [ "${DUMP_CONFIG}" = "yes" ]; then
@@ -659,7 +659,7 @@ if [ -z "${ROLES}" ]; then
CERT_NAME=${HOSTNAME_EXT}
else
# We are in a multiple-hostnames env
- CERT_NAME=${c}.${CLUSTER}.${DOMAIN}
+ CERT_NAME=${c}.${DOMAIN}
fi
# As the pillar differs whether we use LE or custom certs, we need to do a final edition on them
@@ -771,9 +771,9 @@ else
grep -q "letsencrypt" ${P_DIR}/top.sls || echo " - letsencrypt" >> ${P_DIR}/top.sls
for SVC in grafana prometheus; do
grep -q "letsencrypt_${SVC}_configuration" ${P_DIR}/top.sls || echo " - letsencrypt_${SVC}_configuration" >> ${P_DIR}/top.sls
- sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${SVC}.${CLUSTER}.${DOMAIN}*/g;
- s#__CERT_PEM__#/etc/letsencrypt/live/${SVC}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
- s#__CERT_KEY__#/etc/letsencrypt/live/${SVC}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${SVC}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${SVC}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${SVC}.${DOMAIN}/privkey.pem#g" \
${P_DIR}/nginx_${SVC}_configuration.sls
done
if [ "${USE_LETSENCRYPT_ROUTE53}" = "yes" ]; then
@@ -883,15 +883,15 @@ else
# Special case for keepweb
if [ ${R} = "keepweb" ]; then
for kwsub in download collections; do
- sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${kwsub}.${CLUSTER}.${DOMAIN}*/g;
- s#__CERT_PEM__#/etc/letsencrypt/live/${kwsub}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
- s#__CERT_KEY__#/etc/letsencrypt/live/${kwsub}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${kwsub}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${kwsub}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${kwsub}.${DOMAIN}/privkey.pem#g" \
${P_DIR}/nginx_${kwsub}_configuration.sls
done
else
- sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${R}.${CLUSTER}.${DOMAIN}*/g;
- s#__CERT_PEM__#/etc/letsencrypt/live/${R}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
- s#__CERT_KEY__#/etc/letsencrypt/live/${R}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${R}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${R}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${R}.${DOMAIN}/privkey.pem#g" \
${P_DIR}/nginx_${R}_configuration.sls
fi
else
@@ -956,11 +956,11 @@ fi
# Leave a copy of the Arvados CA so the user can copy it where it's required
if [ "${SSL_MODE}" = "self-signed" ]; then
- echo "Copying the Arvados CA certificate '${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.crt' to the installer dir, so you can import it"
+ echo "Copying the Arvados CA certificate '${DOMAIN}-arvados-snakeoil-ca.crt' to the installer dir, so you can import it"
if [ "x${VAGRANT}" = "xyes" ]; then
- cp /etc/ssl/certs/arvados-snakeoil-ca.pem /vagrant/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.pem
+ cp /etc/ssl/certs/arvados-snakeoil-ca.pem /vagrant/${DOMAIN}-arvados-snakeoil-ca.pem
else
- cp /etc/ssl/certs/arvados-snakeoil-ca.pem ${SCRIPT_DIR}/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.crt
+ cp /etc/ssl/certs/arvados-snakeoil-ca.pem ${SCRIPT_DIR}/${DOMAIN}-arvados-snakeoil-ca.crt
fi
fi
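The provision.sh hunks in this commit rewrite the `sed` substitutions that render `__DOMAIN__`-style placeholders into concrete hostnames. A minimal sketch of that substitution style, with hypothetical file paths and values (not the installer's actual pillar files):

```shell
# Hypothetical sketch of provision.sh-style placeholder rendering.
# DOMAIN and the port are example values, not real cluster settings.
DOMAIN="xarv1.example.com"
CONTROLLER_EXT_SSL_PORT=443

tmpl=$(mktemp)
printf '%s\n' \
  '- server_name: controller.__DOMAIN__' \
  '- listen:' \
  '  - __CONTROLLER_EXT_SSL_PORT__ http2 ssl' > "$tmpl"

# Same sed idiom as the installer: '#' delimiters avoid escaping
# slashes when the replacement contains paths.
sed -i "s#__DOMAIN__#${DOMAIN}#g;
        s#__CONTROLLER_EXT_SSL_PORT__#${CONTROLLER_EXT_SSL_PORT}#g" "$tmpl"

grep server_name "$tmpl"
# -> - server_name: controller.xarv1.example.com
```

After this change the rendered name is `controller.${DOMAIN}` rather than `controller.${CLUSTER}.${DOMAIN}`, which is the whole point of the patch above.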
commit bbbca31c61873d11a701b23e8d202f62cbc6d918
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date: Wed May 17 14:56:16 2023 -0300
20482: Overrides arvados formula shell node templates to not use cluster prefix
SaltStack supports overriding a formula's templates via a pattern called TOFS
(Template Override and Files Switch). Because this formula may be in use by
others, we override its templates here instead of changing them in the formula itself.
This change allows the deployment to use an arbitrary cluster domain, instead
of the traditional "cluster_prefix.domain" pattern.
Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>
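The provision.sh hunk in this commit copies each top-level directory under the config example's `tofs/` tree into the Salt file root, so the overridden templates shadow the formula's defaults. A minimal sketch of that copy step, using temporary directories in place of the installer's real `${CONFIG_DIR}/tofs` and states dir:

```shell
# Hypothetical sketch of the TOFS copy step: temp dirs stand in for
# SOURCE_TOFS_DIR (${CONFIG_DIR}/tofs) and S_DIR (the Salt file root).
SOURCE_TOFS_DIR=$(mktemp -d)
S_DIR=$(mktemp -d)

mkdir -p "${SOURCE_TOFS_DIR}/arvados/shell/config/files/default"
touch "${SOURCE_TOFS_DIR}/arvados/shell/config/files/default/shell-shellinabox.tmpl.jinja"

# Same find/cp invocation as provision.sh: only top-level directories
# under the tofs dir are copied, recursively, into the Salt file root.
if [ -d "${SOURCE_TOFS_DIR}" ]; then
  find "${SOURCE_TOFS_DIR}" -mindepth 1 -maxdepth 1 -type d -exec cp -r "{}" "${S_DIR}" \;
fi

ls "${S_DIR}/arvados/shell/config/files/default"
# -> shell-shellinabox.tmpl.jinja
```

With the override in place, the formula's template lookup finds the copied `.tmpl.jinja` files first, so the rendered configs use the bare `{{ arvados.cluster.domain }}` instead of a `cluster_prefix.domain` hostname.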
diff --git a/tools/salt-install/config_examples/multi_host/aws/tofs/arvados/shell/config/files/default/shell-pam-shellinabox.tmpl.jinja b/tools/salt-install/config_examples/multi_host/aws/tofs/arvados/shell/config/files/default/shell-pam-shellinabox.tmpl.jinja
new file mode 100644
index 000000000..82e5109d0
--- /dev/null
+++ b/tools/salt-install/config_examples/multi_host/aws/tofs/arvados/shell/config/files/default/shell-pam-shellinabox.tmpl.jinja
@@ -0,0 +1,29 @@
+########################################################################
+# File managed by Salt at <{{ source }}>.
+# Your changes will be overwritten.
+########################################################################
+auth optional pam_faildelay.so delay=3000000
+auth [success=ok new_authtok_reqd=ok ignore=ignore user_unknown=bad default=die] pam_securetty.so
+auth requisite pam_nologin.so
+session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
+session required pam_env.so readenv=1
+session required pam_env.so readenv=1 envfile=/etc/default/locale
+
+# yamllint disable rule:line-length
+auth [success=1 default=ignore] /usr/lib/pam_arvados.so {{ arvados.cluster.domain }} shell.{{ arvados.cluster.domain }}
+# yamllint enable rule:line-length
+auth requisite pam_deny.so
+auth required pam_permit.so
+
+auth optional pam_group.so
+session required pam_limits.so
+session optional pam_lastlog.so
+session optional pam_motd.so motd=/run/motd.dynamic
+session optional pam_motd.so
+session optional pam_mail.so standard
+
+ at include common-account
+ at include common-session
+ at include common-password
+
+session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open
diff --git a/tools/salt-install/config_examples/multi_host/aws/tofs/arvados/shell/config/files/default/shell-shellinabox.tmpl.jinja b/tools/salt-install/config_examples/multi_host/aws/tofs/arvados/shell/config/files/default/shell-shellinabox.tmpl.jinja
new file mode 100644
index 000000000..cf5099679
--- /dev/null
+++ b/tools/salt-install/config_examples/multi_host/aws/tofs/arvados/shell/config/files/default/shell-shellinabox.tmpl.jinja
@@ -0,0 +1,10 @@
+########################################################################
+# File managed by Salt at <{{ source }}>.
+# Your changes will be overwritten.
+########################################################################
+# Should shellinaboxd start automatically
+SHELLINABOX_DAEMON_START=1
+# TCP port that shellinaboxd's webserver listens on
+SHELLINABOX_PORT={{ arvados.shell.shellinabox.service.port }}
+# SSL is disabled because it is terminated in Nginx. Adjust as needed.
+SHELLINABOX_ARGS="--disable-ssl --no-beep --service=/shell.{{ arvados.cluster.domain }}:AUTH:HOME:SHELL"
diff --git a/tools/salt-install/provision.sh b/tools/salt-install/provision.sh
index 4f044c42e..1c9f1c7df 100755
--- a/tools/salt-install/provision.sh
+++ b/tools/salt-install/provision.sh
@@ -396,10 +396,12 @@ fi
if [ "x${VAGRANT}" = "xyes" ]; then
EXTRA_STATES_DIR="/home/vagrant/${CONFIG_DIR}/states"
SOURCE_PILLARS_DIR="/home/vagrant/${CONFIG_DIR}/pillars"
+ SOURCE_TOFS_DIR="/home/vagrant/${CONFIG_DIR}/tofs"
SOURCE_TESTS_DIR="/home/vagrant/${TESTS_DIR}"
else
EXTRA_STATES_DIR="${SCRIPT_DIR}/${CONFIG_DIR}/states"
SOURCE_PILLARS_DIR="${SCRIPT_DIR}/${CONFIG_DIR}/pillars"
+ SOURCE_TOFS_DIR="${SCRIPT_DIR}/${CONFIG_DIR}/tofs"
SOURCE_TESTS_DIR="${SCRIPT_DIR}/${TESTS_DIR}"
fi
@@ -545,6 +547,12 @@ fi
# As we need to separate both states and pillars in case we want specific
# roles, we iterate on both at the same time
+# Formula template overrides (TOFS)
+# See: https://template-formula.readthedocs.io/en/latest/TOFS_pattern.html#template-override
+if [ -d ${SOURCE_TOFS_DIR} ]; then
+ find ${SOURCE_TOFS_DIR} -mindepth 1 -maxdepth 1 -type d -exec cp -r "{}" ${S_DIR} \;
+fi
+
# States
cat > ${S_DIR}/top.sls << EOFTSLS
base:
commit 55c14ffed1dbed08dd61fd14f8f9d4b3b21c859e
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date: Wed May 17 15:11:00 2023 -0300
20482: Ignore salt formula's config file templates for licensing purposes.
Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>
diff --git a/.licenseignore b/.licenseignore
index 4456725dc..85e38ad81 100644
--- a/.licenseignore
+++ b/.licenseignore
@@ -95,3 +95,4 @@ CITATION.cff
SECURITY.md
*/testdata/fakestat/*
lib/controller/localdb/testdata/*.pub
+tools/salt-install/config_examples/**/*.jinja
-----------------------------------------------------------------------
hooks/post-receive
--