[ARVADOS] updated: 2.1.0-468-g332a26ebf

Git user git at public.arvados.org
Mon Mar 22 16:00:02 UTC 2021


Summary of changes:
 doc/install/salt-single-host.html.textile.liquid   | 111 +++++++++-----
 doc/install/salt.html.textile.liquid               |  27 +++-
 tools/salt-install/Vagrantfile                     | 113 +++++++-------
 .../multiple_hostnames/pillars/postgresql.sls      |  12 +-
 .../single_hostname/pillars/arvados.sls            |   1 +
 .../single_hostname/pillars/postgresql.sls         |  12 +-
 .../single_hostname/states/host_entries.sls        |   1 +
 ...l.params.example.single_host_multiple_hostnames |  25 +++-
 ...ocal.params.example.single_host_single_hostname |  28 +++-
 tools/salt-install/provision.sh                    | 165 +++++++++++++--------
 tools/salt-install/tests/run-test.sh               |   2 +-
 11 files changed, 308 insertions(+), 189 deletions(-)

  discards  1b9d2df0d5ddd92635385890d8ca60daab111170 (commit)
       via  332a26ebf92320cf4c3c9a02cf3d82870dc742bf (commit)
       via  1a0cdc10f409fe410594f62a252c1aa5f264f345 (commit)

This update added new revisions after undoing existing revisions.  That is
to say, the old revision is not a strict subset of the new revision.  This
situation occurs when you --force push a change and generate a repository
containing something like this:

 * -- * -- B -- O -- O -- O (1b9d2df0d5ddd92635385890d8ca60daab111170)
            \
             N -- N -- N (332a26ebf92320cf4c3c9a02cf3d82870dc742bf)

When this happens we assume that you've already had alert emails for all
of the O revisions, and so we here report only the revisions in the N
branch from the common base, B.

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.


commit 332a26ebf92320cf4c3c9a02cf3d82870dc742bf
Author: Javier Bértoli <jbertoli at curii.com>
Date:   Mon Mar 22 12:56:44 2021 -0300

    fix(provision): improve single host installation and documentation
    
    Also improved single host installation options (single, multiple hostnames)
    
    refs #17246
    Arvados-DCO-1.1-Signed-off-by: Javier Bértoli <jbertoli at curii.com>

diff --git a/doc/install/salt-single-host.html.textile.liquid b/doc/install/salt-single-host.html.textile.liquid
index 48b26e83a..f8e1310e7 100644
--- a/doc/install/salt-single-host.html.textile.liquid
+++ b/doc/install/salt-single-host.html.textile.liquid
@@ -9,67 +9,86 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-# "Install Saltstack":#saltstack
 # "Single host install using the provision.sh script":#single_host
-# "Final steps":#final_steps
-## "DNS configuration":#dns_configuration
-## "Install root certificate":#ca_root_certificate
+# "Choose the desired configuration":#choose_configuration
+## "Single host / single hostname":#single_host_single_hostnames
+## "Single host / multiple hostnames (Alternative configuration)":#single_host_multiple_hostnames
+## "Further customization of the installation (modifying the salt pillars and states)":#further_customization
+# "Run the provision.sh script":#run_provision_script
+# "Final configuration steps":#final_steps
+## "Install the CA root certificate (required in both alternatives)":#ca_root_certificate
+## "DNS configuration (single host / multiple hostnames)":#single_host_multiple_hostnames_dns_configuration
 # "Initial user and login":#initial_user
 # "Test the installed cluster running a simple workflow":#test_install
 
-h2(#saltstack). Install Saltstack
+h2(#single_host). Single host install using the provision.sh script
+
+<b>NOTE: The single host installation is not recommended for production use.</b>
+
+This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+
+This procedure will install all the main Arvados components to get you up and running on a single host. The whole installation takes between 15 and 60 minutes, depending on the host's resources and network bandwidth. As a reference, on a virtual machine with 1 core and 1 GB RAM, the initial install takes ~25 minutes.
+
+We suggest you use the @provision.sh@ script to deploy Arvados, which is implemented with the @arvados-formula@ in a Saltstack master-less setup. After setting a few variables in a config file (next step), you'll be ready to run it and get Arvados deployed.
 
-If you already have a Saltstack environment you can skip this section.
+h2(#choose_configuration). Choose the desired configuration
 
-The simplest way to get Salt up and running on a node is to use the bootstrap script they provide:
+Throughout this documentation we will use the cluster name <i>arva2</i> and the domain <i>arv.local</i>. If you don't change these as required in the next steps, the installation won't proceed.
 
+Arvados' single host installation can be done in two ways:
+
+* Using a single hostname, assigning <i>a different port (other than 443) for each user-facing service</i>: this choice is easier to set up, but users will need to know the port of each service they want to connect to.
+* Using multiple hostnames on the same IP: this setup involves a few extra steps, but each service gets a meaningful hostname, making them easier to access later.
+
+Once you decide which of these choices you prefer, copy one of the two example configuration files and its matching directory, and edit them to suit your needs.
+
+h3(#single_host_single_hostnames). Single host / single hostname
 <notextile>
-<pre><code>curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
-sudo sh /tmp/bootstrap_salt.sh -XUdfP -x python3
+<pre><code>cp local.params.example.single_host_single_hostname local.params
+cp -r config_examples/single_host/single_hostname local_config_dir
 </code></pre>
 </notextile>
 
-For more information check "Saltstack's documentation":https://docs.saltstack.com/en/latest/topics/installation/index.html
+Edit the variables in the <i>local.params</i> file. Pay attention to the <b>*_PORT, *_TOKEN</b> and <b>*KEY</b> variables.
 
-h2(#single_host). Single host install using the provision.sh script
+h3(#single_host_multiple_hostnames). Single host / multiple hostnames (Alternative configuration)
+<notextile>
+<pre><code>cp local.params.example.single_host_multiple_hostnames local.params
+cp -r config_examples/single_host/multiple_hostnames local_config_dir
+</code></pre>
+</notextile>
 
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+Edit the variables in the <i>local.params</i> file.
 
-Use the @provision.sh@ script to deploy Arvados, which is implemented with the @arvados-formula@ in a Saltstack master-less setup:
+## "Further customization of the installation (modifying the salt pillars and states)":#further_customization
 
-* edit the variables at the very beginning of the file,
-* run the script as root
-* wait for it to finish
+If you want or need further customization, you can edit the Saltstack pillar and state files. Pay particular attention to <i>pillars/arvados.sls</i>. Any extra <i>state</i> file you add under <i>local_config_dir/states</i> will be added to the salt run and applied to the host.
 
-This will install all the main Arvados components to get you up and running. The whole installation procedure takes somewhere between 15 to 60 minutes, depending on the host and your network bandwidth. On a virtual machine with 1 core and 1 GB RAM, it takes ~25 minutes to do the initial install.
+h2(#run_provision_script). Run the provision.sh script
 
-If everything goes OK, you'll get some final lines stating something like:
+When you have finished customizing the configuration, copy the files to the host (if needed) and run the @provision.sh@ script:
 
 <notextile>
-<pre><code>arvados: Succeeded: 109 (changed=9)
-arvados: Failed:      0
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh
 </code></pre>
 </notextile>
 
-h2(#final_steps). Final configuration steps
-
-h3(#dns_configuration). DNS configuration
-
-After the setup is done, you need to set up your DNS to be able to access the cluster.
+and wait for it to finish.
 
-The simplest way to do this is to edit your @/etc/hosts@ file (as root):
+If everything goes OK, you'll get some final lines stating something like:
 
 <notextile>
-<pre><code>export CLUSTER="arva2"
-export DOMAIN="arv.local"
-export HOST_IP="127.0.0.2"    # This is valid either if installing in your computer directly
-                              # or in a Vagrant VM. If you're installing it on a remote host
-                              # just change the IP to match that of the host.
-echo "${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${CLUSTER}.${DOMAIN} api.${CLUSTER}.${DOMAIN} keep.${CLUSTER}.${DOMAIN} keep0.${CLUSTER}.${DOMAIN} collections.${CLUSTER}.${DOMAIN} download.${CLUSTER}.${DOMAIN} ws.${CLUSTER}.${DOMAIN} workbench.${CLUSTER}.${DOMAIN} workbench2.${CLUSTER}.${DOMAIN}" >> /etc/hosts
+<pre><code>arvados: Succeeded: 109 (changed=9)
+arvados: Failed:      0
 </code></pre>
 </notextile>
 
-h3(#ca_root_certificate). Install root certificate
+h2(#final_steps). Final configuration steps
+
+Once the deployment has finished successfully, you'll need to perform a few extra steps on your local browser/host to access the cluster.
+
+h3(#ca_root_certificate). Install the CA root certificate (required in both alternatives)
 
 Arvados uses SSL to encrypt communications. Its UI uses AJAX which will silently fail if the certificate is not valid or signed by an unknown Certification Authority.
 
@@ -102,11 +121,25 @@ To access your Arvados instance using command line clients (such as arv-get and
 </code></pre>
 </notextile>
 
-h2(#initial_user). Initial user and login
+h3(#single_host_multiple_hostnames_dns_configuration). DNS configuration (single host / multiple hostnames)
+
+When using multiple hostnames, after the setup is done, you need to set up your DNS to be able to access the cluster.
+
+If you don't have access to the domain's DNS to add the required entries, the simplest way to do it is to edit your @/etc/hosts@ file (as root):
+
+<notextile>
+<pre><code>export CLUSTER="arva2"
+export DOMAIN="arv.local"
+export HOST_IP="127.0.0.2"    # This is valid either if installing in your computer directly
+                              # or in a Vagrant VM. If you're installing it on a remote host
+                              # just change the IP to match that of the host.
+echo "${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${CLUSTER}.${DOMAIN} api.${CLUSTER}.${DOMAIN} keep.${CLUSTER}.${DOMAIN} keep0.${CLUSTER}.${DOMAIN} collections.${CLUSTER}.${DOMAIN} download.${CLUSTER}.${DOMAIN} ws.${CLUSTER}.${DOMAIN} workbench.${CLUSTER}.${DOMAIN} workbench2.${CLUSTER}.${DOMAIN}" >> /etc/hosts
+</code></pre>
+</notextile>
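The long echo line above is easy to get wrong when retyping it; as a sketch, the same @/etc/hosts@ entry can be built with a loop and inspected before appending (the values @arva2@/@arv.local@/@127.0.0.2@ are the example defaults from this page):

```shell
# Build the /etc/hosts line for all user-facing hostnames, then review it
# before appending with >> /etc/hosts as root.
CLUSTER="arva2"; DOMAIN="arv.local"; HOST_IP="127.0.0.2"
entry="${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${CLUSTER}.${DOMAIN}"
for h in api keep keep0 collections download ws workbench workbench2; do
  entry="${entry} ${h}.${CLUSTER}.${DOMAIN}"
done
echo "${entry}"    # identical to the echo line above
```

This produces the same single line as the command above; the loop just makes it harder to drop one of the hostnames.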
 
-At this point you should be able to log into the Arvados cluster.
+h2(#initial_user). Initial user and login 
 
-If you changed nothing in the @provision.sh@ script, the initial URL will be:
+At this point you should be able to log into the Arvados cluster. The initial URL will be:
 
 * https://workbench.arva2.arv.local
 
@@ -116,7 +149,7 @@ or, in general, the url format will be:
 
 By default, the provision script creates an initial user for testing purposes. This user is configured as administrator of the newly created cluster.
 
-Assuming you didn't change these values in the @provision.sh@ script, the initial credentials are:
+Assuming you didn't change these values in the @local.params@ file, the initial credentials are:
 
 * User: 'admin'
 * Password: 'password'
@@ -124,7 +157,7 @@ Assuming you didn't change these values in the @provision.sh@ script, the initia
 
 h2(#test_install). Test the installed cluster running a simple workflow
 
-The @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests at . If you want to run it, just change to that directory and run:
+The @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory on the node. To run it, ssh into the node, change to that directory and run:
 
 <notextile>
 <pre><code>cd /tmp/cluster_tests
@@ -132,7 +165,7 @@ The @provision.sh@ script saves a simple example test workflow in the @/tmp/clus
 </code></pre>
 </notextile>
 
-It will create a test user, upload a small workflow and run it. If everything goes OK, the output should similar to this (some output was shortened for clarity):
+It will create a test user (by default, the same one as the admin user), upload a small workflow and run it. If everything goes OK, the output should be similar to this (some output was shortened for clarity):
 
 <notextile>
 <pre><code>Creating Arvados Standard Docker Images project
diff --git a/doc/install/salt.html.textile.liquid b/doc/install/salt.html.textile.liquid
index 2b7aa6602..a9ee08fb8 100644
--- a/doc/install/salt.html.textile.liquid
+++ b/doc/install/salt.html.textile.liquid
@@ -10,20 +10,35 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
 # "Introduction":#introduction
-# "Choose an installation method":#installmethod
+# "Install Saltstack":#saltstack
+# "Choose an Arvados installation configuration":#installconfiguration
 
 h2(#introduction). Introduction
 
-To ease the installation of the various Arvados components, we have developed a "Saltstack":https://www.saltstack.com/ 's "arvados-formula":https://github.com/arvados/arvados-formula which can help you get an Arvados cluster up and running.
+To ease the installation of the various Arvados components, we have developed an "arvados-formula":https://github.com/arvados/arvados-formula.git for "Saltstack":https://www.saltstack.com/ which can help you get an Arvados cluster up and running.
 
 Saltstack is a Python-based, open-source software for event-driven IT automation, remote task execution, and configuration management. It can be used in a master/minion setup or master-less.
 
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+This is a package-based installation method. The Salt scripts to install and configure Arvados using this formula are available at the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
 
-h2(#installmethod). Choose an installation method
+h2(#saltstack). Install Saltstack
 
-The salt formulas can be used in different ways. Choose one of these three options to install Arvados:
+If you already have a Saltstack environment or you plan to use the @provision.sh@ script we provide, you can skip this section.
+
+The simplest way to get Salt up and running on a node is to use the bootstrap script they provide:
+
+<notextile>
+<pre><code>curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
+sudo sh /tmp/bootstrap_salt.sh -XUdfP -x python3
+</code></pre>
+</notextile>
+
+For more information check "Saltstack's documentation":https://docs.saltstack.com/en/latest/topics/installation/index.html
+
+h2(#installconfiguration). Choose an Arvados installation configuration
+
+The salt formula can be used in a few different ways. Choose one of these three options to install Arvados:
 
 * "Arvados on a single host":salt-single-host.html
-* "Use Vagrant to install Arvados in a virtual machine":salt-vagrant.html
 * "Arvados across multiple hosts":salt-multi-host.html
+* "Use Vagrant to install Arvados in a virtual machine":salt-vagrant.html
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/postgresql.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/postgresql.sls
index 56b0a42e8..71e712cad 100644
--- a/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/postgresql.sls
+++ b/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/postgresql.sls
@@ -15,11 +15,11 @@ postgres:
     - ['local', 'all', 'all', 'peer']
     - ['host', 'all', 'all', '127.0.0.1/32', 'md5']
     - ['host', 'all', 'all', '::1/128', 'md5']
-    - ['host', 'arvados', 'arvados', '127.0.0.1/32']
+    - ['host', '__CLUSTER___arvados', '__CLUSTER___arvados', '127.0.0.1/32']
   users:
-    arvados:
+    __CLUSTER___arvados:
       ensure: present
-      password: changeme_arvados
+      password: __DATABASE_PASSWORD__
 
   # tablespaces:
   #   arvados_tablespace:
@@ -27,15 +27,15 @@ postgres:
   #     owner: arvados
 
   databases:
-    arvados:
-      owner: arvados
+    __CLUSTER___arvados:
+      owner: __CLUSTER___arvados
       template: template0
       lc_ctype: en_US.utf8
       lc_collate: en_US.utf8
       # tablespace: arvados_tablespace
       schemas:
         public:
-          owner: arvados
+          owner: __CLUSTER___arvados
       extensions:
         pg_trgm:
           if_not_exists: true
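The @__CLUSTER___arvados@ tokens above are not literal names: they contain the @__CLUSTER__@ placeholder followed by @_arvados@, and provision.sh expands them with sed before applying the pillar. A minimal sketch, assuming the example cluster name @arva2@:

```shell
# How the placeholder in the pillar expands (CLUSTER="arva2" is the
# documentation's example value).
CLUSTER="arva2"
echo "__CLUSTER___arvados" | sed "s#__CLUSTER__#${CLUSTER}#g"
# prints: arva2_arvados
```

So the database and role end up named @arva2_arvados@, keeping them unique per cluster instead of the fixed @arvados@ name they replaced.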
diff --git a/tools/salt-install/config_examples/single_host/single_hostname/pillars/postgresql.sls b/tools/salt-install/config_examples/single_host/single_hostname/pillars/postgresql.sls
index 56b0a42e8..71e712cad 100644
--- a/tools/salt-install/config_examples/single_host/single_hostname/pillars/postgresql.sls
+++ b/tools/salt-install/config_examples/single_host/single_hostname/pillars/postgresql.sls
@@ -15,11 +15,11 @@ postgres:
     - ['local', 'all', 'all', 'peer']
     - ['host', 'all', 'all', '127.0.0.1/32', 'md5']
     - ['host', 'all', 'all', '::1/128', 'md5']
-    - ['host', 'arvados', 'arvados', '127.0.0.1/32']
+    - ['host', '__CLUSTER___arvados', '__CLUSTER___arvados', '127.0.0.1/32']
   users:
-    arvados:
+    __CLUSTER___arvados:
       ensure: present
-      password: changeme_arvados
+      password: __DATABASE_PASSWORD__
 
   # tablespaces:
   #   arvados_tablespace:
@@ -27,15 +27,15 @@ postgres:
   #     owner: arvados
 
   databases:
-    arvados:
-      owner: arvados
+    __CLUSTER___arvados:
+      owner: __CLUSTER___arvados
       template: template0
       lc_ctype: en_US.utf8
       lc_collate: en_US.utf8
       # tablespace: arvados_tablespace
       schemas:
         public:
-          owner: arvados
+          owner: __CLUSTER___arvados
       extensions:
         pg_trgm:
           if_not_exists: true
diff --git a/tools/salt-install/local.params.example.single_host_multiple_hostnames b/tools/salt-install/local.params.example.single_host_multiple_hostnames
index 78c26af0e..8d5438e8f 100644
--- a/tools/salt-install/local.params.example.single_host_multiple_hostnames
+++ b/tools/salt-install/local.params.example.single_host_multiple_hostnames
@@ -45,19 +45,26 @@ MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
 SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
 ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
 WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=please_set_this_to_some_secure_value
+
+# SSL CERTIFICATES
+# Arvados REQUIRES valid SSL to work correctly. Otherwise, some components will fail
+# to communicate and can silently drop traffic. You can try to use the Letsencrypt
+# salt formula (https://github.com/saltstack-formulas/letsencrypt-formula) to try to
+# automatically obtain and install SSL certificates for your instances or set this
+# variable to "no", provide and upload your own certificates to the instances and
+# modify the 'nginx_*' salt pillars accordingly
+USE_LETSENCRYPT="no"
 
 # The directory to check for the config files (pillars, states) you want to use.
 # There are a few examples under 'config_examples'. If you don't change this
 # variable, the single_host, multiple_hostnames config will be used
 # CONFIG_DIR="config_examples/single_host/single_hostname"
-CONFIG_DIR="config_examples/single_host/multiple_hostnames"
+CONFIG_DIR="local_config_dir"
 # Extra states to apply. If you use your own subdir, change this value accordingly
 # This is the value for the single_host/multiple_hostnames example
 EXTRA_STATES_DIR="${F_DIR}/arvados-formula/test/salt/states/examples/single_host"
 
-# When using the single_host/single_hostname example, change to this one
-# EXTRA_STATES_DIR="${CONFIG_DIR}/states"
-
 # Which release of Arvados repo you want to use
 RELEASE="production"
 # Which version of Arvados you want to install. Defaults to 'latest'
@@ -74,7 +81,8 @@ BRANCH="master"
 
 # Formulas versions
 ARVADOS_TAG="v1.1.4"
-POSTGRES_TAG="v0.41.3"
-NGINX_TAG="v2.4.0"
+POSTGRES_TAG="v0.41.6"
+NGINX_TAG="v2.5.0"
 DOCKER_TAG="v1.0.0"
 LOCALE_TAG="v0.3.4"
+LETSENCRYPT_TAG="v2.1.0"
diff --git a/tools/salt-install/local.params.example.single_host_single_hostname b/tools/salt-install/local.params.example.single_host_single_hostname
index 110d79429..264f2a72e 100644
--- a/tools/salt-install/local.params.example.single_host_single_hostname
+++ b/tools/salt-install/local.params.example.single_host_single_hostname
@@ -45,12 +45,22 @@ MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
 SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
 ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
 WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=please_set_this_to_some_secure_value
+
+# SSL CERTIFICATES
+# Arvados REQUIRES valid SSL to work correctly. Otherwise, some components will fail
+# to communicate and can silently drop traffic. You can try to use the Letsencrypt
+# salt formula (https://github.com/saltstack-formulas/letsencrypt-formula) to try to
+# automatically obtain and install SSL certificates for your instances or set this
+# variable to "no", provide and upload your own certificates to the instances and
+# modify the 'nginx_*' salt pillars accordingly
+USE_LETSENCRYPT="no"
 
 # The directory to check for the config files (pillars, states) you want to use.
 # There are a few examples under 'config_examples'. If you don't change this
 # variable, the single_host, multiple_hostnames config will be used
 # CONFIG_DIR="config_examples/single_host/single_hostname"
-CONFIG_DIR="config_examples/single_host/single_hostname"
+CONFIG_DIR="local_config_dir"
 # Extra states to apply. If you use your own subdir, change this value accordingly
 # This is the value for the single_host/multiple_hostnames example
 # EXTRA_STATES_DIR="${F_DIR}/arvados-formula/test/salt/states/examples/single_host"
@@ -74,7 +84,8 @@ VERSION="latest"
 
 # Formulas versions
 ARVADOS_TAG="v1.1.4"
-POSTGRES_TAG="v0.41.3"
-NGINX_TAG="v2.4.0"
+POSTGRES_TAG="v0.41.6"
+NGINX_TAG="v2.5.0"
 DOCKER_TAG="v1.0.0"
 LOCALE_TAG="v0.3.4"
+LETSENCRYPT_TAG="v2.1.0"
diff --git a/tools/salt-install/provision.sh b/tools/salt-install/provision.sh
index 5174f2398..19ec6eccb 100755
--- a/tools/salt-install/provision.sh
+++ b/tools/salt-install/provision.sh
@@ -129,10 +129,11 @@ WORKBENCH2_EXT_SSL_PORT=3001
 RELEASE="production"
 VERSION="latest"
 ARVADOS_TAG="v1.1.4"
-POSTGRES_TAG="v0.41.3"
-NGINX_TAG="v2.4.0"
+POSTGRES_TAG="v0.41.6"
+NGINX_TAG="v2.5.0"
 DOCKER_TAG="v1.0.0"
 LOCALE_TAG="v0.3.4"
+LETSENCRYPT_TAG="v2.1.0"
 
 # Salt's dir
 ## states
@@ -192,11 +193,13 @@ mkdir -p ${S_DIR} ${F_DIR} ${P_DIR}
 
 # Get the formula and dependencies
 cd ${F_DIR} || exit 1
-git clone --branch "${ARVADOS_TAG}" https://github.com/arvados/arvados-formula.git
-git clone --branch "${DOCKER_TAG}" https://github.com/saltstack-formulas/docker-formula.git
-git clone --branch "${LOCALE_TAG}" https://github.com/saltstack-formulas/locale-formula.git
-git clone --branch "${NGINX_TAG}" https://github.com/saltstack-formulas/nginx-formula.git
-git clone --branch "${POSTGRES_TAG}" https://github.com/saltstack-formulas/postgres-formula.git
+git clone --branch "${ARVADOS_TAG}"     https://github.com/arvados/arvados-formula.git
+git clone --branch "${DOCKER_TAG}"      https://github.com/saltstack-formulas/docker-formula.git
+git clone --branch "${LOCALE_TAG}"      https://github.com/saltstack-formulas/locale-formula.git
+# git clone --branch "${NGINX_TAG}"       https://github.com/saltstack-formulas/nginx-formula.git
+git clone --branch "${NGINX_TAG}"       https://github.com/netmanagers/nginx-formula.git
+git clone --branch "${POSTGRES_TAG}"    https://github.com/saltstack-formulas/postgres-formula.git
+git clone --branch "${LETSENCRYPT_TAG}" https://github.com/saltstack-formulas/letsencrypt-formula.git
 
 # If we want to try a specific branch of the formula
 if [ "x${BRANCH}" != "x" ]; then
@@ -218,41 +221,54 @@ SOURCE_STATES_DIR="${EXTRA_STATES_DIR}"
 # Replace variables (cluster,  domain, etc) in the pillars, states and tests
 # to ease deployment for newcomers
 for f in "${SOURCE_PILLARS_DIR}"/*; do
-  sed "s/__ANONYMOUS_USER_TOKEN__/${ANONYMOUS_USER_TOKEN}/g;
-       s/__BLOB_SIGNING_KEY__/${BLOB_SIGNING_KEY}/g;
-       s/__CONTROLLER_EXT_SSL_PORT__/${CONTROLLER_EXT_SSL_PORT}/g;
-       s/__CLUSTER__/${CLUSTER}/g;
-       s/__DOMAIN__/${DOMAIN}/g;
-       s/__HOSTNAME_EXT__/${HOSTNAME_EXT}/g;
-       s/__HOSTNAME_INT__/${HOSTNAME_INT}/g;
-       s/__INITIAL_USER_EMAIL__/${INITIAL_USER_EMAIL}/g;
-       s/__INITIAL_USER_PASSWORD__/${INITIAL_USER_PASSWORD}/g;
-       s/__INITIAL_USER__/${INITIAL_USER}/g;
-       s/__KEEPWEB_EXT_SSL_PORT__/${KEEPWEB_EXT_SSL_PORT}/g;
-       s/__KEEP_EXT_SSL_PORT__/${KEEP_EXT_SSL_PORT}/g;
-       s/__MANAGEMENT_TOKEN__/${MANAGEMENT_TOKEN}/g;
-       s/__RELEASE__/${RELEASE}/g;
-       s/__SYSTEM_ROOT_TOKEN__/${SYSTEM_ROOT_TOKEN}/g;
-       s/__VERSION__/${VERSION}/g;
-       s/__WEBSHELL_EXT_SSL_PORT__/${WEBSHELL_EXT_SSL_PORT}/g;
-       s/__WEBSOCKET_EXT_SSL_PORT__/${WEBSOCKET_EXT_SSL_PORT}/g;
-       s/__WORKBENCH1_EXT_SSL_PORT__/${WORKBENCH1_EXT_SSL_PORT}/g;
-       s/__WORKBENCH2_EXT_SSL_PORT__/${WORKBENCH2_EXT_SSL_PORT}/g;
-       s/__WORKBENCH_SECRET_KEY__/${WORKBENCH_SECRET_KEY}/g" \
+  sed "s#__ANONYMOUS_USER_TOKEN__#${ANONYMOUS_USER_TOKEN}#g;
+       s#__BLOB_SIGNING_KEY__#${BLOB_SIGNING_KEY}#g;
+       s#__CONTROLLER_EXT_SSL_PORT__#${CONTROLLER_EXT_SSL_PORT}#g;
+       s#__CLUSTER__#${CLUSTER}#g;
+       s#__DOMAIN__#${DOMAIN}#g;
+       s#__HOSTNAME_EXT__#${HOSTNAME_EXT}#g;
+       s#__HOSTNAME_INT__#${HOSTNAME_INT}#g;
+       s#__INITIAL_USER_EMAIL__#${INITIAL_USER_EMAIL}#g;
+       s#__INITIAL_USER_PASSWORD__#${INITIAL_USER_PASSWORD}#g;
+       s#__INITIAL_USER__#${INITIAL_USER}#g;
+       s#__DATABASE_PASSWORD__#${DATABASE_PASSWORD}#g;
+       s#__KEEPWEB_EXT_SSL_PORT__#${KEEPWEB_EXT_SSL_PORT}#g;
+       s#__KEEP_EXT_SSL_PORT__#${KEEP_EXT_SSL_PORT}#g;
+       s#__MANAGEMENT_TOKEN__#${MANAGEMENT_TOKEN}#g;
+       s#__RELEASE__#${RELEASE}#g;
+       s#__SYSTEM_ROOT_TOKEN__#${SYSTEM_ROOT_TOKEN}#g;
+       s#__VERSION__#${VERSION}#g;
+       s#__WEBSHELL_EXT_SSL_PORT__#${WEBSHELL_EXT_SSL_PORT}#g;
+       s#__WEBSOCKET_EXT_SSL_PORT__#${WEBSOCKET_EXT_SSL_PORT}#g;
+       s#__WORKBENCH1_EXT_SSL_PORT__#${WORKBENCH1_EXT_SSL_PORT}#g;
+       s#__WORKBENCH2_EXT_SSL_PORT__#${WORKBENCH2_EXT_SSL_PORT}#g;
+       s#__CLUSTER_INT_CIDR__#${CLUSTER_INT_CIDR}#g;
+       s#__CONTROLLER_INT_IP__#${CONTROLLER_INT_IP}#g;
+       s#__WEBSOCKET_INT_IP__#${WEBSOCKET_INT_IP}#g;
+       s#__KEEP_INT_IP__#${KEEP_INT_IP}#g;
+       s#__KEEPSTORE0_INT_IP__#${KEEPSTORE0_INT_IP}#g;
+       s#__KEEPSTORE1_INT_IP__#${KEEPSTORE1_INT_IP}#g;
+       s#__KEEPWEB_INT_IP__#${KEEPWEB_INT_IP}#g;
+       s#__WEBSHELL_INT_IP__#${WEBSHELL_INT_IP}#g;
+       s#__WORKBENCH1_INT_IP__#${WORKBENCH1_INT_IP}#g;
+       s#__WORKBENCH2_INT_IP__#${WORKBENCH2_INT_IP}#g;
+       s#__DATABASE_INT_IP__#${DATABASE_INT_IP}#g;
+       s#__WORKBENCH_SECRET_KEY__#${WORKBENCH_SECRET_KEY}#g" \
   "${f}" > "${P_DIR}"/$(basename "${f}")
 done
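The delimiter change from @/@ to @#@ in the sed expressions above matters because substituted values (keys, tokens, directory paths) may themselves contain @/@, which would terminate a @/@-delimited expression early. A small sketch with a hypothetical key value:

```shell
# A '/'-delimited sed would fail on this value; '#' works.
# (A value containing '#' would still break; none of the generated values do.)
BLOB_SIGNING_KEY="abc/def+ghi="   # hypothetical key containing a slash
echo "BlobSigningKey: __BLOB_SIGNING_KEY__" \
  | sed "s#__BLOB_SIGNING_KEY__#${BLOB_SIGNING_KEY}#g"
# prints: BlobSigningKey: abc/def+ghi=
```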
 
 mkdir -p /tmp/cluster_tests
 # Replace cluster and domain name in the test files
 for f in "${SOURCE_TESTS_DIR}"/*; do
-  sed "s/__CLUSTER__/${CLUSTER}/g;
-       s/__CONTROLLER_EXT_SSL_PORT__/${CONTROLLER_EXT_SSL_PORT}/g;
-       s/__DOMAIN__/${DOMAIN}/g;
-       s/__HOSTNAME_INT__/${HOSTNAME_INT}/g;
-       s/__INITIAL_USER_EMAIL__/${INITIAL_USER_EMAIL}/g;
-       s/__INITIAL_USER_PASSWORD__/${INITIAL_USER_PASSWORD}/g
-       s/__INITIAL_USER__/${INITIAL_USER}/g;
-       s/__SYSTEM_ROOT_TOKEN__/${SYSTEM_ROOT_TOKEN}/g" \
+  sed "s#__CLUSTER__#${CLUSTER}#g;
+       s#__CONTROLLER_EXT_SSL_PORT__#${CONTROLLER_EXT_SSL_PORT}#g;
+       s#__DOMAIN__#${DOMAIN}#g;
+       s#__HOSTNAME_INT__#${HOSTNAME_INT}#g;
+       s#__INITIAL_USER_EMAIL__#${INITIAL_USER_EMAIL}#g;
+       s#__INITIAL_USER_PASSWORD__#${INITIAL_USER_PASSWORD}#g
+       s#__INITIAL_USER__#${INITIAL_USER}#g;
+       s#__DATABASE_PASSWORD__#${DATABASE_PASSWORD}#g;
+       s#__SYSTEM_ROOT_TOKEN__#${SYSTEM_ROOT_TOKEN}#g" \
   "${f}" > "/tmp/cluster_tests"/$(basename "${f}")
 done
 chmod 755 /tmp/cluster_tests/run-test.sh
@@ -262,27 +278,39 @@ if [ -d "${SOURCE_STATES_DIR}" ]; then
   mkdir -p "${F_DIR}"/extra/extra
 
   for f in "${SOURCE_STATES_DIR}"/*; do
-    sed "s/__ANONYMOUS_USER_TOKEN__/${ANONYMOUS_USER_TOKEN}/g;
-         s/__CLUSTER__/${CLUSTER}/g;
-         s/__BLOB_SIGNING_KEY__/${BLOB_SIGNING_KEY}/g;
-         s/__CONTROLLER_EXT_SSL_PORT__/${CONTROLLER_EXT_SSL_PORT}/g;
-         s/__DOMAIN__/${DOMAIN}/g;
-         s/__HOSTNAME_EXT__/${HOSTNAME_EXT}/g;
-         s/__HOSTNAME_INT__/${HOSTNAME_INT}/g;
-         s/__INITIAL_USER_EMAIL__/${INITIAL_USER_EMAIL}/g;
-         s/__INITIAL_USER_PASSWORD__/${INITIAL_USER_PASSWORD}/g;
-         s/__INITIAL_USER__/${INITIAL_USER}/g;
-         s/__KEEPWEB_EXT_SSL_PORT__/${KEEPWEB_EXT_SSL_PORT}/g;
-         s/__KEEP_EXT_SSL_PORT__/${KEEP_EXT_SSL_PORT}/g;
-         s/__MANAGEMENT_TOKEN__/${MANAGEMENT_TOKEN}/g;
-         s/__RELEASE__/${RELEASE}/g;
-         s/__SYSTEM_ROOT_TOKEN__/${SYSTEM_ROOT_TOKEN}/g;
-         s/__VERSION__/${VERSION}/g;
-         s/__WEBSHELL_EXT_SSL_PORT__/${WEBSHELL_EXT_SSL_PORT}/g;
-         s/__WEBSOCKET_EXT_SSL_PORT__/${WEBSOCKET_EXT_SSL_PORT}/g;
-         s/__WORKBENCH1_EXT_SSL_PORT__/${WORKBENCH1_EXT_SSL_PORT}/g;
-         s/__WORKBENCH2_EXT_SSL_PORT__/${WORKBENCH2_EXT_SSL_PORT}/g;
-         s/__WORKBENCH_SECRET_KEY__/${WORKBENCH_SECRET_KEY}/g" \
+    sed "s#__ANONYMOUS_USER_TOKEN__#${ANONYMOUS_USER_TOKEN}#g;
+         s#__CLUSTER__#${CLUSTER}#g;
+         s#__BLOB_SIGNING_KEY__#${BLOB_SIGNING_KEY}#g;
+         s#__CONTROLLER_EXT_SSL_PORT__#${CONTROLLER_EXT_SSL_PORT}#g;
+         s#__DOMAIN__#${DOMAIN}#g;
+         s#__HOSTNAME_EXT__#${HOSTNAME_EXT}#g;
+         s#__HOSTNAME_INT__#${HOSTNAME_INT}#g;
+         s#__INITIAL_USER_EMAIL__#${INITIAL_USER_EMAIL}#g;
+         s#__INITIAL_USER_PASSWORD__#${INITIAL_USER_PASSWORD}#g;
+         s#__INITIAL_USER__#${INITIAL_USER}#g;
+         s#__DATABASE_PASSWORD__#${DATABASE_PASSWORD}#g;
+         s#__KEEPWEB_EXT_SSL_PORT__#${KEEPWEB_EXT_SSL_PORT}#g;
+         s#__KEEP_EXT_SSL_PORT__#${KEEP_EXT_SSL_PORT}#g;
+         s#__MANAGEMENT_TOKEN__#${MANAGEMENT_TOKEN}#g;
+         s#__RELEASE__#${RELEASE}#g;
+         s#__SYSTEM_ROOT_TOKEN__#${SYSTEM_ROOT_TOKEN}#g;
+         s#__VERSION__#${VERSION}#g;
+         s#__CLUSTER_INT_CIDR__#${CLUSTER_INT_CIDR}#g;
+         s#__CONTROLLER_INT_IP__#${CONTROLLER_INT_IP}#g;
+         s#__WEBSOCKET_INT_IP__#${WEBSOCKET_INT_IP}#g;
+         s#__KEEP_INT_IP__#${KEEP_INT_IP}#g;
+         s#__KEEPSTORE0_INT_IP__#${KEEPSTORE0_INT_IP}#g;
+         s#__KEEPSTORE1_INT_IP__#${KEEPSTORE1_INT_IP}#g;
+         s#__KEEPWEB_INT_IP__#${KEEPWEB_INT_IP}#g;
+         s#__WEBSHELL_INT_IP__#${WEBSHELL_INT_IP}#g;
+         s#__WORKBENCH1_INT_IP__#${WORKBENCH1_INT_IP}#g;
+         s#__WORKBENCH2_INT_IP__#${WORKBENCH2_INT_IP}#g;
+         s#__DATABASE_INT_IP__#${DATABASE_INT_IP}#g;
+         s#__WEBSHELL_EXT_SSL_PORT__#${WEBSHELL_EXT_SSL_PORT}#g;
+         s#__WEBSOCKET_EXT_SSL_PORT__#${WEBSOCKET_EXT_SSL_PORT}#g;
+         s#__WORKBENCH1_EXT_SSL_PORT__#${WORKBENCH1_EXT_SSL_PORT}#g;
+         s#__WORKBENCH2_EXT_SSL_PORT__#${WORKBENCH2_EXT_SSL_PORT}#g;
+         s#__WORKBENCH_SECRET_KEY__#${WORKBENCH_SECRET_KEY}#g" \
     "${f}" > "${F_DIR}/extra/extra"/$(basename "${f}")
   done
 fi
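The rewritten sed command switches the substitution delimiter from `/` to `#` so that values containing slashes — notably `__CLUSTER_INT_CIDR__` (e.g. `10.0.0.0/16`) — no longer terminate the `s` expression early. A minimal sketch of the pattern (the variable values here are made-up examples):

```shell
# Two placeholders, one of whose values contains a slash.
CLUSTER="zzzzz"
CLUSTER_INT_CIDR="10.0.0.0/16"   # '/' in the value rules out '/' as delimiter

template='Cluster: __CLUSTER__ CIDR: __CLUSTER_INT_CIDR__'

# Using '#' as the s-command delimiter, the slash in the CIDR is just an
# ordinary character in the replacement text.
echo "${template}" | sed "s#__CLUSTER__#${CLUSTER}#g;
                          s#__CLUSTER_INT_CIDR__#${CLUSTER_INT_CIDR}#g"
# prints: Cluster: zzzzz CIDR: 10.0.0.0/16
```

With `/` as the delimiter, a value like `10.0.0.0/16` makes sed reject the expression outright, which is why the whole block was converted rather than just the new `*_INT_IP`/CIDR lines.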
@@ -318,6 +346,9 @@ fi
 if [ -z "${ROLES}" ]; then
   # States
   echo "    - nginx.passenger" >> ${S_DIR}/top.sls
+  if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+    grep -q "letsencrypt" ${S_DIR}/top.sls || echo "    - letsencrypt" >> ${S_DIR}/top.sls
+  fi
   echo "    - postgres" >> ${S_DIR}/top.sls
   echo "    - docker" >> ${S_DIR}/top.sls
   echo "    - arvados" >> ${S_DIR}/top.sls
@@ -334,6 +365,9 @@ if [ -z "${ROLES}" ]; then
   echo "    - nginx_workbench2_configuration" >> ${P_DIR}/top.sls
   echo "    - nginx_workbench_configuration" >> ${P_DIR}/top.sls
   echo "    - postgresql" >> ${P_DIR}/top.sls
+  if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+    grep -q "letsencrypt" ${P_DIR}/top.sls || echo "    - letsencrypt" >> ${P_DIR}/top.sls
+  fi
 else
   # If we add individual roles, make sure we add the repo first
   echo "    - arvados.repo" >> ${S_DIR}/top.sls
@@ -350,6 +384,11 @@ else
         # FIXME: https://dev.arvados.org/issues/17352
         grep -q "postgres.client" ${S_DIR}/top.sls || echo "    - postgres.client" >> ${S_DIR}/top.sls
         grep -q "nginx.passenger" ${S_DIR}/top.sls || echo "    - nginx.passenger" >> ${S_DIR}/top.sls
+        ### Letsencrypt must be installed and run before arvados-api-server,
+        ### or the API install fails and breaks everything after it. We add it
+        ### here because, in this layout, api and controller share the host.
+        if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+          grep -q "letsencrypt" ${S_DIR}/top.sls || echo "    - letsencrypt" >> ${S_DIR}/top.sls
+        fi
         grep -q "arvados.${R}" ${S_DIR}/top.sls    || echo "    - arvados.${R}" >> ${S_DIR}/top.sls
         # Pillars
         grep -q "docker" ${P_DIR}/top.sls                   || echo "    - docker" >> ${P_DIR}/top.sls
@@ -360,10 +399,17 @@ else
       "controller" | "websocket" | "workbench" | "workbench2" | "keepweb" | "keepproxy")
         # States
         grep -q "nginx.passenger" ${S_DIR}/top.sls || echo "    - nginx.passenger" >> ${S_DIR}/top.sls
+        if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+          grep -q "letsencrypt" ${S_DIR}/top.sls || echo "    - letsencrypt" >> ${S_DIR}/top.sls
+        fi
         grep -q "arvados.${R}" ${S_DIR}/top.sls    || echo "    - arvados.${R}" >> ${S_DIR}/top.sls
         # Pillars
         grep -q "nginx_passenger" ${P_DIR}/top.sls          || echo "    - nginx_passenger" >> ${P_DIR}/top.sls
         grep -q "nginx_${R}_configuration" ${P_DIR}/top.sls || echo "    - nginx_${R}_configuration" >> ${P_DIR}/top.sls
+        if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
+          grep -q "letsencrypt" ${P_DIR}/top.sls || echo "    - letsencrypt" >> ${P_DIR}/top.sls
+          grep -q "letsencrypt_${R}_configuration" ${P_DIR}/top.sls || echo "    - letsencrypt_${R}_configuration" >> ${P_DIR}/top.sls
+        fi
       ;;
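Each letsencrypt line added above uses the same `grep -q … || echo … >>` guard, which keeps `top.sls` free of duplicates when the provision script is re-run or when several roles share a host. The pattern, sketched with a hypothetical `add_state` helper and a throwaway file:

```shell
# Idempotent append: the state line is written only if it is not
# already present, so repeated runs leave top.sls unchanged.
TOP_SLS=$(mktemp)

add_state() {
  grep -q "$1" "${TOP_SLS}" || echo "    - $1" >> "${TOP_SLS}"
}

add_state letsencrypt
add_state letsencrypt   # second call is a no-op
cat "${TOP_SLS}"
# prints "    - letsencrypt" exactly once
```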
       "shell")
         # States

commit 1a0cdc10f409fe410594f62a252c1aa5f264f345
Author: Javier Bértoli <jbertoli at curii.com>
Date:   Tue Feb 16 11:21:20 2021 -0300

    fix(provision): force user to properly set cluster & domain parameters
    
    Also improved single host installation options (single, multiple hostnames)
    
    refs #17246
    Arvados-DCO-1.1-Signed-off-by: Javier Bértoli <jbertoli at curii.com>

diff --git a/tools/salt-install/Vagrantfile b/tools/salt-install/Vagrantfile
index 666c6c48f..6a093b152 100644
--- a/tools/salt-install/Vagrantfile
+++ b/tools/salt-install/Vagrantfile
@@ -11,10 +11,45 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   config.ssh.insert_key = false
   config.ssh.forward_x11 = true
 
-  # A single_host multiple_hostnames example
-  config.vm.define "arvados-sh-mn" do |arv|
+##   # A single_host multiple_hostnames example
+##   config.vm.define "arvados-sh-mn" do |arv|
+##     arv.vm.box = "bento/debian-10"
+##     arv.vm.hostname = "harpo"
+##     # CPU/RAM
+##     config.vm.provider :virtualbox do |v|
+##       v.memory = 2048
+##       v.cpus = 2
+##     end
+##
+##     # Networking
+##     # WEBUI PORT
+##     arv.vm.network "forwarded_port", guest: 8443, host: 8443
+##     # KEEPPROXY
+##     arv.vm.network "forwarded_port", guest: 25101, host: 25101
+##     # KEEPWEB
+##     arv.vm.network "forwarded_port", guest: 9002, host: 9002
+##     # WEBSOCKET
+##     arv.vm.network "forwarded_port", guest: 8002, host: 8002
+##     arv.vm.provision "shell",
+##                      inline: "sed 's#cluster_fixme_or_this_wont_work#harpo#g;
+##                                    s#domain_fixme_or_this_wont_work#local#g;
+##                                    s#CONTROLLER_EXT_SSL_PORT=443#CONTROLLER_EXT_SSL_PORT=8443#g' \
+##                                    /vagrant/local.params.example.single_host_multiple_hostnames > /tmp/local.params.single_host_multiple_hostnames"
+##                                    # s#production#development#g;
+##     arv.vm.provision "shell",
+##                      path: "provision.sh",
+##                      args: [
+##                        # "--debug",
+##                        "--config /tmp/local.params.single_host_multiple_hostnames",
+##                        "--test",
+##                        "--vagrant"
+##                      ].join(" ")
+##   end
+
+  # A single_host single_hostname example
+  config.vm.define "arvados-sh-sn" do |arv|
     arv.vm.box = "bento/debian-10"
-    arv.vm.hostname = "harpo.local"
+    arv.vm.hostname = "zeppo"
     # CPU/RAM
     config.vm.provider :virtualbox do |v|
       v.memory = 2048
@@ -22,66 +57,33 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
     end
 
     # Networking
-    # WEBUI PORT
-    arv.vm.network "forwarded_port", guest: 8443, host: 8443
-    # KEEPPROXY
-    arv.vm.network "forwarded_port", guest: 25101, host: 25101
-    # KEEPWEB
-    arv.vm.network "forwarded_port", guest: 9002, host: 9002
-    # WEBSOCKET
-    arv.vm.network "forwarded_port", guest: 8002, host: 8002
+    arv.vm.network "forwarded_port", guest: 9443, host: 9443
+    arv.vm.network "forwarded_port", guest: 9444, host: 9444
+    arv.vm.network "forwarded_port", guest: 9445, host: 9445
+    arv.vm.network "forwarded_port", guest: 35101, host: 35101
+    arv.vm.network "forwarded_port", guest: 10002, host: 10002
+    arv.vm.network "forwarded_port", guest: 14202, host: 14202
+    arv.vm.network "forwarded_port", guest: 18002, host: 18002
     arv.vm.provision "shell",
-                     inline: "sed 's#fixme#harpo#g;
-                                   s#CONTROLLER_EXT_SSL_PORT=443#CONTROLLER_EXT_SSL_PORT=8443#g' \
-                                   /vagrant/local.params.example > /vagrant/local.params.single_host_multiple_hostnames"
+                     inline: "sed 's#HOSTNAME_EXT=\"\"#HOSTNAME_EXT=\"zeppo.local\"#g;
+                                   s#cluster_fixme_or_this_wont_work#harpo#g;
+                                   s#domain_fixme_or_this_wont_work#local#g;
+                                   s#CONFIG_DIR=\"config_examples/single_host/multiple_hostnames\"#CONFIG_DIR=\"config_examples/single_host/single_hostname\"#g;
+                                   s#CONTROLLER_EXT_SSL_PORT=443#CONTROLLER_EXT_SSL_PORT=9443#g;
+                                   s#KEEP_EXT_SSL_PORT=25101#KEEP_EXT_SSL_PORT=35101#g;
+                                   s#KEEPWEB_EXT_SSL_PORT=9002#KEEPWEB_EXT_SSL_PORT=11002#g;
+                                   s#WEBSHELL_EXT_SSL_PORT=4202#WEBSHELL_EXT_SSL_PORT=14202#g;
+                                   s#WEBSOCKET_EXT_SSL_PORT=8002#WEBSOCKET_EXT_SSL_PORT=18002#g;
+                                   s#WORKBENCH1_EXT_SSL_PORT=443#WORKBENCH1_EXT_SSL_PORT=9444#g;
+                                   s#WORKBENCH2_EXT_SSL_PORT=3001#WORKBENCH2_EXT_SSL_PORT=9445#g;' \
+                                   /vagrant/local.params.example.single_host_single_hostname > /tmp/local.params.single_host_single_hostname"
     arv.vm.provision "shell",
                      path: "provision.sh",
                      args: [
                        # "--debug",
-                       "--config /vagrant/local.params.single_host_multiple_hostnames",
+                       "--config /tmp/local.params.single_host_single_hostname",
                        "--test",
                        "--vagrant"
                      ].join(" ")
   end
-
-  ## # A single_host single_hostname example
-  ## config.vm.define "arvados-sh-sn" do |arv|
-  ##   arv.vm.box = "bento/debian-10"
-  ##   arv.vm.hostname = "zeppo.local"
-  ##   # CPU/RAM
-  ##   config.vm.provider :virtualbox do |v|
-  ##     v.memory = 2048
-  ##     v.cpus = 2
-  ##   end
-
-  ##   # Networking
-  ##   arv.vm.network "forwarded_port", guest: 9443, host: 9443
-  ##   arv.vm.network "forwarded_port", guest: 9444, host: 9444
-  ##   arv.vm.network "forwarded_port", guest: 9445, host: 9445
-  ##   arv.vm.network "forwarded_port", guest: 35101, host: 35101
-  ##   arv.vm.network "forwarded_port", guest: 10002, host: 10002
-  ##   arv.vm.network "forwarded_port", guest: 14202, host: 14202
-  ##   arv.vm.network "forwarded_port", guest: 18002, host: 18002
-  ##   arv.vm.provision "shell",
-  ##                    inline: "sed 's#HOSTNAME_EXT=\"\"#HOSTNAME_EXT=\"zeppo.local.cluster\"#g;
-  ##                                  s#CLUSTER=\"fixme\"#CLUSTER=\"zeppo\"#g;
-  ##                                  s#DOMAIN=\"some.domain\"#DOMAIN=\"local.cluster\"#g;
-  ##                                  s#CONFIG_DIR=\"config_examples/single_host/multiple_hostnames\"#CONFIG_DIR=\"config_examples/single_host/single_hostname\"#g;
-  ##                                  s#CONTROLLER_EXT_SSL_PORT=443#CONTROLLER_EXT_SSL_PORT=9443#g;
-  ##                                  s#KEEP_EXT_SSL_PORT=25101#KEEP_EXT_SSL_PORT=35101#g;
-  ##                                  s#KEEPWEB_EXT_SSL_PORT=9002#KEEPWEB_EXT_SSL_PORT=11002#g;
-  ##                                  s#WEBSHELL_EXT_SSL_PORT=4202#WEBSHELL_EXT_SSL_PORT=14202#g;
-  ##                                  s#WEBSOCKET_EXT_SSL_PORT=8002#WEBSOCKET_EXT_SSL_PORT=18002#g;
-  ##                                  s#WORKBENCH1_EXT_SSL_PORT=443#WORKBENCH1_EXT_SSL_PORT=9444#g;
-  ##                                  s#WORKBENCH2_EXT_SSL_PORT=3001#WORKBENCH2_EXT_SSL_PORT=9445#g;' \
-  ##                                 /vagrant/local.params.example > /vagrant/local.params.single_host_single_hostname"
-  ##   arv.vm.provision "shell",
-  ##                    path: "provision.sh",
-  ##                    args: [
-  ##                      # "--debug",
-  ##                      "--config /vagrant/local.params.single_host_single_hostname",
-  ##                      "--test",
-  ##                      "--vagrant"
-  ##                    ].join(" ")
-  ## end
 end
diff --git a/tools/salt-install/config_examples/single_host/single_hostname/pillars/arvados.sls b/tools/salt-install/config_examples/single_host/single_hostname/pillars/arvados.sls
index 31d3a0d50..8fcad0116 100644
--- a/tools/salt-install/config_examples/single_host/single_hostname/pillars/arvados.sls
+++ b/tools/salt-install/config_examples/single_host/single_hostname/pillars/arvados.sls
@@ -81,6 +81,7 @@ arvados:
       system_root: __SYSTEM_ROOT_TOKEN__
       management: __MANAGEMENT_TOKEN__
       anonymous_user: __ANONYMOUS_USER_TOKEN__
+      rails_secret: YDLxHf4GqqmLXYAMgndrAmFEdqgC0sBqX7TEjMN2rw9D6EVwgx
 
     ### KEYS
     secrets:
diff --git a/tools/salt-install/config_examples/single_host/single_hostname/states/host_entries.sls b/tools/salt-install/config_examples/single_host/single_hostname/states/host_entries.sls
index 7e3957c57..eac854523 100644
--- a/tools/salt-install/config_examples/single_host/single_hostname/states/host_entries.sls
+++ b/tools/salt-install/config_examples/single_host/single_hostname/states/host_entries.sls
@@ -29,4 +29,5 @@ arvados_test_salt_states_examples_single_host_etc_hosts_host_present:
         ]
       %}
       - {{ entry }}
+      - {{ entry }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
       {%- endfor %}
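The extra line makes each short alias resolvable under its fully qualified name as well. Assuming a cluster named `zzzzz` with domain `local` (made-up values), a shell stand-in for the Jinja loop would emit pairs like:

```shell
# Stand-in for the Jinja loop above: every /etc/hosts alias gets both a
# short and an FQDN form. "api", "keep", "workbench" stand in for the
# real alias list in host_entries.sls.
CLUSTER=zzzzz
DOMAIN=local
for entry in api keep workbench; do
  echo "${entry}"
  echo "${entry}.${CLUSTER}.${DOMAIN}"
done
```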
diff --git a/tools/salt-install/local.params.example b/tools/salt-install/local.params.example.single_host_multiple_hostnames
similarity index 88%
copy from tools/salt-install/local.params.example
copy to tools/salt-install/local.params.example.single_host_multiple_hostnames
index 88d6a75d6..78c26af0e 100644
--- a/tools/salt-install/local.params.example
+++ b/tools/salt-install/local.params.example.single_host_multiple_hostnames
@@ -5,11 +5,11 @@
 
 # These are the basic parameters to configure the installation
 
-# The 5 letters name you want to give your cluster
-CLUSTER="fixme"
+# The FIVE-ALPHANUMERIC-CHARACTER name you want to give your cluster
+CLUSTER="cluster_fixme_or_this_wont_work"
 
 # The domain name you want to give to your cluster's hosts
-DOMAIN="some.domain"
+DOMAIN="domain_fixme_or_this_wont_work"
 
 # When setting the cluster in a single host, you can use a single hostname
 # to access all the instances. When using virtualization (ie AWS), this should be
@@ -36,7 +36,7 @@ INITIAL_USER="admin"
 
 # If not specified, the initial user email will be composed as
 # INITIAL_USER at CLUSTER.DOMAIN
-INITIAL_USER_EMAIL="admin at fixme.localdomain"
+INITIAL_USER_EMAIL="admin at cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
 INITIAL_USER_PASSWORD="password"
 
 # YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
@@ -51,7 +51,8 @@ WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
 # variable, the single_host, multiple_hostnames config will be used
 # CONFIG_DIR="config_examples/single_host/single_hostname"
 CONFIG_DIR="config_examples/single_host/multiple_hostnames"
-# Extra states to pply. iIf you use your own subdir, change this value accordingly
+# Extra states to apply. If you use your own subdir, change this value accordingly
+# This is the value for the single_host/multiple_hostnames example
 EXTRA_STATES_DIR="${F_DIR}/arvados-formula/test/salt/states/examples/single_host"
 
 # When using the single_host/single_hostname example, change to this one
diff --git a/tools/salt-install/local.params.example b/tools/salt-install/local.params.example.single_host_single_hostname
similarity index 76%
rename from tools/salt-install/local.params.example
rename to tools/salt-install/local.params.example.single_host_single_hostname
index 88d6a75d6..110d79429 100644
--- a/tools/salt-install/local.params.example
+++ b/tools/salt-install/local.params.example.single_host_single_hostname
@@ -5,11 +5,11 @@
 
 # These are the basic parameters to configure the installation
 
-# The 5 letters name you want to give your cluster
-CLUSTER="fixme"
+# The FIVE-ALPHANUMERIC-CHARACTER name you want to give your cluster
+CLUSTER="cluster_fixme_or_this_wont_work"
 
 # The domain name you want to give to your cluster's hosts
-DOMAIN="some.domain"
+DOMAIN="domain_fixme_or_this_wont_work"
 
 # When setting the cluster in a single host, you can use a single hostname
 # to access all the instances. When using virtualization (ie AWS), this should be
@@ -23,20 +23,20 @@ HOSTNAME_INT="127.0.1.1"
 # Defaults to 443 for regular runs, and to 8443 when called in Vagrant.
 # You can point it to another port if desired
 # In Vagrant, make sure it matches what you set in the Vagrantfile (8443)
-CONTROLLER_EXT_SSL_PORT=443
-KEEP_EXT_SSL_PORT=25101
+CONTROLLER_EXT_SSL_PORT=9443
+KEEP_EXT_SSL_PORT=35101
 # Both for collections and downloads
-KEEPWEB_EXT_SSL_PORT=9002
-WEBSHELL_EXT_SSL_PORT=4202
-WEBSOCKET_EXT_SSL_PORT=8002
-WORKBENCH1_EXT_SSL_PORT=443
-WORKBENCH2_EXT_SSL_PORT=3001
+KEEPWEB_EXT_SSL_PORT=11002
+WEBSHELL_EXT_SSL_PORT=14202
+WEBSOCKET_EXT_SSL_PORT=18002
+WORKBENCH1_EXT_SSL_PORT=9444
+WORKBENCH2_EXT_SSL_PORT=9445
 
 INITIAL_USER="admin"
 
 # If not specified, the initial user email will be composed as
 # INITIAL_USER at CLUSTER.DOMAIN
-INITIAL_USER_EMAIL="admin at fixme.localdomain"
+INITIAL_USER_EMAIL="admin at cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
 INITIAL_USER_PASSWORD="password"
 
 # YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
@@ -50,12 +50,13 @@ WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
 # There are a few examples under 'config_examples'. If you don't change this
 # variable, the single_host, multiple_hostnames config will be used
 # CONFIG_DIR="config_examples/single_host/single_hostname"
-CONFIG_DIR="config_examples/single_host/multiple_hostnames"
-# Extra states to pply. iIf you use your own subdir, change this value accordingly
-EXTRA_STATES_DIR="${F_DIR}/arvados-formula/test/salt/states/examples/single_host"
+CONFIG_DIR="config_examples/single_host/single_hostname"
+# Extra states to apply. If you use your own subdir, change this value accordingly
+# This is the value for the single_host/multiple_hostnames example
+# EXTRA_STATES_DIR="${F_DIR}/arvados-formula/test/salt/states/examples/single_host"
 
 # When using the single_host/single_hostname example, change to this one
-# EXTRA_STATES_DIR="${CONFIG_DIR}/states"
+EXTRA_STATES_DIR="${CONFIG_DIR}/states"
 
 # Which release of Arvados repo you want to use
 RELEASE="production"
@@ -66,7 +67,7 @@ VERSION="latest"
 # This is an arvados-formula setting.
 # If branch is set, the script will switch to it before running salt
 # Usually not needed, only used for testing
-BRANCH="master"
+# BRANCH="master"
 
 ##########################################################
 # Usually there's no need to modify things below this line
diff --git a/tools/salt-install/provision.sh b/tools/salt-install/provision.sh
index 9b19854d0..5174f2398 100755
--- a/tools/salt-install/provision.sh
+++ b/tools/salt-install/provision.sh
@@ -151,6 +151,12 @@ else
   exit 1
 fi
 
+if grep -q 'fixme_or_this_wont_work' ${CONFIG_FILE} ; then
+  echo >&2 "The config file ${CONFIG_FILE} has some parameters that need to be modified."
+  echo >&2 "Please fix them and re-run the provision script."
+  exit 1
+fi
+
 if ! grep -E '^[[:alnum:]]{5}$' <<<${CLUSTER} ; then
   echo >&2 "ERROR: <CLUSTER> must be exactly 5 alphanumeric characters long"
   echo >&2 "Fix the cluster name in the 'local.params' file and re-run the provision script"
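The new guards abort early on unedited `fixme_or_this_wont_work` placeholders and then validate the cluster name itself. The name check is just an anchored character-class match; sketched as a reusable function (the function name is ours, not the script's):

```shell
# Cluster names must be exactly five alphanumeric characters, as the
# provision script's grep -E '^[[:alnum:]]{5}$' check enforces.
valid_cluster_name() {
  printf '%s\n' "$1" | grep -qE '^[[:alnum:]]{5}$'
}

valid_cluster_name "xarv1" && echo "xarv1: ok"            # five alphanumerics
valid_cluster_name "cluster_fixme_or_this_wont_work" \
  || echo "placeholder rejected"                          # fails the check
```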
@@ -192,9 +198,10 @@ git clone --branch "${LOCALE_TAG}" https://github.com/saltstack-formulas/locale-
 git clone --branch "${NGINX_TAG}" https://github.com/saltstack-formulas/nginx-formula.git
 git clone --branch "${POSTGRES_TAG}" https://github.com/saltstack-formulas/postgres-formula.git
 
+# If we want to try a specific branch of the formula
 if [ "x${BRANCH}" != "x" ]; then
   cd ${F_DIR}/arvados-formula || exit 1
-  git checkout -t origin/"${BRANCH}"
+  git checkout -t origin/"${BRANCH}" -b "${BRANCH}"
   cd -
 fi
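The amended checkout names the local branch explicitly with `-b` instead of letting `-t` derive it from the remote ref, which is more predictable when the checkout is re-run. A throwaway demonstration (paths and branch names invented for the example):

```shell
# Build a scratch upstream repo with a "feature" branch, clone it, and
# check the branch out with an explicit local name.
set -e
tmp=$(mktemp -d)
git init -q "${tmp}/upstream"
git -C "${tmp}/upstream" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m 'initial commit'
git -C "${tmp}/upstream" branch feature
git clone -q "${tmp}/upstream" "${tmp}/clone"
cd "${tmp}/clone"
# -t sets the upstream; -b names the local branch explicitly.
git checkout -q -t origin/feature -b feature
git symbolic-ref --short HEAD   # prints: feature
```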
 
diff --git a/tools/salt-install/tests/run-test.sh b/tools/salt-install/tests/run-test.sh
index 6bc8422f8..53c51a2c5 100755
--- a/tools/salt-install/tests/run-test.sh
+++ b/tools/salt-install/tests/run-test.sh
@@ -10,7 +10,7 @@ export ARVADOS_API_HOST_INSECURE=true
 set -o pipefail
 
 # First, validate that the CA is installed and that we can query it with no errors.
-if ! curl -s -o /dev/null https://workbench.${ARVADOS_API_HOST}/users/welcome?return_to=%2F; then
+if ! curl -s -o /dev/null https://${ARVADOS_API_HOST}/users/welcome?return_to=%2F; then
   echo "The Arvados CA was not correctly installed. Although some components will work,"
   echo "others won't. Please verify that the CA cert file was installed correctly and"
   echo "retry running these tests."

-----------------------------------------------------------------------

