[ARVADOS] updated: 2.1.0-470-g6587faf67

Git user git at public.arvados.org
Wed Mar 31 13:23:28 UTC 2021


Summary of changes:
 .../salt-install/config_examples/multi_host/aws/pillars/letsencrypt.sls | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

  discards  609d60c0f4178f213ebe024f09d97ba8124bde07 (commit)
       via  6587faf679632643cbe843ad946032373d9e2372 (commit)

This update added new revisions after undoing existing revisions.  That is
to say, the old revision is not a strict subset of the new revision.  This
situation occurs when you --force push a change and generate a repository
containing something like this:

 * -- * -- B -- O -- O -- O (609d60c0f4178f213ebe024f09d97ba8124bde07)
            \
             N -- N -- N (6587faf679632643cbe843ad946032373d9e2372)

When this happens we assume that you've already had alert emails for all
of the O revisions, and so we here report only the revisions in the N
branch from the common base, B.

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.


commit 6587faf679632643cbe843ad946032373d9e2372
Author: Javier Bértoli <jbertoli at curii.com>
Date:   Wed Mar 24 15:25:33 2021 -0300

    docs(provision): add salt usage with roles in multi-host
    
    refs #17246
    Arvados-DCO-1.1-Signed-off-by: Javier Bértoli <jbertoli at curii.com>

diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index 709c32e2a..2084bab26 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -9,90 +9,164 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-# "Install Saltstack":#saltstack
-# "Install dependencies":#dependencies
-# "Install Arvados using Saltstack":#saltstack
-# "DNS configuration":#final_steps
+# "Hosts preparation":#hosts_preparation
+## "Hosts setup using terraform (experimental)":#hosts_setup_using_terraform
+## "Create a compute image":#create_a_compute_image
+# "Multi host install using the provision.sh script":#multi_host
+# "Choose the desired configuration":#choose_configuration
+## "Multi host / multi hostname":#multi_host_multi_hostnames
+## "Multi host / multiple hostnames (Alternative configuration)":#multi_host_multiple_hostnames
+## "Further customization of the installation (modifying the salt pillars and states)":#further_customization
+# "Installation order":#installation_order
+# "Run the provision.sh script":#run_provision_script
 # "Initial user and login":#initial_user
+# "Test the installed cluster running a simple workflow":#test_install
 
-h2(#saltstack). Install Saltstack
+h2(#hosts_preparation). Hosts preparation
 
-If you already have a Saltstack environment you can skip this section.
+To run Arvados as a multi-host installation, your infrastructure has to fulfill a few requirements.
 
-The simplest way to get Salt up and running on a node is to use the bootstrap script they provide:
+These instructions explain how to set up a multi-host environment that is suitable for production use of Arvados.
 
+We suggest distributing the Arvados components in the following way, creating at least 6 hosts:
+
+# Database server:
+## postgresql server
+# API node:
+## arvados api server
+## arvados controller
+## arvados websocket
+## arvados cloud dispatcher
+# WORKBENCH node:
+## arvados workbench
+## arvados workbench2
+# KEEPPROXY node:
+## arvados keepproxy
+## arvados keepweb
+# KEEPSTOREs (at least 2)
+## arvados keepstore
+# SHELL node (optional):
+## arvados shell
+
+Note that these hosts can be virtual machines in your infrastructure and they don't need to be physical machines.
+
+h3(#hosts_setup_using_terraform). Hosts setup using terraform (experimental)
+
+We added a few "terraform":https://terraform.io/ scripts (https://github.com/arvados/arvados/tree/master/tools/terraform) to make creating these instances easier.
+Check "the Arvados terraform documentation":/doc/install/terraform.html for more details.
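+
+As a sketch of the usual workflow (assuming the scripts under @tools/terraform@; adjust paths and variables to your infrastructure):
+
+<notextile>
+<pre><code>git clone https://github.com/arvados/arvados.git
+cd arvados/tools/terraform
+terraform init    # fetch the providers the scripts declare
+terraform plan    # review the resources that would be created
+terraform apply   # create the instances
+</code></pre>
+</notextile>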
+
+h2(#multi_host). Multi host install using the provision.sh script
+
+This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+
+This procedure will install all the main Arvados components to get you up and running in a multi-host environment.
+
+We suggest using the @provision.sh@ script to deploy Arvados, which is implemented with the @arvados-formula@ in a Saltstack masterless setup. After setting a few variables in a config file (next step), you'll be ready to run it and get Arvados deployed, as sketched below.
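+
+For example, you can fetch the script and the example configuration files from that directory:
+
+<notextile>
+<pre><code>git clone https://github.com/arvados/arvados.git
+cd arvados/tools/salt-install
+ls provision.sh local.params.example.multiple_hosts config_examples/
+</code></pre>
+</notextile>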
+
+h3(#create_a_compute_image). Create a compute image
+
+In a multi-host installation, containers are dispatched to docker daemons running on the <i>compute instances</i>, which need some special setup. We provide a "compute image builder script":https://github.com/arvados/arvados/tree/master/tools/compute-images that you can use to build a template image following "these instructions":https://doc.arvados.org/main/install/crunch2-cloud/install-compute-node.html . Once the image is built, you can use its reference in the Arvados configuration in the next steps.
+
+h2(#choose_configuration). Choose the desired configuration
+
+For the purposes of this documentation, we will use the cluster name <i>arva2</i> and the domain <i>arv.local</i>. You will need to change these as described in the next steps; if you don't, the installation won't proceed.
+
+We aim to provide example multi-host installation configurations for different infrastructure providers. Currently only AWS is available, but the examples can be adapted to almost any provider with small changes.
+
+You need to copy one of the example configuration files and directory, and edit them to suit your needs.
+
+h3(#multi_host_multi_hostnames). Multi host / multi hostname
 <notextile>
-<pre><code>curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
-sudo sh /tmp/bootstrap_salt.sh -XUdfP -x python3
+<pre><code>cp local.params.example.multiple_hosts local.params
+cp -r config_examples/multi_host/aws local_config_dir
 </code></pre>
 </notextile>
 
-For more information check "Saltstack's documentation":https://docs.saltstack.com/en/latest/topics/installation/index.html
+Edit the variables in the <i>local.params</i> file. Pay particular attention to the <b>*_INT_IP</b>, <b>*_TOKEN</b> and <b>*KEY</b> variables. These variables are used in a search-and-replace across the <i>pillars/*</i> files, substituting every matching <i>__VARIABLE__</i> placeholder.
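+
+As a purely illustrative sketch, an edited @local.params@ might contain entries like these (the variable names and values here are hypothetical; use the ones present in your copy of the file):
+
+<notextile>
+<pre><code>CLUSTER="arva2"
+DOMAIN="arv.local"
+DATABASE_INT_IP="10.0.0.10"                    # one of the *_INT_IP variables
+SYSTEM_ROOT_TOKEN="changeme_system_root_token" # one of the *_TOKEN variables
+BLOB_SIGNING_KEY="changeme_blob_signing_key"   # one of the *KEY variables
+</code></pre>
+</notextile>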
 
-h2(#dependencies). Install dependencies
+The <i>multi_host</i> example includes Let's Encrypt salt code to automatically request and install certificates for the public-facing hosts (API, Workbench), so those hostnames need to be reachable from the Internet. If that's not the case for your cluster, set the variable <i>USE_LETSENCRYPT=no</i> as shown below.
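+
+For example, to disable the certificate requests on a cluster that won't be reachable from the Internet, set this in @local.params@:
+
+<notextile>
+<pre><code>USE_LETSENCRYPT=no
+</code></pre>
+</notextile>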
 
-Arvados depends in a few applications and packages (postgresql, nginx+passenger, ruby) that can also be installed using their respective Saltstack formulas.
+h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
 
-The formulas we use are:
+You will need further customization to suit your environment, which can be done by editing the Saltstack pillars and states files. Pay particular attention to the <i>pillars/arvados.sls</i> file, where you will need to provide some information that can be retrieved from the output of the terraform run.
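+
+To see which <i>__VARIABLE__</i> placeholders the provision script will substitute in the pillars, you can list them, e.g.:
+
+<notextile>
+<pre><code>grep -rho '__[A-Z_]*__' local_config_dir/pillars/ | sort -u
+</code></pre>
+</notextile>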
 
-* "postgres":https://github.com/saltstack-formulas/postgres-formula.git
-* "nginx":https://github.com/saltstack-formulas/nginx-formula.git
-* "docker":https://github.com/saltstack-formulas/docker-formula.git
-* "locale":https://github.com/saltstack-formulas/locale-formula.git
-* "letsencrypt":https://github.com/saltstack-formulas/letsencrypt-formula.git
+Any extra <i>state</i> file you add under <i>local_config_dir/states</i> will be added to the salt run and applied to the hosts.
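+
+As a sketch, an extra state can be as simple as ensuring some package is installed on the hosts (the file name and package below are just examples):
+
+<notextile>
+<pre><code># Create a minimal custom Salt state under local_config_dir/states
+printf 'extra_tools:\n  pkg.installed:\n    - pkgs:\n      - tree\n' \
+  > local_config_dir/states/extra_tools.sls
+</code></pre>
+</notextile>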
 
-There are example Salt pillar files for each of those formulas in the "arvados-formula's test/salt/pillar/examples":https://github.com/arvados/arvados-formula/tree/master/test/salt/pillar/examples directory. As they are, they allow you to get all the main Arvados components up and running.
+h2(#installation_order). Installation order
 
-h2(#saltstack). Install Arvados using Saltstack
+A few Arvados nodes need to be installed in a certain order. The required order is:
 
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+# Database
+# API server
+# The other nodes can be installed in any order after the two above
 
-The Arvados formula we maintain is located in Arvados' Github account and should be considered the canonical place to download its most up-to-date version:
+h2(#run_provision_script). Run the provision.sh script
 
-* "arvados-formula":https://github.com/arvados/arvados-formula.git
+When you have finished customizing the configuration, you are ready to copy the files to the hosts and run the @provision.sh@ script. The script allows you to specify the <i>role(s)</i> a node will have, and it will install only the Arvados components required for those roles. The general format of the command is:
 
-As the Saltstack's community keeps a "repository of formulas":https://github.com/saltstack-formulas/ in Github, we also provide
-
-* "a copy of the formula":https://github.com/saltstack-formulas/arvados-formula.git
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --roles comma,separated,list,of,roles,to,apply
+</code></pre>
+</notextile>
 
-there, and do our best effort to keep it in sync with ours.
+Then wait for the script to finish.
 
-For those familiar with Saltstack, the process to get Arvados deployed is similar to any other formula:
+If everything goes OK, you'll get some final lines like:
 
-1. Fork/copy the formula to your Salt master host.
-2. Edit the Arvados, nginx, postgres, locale and docker pillars to match your desired configuration.
-3. Run a @state.apply@ to get it deployed.
+<notextile>
+<pre><code>arvados: Succeeded: 109 (changed=9)
+arvados: Failed:      0
+</code></pre>
+</notextile>
 
-h2(#final_steps). DNS configuration
+The distribution of roles described above can be applied by running these commands:
 
-After the setup is done, you need to set up your DNS to be able to access the cluster's nodes.
+h3. Database
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles database
+</code></pre>
+</notextile>
 
-The simplest way to do this is to add entries in the @/etc/hosts@ file of every host:
+h3. API
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles api,controller,websocket,dispatcher
+</code></pre>
+</notextile>
 
+h3. Keepstore(s)
 <notextile>
-<pre><code>export CLUSTER="arva2"
-export DOMAIN="arv.local"
-
-echo A.B.C.a  api ${CLUSTER}.${DOMAIN} api.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.b  keep keep.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.c  keep0 keep0.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.d  collections collections.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.e  download download.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.f  ws ws.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.g  workbench workbench.${CLUSTER}.${DOMAIN} >> /etc/hosts
-echo A.B.C.h  workbench2 workbench2.${CLUSTER}.${DOMAIN}" >> /etc/hosts
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles keepstore
 </code></pre>
 </notextile>
 
-Replacing in each case de @A.B.C.x@ IP with the corresponding IP of the node.
+h3. Workbench
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles workbench,workbench2
+</code></pre>
+</notextile>
 
-If your infrastructure uses another DNS service setup, add the corresponding entries accordingly.
+h3. Keepproxy / Keepweb
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles keepproxy,keepweb
+</code></pre>
+</notextile>
 
-h2(#initial_user). Initial user and login
+h3. Shell
+<notextile>
+<pre><code>scp -r provision.sh local* user@host:
+ssh user@host sudo ./provision.sh --config local.params --roles shell
+</code></pre>
+</notextile>
 
-At this point you should be able to log into the Arvados cluster.
+h2(#initial_user). Initial user and login 
 
-If you did not change the defaults, the initial URL will be:
+At this point you should be able to log into the Arvados cluster. The initial URL will be:
 
 * https://workbench.arva2.arv.local
 
@@ -102,8 +176,100 @@ or, in general, the url format will be:
 
 By default, the provision script creates an initial user for testing purposes. This user is configured as administrator of the newly created cluster.
 
-Assuming you didn't change the defaults, the initial credentials are:
+Assuming you didn't change these values in the @local.params@ file, the initial credentials are:
 
 * User: 'admin'
 * Password: 'password'
 * Email: 'admin@arva2.arv.local'
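+
+These credentials come from variables in @local.params@; a hypothetical excerpt (names inferred from the <i>__VARIABLE__</i> placeholders used in the pillars, so check your copy of the file):
+
+<notextile>
+<pre><code>INITIAL_USER=admin
+INITIAL_USER_PASSWORD=password
+INITIAL_USER_EMAIL=admin@arva2.arv.local
+</code></pre>
+</notextile>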
+
+h2(#test_install). Test the installed cluster running a simple workflow
+
+The @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory on the node. If you want to run it, ssh to the node, change to that directory and run:
+
+<notextile>
+<pre><code>cd /tmp/cluster_tests
+./run-test.sh
+</code></pre>
+</notextile>
+
+It will create a test user (by default, the same one as the admin user), upload a small workflow and run it. If everything goes OK, the output should be similar to this (some output was shortened for clarity):
+
+<notextile>
+<pre><code>Creating Arvados Standard Docker Images project
+Arvados project uuid is 'arva2-j7d0g-0prd8cjlk6kfl7y'
+{
+ ...
+ "uuid":"arva2-o0j2j-n4zu4cak5iifq2a",
+ "owner_uuid":"arva2-tpzed-000000000000000",
+ ...
+}
+Uploading arvados/jobs' docker image to the project
+2.1.1: Pulling from arvados/jobs
+8559a31e96f4: Pulling fs layer
+...
+Status: Downloaded newer image for arvados/jobs:2.1.1
+docker.io/arvados/jobs:2.1.1
+2020-11-23 21:43:39 arvados.arv_put[32678] INFO: Creating new cache file at /home/vagrant/.cache/arvados/arv-put/c59256eda1829281424c80f588c7cc4d
+2020-11-23 21:43:46 arvados.arv_put[32678] INFO: Collection saved as 'Docker image arvados jobs:2.1.1 sha256:0dd50'
+arva2-4zz18-1u5pvbld7cvxuy2
+Creating initial user ('admin')
+Setting up user ('admin')
+{
+ "items":[
+  {
+   ...
+   "owner_uuid":"arva2-tpzed-000000000000000",
+   ...
+   "uuid":"arva2-o0j2j-1ownrdne0ok9iox"
+  },
+  {
+   ...
+   "owner_uuid":"arva2-tpzed-000000000000000",
+   ...
+   "uuid":"arva2-o0j2j-1zbeyhcwxc1tvb7"
+  },
+  {
+   ...
+   "email":"admin@arva2.arv.local",
+   ...
+   "owner_uuid":"arva2-tpzed-000000000000000",
+   ...
+   "username":"admin",
+   "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
+   ...
+  }
+ ],
+ "kind":"arvados#HashList"
+}
+Activating user 'admin'
+{
+ ...
+ "email":"admin@arva2.arv.local",
+ ...
+ "username":"admin",
+ "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
+ ...
+}
+Running test CWL workflow
+INFO /usr/bin/cwl-runner 2.1.1, arvados-python-client 2.1.1, cwltool 3.0.20200807132242
+INFO Resolved 'hasher-workflow.cwl' to 'file:///tmp/cluster_tests/hasher-workflow.cwl'
+...
+INFO Using cluster arva2 (https://arva2.arv.local:8443/)
+INFO Upload local files: "test.txt"
+INFO Uploaded to ea34d971b71d5536b4f6b7d6c69dc7f6+50 (arva2-4zz18-c8uvwqdry4r8jao)
+INFO Using collection cache size 256 MiB
+INFO [container hasher-workflow.cwl] submitted container_request arva2-xvhdp-v1bkywd58gyocwm
+INFO [container hasher-workflow.cwl] arva2-xvhdp-v1bkywd58gyocwm is Final
+INFO Overall process status is success
+INFO Final output collection d6c69a88147dde9d52a418d50ef788df+123
+{
+    "hasher_out": {
+        "basename": "hasher3.md5sum.txt",
+        "class": "File",
+        "location": "keep:d6c69a88147dde9d52a418d50ef788df+123/hasher3.md5sum.txt",
+        "size": 95
+    }
+}
+INFO Final process status is success
+</code></pre>
+</notextile>
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt.sls
index 8906ac073..6ba8b9b09 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/letsencrypt.sls
@@ -10,7 +10,7 @@ letsencrypt:
     - certbot: latest
     - python3-certbot-nginx
   config:
-    server: https://acme-staging-v02.api.letsencrypt.org/directory
+    server: https://acme-v02.api.letsencrypt.org/directory
     email: __INITIAL_USER_EMAIL__
     authenticator: nginx
     webroot-path: /var/www

-----------------------------------------------------------------------


hooks/post-receive
-- 



