[arvados] created: 2.1.0-2850-gaa9d34fcd
git repository hosting
git@public.arvados.org
Thu Aug 18 22:04:53 UTC 2022
at aa9d34fcd23b035000021615c0261174eb92c54c (commit)
commit aa9d34fcd23b035000021615c0261174eb92c54c
Author: Peter Amstutz <peter.amstutz@curii.com>
Date: Thu Aug 18 18:04:23 2022 -0400
19215: Incorporate many details into install doc
Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz@curii.com>
diff --git a/doc/_includes/_download_installer.liquid b/doc/_includes/_download_installer.liquid
index 5dfcd089e..10909088b 100644
--- a/doc/_includes/_download_installer.liquid
+++ b/doc/_includes/_download_installer.liquid
@@ -6,8 +6,6 @@ SPDX-License-Identifier: CC-BY-SA-3.0
{% include 'branchname' %}
-This procedure will install all the main Arvados components to get you up and running in a single host.
-
This is a package-based installation method; however, the installation script is currently distributed in source form via @git@. We recommend checking out the git tree on your local workstation, not directly on the target(s) where you want to install and run Arvados.
<notextile>
@@ -17,4 +15,16 @@ cd arvados/tools/salt-install
</code></pre>
</notextile>
-The @provision.sh@ script will help you deploy Arvados by preparing your environment to be able to run the installer, then running it. The actual installer is located in the "arvados-formula git repository":https://git.arvados.org/arvados-formula.git/tree/refs/heads/{{ branchname }} and will be cloned during the running of the @provision.sh@ script. The installer is built using "Saltstack":https://saltproject.io/ and @provision.sh@ performs the install using master-less mode.
+The @installer.sh@ and @provision.sh@ scripts will help you deploy Arvados by preparing your environment to be able to run the installer, then running it. The actual installer is located in the "arvados-formula git repository":https://git.arvados.org/arvados-formula.git/tree/refs/heads/{{ branchname }} and will be cloned during the running of the @provision.sh@ script. The installer is built using "Saltstack":https://saltproject.io/ and @provision.sh@ performs the install using masterless mode.
+
+h2(#copy_config). Initialize the installer
+
+<notextile>
+<pre><code>./installer.sh initialize ~/setup-arvados-xarv1 {{local_params_src}} {{config_examples_src}}
+cd ~/setup-arvados-xarv1
+</code></pre>
+</notextile>
+
+This creates a git repository in @~/setup-arvados-xarv1@. The @installer.sh@ script will record all the configuration changes you make, and will use @git push@ to synchronize configuration edits across all the nodes.
+
+Important! All further commands must be run in the @~/setup-arvados-xarv1@ directory.
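+
+As a quick check (a sketch; any standard @git@ workflow applies), you can review what @installer.sh@ has recorded:
+
+<notextile>
+<pre><code>cd ~/setup-arvados-xarv1
+git log --oneline   # commits recorded by installer.sh
+git remote -v       # where configuration edits are pushed
+</code></pre>
+</notextile>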
diff --git a/doc/_includes/_multi_host_install_custom_certificates.liquid b/doc/_includes/_multi_host_install_custom_certificates.liquid
index b831aadcf..7672372af 100644
--- a/doc/_includes/_multi_host_install_custom_certificates.liquid
+++ b/doc/_includes/_multi_host_install_custom_certificates.liquid
@@ -8,16 +8,16 @@ Copy your certificates to the directory specified with the variable @CUSTOM_CERT
The script expects cert/key files with these basenames (matching the role except for <i>keepweb</i>, which is split in both <i>download / collections</i>):
-* "controller"
-* "websocket"
-* "workbench"
-* "workbench2"
-* "webshell"
-* "download" # Part of keepweb
-* "collections" # Part of keepweb
-* "keepproxy"
+# @controller@
+# @websocket@ # note: corresponds to default domain @ws.${CLUSTER}.${DOMAIN}@
+# @keepproxy@ # note: corresponds to default domain @keep.${CLUSTER}.${DOMAIN}@
+# @download@ # Part of keepweb
+# @collections@ # Part of keepweb -- important: this should be a wildcard certificate for @*.collections.${CLUSTER}.${DOMAIN}@
+# @workbench@
+# @workbench2@
+# @webshell@
-E.g. for 'keepproxy', the script will look for
+For example, for the 'keepproxy' service the script will expect to find this certificate:
<notextile>
<pre><code>${CUSTOM_CERTS_DIR}/keepproxy.crt
@@ -26,3 +26,5 @@ ${CUSTOM_CERTS_DIR}/keepproxy.key
</notextile>
Make sure that all the FQDNs that you will use for the public-facing applications (API/controller, Workbench, Keepproxy/Keepweb) are reachable.
+
+It may be easier to create a single certificate which covers all of these hostnames.
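+
+For illustration only (assuming "certbot":https://certbot.eff.org/ with DNS-based validation, which is required for wildcard names; adapt this to your own certificate authority), such a certificate could be requested with subject alternative names for every hostname:
+
+<notextile>
+<pre><code>certbot certonly --manual --preferred-challenges dns \
+  -d controller.${CLUSTER}.${DOMAIN} \
+  -d ws.${CLUSTER}.${DOMAIN} \
+  -d keep.${CLUSTER}.${DOMAIN} \
+  -d download.${CLUSTER}.${DOMAIN} \
+  -d '*.collections.${CLUSTER}.${DOMAIN}' \
+  -d workbench.${CLUSTER}.${DOMAIN} \
+  -d workbench2.${CLUSTER}.${DOMAIN} \
+  -d webshell.${CLUSTER}.${DOMAIN}
+</code></pre>
+</notextile>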
diff --git a/doc/_includes/_ssl_config_multi.liquid b/doc/_includes/_ssl_config_multi.liquid
index 1bcd1b64e..d001a5f22 100644
--- a/doc/_includes/_ssl_config_multi.liquid
+++ b/doc/_includes/_ssl_config_multi.liquid
@@ -8,9 +8,9 @@ h2(#certificates). Choose the SSL configuration (SSL_MODE)
Arvados requires an SSL certificate to work correctly. This installer supports these options:
-* @self-signed@: let the installer create self-signed certificates
-* @lets-encrypt@: automatically obtain and install an SSL certificates for your hostnames
-* @bring-your-own@: supply your own certificates in the `certs` directory
+# @self-signed@: "let the installer create self-signed certificates":#self-signed
+# @lets-encrypt@: "automatically obtain and install SSL certificates for your hostnames":#lets-encrypt
+# @bring-your-own@: "supply your own certificates in the @certs@ directory":#bring-your-own
h3(#self-signed). Using self-signed certificates
@@ -21,7 +21,7 @@ To make the installer use self-signed certificates, change the configuration lik
</code></pre>
</notextile>
-When connecting to the Arvados web interface for the first time, you will need to accept the self-signed certificates as trusted to bypass the browser warnings. This can be a little tricky to do. Alternatively, you can also install the self-signed root certificate in your browser, see <a href="#ca_root_certificate">below</a>.
+Before connecting to the Arvados web interface for the first time, anyone accessing the instance will need to "install the self-signed root certificate in their browser":#ca_root_certificate.
h3(#lets-encrypt). Using a Let's Encrypt certificate
diff --git a/doc/admin/maintenance-and-upgrading.html.textile.liquid b/doc/admin/maintenance-and-upgrading.html.textile.liquid
index 3cc80a356..480f5114e 100644
--- a/doc/admin/maintenance-and-upgrading.html.textile.liquid
+++ b/doc/admin/maintenance-and-upgrading.html.textile.liquid
@@ -42,9 +42,9 @@ Run @arvados-server config-check@ to make sure the configuration file has no err
h3(#distribution). Distribute the configuration file
-We recommend to keep the @config.yml@ file in sync between all the Arvados system nodes, to avoid issues with services running on different versions of the configuration.
+It is very important to keep the @config.yml@ file in sync between all the Arvados system nodes, to avoid issues with services running on different versions of the configuration.
-Distribution of the configuration file can be done in many ways, e.g. scp, configuration management software, etc.
+We provide "installer.sh":../install/salt-multi-host-install.html#installation to distribute config changes. You may also use your own orchestration, e.g. @scp@, configuration management software, etc.
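+
+If you handle distribution yourself, a minimal sketch (assuming passwordless @ssh@ as root to each node, hostnames as in the multi-host example, and the standard @/etc/arvados/config.yml@ location) could be:
+
+<notextile>
+<pre><code>for host in controller keep0 keep1 keep workbench shell ; do
+  scp /etc/arvados/config.yml root@${host}.${CLUSTER}.${DOMAIN}:/etc/arvados/config.yml
+done
+</code></pre>
+</notextile>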
h3(#restart). Restart the services affected by the change
diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index 5145d433b..1a70d46ef 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -12,15 +12,14 @@ SPDX-License-Identifier: CC-BY-SA-3.0
# "Introduction":#introduction
# "Prerequisites and planning":#prerequisites
# "Download the installer":#download
-# "Copy and customize the configuration files":#copy_config
+# "Initialize the installer":#copy_config
# "Choose the SSL configuration":#certificates
## "Using a self-signed certificates":#self-signed
## "Using a Let's Encrypt certificates":#lets-encrypt
## "Bring your own certificates":#bring-your-own
# "Create a compute image":#create_a_compute_image
# "Further customization of the installation (modifying the salt pillars and states)":#further_customization
-# "Installation order":#installation_order
-# "Run the provision.sh script":#run_provision_script
+# "Begin installation":#installation
# "Install the CA root certificate":#ca_root_certificate
# "Initial user and login":#initial_user
# "Test the installed cluster running a simple workflow":#test_install
@@ -28,156 +27,227 @@ SPDX-License-Identifier: CC-BY-SA-3.0
h2(#introduction). Introduction
-This multi host installer is an AWS specific example that is generally useful, but will likely need to be adapted for your environment. The installer is highly configurable.
+This multi-host installer is the recommended way to set up a production Arvados cluster. These instructions include specific details for installing on Amazon Web Services (AWS), which are marked as "AWS specific". However, with additional customization the installer can be used as a template for deployment on other cloud providers or HPC systems.
h2(#prerequisites). Prerequisites and planning
-Prerequisites:
+h3. Cluster ID and base domain
-* git
-* a number of (virtual) machines for your Arvados cluster with at least 2 cores and 8 GiB of RAM, running a "supported Arvados distribution":{{site.baseurl}}/install/install-manual-prerequisites.html#supportedlinux
-* a number of DNS hostnames that resolve to the IP addresses of your Arvados hosts
-* ports 443 need to be reachable from your client (configurable in @local.params@, see below)
-* port 80 needs to be reachable from everywhere on the internet (only when using "Let's Encrypt":#lets-encrypt without Route53 integration)
-* SSL certificatse matching the hostnames in use (only when using "bring your own certificates":#bring-your-own)
+Choose a 5-character cluster identifier that will represent the cluster. Here are "guidelines on choosing a cluster identifier":../architecture/federation.html#cluster_id . Only lowercase letters and digits 0-9 are allowed. Examples will use @xarv1@ or @${CLUSTER}@; you should substitute the cluster id you have selected.
-Planning:
+Determine the base domain for the cluster. This will be referred to as @${DOMAIN}@.
-We suggest distributing the Arvados components in the following way, creating at least 6 hosts:
+For example, if @CLUSTER@ is "xarv1" and @DOMAIN@ is "example.com", then @controller.${CLUSTER}.${DOMAIN}@ means @controller.xarv1.example.com@.
-# Database server:
+h3. Virtual Private Cloud (AWS specific)
+
+We recommend setting Arvados up in a "Virtual Private Cloud (VPC)":https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html
+
+When you do so, you need to configure a few additional things:
+
+# "Create a subnet for the compute nodes":https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html
+# You should set up a "security group which allows SSH access (port 22)":https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
+# Make sure to add a "VPC S3 endpoint":https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
+
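+For illustration (placeholder IDs; the AWS console works just as well), the S3 endpoint can be created with the @aws@ CLI:
+
+<notextile>
+<pre><code>aws ec2 create-vpc-endpoint \
+  --vpc-id vpc-0123456789abcdef0 \
+  --service-name com.amazonaws.us-east-1.s3 \
+  --route-table-ids rtb-0123456789abcdef0
+</code></pre>
+</notextile>
+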
+h3. S3 Bucket (AWS specific)
+
+We recommend "creating an S3 bucket":https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html for data storage named @${CLUSTER}-nyw5e-000000000000000-volume@.
+
+Then create an IAM role called @${CLUSTER}-keepstore-00-iam-role@ which has "permission to read and write the bucket":https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html .
+
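+For example (a sketch; the AWS console or your own tooling works equally well), the bucket for cluster @xarv1@ could be created with:
+
+<notextile>
+<pre><code>aws s3 mb s3://xarv1-nyw5e-000000000000000-volume --region us-east-1
+</code></pre>
+</notextile>
+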
+h3. Machines
+
+You will need to allocate (virtual) machines for the fixed infrastructure of the Arvados cluster. These machines should have at least 2 cores and 8 GiB of RAM, running a "supported Arvados distribution":{{site.baseurl}}/install/install-manual-prerequisites.html#supportedlinux
+
+Allocate these as appropriate for your site. On AWS you may choose to do it manually with the AWS console, or using a DevOps tool such as CloudFormation or Terraform.
+
+The installer will set up the Arvados services on your machines. Here is the default assignment of services to machines:
+
+# API node
## postgresql server
-# API node:
## arvados api server
-## arvados controller
-## arvados websocket
+## arvados controller (recommended hostname @controller.${CLUSTER}.${DOMAIN}@)
+## arvados websocket (recommended hostname @ws.${CLUSTER}.${DOMAIN}@)
## arvados cloud dispatcher
## arvados keepbalance
-# WORKBENCH node:
-## arvados workbench
-## arvados workbench2
-## arvados webshell
-# KEEPPROXY node:
-## arvados keepproxy
-## arvados keepweb
# KEEPSTORE nodes (at least 2)
-## arvados keepstore
-# SHELL node (optional):
-## arvados shell
-
-If your infrastructure differs from the setup proposed above (ie, using RDS or an existing DB server), remember that you will need to edit the configuration files for the scripts so they work with your infrastructure.
+## arvados keepstore (recommended hostnames @keep0.${CLUSTER}.${DOMAIN}@ and @keep1.${CLUSTER}.${DOMAIN}@)
+# KEEPPROXY node
+## arvados keepproxy (recommended hostname @keep.${CLUSTER}.${DOMAIN}@)
+## arvados keepweb (recommended hostnames @download.${CLUSTER}.${DOMAIN}@ and @*.collections.${CLUSTER}.${DOMAIN}@)
+# WORKBENCH node
+## arvados workbench (recommended hostname @workbench.${CLUSTER}.${DOMAIN}@)
+## arvados workbench2 (recommended hostname @workbench2.${CLUSTER}.${DOMAIN}@)
+## arvados webshell (recommended hostname @webshell.${CLUSTER}.${DOMAIN}@)
+# SHELL node (optional)
+## arvados shell (recommended hostname @shell.${CLUSTER}.${DOMAIN}@)
+
+Additional prerequisites when preparing machines to run the installer:
+
+# root or passwordless sudo access
+# from the account where you are performing the install, passwordless @ssh@ to each machine (meaning, the client's public key added to @~/.ssh/authorized_keys@ on each node); see the sketch after this list
+# @git@ installed on each machine
+# port 443 reachable by clients
+# DNS hostnames for each service
+## @controller.${CLUSTER}.${DOMAIN}@
+## @ws.${CLUSTER}.${DOMAIN}@
+## @keep0.${CLUSTER}.${DOMAIN}@
+## @keep1.${CLUSTER}.${DOMAIN}@
+## @keep.${CLUSTER}.${DOMAIN}@
+## @download.${CLUSTER}.${DOMAIN}@
+## @*.collections.${CLUSTER}.${DOMAIN}@ -- important: this must be a wildcard DNS record pointing to the keepweb service
+## @workbench.${CLUSTER}.${DOMAIN}@
+## @workbench2.${CLUSTER}.${DOMAIN}@
+## @webshell.${CLUSTER}.${DOMAIN}@
+## @shell.${CLUSTER}.${DOMAIN}@
+
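+As a sketch of the passwordless @ssh@ prerequisite (assuming @ssh-copy-id@ is available and @DEPLOY_USER@ is root, as in the example @local.params@):
+
+<notextile>
+<pre><code>ssh-copy-id root@controller.${CLUSTER}.${DOMAIN}
+ssh root@controller.${CLUSTER}.${DOMAIN} true   # should succeed without a password prompt
+</code></pre>
+</notextile>
+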
+(AWS specific) The machine that runs the arvados cloud dispatcher will need an "IAM role that allows it to create EC2 instances; see here for details":https://doc.arvados.org/v2.4/install/crunch2-cloud/install-dispatch-cloud.html .
+
+If your infrastructure differs from the setup proposed above (e.g. different hostnames, or using AWS RDS or an existing DB server), you can still use the installer, but additional customization will be necessary.
h2(#download). Download the installer
+{% assign local_params_src = 'multiple_hosts' %}
+{% assign config_examples_src = 'multi_host/aws'%}
{% include 'download_installer' %}
-h2(#copy_config). Copy and customize the configuration files
-
-<notextile>
-<pre><code>cp local.params.example.multiple_hosts local.params
-cp -r config_examples/multi_host/aws local_config_dir
+h2. Edit @local.params@
+
+This file can be found in the directory where you initialized the installer (@~/setup-arvados-xarv1@ in these examples).
+
+# Set @CLUSTER@ to the 5-character cluster identifier (e.g. "xarv1")
+# Set @DOMAIN@ to the base DNS domain of the environment, e.g. "example.com"
+# Edit the internal IP settings (the @*_INT_IP@ variables). Since some services share hosts, several of these will point to the same address.
+# Edit @CLUSTER_INT_CIDR@; this should be the CIDR of the private network that Arvados is running on, e.g. the VPC.
+ AWS specific: in the AWS console, open the VPC service; the table view of your VPCs has an "IPv4 CIDR" column that gives the CIDR of each VPC.
+# Set @INITIAL_USER_EMAIL@ to your email address, as you will be the first admin user of the system.
+# Set each @KEY@ / @TOKEN@ to a random string
+ Here's an easy way to create five random tokens:
+<pre><code>for i in 1 2 3 4 5; do
+ tr -dc A-Za-z0-9 </dev/urandom | head -c 32 ; echo ''
+done
</code></pre>
-</notextile>
-
-Edit the variables in the <i>local.params</i> file. Pay attention to the <notextile><b>*_INT_IP, *_TOKEN</b> and <b>*_KEY</b></notextile> variables. The *SSL_MODE* variable is discussed in the next section.
+# Set @DATABASE_PASSWORD@ to a random string
+ Important! If this contains any non-alphanumeric characters, in particular ampersand ('&'), it is necessary to add backslash quoting.
+ For example, if the password is @Cq&WU<A']p?j@,
+ then with backslash quoting of the special characters it should appear like this in @local.params@:
+<pre><code>DATABASE_PASSWORD="Cq\&WU\<A\'\]p\?j"</code></pre>
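+
+After editing, the relevant lines of @local.params@ might look like this (illustrative values only):
+
+<pre><code>CLUSTER="xarv1"
+DOMAIN="example.com"
+INITIAL_USER_EMAIL="admin@example.com"
+CLUSTER_INT_CIDR="10.1.0.0/16"
+</code></pre>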
{% include 'ssl_config_multi' %}
+h2(#configure_s3). Configure Keep on S3 (AWS specific)
+
+Open @local_config_dir/pillars/arvados.sls@ and edit as follows:
+
+# In the @arvados.cluster.Volumes@ section, set @Region@ to the appropriate AWS region (e.g. 'us-east-1')
+
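+For orientation, the relevant part of the pillar might look like this sketch (illustrative values; the exact layout of your generated @arvados.sls@ may differ):
+
+<pre><code>arvados:
+  cluster:
+    Volumes:
+      xarv1-nyw5e-000000000000000:
+        Driver: S3
+        DriverParameters:
+          Region: us-east-1
+          Bucket: xarv1-nyw5e-000000000000000-volume
+          IAMRole: xarv1-keepstore-00-iam-role
+</code></pre>
+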
h2(#create_a_compute_image). Create a compute image
{% include 'branchname' %}
-In a multi-host installation, containers are dispatched in docker daemons running in the <i>compute instances</i>, which need some special setup. We provide a "compute image builder script":https://github.com/arvados/arvados/tree/{{ branchname }}/tools/compute-images that you can use to build a template image following "these instructions":https://doc.arvados.org/install/crunch2-cloud/install-compute-node.html. Once you have that image created, you will need to update the <i>pillars/arvados.sls</i> file with the AMI ID and the private ssh key for the dispatcher.
+On cloud installations, containers are dispatched in Docker daemons running in the <i>compute instances</i>, which need some special setup. Follow "the instructions to build a cloud compute node image":https://doc.arvados.org/install/crunch2-cloud/install-compute-node.html using the "compute image builder script":https://github.com/arvados/arvados/tree/{{ branchname }}/tools/compute-images .
-h2(#further_customization). Further customization of the installation (modifying the salt pillars and states)
+Once you have that image created, open @local_config_dir/pillars/arvados.sls@ and edit as follows (AWS specific settings described here; configuration for Azure is similar):
-You will need further customization to suit your environment, which can be done editing the Saltstack pillars and states files. Pay particular attention to the <i>pillars/arvados.sls</i> file, where you will need to provide some information that describes your environment.
+# In the @arvados.cluster.Containers.CloudVMs@ section:
+## Set @ImageID@ to the AMI output from Packer
+## Set @Region@ to the appropriate AWS region
+## Set @AdminUsername@ to the admin user account on the image
+## Set the @SecurityGroupIDs@ list to the VPC security group which you set up to allow SSH connections to these nodes
+## Set @SubnetID@ to the ID of the VPC subnet you created for the compute nodes
+# Update @arvados.cluster.Containers.DispatchPrivateKey@ and paste the contents of the @~/.ssh/id_dispatcher@ file you generated in an earlier step.
+# Update @arvados.cluster.InstanceTypes@ as necessary. If t3 and m5/c5 node types are not available, replace them with t2 and m4/c4. You'll need to double-check the values for Price and IncludedScratch/AddedScratch for each type that is changed. See the sketch after this list.
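+
+Putting those edits together, the section might resemble this sketch (placeholder values; the generated pillar's exact nesting may differ):
+
+<pre><code>arvados:
+  cluster:
+    Containers:
+      CloudVMs:
+        ImageID: ami-0123456789abcdef0   # AMI produced by Packer
+        Region: us-east-1
+        AdminUsername: admin
+        SecurityGroupIDs: ["sg-0123456789abcdef0"]
+        SubnetID: subnet-0123456789abcdef0
+</code></pre>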
-Any extra <i>state</i> file you add under <i>local_config_dir/states</i> will be added to the salt run and applied to the hosts.
+h2(#further_customization). Further customization of the installation
-h2(#installation_order). Installation order
+If you are installing on AWS and following the naming conventions recommended in this guide, then likely no further configuration is necessary and you can begin installation.
-A few Arvados nodes need to be installed in certain order. The required order is
+If your infrastructure differs from the setup proposed above (e.g. using AWS RDS or an existing DB server), you can still use the installer, but additional customization will be necessary.
-* Database
-* API server
-* The other nodes can be installed in any order after the two above
+This is done by editing the Saltstack pillars and states files found in @local_config_dir@. In particular, @local_config_dir/pillars/arvados.sls@ has the template used to produce the Arvados configuration file that is distributed to all the nodes.
-h2(#run_provision_script). Run the provision.sh script
+Any extra Salt <i>state</i> file you add under @local_config_dir/states@ will be added to the Salt run and applied to the hosts (see the sketch below).
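+
+As a sketch (hypothetical file name; a trivially safe state), you could add @local_config_dir/states/custom_motd.sls@:
+
+<pre><code># local_config_dir/states/custom_motd.sls -- hypothetical example
+motd_file:
+  file.managed:
+    - name: /etc/motd
+    - contents: "This node is managed by the Arvados installer"
+</code></pre>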
-When you finished customizing the configuration, you are ready to copy the files to the hosts and run the @provision.sh@ script. The script allows you to specify the <i>role/s</i> a node will have and it will install only the Arvados components required for such role. The general format of the command is:
+h2(#installation). Begin installation
-<notextile>
-<pre><code>scp -r provision.sh local* user@host:
-ssh user@host sudo ./provision.sh --roles comma,separated,list,of,roles,to,apply
-</code></pre>
-</notextile>
+At this point, you are ready to run the installer script in deploy mode, which will perform the entire Arvados installation.
-and wait for it to finish.
+Run this in @~/setup-arvados-xarv1@:
-If everything goes OK, you'll get some final lines stating something like:
+<pre>
+./installer.sh deploy
+</pre>
-<notextile>
-<pre><code>arvados: Succeeded: 109 (changed=9)
-arvados: Failed: 0
-</code></pre>
-</notextile>
+This will deploy all the nodes. It will take a while and produce a lot of logging. If it runs into an error, it will stop.
-The distribution of role as described above can be applied running these commands:
+When everything has finished, you can run the diagnostics.
-h4. Database
-<notextile>
-<pre><code>scp -r provision.sh local* user@host:
-ssh user@host sudo ./provision.sh --config local.params --roles database
-</code></pre>
-</notextile>
+Depending on where you are running the diagnostics from, you need to provide @-internal-client@ or @-external-client@.
-h4. API
-<notextile>
-<pre><code>scp -r provision.sh local* user@host:
-ssh user@host sudo ./provision.sh --config local.params --roles api,controller,websocket,dispatcher,keepbalance
-</code></pre>
-</notextile>
+You are probably an "internal client" if you are running the diagnostics from one of the Arvados machines inside the VPC.
-h4. Keepstore(s)
-<notextile>
-<pre><code>scp -r provision.sh local* user@host:
-ssh user@host sudo ./provision.sh --config local.params --roles keepstore
-</code></pre>
-</notextile>
+You are an "external client" if you are running the diagnostics from your workstation outside of the VPC.
-h4. Workbench
-<notextile>
-<pre><code>scp -r provision.sh local* user@host:
-ssh user@host sudo ./provision.sh --config local.params --roles workbench,workbench2,webshell
-</code></pre>
-</notextile>
+<pre>
+./installer.sh diagnostics (-internal-client|-external-client)
+</pre>
-h4. Keepproxy / Keepweb
-<notextile>
-<pre><code>scp -r provision.sh local* user@host:
-ssh user@host sudo ./provision.sh --config local.params --roles keepproxy,keepweb
-</code></pre>
-</notextile>
+h3. Diagnosing issues
-h4. Shell (here we copy the CLI test workflow too)
-<notextile>
-<pre><code>scp -r provision.sh local* tests user@host:
-ssh user@host sudo ./provision.sh --config local.params --roles shell
-</code></pre>
-</notextile>
+Most service logs go to @/var/log/syslog@.
-{% include 'install_ca_cert' %}
+The logs for the Rails API server and for Workbench can be found in
-h2(#initial_user). Initial user and login
+@/var/www/arvados-api/current/log/production.log@
+and
+@/var/www/arvados-workbench/current/log/production.log@
+
+on the appropriate instances.
+
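+As a quick sketch using standard tools, you can watch the system log for Arvados-related messages while reproducing a problem:
+
+<pre><code>tail -f /var/log/syslog | grep -i arvados
+</code></pre>
+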
+Workbench2 is a client-side JavaScript application; if it is having trouble loading, check the browser's developer console.
+
+h3(#iterating). Iterating on config changes
+
+You can iterate on the config and maintain the cluster by making changes to @local.params@ and @local_config_dir@ and running @installer.sh deploy@ again.
-At this point you should be able to log into the Arvados cluster. The initial URL will be:
+If you are debugging a configuration issue on a specific node, you can speed up the cycle a bit by deploying just one node:
-* https://workbench.arva2.arv.local
+@installer.sh deploy keep0.xarv1.example.com@
-or, in general, the url format will be:
+However, once you have a final configuration, you should run a full deploy to ensure that the configuration has been synchronized on all the nodes.
-* https://workbench.@<cluster>.<domain>@
+h3. Common problems and solutions
+
+* (AWS specific) If the AMI wasn't built with ENA (enhanced networking) support and the instance type requires it, it'll fail to start. You'll see an error in syslog on the node that runs @arvados-dispatch-cloud@. The solution is to build a new AMI with @--aws-ena-support true@.
+
+* The arvados-api-server package sets up the database as a post-install script. If the database host or password wasn't set correctly (or quoted correctly) at the time that package is installed, it won't be able to set up the database.
+
+This will manifest as an error like this:
+
+<pre>
+#<ActiveRecord::StatementInvalid: PG::UndefinedTable: ERROR: relation \"api_clients\" does not exist
+</pre>
+
+If this happens, you need to:
+
+1. Correct the database information.
+2. Run @installer.sh deploy controller.${CLUSTER}.${DOMAIN}@ to update the configuration on the API/controller node.
+3. On the API/controller server node, run this command to re-run the post-install script, which will set up the database:
+
+<pre>
+dpkg-reconfigure arvados-api-server
+</pre>
+
+4. Run @installer.sh deploy@ again to synchronize everything, and so that the install steps that need to contact the API server run successfully.
+
+{% include 'install_ca_cert' %}
+
+h2(#initial_user). Initial user and login
+
+At this point you should be able to log into the Arvados cluster. The initial URL will be:
+
+* https://workbench.${CLUSTER}.${DOMAIN}
By default, the provision script creates an initial user for testing purposes. This user is configured as administrator of the newly created cluster.
@@ -185,11 +255,11 @@ Assuming you didn't change these values in the @local.params@ file, the initial
* User: 'admin'
* Password: 'password'
-* Email: 'admin@arva2.arv.local'
+* Email: 'admin@${CLUSTER}.${DOMAIN}'
h2(#test_install). Test the installed cluster running a simple workflow
-If you followed the instructions above, the @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory in the @shell@ node. If you want to run it, just ssh to the node, change to that directory and run:
+As part of the installation, the @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory on the @shell@ node. If you want to run it, ssh to the node, then run:
<notextile>
<pre><code>cd /tmp/cluster_tests
@@ -281,6 +351,10 @@ INFO Final process status is success
h2(#post_install). After the installation
-Once the installation is complete, it is recommended to keep a copy of your local configuration files. Committing them to version control is a good idea.
+As part of the operation of @installer.sh@, it automatically creates a @git@ repository with your configuration templates. You should retain this repository but be aware that it contains sensitive information (passwords and tokens used by the Arvados services).
+
+As described in "Iterating on config changes":#iterating, you may use @installer.sh deploy@ to re-run Salt and deploy configuration changes and upgrades. However, be aware that the configuration templates created for you by @installer.sh@ are a snapshot and are not automatically kept up to date.
+
+When deploying upgrades, consult the "Arvados upgrade notes":{{site.baseurl}}/admin/upgrading.html to see if changes need to be made to the configuration file template in @local_config_dir/pillars/arvados.sls@.
-Re-running the Salt-based installer is not recommended for maintaining and upgrading Arvados, please see "Maintenance and upgrading":{{site.baseurl}}/admin/maintenance-and-upgrading.html for more information.
+See "Maintenance and upgrading":{{site.baseurl}}/admin/maintenance-and-upgrading.html for more information.
diff --git a/tools/salt-install/local.params.example.multiple_hosts b/tools/salt-install/local.params.example.multiple_hosts
index ade1ad467..5e7ae7ca1 100644
--- a/tools/salt-install/local.params.example.multiple_hosts
+++ b/tools/salt-install/local.params.example.multiple_hosts
@@ -20,7 +20,7 @@ DEPLOY_USER=root
# installer.sh will log in to each of these nodes and then provision
# it for the specified roles.
NODES=(
- [controller.${CLUSTER}.${DOMAIN}]=api,controller,websocket,dispatcher,keepbalance
+ [controller.${CLUSTER}.${DOMAIN}]=database,api,controller,websocket,dispatcher,keepbalance
[keep0.${CLUSTER}.${DOMAIN}]=keepstore
[keep1.${CLUSTER}.${DOMAIN}]=keepstore
[keep.${CLUSTER}.${DOMAIN}]=keepproxy,keepweb
@@ -67,12 +67,12 @@ INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_w
INITIAL_USER_PASSWORD="password"
# YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
-BLOB_SIGNING_KEY=blobsigningkeymushaveatleast32characters
-MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
-SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
-ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
-WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
-DATABASE_PASSWORD=please_set_this_to_some_secure_value
+BLOB_SIGNING_KEY=fixmeblobsigningkeymushaveatleast32characters
+MANAGEMENT_TOKEN=fixmemanagementtokenmushaveatleast32characters
+SYSTEM_ROOT_TOKEN=fixmesystemroottokenmushaveatleast32characters
+ANONYMOUS_USER_TOKEN=fixmeanonymoususertokenmushaveatleast32characters
+WORKBENCH_SECRET_KEY=fixmeworkbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=fixmeplease_set_this_to_some_secure_value
# SSL CERTIFICATES
# Arvados requires SSL certificates to work correctly. This installer supports these options:
diff --git a/tools/salt-install/local.params.example.single_host_multiple_hostnames b/tools/salt-install/local.params.example.single_host_multiple_hostnames
index f072fedb4..de2fb4e04 100644
--- a/tools/salt-install/local.params.example.single_host_multiple_hostnames
+++ b/tools/salt-install/local.params.example.single_host_multiple_hostnames
@@ -19,7 +19,7 @@ DEPLOY_USER=root
# installer.sh will log in to each of these nodes and then provision
# it for the specified roles.
NODES=(
- [localhost]=api,controller,websocket,dispatcher,keepbalance,keepstore,keepproxy,keepweb,workbench,workbench2,webshell,shell
+ [localhost]=database,api,controller,websocket,dispatcher,keepbalance,keepstore,keepproxy,keepweb,workbench,workbench2,webshell,shell
)
# External ports used by the Arvados services
@@ -38,12 +38,12 @@ INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_w
INITIAL_USER_PASSWORD="password"
# YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
-BLOB_SIGNING_KEY=blobsigningkeymushaveatleast32characters
-MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
-SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
-ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
-WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
-DATABASE_PASSWORD=please_set_this_to_some_secure_value
+BLOB_SIGNING_KEY=fixmeblobsigningkeymushaveatleast32characters
+MANAGEMENT_TOKEN=fixmemanagementtokenmushaveatleast32characters
+SYSTEM_ROOT_TOKEN=fixmesystemroottokenmushaveatleast32characters
+ANONYMOUS_USER_TOKEN=fixmeanonymoususertokenmushaveatleast32characters
+WORKBENCH_SECRET_KEY=fixmeworkbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=fixmeplease_set_this_to_some_secure_value
# SSL CERTIFICATES
# Arvados requires SSL certificates to work correctly. This installer supports these options:
diff --git a/tools/salt-install/local.params.example.single_host_single_hostname b/tools/salt-install/local.params.example.single_host_single_hostname
index fdb10cf63..0a9965426 100644
--- a/tools/salt-install/local.params.example.single_host_single_hostname
+++ b/tools/salt-install/local.params.example.single_host_single_hostname
@@ -19,7 +19,7 @@ DEPLOY_USER=root
# installer.sh will log in to each of these nodes and then provision
# it for the specified roles.
NODES=(
- [localhost]=api,controller,websocket,dispatcher,keepbalance,keepstore,keepproxy,keepweb,workbench,workbench2,webshell,shell
+ [localhost]=database,api,controller,websocket,dispatcher,keepbalance,keepstore,keepproxy,keepweb,workbench,workbench2,webshell,shell
)
# Set this value when installing a cluster in a single host with a single
@@ -46,12 +46,12 @@ INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_w
INITIAL_USER_PASSWORD="password"
# Populate these values with random strings
-BLOB_SIGNING_KEY=blobsigningkeymushaveatleast32characters
-MANAGEMENT_TOKEN=managementtokenmushaveatleast32characters
-SYSTEM_ROOT_TOKEN=systemroottokenmushaveatleast32characters
-ANONYMOUS_USER_TOKEN=anonymoususertokenmushaveatleast32characters
-WORKBENCH_SECRET_KEY=workbenchsecretkeymushaveatleast32characters
-DATABASE_PASSWORD=please_set_this_to_some_secure_value
+BLOB_SIGNING_KEY=fixmeblobsigningkeymushaveatleast32characters
+MANAGEMENT_TOKEN=fixmemanagementtokenmushaveatleast32characters
+SYSTEM_ROOT_TOKEN=fixmesystemroottokenmushaveatleast32characters
+ANONYMOUS_USER_TOKEN=fixmeanonymoususertokenmushaveatleast32characters
+WORKBENCH_SECRET_KEY=fixmeworkbenchsecretkeymushaveatleast32characters
+DATABASE_PASSWORD=fixmeplease_set_this_to_some_secure_value
# SSL CERTIFICATES
# Arvados requires SSL certificates to work correctly. This installer supports these options:
-----------------------------------------------------------------------
hooks/post-receive
--