[arvados] updated: 2.1.0-2965-g0cd28d672

git repository hosting git@public.arvados.org
Wed Nov 30 22:13:27 UTC 2022


Summary of changes:
 doc/_includes/_download_installer.liquid           | 17 ++++++++--
 .../install-compute-node.html.textile.liquid       | 15 ++++++---
 doc/install/salt-multi-host.html.textile.liquid    | 39 +++++++++++++++-------
 3 files changed, 51 insertions(+), 20 deletions(-)

       via  0cd28d6727516a0461cd9e10ae7a960d2dcb747d (commit)
      from  87c0aff0cdbaf9b0779bb253fa707dfba1bfebb9 (commit)

Those revisions listed above that are new to this repository have
not appeared in any other notification email, so we list them in
full below.


commit 0cd28d6727516a0461cd9e10ae7a960d2dcb747d
Author: Peter Amstutz <peter.amstutz@curii.com>
Date:   Wed Nov 30 17:13:04 2022 -0500

    19215: More documentation details
    
    Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz@curii.com>

diff --git a/doc/_includes/_download_installer.liquid b/doc/_includes/_download_installer.liquid
index 758d195a3..31c3f4362 100644
--- a/doc/_includes/_download_installer.liquid
+++ b/doc/_includes/_download_installer.liquid
@@ -22,13 +22,24 @@ h2(#copy_config). Initialize the installer
 
 Replace "xarv1" with the cluster id you selected earlier.
 
+This creates a git repository in @~/setup-arvados-xarv1@.  The @installer.sh@ script records all the configuration changes you make, and uses @git push@ to synchronize configuration edits if you have multiple nodes.
+
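+For example, after the installer records a change, you can review the history with ordinary git commands (a quick sanity check; assumes @git@ is on your PATH):
+
+<notextile>
+<pre><code>cd ~/setup-arvados-xarv1
+git log --oneline
+</code></pre>
+</notextile>
+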
+Important!  Once you have initialized the installer directory, all further commands must be run with @~/setup-arvados-${CLUSTER}@ as the current working directory.
+
+h3. Using Terraform (AWS specific)
+
 <notextile>
 <pre><code>CLUSTER=xarv1
-./installer.sh initialize ~/setup-arvados-${CLUSTER} {{local_params_src}} {{config_examples_src}}
+./installer.sh initialize ~/setup-arvados-${CLUSTER} {{local_params_src}} {{config_examples_src}} {{terraform_src}}
 cd ~/setup-arvados-${CLUSTER}
 </code></pre>
 </notextile>
 
-This creates a git repository in @~/setup-arvados-xarv1@.  The @installer.sh@ will record all the configuration changes you make, as well as using @git push@ to synchronize configuration edits if you have multiple nodes.
+h3. Without Terraform
 
-Important!  All further commands must be run with @~/setup-arvados-xarv1@ as the current working directory.
+<notextile>
+<pre><code>CLUSTER=xarv1
+./installer.sh initialize ~/setup-arvados-${CLUSTER} {{local_params_src}} {{config_examples_src}}
+cd ~/setup-arvados-${CLUSTER}
+</code></pre>
+</notextile>
diff --git a/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid b/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid
index ed5ccb9ee..d282a304b 100644
--- a/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid
+++ b/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid
@@ -165,6 +165,16 @@ The desired amount of memory to make available for @mksquashfs@ can be configure
 
 h2(#aws). Build an AWS image
 
+For @ClusterID@, fill in your cluster ID.
+
+@AWSProfile@ is the name of an AWS profile in your "credentials file":https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#shared-credentials-file (@~/.aws/credentials@) listing the @aws_access_key_id@ and @aws_secret_access_key@ to use.
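+
+For reference, a minimal @~/.aws/credentials@ entry looks like this (illustrative placeholder values; the profile name must match what you pass as @AWSProfile@):
+
+<notextile><pre><code>[AWSProfile]
+aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
+aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+</code></pre></notextile>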
+
+The @AMI@ is the identifier for the base image to be used. Current AMIs are maintained by "Debian":https://wiki.debian.org/Cloud/AmazonEC2Image/Buster and "Ubuntu":https://cloud-images.ubuntu.com/locator/ec2/.
+
+The @VPC@ and @Subnet@ should be configured for where you want the compute image to be generated and stored.
+
+@ArvadosDispatchCloudPublicKeyPath@ should be replaced with the path to the ssh *public* key file generated in "Create an SSH keypair":#sshkeypair, above.
+
 <notextile><pre><code>~$ <span class="userinput">./build.sh --json-file arvados-images-aws.json \
            --arvados-cluster-id ClusterID \
            --aws-profile AWSProfile \
@@ -177,11 +187,6 @@ h2(#aws). Build an AWS image
 </span>
 </code></pre></notextile>
 
-For @ClusterID@, fill in your cluster ID. The @VPC@ and @Subnet@ should be configured for where you want the compute image to be generated and stored. The @AMI@ is the identifier for the base image to be used. Current AMIs are maintained by "Debian":https://wiki.debian.org/Cloud/AmazonEC2Image/Buster and "Ubuntu":https://cloud-images.ubuntu.com/locator/ec2/.
-
-@AWSProfile@ should be replaced with the name of an AWS profile with sufficient permissions to create the image.
-
-@ArvadosDispatchCloudPublicKeyPath@ should be replaced with the path to the ssh *public* key file generated in "Create an SSH keypair":#sshkeypair, above.
 
 h3(#aws-ebs-autoscaler). Autoscaling compute node scratch space
 
diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index d6a5b3bde..88501c100 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -72,20 +72,24 @@ If you are going to use Terraform to set up the infrastructure on AWS, you will
 h2(#download). Download the installer
 
 {% assign local_params_src = 'multiple_hosts' %}
-{% assign config_examples_src = 'multi_host/aws terraform/aws'%}
+{% assign config_examples_src = 'multi_host/aws' %}
+{% assign terraform_src = 'terraform/aws' %}
 {% include 'download_installer' %}
 
 h2(#setup-infra). Set up your infrastructure
 
+## "Create AWS infrastructure with Terraform":#terraform
+## "Create required infrastructure manually":#inframanual
+
 h3(#terraform). Create AWS infrastructure with Terraform (AWS specific)
 
 We provide a set of Terraform code files that you can run to create the necessary infrastructure on Amazon Web Services.
 
-These files are located in the @arvados/tools/salt-install/terraform/aws/@ directory and are divided in three sections:
+These files are located in the @terraform@ installer directory and are divided into three sections:
 
-# The @vpc/@ subdirectory controls the network related infrastructure of your cluster, including firewall rules and split-horizon DNS resolution.
-# The @data-storage/@ subdirectory controls the stateful part of your cluster, currently only sets up the S3 bucket for holding the Keep blocks and in the future it'll also manage the database service.
-# The @services/@ subdirectory controls the hosts that will run the different services on your cluster, makes sure that they have the required software for the installer to do its job.
+# The @terraform/vpc/@ subdirectory controls the network-related infrastructure of your cluster, including firewall rules and split-horizon DNS resolution.
+# The @terraform/data-storage/@ subdirectory controls the stateful part of your cluster. Currently it only sets up the S3 bucket for holding the Keep blocks; in the future it will also manage the database service.
+# The @terraform/services/@ subdirectory controls the hosts that will run the different services on your cluster, and makes sure they have the required software for the installer to do its job.
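+
+For orientation, the checked-out directory looks roughly like this (simplified sketch; actual file names may vary):
+
+<pre><code>terraform/
+├── vpc/
+├── data-storage/
+│   └── terraform.tfvars
+└── services/
+    └── terraform.tfvars
+</code></pre>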
 
 h4. Software requirements & considerations
 
@@ -107,7 +111,7 @@ The @data-storage/terraform.tfvars@ and @services/terraform.tfvars@ let you conf
 
 h4. Create the infrastructure
 
-Build the infrastructure by running @./installer.sh terraform@.  The last stage @services/@ will output the information needed to set up the cluster's domain and continue with the installer. for example:
+Build the infrastructure by running @./installer.sh terraform@.  The last stage will output the information needed to set up the cluster's domain and continue with the installer.  For example:
 
 <pre><code>$ ./installer.sh terraform
 ...
@@ -151,11 +155,11 @@ vpc_id = "vpc-0999994998399923a"
 
 h4. Additional DNS configuration
 
-Once Terraform has completed, the infrastructure for your Arvados cluster is up and running.  You are almost ready to have the installer connect to the instances to install and configure the software.
+Once Terraform has completed, the infrastructure for your Arvados cluster is up and running.  One last piece of DNS configuration is required.
 
 The domain names for your cluster (e.g.: controller.xarv1.example.com) are managed via "Route 53":https://aws.amazon.com/route53/ and the TLS certificates will be issued using "Let's Encrypt":https://letsencrypt.org/ .
 
-You will need to configure the parent domain to delegate to the newly created zone.  In other words, you need to configure @${DOMAIN}@ (e.g. "example.com") to delegate the subdomain @${CLUSTER}.${DOMAIN}@ (e.g. "xarv1.example.com") to the nameservers that contain the Arvados hostname records created by Terraform.  You do this by creating an @NS@ record on the parent domain that refers to the appropriate name servers.  These are the domain name servers listed in the Terraform output parameter @route53_dns_ns@.
+You need to configure the parent domain to delegate to the newly created zone.  In other words, you need to configure @${DOMAIN}@ (e.g. "example.com") to delegate the subdomain @${CLUSTER}.${DOMAIN}@ (e.g. "xarv1.example.com") to the nameservers for the Arvados hostname records created by Terraform.  You do this by creating an @NS@ record on the parent domain that refers to the name servers listed in the Terraform output parameter @route53_dns_ns@.
 
 If your parent domain is also controlled by Route 53, the process will be like this:
 
@@ -167,16 +171,24 @@ If your parent domain is also controlled by Route 53, the process will be like t
 # For *Value* add the values from Terraform output parameter @route53_dns_ns@, one hostname per line, with punctuation (quotes and commas) removed.
 # Click *Create records*
 
+If the parent domain is controlled by some other service, follow that service's guide for creating the @NS@ records.
+
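+Once the @NS@ records are in place, you can confirm the delegation from any machine (an optional check using @dig@; @xarv1.example.com@ stands in for your cluster domain).  It should print the same name servers listed in @route53_dns_ns@:
+
+<pre><code>$ dig +short NS xarv1.example.com</code></pre>
+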
 h4. Other important output parameters
 
-Take note of @letsencrypt_iam_access_key_id@ and @letsencrypt_iam_secret_access_key@ for setting up @LE_AWS_*@ variables in @local.params@.  The certificates will be requested when you run the installer.
+* Take note of @letsencrypt_iam_access_key_id@ and @letsencrypt_iam_secret_access_key@ for setting up @LE_AWS_*@ variables in @local.params@.
 
 You'll see that the @letsencrypt_iam_secret_access_key@ data is obscured; to retrieve it you'll need to run the following command inside the @services/@ subdirectory:
 
 <pre><code>$ terraform output letsencrypt_iam_secret_access_key
 "FQ3+3lxxOxxUu+Nw+qx3xixxxExxxV9jFC+XxxRl"</code></pre>
 
-You'll also need @subnet_id@ and @arvados_sg_id@ to set up @DriverParameters.SubnetID@ and @DriverParameters.SecurityGroupIDs@ in @local_config_dir/pillars/arvados.sls@ for when you "create a compute image":#create_a_compute_image.
+The certificates will be requested from Let's Encrypt when you run the installer.
+
+* @vpc_cidr@ will be used to set @CLUSTER_INT_CIDR@.
+
+* You'll also need @subnet_id@ and @arvados_sg_id@ to set @DriverParameters.SubnetID@ and @DriverParameters.SecurityGroupIDs@ in @local_config_dir/pillars/arvados.sls@ and when you "create a compute image":#create_a_compute_image.
+
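+As with the secret key above, these values can be read back at any time from the @services/@ subdirectory (illustrative output shown):
+
+<pre><code>$ terraform output subnet_id
+"subnet-0xxxxxxxxxxxxxxxxx"
+$ terraform output arvados_sg_id
+"sg-0xxxxxxxxxxxxxxxxx"</code></pre>
+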
+You can now proceed to "edit local.params":#localparams.
 
 h3(#inframanual). Create required infrastructure manually
 
@@ -249,7 +261,7 @@ This can be found wherever you choose to initialize the install files (@~/setup-
 
 # Set @CLUSTER@ to the 5-character cluster identifier (e.g. "xarv1")
 # Set @DOMAIN@ to the base DNS domain of the environment, e.g. "example.com"
-# Edit Internal IP settings. Since services share hosts, some hosts are the same.  See "note about /etc/hosts":#etchosts
+# Set the @*_INT_IP@ variables to the internal (private) IP address of each host. Since services share hosts, some values will be the same.  See "note about /etc/hosts":#etchosts
 # Edit @CLUSTER_INT_CIDR@; this should be the CIDR of the private network that Arvados is running on, e.g. the VPC.  See the example below.
 CIDR stands for "Classless Inter-Domain Routing" and describes which portion of the IP address refers to the network.  For example, 192.168.3.0/24 means that the first 24 bits are the network (192.168.3) and the last 8 bits are a specific host on that network.
 _AWS Specific: Go to the AWS console and into the VPC service, there is a column in this table view of the VPCs that gives the CIDR for the VPC (IPv4 CIDR)._
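+For example, if your VPC's IPv4 CIDR is 10.1.0.0/16, the corresponding line in @local.params@ would be (illustrative value):
+
+<pre><code>CLUSTER_INT_CIDR=10.1.0.0/16</code></pre>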
@@ -262,6 +274,7 @@ SYSTEM_ROOT_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 ANONYMOUS_USER_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 WORKBENCH_SECRET_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 DATABASE_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+</code></pre>
 # Set @DATABASE_PASSWORD@ to a random string (unless you "already have a database":#ext-database, in which case set it to that database's password)
    Important! If this contains any non-alphanumeric characters, in particular ampersand ('&'), it is necessary to add backslash quoting.
    For example, if the password is @Lq&MZ<V']d?j@
@@ -286,7 +299,9 @@ Open @local_config_dir/pillars/arvados.sls@ and edit as follows:
 
 # In the @arvados.cluster.Volumes.DriverParameters@ section, set @Region@ to the appropriate AWS region (e.g. 'us-east-1')
 
-If you did not "follow the recommendend naming scheme":#keep-bucket for either the bucket or role, you'll need to update these parameters as well:
+If "followed the recommendend naming scheme":#keep-bucket for both the bucket and role (or used the provided Terraform script), you're done.
+
+If you did not follow the recommendend naming scheme for either the bucket or role, you'll need to update these parameters as well:
 
 # Set @Bucket@ to the name of the "keepstore bucket you created earlier":#keep-bucket
 # Set @IAMRole@ to the name of the "keepstore role you created earlier":#keep-bucket
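+
+For reference, a rough sketch of the relevant section of @local_config_dir/pillars/arvados.sls@ after editing (illustrative values; the exact nesting generated by the installer may differ, for example there may be a volume identifier level under @Volumes@):
+
+<pre><code>arvados:
+  cluster:
+    Volumes:
+      DriverParameters:
+        Region: us-east-1
+        Bucket: xarv1-keep-volume
+        IAMRole: xarv1-keepstore-role
+</code></pre>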

-----------------------------------------------------------------------

