[arvados] updated: 2.1.0-2966-gafe3ee7d4

git repository hosting git at public.arvados.org
Thu Dec 1 16:16:50 UTC 2022


Summary of changes:
 .../install-compute-node.html.textile.liquid             | 11 ++++++++---
 doc/install/salt-multi-host.html.textile.liquid          | 16 +++++++++++-----
 tools/compute-images/build.sh                            |  6 ++++--
 .../config_examples/multi_host/aws/pillars/arvados.sls   |  2 +-
 4 files changed, 24 insertions(+), 11 deletions(-)

       via  afe3ee7d4dc3c5820e3af561f81c8267c671180b (commit)
      from  0cd28d6727516a0461cd9e10ae7a960d2dcb747d (commit)

The revisions listed above that are new to this repository have not
appeared in any other notification email, so we list them in full
below.


commit afe3ee7d4dc3c5820e3af561f81c8267c671180b
Author: Peter Amstutz <peter.amstutz at curii.com>
Date:   Thu Dec 1 11:15:52 2022 -0500

    19215: A few tweaks about setting up compute image.
    
    Make packer script create a log file.
    
    Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz at curii.com>

diff --git a/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid b/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid
index d282a304b..fb69a0df3 100644
--- a/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid
+++ b/doc/install/crunch2-cloud/install-compute-node.html.textile.liquid
@@ -14,6 +14,7 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 {% include 'notebox_end' %}
 
 # "Introduction":#introduction
+# "Install Packer":#install-packer
 # "Create an SSH keypair":#sshkeypair
 # "Compute image requirements":#requirements
 # "The build script":#building
@@ -34,6 +35,9 @@ Packer templates for AWS and Azure are provided with Arvados. To use them, the f
 * credentials for your cloud account
 * configuration details for your cloud account
 
+h2(#install-packer). Install Packer
+
+"Download Packer here":https://developer.hashicorp.com/packer/downloads
 
 h2(#sshkeypair). Create a SSH keypair
 
@@ -142,9 +146,11 @@ Options:
 
 h2(#dns-resolution). DNS resolution
 
-Compute nodes must be able to resolve the hostnames of the API server and any keepstore servers to your internal IP addresses. You can do this by running an internal DNS resolver. The IP address of the resolver should be passed as the value for the @--resolver@ argument to "the build script":#building.
+Compute nodes must be able to resolve the hostnames of the API server and any keepstore servers to your internal IP addresses.  If you are on AWS and using Route 53 for your DNS, the default resolver configuration can be used with no extra options.
+
+You can also run your own internal DNS resolver. In that case, the IP address of the resolver should be passed as the value for the @--resolver@ argument to "the build script":#building.
 
-Alternatively, the services could be hardcoded into an @/etc/hosts@ file. For example:
+As a third option, the services could be hardcoded into an @/etc/hosts@ file. For example:
 
 <notextile><pre><code>10.20.30.40     <span class="userinput">ClusterID.example.com</span>
 10.20.30.41     <span class="userinput">keep1.ClusterID.example.com</span>
@@ -182,7 +188,6 @@ The @VPC@ and @Subnet@ should be configured for where you want the compute image
            --aws-vpc-id VPC \
            --aws-subnet-id Subnet \
            --ssh_user admin \
-           --resolver ResolverIP \
            --public-key-file ArvadosDispatchCloudPublicKeyPath
 </span>
 </code></pre></notextile>
diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index 88501c100..98f69da4a 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -338,15 +338,21 @@ If you are installing on a different cloud provider or on HPC, other changes may
 
 Any extra Salt "state" files you add under @local_config_dir/states@ will be added to the Salt run and applied to the hosts.
 
-h2(#create_a_compute_image). Create a compute image
+h2(#create_a_compute_image). Configure compute nodes
 
 {% include 'branchname' %}
 
-On cloud installations, containers are dispatched in Docker daemons running in the _compute instances_, which need some additional setup.  If you will use a HPC scheduler such as SLURM you can skip this section.
+If you will use fixed compute nodes with an HPC scheduler such as SLURM or LSF, you will need to "Set up your compute nodes with Docker":{{site.baseurl}}/install/crunch2/install-compute-node-docker.html or "Set up your compute nodes with Singularity":{{site.baseurl}}/install/crunch2/install-compute-node-singularity.html.
 
-*Start by following "the instructions to build a cloud compute node image":{{site.baseurl}}/install/crunch2-cloud/install-compute-node.html using the "compute image builder script":https://github.com/arvados/arvados/tree/{{ branchname }}/tools/compute-images* .
+On cloud installations, containers are dispatched in Docker daemons running in the _compute instances_, which need some additional setup.
 
-Once you have that image created, Open @local_config_dir/pillars/arvados.sls@ and edit as follows (AWS specific settings described here, other cloud providers will have similar settings in their respective configuration section):
+h3. Build the compute image
+
+Follow "the instructions to build a cloud compute node image":{{site.baseurl}}/install/crunch2-cloud/install-compute-node.html using the compute image builder script found in @arvados/tools/compute-images@ in your Arvados clone from "step 3":#download.
+
+h3. Configure the compute image
+
+Once the image has been created, open @local_config_dir/pillars/arvados.sls@ and edit as follows (AWS-specific settings are described here; other cloud providers will have similar settings in their respective configuration sections):
 
 # In the @arvados.cluster.Containers.CloudVMs@ section:
 ## Set @ImageID@ to the AMI produced by Packer
@@ -448,7 +454,7 @@ If you *did* configure a different authentication provider, the first user to lo
 
 h2(#post_install). After the installation
 
-As part of the operation of @installer.sh@, it automatically creates a @git@ repository with your configuration templates.  You should retain this repository but be aware that it contains sensitive information (passwords and tokens used by the Arvados services).
+As part of the operation of @installer.sh@, it automatically creates a @git@ repository with your configuration templates.  You should retain this repository but *be aware that it contains sensitive information* (passwords and tokens used by the Arvados services as well as cloud credentials if you used Terraform to create the infrastructure).
 
 As described in "Iterating on config changes":#iterating you may use @installer.sh deploy@ to re-run the Salt to deploy configuration changes and upgrades.  However, be aware that the configuration templates created for you by @installer.sh@ are a snapshot which are not automatically kept up to date.
 
diff --git a/tools/compute-images/build.sh b/tools/compute-images/build.sh
index 5b3db262c..6a17e8df1 100755
--- a/tools/compute-images/build.sh
+++ b/tools/compute-images/build.sh
@@ -318,8 +318,10 @@ fi
 GOVERSION=$(grep 'const goversion =' ../../lib/install/deps.go |awk -F'"' '{print $2}')
 EXTRA2+=" -var goversion=$GOVERSION"
 
+logfile=packer-$(date -Iseconds).log
+
 echo
 packer version
 echo
-echo packer build$EXTRA -var "arvados_cluster=$ARVADOS_CLUSTER_ID"$EXTRA2 $JSON_FILE
-packer build$EXTRA -var "arvados_cluster=$ARVADOS_CLUSTER_ID"$EXTRA2 $JSON_FILE
+echo packer build$EXTRA -var "arvados_cluster=$ARVADOS_CLUSTER_ID"$EXTRA2 $JSON_FILE | tee -a $logfile
+packer build$EXTRA -var "arvados_cluster=$ARVADOS_CLUSTER_ID"$EXTRA2 $JSON_FILE 2>&1 | tee -a $logfile
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls
index 941bd95f1..25f68ca04 100644
--- a/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls
+++ b/tools/salt-install/config_examples/multi_host/aws/pillars/arvados.sls
@@ -128,7 +128,7 @@ arvados:
         -----BEGIN OPENSSH PRIVATE KEY-----
         Read https://doc.arvados.org/install/crunch2-cloud/install-compute-node.html#sshkeypair
         for details on how to create this key.
-        FIXMEFIXMEFIXMEFI
+        FIXMEFIXMEFIXME replace this with your dispatcher ssh private key
         -----END OPENSSH PRIVATE KEY-----
 
     ### VOLUMES

-----------------------------------------------------------------------


hooks/post-receive
-- 
More information about the arvados-commits mailing list