[arvados] updated: 2.1.0-2960-gadfec6352

git repository hosting git at public.arvados.org
Mon Nov 28 14:54:18 UTC 2022


Summary of changes:
 doc/install/salt-multi-host.html.textile.liquid   | 106 ++++++++++++++++++++--
 tools/salt-install/provision.sh                   |   7 +-
 tools/salt-install/terraform/aws/services/main.tf |   1 -
 3 files changed, 105 insertions(+), 9 deletions(-)

       via  adfec635268de089f04c1a4a5d1244ccbce201ae (commit)
       via  ae718ad33a5ec4ee88f92477f4353927e0fe9d39 (commit)
      from  664cc427ef0b3bdd896240f4e1c80b033b90982c (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.


commit adfec635268de089f04c1a4a5d1244ccbce201ae
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date:   Mon Nov 28 11:53:07 2022 -0300

    19215: Sets up the provision.sh script to use our own postgres formula fork.
    
    This is a temporary measure until it gets properly fixed. See:
    https://github.com/saltstack-formulas/postgres-formula/issues/327
    
    Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>

diff --git a/tools/salt-install/provision.sh b/tools/salt-install/provision.sh
index 63e2d886a..77c201615 100755
--- a/tools/salt-install/provision.sh
+++ b/tools/salt-install/provision.sh
@@ -211,7 +211,10 @@ VERSION="latest"
 SALT_VERSION="3004"
 
 # Other formula versions we depend on
-POSTGRES_TAG="v0.44.0"
+#POSTGRES_TAG="v0.44.0"
+#POSTGRES_URL="https://github.com/saltstack-formulas/postgres-formula.git"
+POSTGRES_TAG="0.45.0-bugfix327"
+POSTGRES_URL="https://github.com/arvados/postgres-formula.git"
 NGINX_TAG="v2.8.1"
 DOCKER_TAG="v2.4.2"
 LOCALE_TAG="v0.3.4"
@@ -352,7 +355,7 @@ test -d nginx && ( cd nginx && git fetch ) \
 
 echo "...postgres"
 test -d postgres && ( cd postgres && git fetch ) \
-  || git clone --quiet https://github.com/saltstack-formulas/postgres-formula.git ${F_DIR}/postgres
+  || git clone --quiet ${POSTGRES_URL} ${F_DIR}/postgres
 ( cd postgres && git checkout --quiet tags/"${POSTGRES_TAG}" )
 
 echo "...letsencrypt"
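The pinning pattern in the hunk above (clone if absent, fetch if present, then check out a fixed tag) can be sketched self-contained. This is a hypothetical illustration, not provision.sh itself: a throwaway local repository stands in for @${POSTGRES_URL}@, and @F_DIR@ is a scratch stand-in for the formulas directory.

```shell
# Self-contained sketch of provision.sh's clone-or-update-then-pin pattern.
# A throwaway local repository stands in for ${POSTGRES_URL}; the tag name
# matches the one pinned in the patch above.
F_DIR=$(mktemp -d)      # stand-in for the formulas directory
UPSTREAM=$(mktemp -d)   # stand-in for the remote formula repository
git -C "$UPSTREAM" init --quiet
git -C "$UPSTREAM" -c user.name=demo -c user.email=demo@example.com \
  commit --quiet --allow-empty -m 'initial commit'
git -C "$UPSTREAM" tag 0.45.0-bugfix327

# Same shape as the provision.sh snippet: fetch if already cloned,
# clone otherwise, then check out the pinned tag (detached HEAD).
test -d "${F_DIR}/postgres" && ( cd "${F_DIR}/postgres" && git fetch ) \
  || git clone --quiet "$UPSTREAM" "${F_DIR}/postgres"
( cd "${F_DIR}/postgres" && git checkout --quiet tags/0.45.0-bugfix327 )
```

Pinning to a tag rather than a branch keeps later upstream pushes from silently changing what the installer deploys.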

commit ae718ad33a5ec4ee88f92477f4353927e0fe9d39
Author: Lucas Di Pentima <lucas.dipentima at curii.com>
Date:   Mon Nov 28 11:39:49 2022 -0300

    19215: Adds documentation on Terraform code.
    
    Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas.dipentima at curii.com>

diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index 368d2ed99..9aa3556e3 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -11,7 +11,8 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 # "Introduction":#introduction
 # "Prerequisites and planning":#prerequisites
-# "Required hosts":#hosts
+## "Create AWS infrastructure with Terraform":#terraform
+## "Create required infrastructure manually":#inframanual
 # "Download the installer":#download
 # "Initialize the installer":#copy_config
 # "Edit local.params":#localparams
@@ -43,7 +44,100 @@ Determine the base domain for the cluster.  This will be referred to as @${DOMAI
 
 For example, if CLUSTER is @xarv1@ and DOMAIN is @example.com@, then @controller.${CLUSTER}.${DOMAIN}@ means @controller.xarv1.example.com@.
 
-h3. Virtual Private Cloud (AWS specific)
+h3(#terraform). Create AWS infrastructure with Terraform
+
+To simplify the tedious and error-prone process of building a working cloud infrastructure for your Arvados cluster, we provide a set of Terraform code files that you can run against Amazon Web Services.
+
+These files are located in the @tools/salt-install/terraform/aws/@ directory and are divided into three sections:
+
+# The @vpc/@ subdirectory controls the network-related infrastructure of your cluster, including firewall rules and split-horizon DNS resolution.
+# The @data-storage/@ subdirectory controls the stateful part of your cluster. Currently it only sets up the S3 bucket that holds the Keep blocks; in the future it will also manage the database service.
+# The @services/@ subdirectory controls the hosts that will run the different services on your cluster, and makes sure they have the software required for the installer to do its job.
+
+h4. Software requirements & considerations
+
+In addition to having the "Terraform CLI":https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli installed on your computer, you'll also need the "AWS CLI":https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html tool with proper credentials already configured.
+
+Once all the required tools are present, the first step is to run @terraform init@ inside each subdirectory so that the required modules get downloaded. If you skip this step, @terraform apply@ will exit with a message asking you to run it first.
+
+{% include 'notebox_begin' %}
+The Terraform state files (which hold crucial information about your cloud infrastructure) will be saved inside each subdirectory, under the name @terraform.tfstate@. You should keep these files secure, as they contain unencrypted secrets. Researching best practices for state file management is left as an exercise for the reader.
+{% include 'notebox_end' %}
+
+h4. Terraform code configuration
+
+Each section described above contains a @terraform.tfvars@ file with configuration values that you should set before applying each configuration. At a minimum, you'll need to set the cluster and domain names in @vpc/terraform.tfvars@:
+
+<pre><code>region_name = "us-east-1"
+# cluster_name = "xarv1"
+# domain_name = "example.com"</code></pre>
+
+The @data-storage/terraform.tfvars@ and @services/terraform.tfvars@ files already have sensible defaults, so you may not need to modify them.
+
+h4. Create the infrastructure
+
+The whole infrastructure needs to be built in stages by running @terraform apply@ inside each subdirectory, in the order they are listed above. Each stage outputs information that is needed by the following stage. The last stage, @services/@, outputs the information needed to set up the cluster's domain and continue with the installer. For example:
+
+<pre><code>$ terraform apply
+...
+Apply complete! Resources: 16 added, 0 changed, 0 destroyed.
+
+Outputs:
+
+arvados_sg_id = "sg-02fa04a2c273166d7"
+cluster_name = "xarv1"
+domain_name = "example.com"
+letsencrypt_iam_access_key_id = "AKIA43MU4DW7K57DBVSD"
+letsencrypt_iam_secret_access_key = <sensitive>
+private_ip = {
+  "controller" = "10.1.1.1"
+  "keep0" = "10.1.1.3"
+  "keep1" = "10.1.1.4"
+  "keepproxy" = "10.1.1.2"
+  "shell" = "10.1.1.7"
+  "workbench" = "10.1.1.5"
+}
+public_ip = {
+  "controller" = "18.235.116.23"
+  "keep0" = "34.202.85.86"
+  "keep1" = "38.22.123.98"
+  "keepproxy" = "34.231.9.201"
+  "shell" = "44.208.155.240"
+  "workbench" = "52.204.134.136"
+}
+route53_dns_ns = tolist([
+  "ns-1119.awsdns-11.org",
+  "ns-1812.awsdns-34.co.uk",
+  "ns-437.awsdns-54.com",
+  "ns-809.awsdns-37.net",
+])
+subnet_id = "subnet-072a139f03938b710"
+vpc_cidr = "10.1.0.0/16"
+vpc_id = "vpc-0934aa4738300423a"
+</code></pre>
+
+You'll see that @letsencrypt_iam_secret_access_key@ is marked as sensitive and obscured; to retrieve it, run the following command inside the @services/@ subdirectory:
+
+<pre><code>$ terraform output letsencrypt_iam_secret_access_key
+"FQ3+3lnBOtWUu+Nw+qb3RiAGqE7DxV9jFC+XTARl"</code></pre>
+
+h4. Final actions
+
+At this stage, the infrastructure for your Arvados cluster is up and running, ready for the installer to connect to the instances and do the final setup.
+
+The domain name for your cluster (e.g., @xarv1.example.com@) is managed via Route53, and the SSL certificates will be issued using Let's Encrypt.
+
+Take note of the name servers listed in @route53_dns_ns@ so you can delegate the zone to them.
+
+You'll need to take note of @letsencrypt_iam_access_key_id@ and @letsencrypt_iam_secret_access_key@ to set the @LE_AWS_*@ variables in @local.params@.
+
+You'll also need @subnet_id@ and @arvados_sg_id@ to set up @DriverParameters.SubnetID@ and @DriverParameters.SecurityGroupIDs@ in @local_config_dir/pillars/arvados.sls@ as "described below":#create_a_compute_image.
+
+h3(#inframanual). Create required infrastructure manually
+
+If you prefer to create and set up your infrastructure manually, the sections below provide recommendations you will need to consider.
+
+h4. Virtual Private Cloud (AWS specific)
 
 We recommend setting Arvados up in a "Virtual Private Cloud (VPC)":https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html
 
@@ -53,13 +147,13 @@ When you do so, you need to configure a couple of additional things:
 # You should set up a "security group which allows SSH access (port 22)":https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
 # Make sure to add a "VPC S3 endpoint":https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
 
-h3(#keep-bucket). S3 Bucket (AWS specific)
+h4(#keep-bucket). S3 Bucket (AWS specific)
 
 We recommend "creating an S3 bucket":https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html for data storage named @${CLUSTER}-nyw5e-000000000000000-volume@.  We recommend creating an IAM role called @${CLUSTER}-keepstore-00-iam-role@ with a "policy that can read, write, list and delete objects in the bucket":configure-s3-object-storage.html#IAM .  With the example cluster id @xarv1@ the bucket would be called @xarv1-nyw5e-000000000000000-volume@ and the role would be called @xarv1-keepstore-00-iam-role@.
 
 These names are recommended because they are default names used in the configuration template.  If you use different names, you will need to edit the configuration template later.
 
-h2(#hosts). Required hosts
+h4(#hosts). Required hosts
 
 You will need to allocate several hosts (physical or virtual machines) for the fixed infrastructure of the Arvados cluster.  These machines should have at least 2 cores and 8 GiB of RAM, running a supported Linux distribution.
 
@@ -90,7 +184,7 @@ The installer will set up the Arvados services on your machines.  Here is the de
 
 When using the database installed by Arvados (and not an "external database":#ext-database), the database is stored under @/var/lib/postgresql@.  Arvados logs are also kept in @/var/log@ and @/var/www/arvados-api/shared/log@.  Accordingly, you should ensure that the disk partition containing @/var@ has adequate storage for your planned usage.  We suggest starting with 50GiB of free space on the database host.
 
-h3(#DNS). DNS hostnames for each service
+h4(#DNS). DNS hostnames for each service
 
 You will need a DNS entry for each service.  In the default configuration these are:
 
@@ -108,7 +202,7 @@ You will need a DNS entry for each service.  In the default configuration these
 
 This is described in more detail in "DNS entries and TLS certificates":install-manual-prerequisites.html#dnstls.
 
-h3. Additional prerequisites when preparing machines to run the installer
+h4. Additional prerequisites when preparing machines to run the installer
 
 # From the account where you are performing the install, passwordless @ssh@ to each machine
 This means the client's public key should be added to @~/.ssh/authorized_keys@ on each node.
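On each node, the passwordless-@ssh@ prerequisite above amounts to appending the installer account's public key to @~/.ssh/authorized_keys@ with the right permissions. A hypothetical, self-contained sketch follows; a scratch directory stands in for the node's home directory, and the key string is a placeholder, not a real public key.

```shell
# Hypothetical sketch of the passwordless-ssh prerequisite.  NODE_HOME is a
# scratch stand-in for the node's home directory; PUBKEY is a placeholder.
NODE_HOME=$(mktemp -d)
PUBKEY='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...placeholder installer@workstation'

mkdir -p "${NODE_HOME}/.ssh"
chmod 700 "${NODE_HOME}/.ssh"                       # sshd requires strict perms
printf '%s\n' "$PUBKEY" >> "${NODE_HOME}/.ssh/authorized_keys"
chmod 600 "${NODE_HOME}/.ssh/authorized_keys"
```

In practice @ssh-copy-id@ performs these steps over an existing (password-authenticated) connection.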
diff --git a/tools/salt-install/terraform/aws/services/main.tf b/tools/salt-install/terraform/aws/services/main.tf
index 911311f83..34eba5e61 100644
--- a/tools/salt-install/terraform/aws/services/main.tf
+++ b/tools/salt-install/terraform/aws/services/main.tf
@@ -105,5 +105,4 @@ resource "aws_eip_association" "eip_assoc" {
   for_each = toset(local.hostnames)
   instance_id = aws_instance.arvados_service[each.value].id
   allocation_id = data.terraform_remote_state.vpc.outputs.eip_id[each.value]
-  # private_ip_address = local.private_ip[each.value]
 }
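The @terraform output@ step shown in the documentation above also lends itself to scripting: recent Terraform releases support @terraform output -raw@, which prints a single value without quotes, ready to assign to the @LE_AWS_*@ variables in @local.params@. The sketch below is hypothetical; a stub function stands in for the real CLI so it can run anywhere. With Terraform installed, drop the stub and run the assignment inside the @services/@ subdirectory.

```shell
# Hypothetical sketch: capture a sensitive Terraform output into a shell
# variable.  The stub function below stands in for the real terraform CLI
# and echoes the example secret from the documentation above.
terraform() { echo 'FQ3+3lnBOtWUu+Nw+qb3RiAGqE7DxV9jFC+XTARl'; }  # stub, remove in real use

LE_AWS_SECRET_ACCESS_KEY="$(terraform output -raw letsencrypt_iam_secret_access_key)"
echo "captured ${#LE_AWS_SECRET_ACCESS_KEY}-character secret"
```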

-----------------------------------------------------------------------


hooks/post-receive
-- 