[arvados] updated: 2.5.0-324-g3add83871
git repository hosting
git@public.arvados.org
Thu Mar 30 14:57:19 UTC 2023
Summary of changes:
doc/install/salt-multi-host.html.textile.liquid | 5 +-
doc/install/salt-single-host.html.textile.liquid | 20 ++++++
.../local.params.example.multiple_hosts | 84 +++++++++++-----------
...l.params.example.single_host_multiple_hostnames | 39 +++++-----
...ocal.params.example.single_host_single_hostname | 59 +++++++--------
5 files changed, 118 insertions(+), 89 deletions(-)
via 3add838714e2b62091881506b90ce915b4f68501 (commit)
via ec733bd6285dcb6a7ab029ba02fe6ddbb64da8d9 (commit)
from 908d141b6564f90c2ed9e0e6c9d7a4397a528c9f (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
commit 3add838714e2b62091881506b90ce915b4f68501
Author: Peter Amstutz <peter.amstutz@curii.com>
Date: Thu Mar 30 10:56:44 2023 -0400
16379: Reorganize local.params examples
Make sure the things that always need to be changed are at the top.
Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz@curii.com>
diff --git a/tools/salt-install/local.params.example.multiple_hosts b/tools/salt-install/local.params.example.multiple_hosts
index ef682c319..72f910b73 100644
--- a/tools/salt-install/local.params.example.multiple_hosts
+++ b/tools/salt-install/local.params.example.multiple_hosts
@@ -16,47 +16,7 @@ DOMAIN="domain_fixme_or_this_wont_work"
# must be root or able to sudo
DEPLOY_USER=admin
-# The mapping of nodes to roles
-# installer.sh will log in to each of these nodes and then provision
-# it for the specified roles.
-NODES=(
- [controller.${CLUSTER}.${DOMAIN}]=database,api,controller,websocket,dispatcher,keepbalance
- [workbench.${CLUSTER}.${DOMAIN}]=monitoring,workbench,workbench2,webshell,keepproxy,keepweb
- [keep0.${CLUSTER}.${DOMAIN}]=keepstore
- [shell.${CLUSTER}.${DOMAIN}]=shell
-)
-
-# Host SSL port where you want to point your browser to access Arvados
-# Defaults to 443 for regular runs, and to 8443 when called in Vagrant.
-# You can point it to another port if desired
-# In Vagrant, make sure it matches what you set in the Vagrantfile (8443)
-CONTROLLER_EXT_SSL_PORT=443
-KEEP_EXT_SSL_PORT=443
-# Both for collections and downloads
-KEEPWEB_EXT_SSL_PORT=443
-WEBSHELL_EXT_SSL_PORT=443
-WEBSOCKET_EXT_SSL_PORT=443
-WORKBENCH1_EXT_SSL_PORT=443
-WORKBENCH2_EXT_SSL_PORT=443
-
-# Internal IPs for the configuration
-CLUSTER_INT_CIDR=10.1.0.0/16
-
-# Note the IPs in this example are shared between roles, as suggested in
-# https://doc.arvados.org/main/install/salt-multi-host.html
-CONTROLLER_INT_IP=10.1.1.11
-WEBSOCKET_INT_IP=10.1.1.11
-KEEP_INT_IP=10.1.1.15
-# Both for collections and downloads
-KEEPWEB_INT_IP=10.1.1.15
-KEEPSTORE0_INT_IP=10.1.2.13
-WORKBENCH1_INT_IP=10.1.1.15
-WORKBENCH2_INT_IP=10.1.1.15
-WEBSHELL_INT_IP=10.1.1.15
-DATABASE_INT_IP=10.1.1.11
-SHELL_INT_IP=10.1.2.17
-
-INITIAL_USER="admin"
+INITIAL_USER=admin
# If not specified, the initial user email will be composed as
# INITIAL_USER@CLUSTER.DOMAIN
@@ -113,6 +73,8 @@ LE_AWS_SECRET_ACCESS_KEY="thisistherandomstringthatisyoursecretkey"
# "download" # Part of keepweb
# "collections" # Part of keepweb
# "keepproxy" # Keepproxy
+# "prometheus"
+# "grafana"
# Ie., 'keep', the script will lookup for
# ${CUSTOM_CERTS_DIR}/keepproxy.crt
# ${CUSTOM_CERTS_DIR}/keepproxy.key
@@ -130,6 +92,46 @@ MONITORING_EMAIL=${INITIAL_USER_EMAIL}
# Sets the directory for Grafana dashboards
# GRAFANA_DASHBOARDS_DIR="${SCRIPT_DIR}/local_config_dir/dashboards"
+# The mapping of nodes to roles
+# installer.sh will log in to each of these nodes and then provision
+# it for the specified roles.
+NODES=(
+ [controller.${CLUSTER}.${DOMAIN}]=database,api,controller,websocket,dispatcher,keepbalance
+ [workbench.${CLUSTER}.${DOMAIN}]=monitoring,workbench,workbench2,webshell,keepproxy,keepweb
+ [keep0.${CLUSTER}.${DOMAIN}]=keepstore
+ [shell.${CLUSTER}.${DOMAIN}]=shell
+)
+
+# Host SSL port where you want to point your browser to access Arvados
+# Defaults to 443 for regular runs, and to 8443 when called in Vagrant.
+# You can point it to another port if desired
+# In Vagrant, make sure it matches what you set in the Vagrantfile (8443)
+CONTROLLER_EXT_SSL_PORT=443
+KEEP_EXT_SSL_PORT=443
+# Both for collections and downloads
+KEEPWEB_EXT_SSL_PORT=443
+WEBSHELL_EXT_SSL_PORT=443
+WEBSOCKET_EXT_SSL_PORT=443
+WORKBENCH1_EXT_SSL_PORT=443
+WORKBENCH2_EXT_SSL_PORT=443
+
+# Internal IPs for the configuration
+CLUSTER_INT_CIDR=10.1.0.0/16
+
+# Note the IPs in this example are shared between roles, as suggested in
+# https://doc.arvados.org/main/install/salt-multi-host.html
+CONTROLLER_INT_IP=10.1.1.11
+WEBSOCKET_INT_IP=10.1.1.11
+KEEP_INT_IP=10.1.1.15
+# Both for collections and downloads
+KEEPWEB_INT_IP=10.1.1.15
+KEEPSTORE0_INT_IP=10.1.2.13
+WORKBENCH1_INT_IP=10.1.1.15
+WORKBENCH2_INT_IP=10.1.1.15
+WEBSHELL_INT_IP=10.1.1.15
+DATABASE_INT_IP=10.1.1.11
+SHELL_INT_IP=10.1.2.17
+
# The directory to check for the config files (pillars, states) you want to use.
# There are a few examples under 'config_examples'.
# CONFIG_DIR="local_config_dir"
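The NODES associative array moved to the top of this diff maps each hostname to a comma-separated role list that installer.sh provisions. As a rough illustration (not the actual installer.sh logic, and with hypothetical cluster/domain values), consuming that mapping in bash could look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of walking a NODES mapping like the one in
# local.params.example.multiple_hosts; the real installer.sh may differ.
CLUSTER=xarv1          # example value, not from the repo
DOMAIN=example.com     # example value, not from the repo
declare -A NODES
NODES=(
  [controller.${CLUSTER}.${DOMAIN}]=database,api,controller,websocket,dispatcher,keepbalance
  [keep0.${CLUSTER}.${DOMAIN}]=keepstore
)
for host in "${!NODES[@]}"; do
  # Split the comma-separated role list into an array.
  IFS=, read -ra roles <<<"${NODES[$host]}"
  echo "$host: ${#roles[@]} role(s): ${roles[*]}"
done
```

Each host would then be visited once, receiving its full role list in a single provisioning pass.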
diff --git a/tools/salt-install/local.params.example.single_host_multiple_hostnames b/tools/salt-install/local.params.example.single_host_multiple_hostnames
index b94d687e4..5633c6cbf 100644
--- a/tools/salt-install/local.params.example.single_host_multiple_hostnames
+++ b/tools/salt-install/local.params.example.single_host_multiple_hostnames
@@ -13,29 +13,14 @@ DOMAIN="domain_fixme_or_this_wont_work"
# For multi-node installs, the ssh log in for each node
# must be root or able to sudo
-DEPLOY_USER=root
+DEPLOY_USER=admin
-# The mapping of nodes to roles
-# installer.sh will log in to each of these nodes and then provision
-# it for the specified roles.
-NODES=(
- [localhost]=''
-)
-
-# External ports used by the Arvados services
-CONTROLLER_EXT_SSL_PORT=443
-KEEP_EXT_SSL_PORT=25101
-KEEPWEB_EXT_SSL_PORT=9002
-WEBSHELL_EXT_SSL_PORT=4202
-WEBSOCKET_EXT_SSL_PORT=8002
-WORKBENCH1_EXT_SSL_PORT=443
-WORKBENCH2_EXT_SSL_PORT=3001
+INITIAL_USER=admin
-INITIAL_USER="admin"
# If not specified, the initial user email will be composed as
# INITIAL_USER@CLUSTER.DOMAIN
INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
-INITIAL_USER_PASSWORD="password"
+INITIAL_USER_PASSWORD="fixmepassword"
# YOU SHOULD CHANGE THESE TO SOME RANDOM STRINGS
BLOB_SIGNING_KEY=fixmeblobsigningkeymushaveatleast32characters
@@ -70,6 +55,22 @@ MONITORING_EMAIL=${INITIAL_USER_EMAIL}
# Sets the directory for Grafana dashboards
# GRAFANA_DASHBOARDS_DIR="${SCRIPT_DIR}/local_config_dir/dashboards"
+# The mapping of nodes to roles
+# installer.sh will log in to each of these nodes and then provision
+# it for the specified roles.
+NODES=(
+ [localhost]=''
+)
+
+# External ports used by the Arvados services
+CONTROLLER_EXT_SSL_PORT=443
+KEEP_EXT_SSL_PORT=25101
+KEEPWEB_EXT_SSL_PORT=9002
+WEBSHELL_EXT_SSL_PORT=4202
+WEBSOCKET_EXT_SSL_PORT=8002
+WORKBENCH1_EXT_SSL_PORT=443
+WORKBENCH2_EXT_SSL_PORT=3001
+
# The directory to check for the config files (pillars, states) you want to use.
# There are a few examples under 'config_examples'.
# CONFIG_DIR="local_config_dir"
@@ -98,3 +99,5 @@ RELEASE="production"
# DOCKER_TAG="v2.4.2"
# LOCALE_TAG="v0.3.4"
# LETSENCRYPT_TAG="v2.1.0"
+# PROMETHEUS_TAG="v5.6.5"
+# GRAFANA_TAG="v3.1.3"
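The certificate comments in the first diff note that for a name such as "keepproxy" the script looks for ${CUSTOM_CERTS_DIR}/keepproxy.crt and ${CUSTOM_CERTS_DIR}/keepproxy.key, with "prometheus" and "grafana" now added to the list. A minimal pre-flight check following that .crt/.key naming convention could look like this (the service list and the empty temporary directory are assumptions for illustration, not the installer's own code):

```shell
#!/usr/bin/env bash
# Hypothetical check: for each service expected to have a custom
# certificate, verify both the .crt and .key files are present.
CUSTOM_CERTS_DIR=${CUSTOM_CERTS_DIR:-$(mktemp -d)}   # empty dir for demo
services=(keepproxy download collections prometheus grafana)
missing=0
for svc in "${services[@]}"; do
  for ext in crt key; do
    if [ ! -f "${CUSTOM_CERTS_DIR}/${svc}.${ext}" ]; then
      echo "missing: ${CUSTOM_CERTS_DIR}/${svc}.${ext}" >&2
      missing=$((missing + 1))
    fi
  done
done
echo "$missing file(s) missing"
```

Run against an empty directory, this reports every expected .crt/.key pair as missing, which makes it easy to see which certificates still need to be staged before deploying.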
diff --git a/tools/salt-install/local.params.example.single_host_single_hostname b/tools/salt-install/local.params.example.single_host_single_hostname
index 42c1ebb72..0c4f5c356 100644
--- a/tools/salt-install/local.params.example.single_host_single_hostname
+++ b/tools/salt-install/local.params.example.single_host_single_hostname
@@ -13,39 +13,14 @@ DOMAIN="domain_fixme_or_this_wont_work"
# For multi-node installs, the ssh log in for each node
# must be root or able to sudo
-DEPLOY_USER=root
+DEPLOY_USER=admin
-# The mapping of nodes to roles
-# installer.sh will log in to each of these nodes and then provision
-# it for the specified roles.
-NODES=(
- [localhost]=''
-)
-
-# HOSTNAME_EXT must be set to the address that users will use to
-# connect to the instance (e.g. what they will type into the URL bar
-# of the browser to get to workbench). If you haven't given the
-# instance a working DNS name, you might need to use an IP address
-# here.
-HOSTNAME_EXT="hostname_ext_fixme_or_this_wont_work"
+INITIAL_USER=admin
-# The internal IP address for the host.
-IP_INT="ip_int_fixme_or_this_wont_work"
-
-# External ports used by the Arvados services
-CONTROLLER_EXT_SSL_PORT=8800
-KEEP_EXT_SSL_PORT=8801
-KEEPWEB_EXT_SSL_PORT=8802
-WEBSHELL_EXT_SSL_PORT=8803
-WEBSOCKET_EXT_SSL_PORT=8804
-WORKBENCH1_EXT_SSL_PORT=8805
-WORKBENCH2_EXT_SSL_PORT=443
-
-INITIAL_USER="admin"
# If not specified, the initial user email will be composed as
# INITIAL_USER@CLUSTER.DOMAIN
INITIAL_USER_EMAIL="admin@cluster_fixme_or_this_wont_work.domain_fixme_or_this_wont_work"
-INITIAL_USER_PASSWORD="password"
+INITIAL_USER_PASSWORD="fixmepassword"
# Populate these values with random strings
BLOB_SIGNING_KEY=fixmeblobsigningkeymushaveatleast32characters
@@ -80,6 +55,32 @@ MONITORING_EMAIL=${INITIAL_USER_EMAIL}
# Sets the directory for Grafana dashboards
# GRAFANA_DASHBOARDS_DIR="${SCRIPT_DIR}/local_config_dir/dashboards"
+# The mapping of nodes to roles
+# installer.sh will log in to each of these nodes and then provision
+# it for the specified roles.
+NODES=(
+ [localhost]=''
+)
+
+# HOSTNAME_EXT must be set to the address that users will use to
+# connect to the instance (e.g. what they will type into the URL bar
+# of the browser to get to workbench). If you haven't given the
+# instance a working DNS name, you might need to use an IP address
+# here.
+HOSTNAME_EXT="hostname_ext_fixme_or_this_wont_work"
+
+# The internal IP address for the host.
+IP_INT="ip_int_fixme_or_this_wont_work"
+
+# External ports used by the Arvados services
+CONTROLLER_EXT_SSL_PORT=8800
+KEEP_EXT_SSL_PORT=8801
+KEEPWEB_EXT_SSL_PORT=8802
+WEBSHELL_EXT_SSL_PORT=8803
+WEBSOCKET_EXT_SSL_PORT=8804
+WORKBENCH1_EXT_SSL_PORT=8805
+WORKBENCH2_EXT_SSL_PORT=443
+
# The directory to check for the config files (pillars, states) you want to use.
# There are a few examples under 'config_examples'.
# CONFIG_DIR="local_config_dir"
@@ -108,3 +109,5 @@ RELEASE="production"
# DOCKER_TAG="v2.4.2"
# LOCALE_TAG="v0.3.4"
# LETSENCRYPT_TAG="v2.1.0"
+# PROMETHEUS_TAG="v5.6.5"
+# GRAFANA_TAG="v3.1.3"
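All three example files guard required edits with "fixme" placeholders (cluster_fixme_or_this_wont_work, domain_fixme_or_this_wont_work, and the new fixmepassword default). A sanity check in that spirit, purely illustrative and not part of installer.sh, might refuse to proceed while any placeholder is untouched:

```shell
#!/usr/bin/env bash
# Hypothetical guard mirroring the "fixme" placeholders used in the
# local.params examples; defaults below simulate an unedited file.
CLUSTER=${CLUSTER:-cluster_fixme_or_this_wont_work}
DOMAIN=${DOMAIN:-domain_fixme_or_this_wont_work}
INITIAL_USER_PASSWORD=${INITIAL_USER_PASSWORD:-fixmepassword}
errors=0
for var in CLUSTER DOMAIN INITIAL_USER_PASSWORD; do
  case "${!var}" in
    *fixme*)
      echo "please edit $var in local.params" >&2
      errors=$((errors + 1))
      ;;
  esac
done
echo "$errors placeholder(s) left"
```

With all three defaults still in place the check flags every variable; once each has been edited to a real value, it reports zero placeholders left.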
commit ec733bd6285dcb6a7ab029ba02fe6ddbb64da8d9
Author: Peter Amstutz <peter.amstutz@curii.com>
Date: Thu Mar 30 10:30:41 2023 -0400
16379: Small doc updates about prometheus and grafana
Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz@curii.com>
diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index 2a8708fd7..0bbc2f2d9 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -63,7 +63,8 @@ In the default configuration these are:
# @workbench2.${CLUSTER}.${DOMAIN}@
# @webshell.${CLUSTER}.${DOMAIN}@
# @shell.${CLUSTER}.${DOMAIN}@
-# @monitoring.${CLUSTER}.${DOMAIN}@
+# @prometheus.${CLUSTER}.${DOMAIN}@
+# @grafana.${CLUSTER}.${DOMAIN}@
For more information, see "DNS entries and TLS certificates":install-manual-prerequisites.html#dnstls.
@@ -415,7 +416,7 @@ You can iterate on the config and maintain the cluster by making changes to @loc
If you are debugging a configuration issue on a specific node, you can speed up the cycle a bit by deploying just one node:
<pre>
-./installer.sh deploy keep0.xarv1.example.com@
+./installer.sh deploy keep0.xarv1.example.com
</pre>
However, once you have a final configuration, you should run a full deploy to ensure that the configuration has been synchronized on all the nodes.
diff --git a/doc/install/salt-single-host.html.textile.liquid b/doc/install/salt-single-host.html.textile.liquid
index 28a03a9c5..f0a393828 100644
--- a/doc/install/salt-single-host.html.textile.liquid
+++ b/doc/install/salt-single-host.html.textile.liquid
@@ -80,6 +80,8 @@ In the default configuration these are:
# @workbench2.${CLUSTER}.${DOMAIN}@
# @webshell.${CLUSTER}.${DOMAIN}@
# @shell.${CLUSTER}.${DOMAIN}@
+# @prometheus.${CLUSTER}.${DOMAIN}@
+# @grafana.${CLUSTER}.${DOMAIN}@
This is described in more detail in "DNS entries and TLS certificates":install-manual-prerequisites.html#dnstls.
@@ -229,6 +231,24 @@ If you did *not* "configure a different authentication provider":#authentication
If you *did* configure a different authentication provider, the first user to log in will automatically be given Arvados admin privileges.
+h2(#monitoring). Monitoring and Metrics
+
+You can monitor the health and performance of the system using the admin dashboard.
+
+For the multi-hostname install, it will be:
+
+https://grafana.@${CLUSTER}.${DOMAIN}@
+
+To log in, use username "admin" and @${INITIAL_USER_PASSWORD}@ from @local.conf@.
+
+Once logged in, you will want to add the dashboards to the front page.
+
+# On the left icon bar, click on "Browse"
+# If the check box next to "Starred" is selected, click on it to de-select it
+# You should see a folder with "Arvados cluster overview", "Node exporter" and "Postgres exporter"
+# You can visit each dashboard and click on the star next to the title to "Mark as favorite"
+# They should now be linked on the front page.
+
h2(#post_install). After the installation
As part of the operation of @installer.sh@, it automatically creates a @git@ repository with your configuration templates. You should retain this repository but be aware that it contains sensitive information (passwords and tokens used by the Arvados services).
-----------------------------------------------------------------------
hooks/post-receive
--
More information about the arvados-commits mailing list