[ARVADOS] updated: 53887579ee362b15de70899b7a95aadd0d695f37

Git user git@public.curoverse.com
Mon Feb 20 16:13:34 EST 2017


Summary of changes:
 doc/_config.yml                                    |  1 +
 .../install-compute-node.html.textile.liquid       |  8 ++
 .../install-slurm.html.textile.liquid}             | 96 +---------------------
 .../install-nodemanager.html.textile.liquid        | 16 +++-
 4 files changed, 26 insertions(+), 95 deletions(-)
 copy doc/install/{install-crunch-dispatch.html.textile.liquid => crunch2-slurm/install-slurm.html.textile.liquid} (53%)

       via  53887579ee362b15de70899b7a95aadd0d695f37 (commit)
      from  6f93ea896d6bd244ad42d2b861023b90f2e39efc (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.


commit 53887579ee362b15de70899b7a95aadd0d695f37
Author: Peter Amstutz <peter.amstutz@curoverse.com>
Date:   Mon Feb 20 16:13:29 2017 -0500

    6520: Add information about setting up SLURM to crunchv2 documentation.

diff --git a/doc/_config.yml b/doc/_config.yml
index 59242ad..9e49d40 100644
--- a/doc/_config.yml
+++ b/doc/_config.yml
@@ -161,6 +161,7 @@ navbar:
       - install/install-keep-web.html.textile.liquid
     - Containers API support on SLURM:
       - install/crunch2-slurm/install-prerequisites.html.textile.liquid
+      - install/crunch2-slurm/install-slurm.html.textile.liquid
       - install/crunch2-slurm/install-compute-node.html.textile.liquid
       - install/crunch2-slurm/install-dispatch.html.textile.liquid
       - install/crunch2-slurm/install-test.html.textile.liquid
diff --git a/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid b/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
index 158616c..330cc3a 100644
--- a/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
@@ -30,3 +30,11 @@ On Debian-based systems:
 {% include 'install_compute_fuse' %}
 
 {% include 'install_docker_cleaner' %}
+
+h2. Set up SLURM
+
+Install SLURM following "the same process you used to install the Crunch dispatcher":install-crunch-dispatch.html#slurm.
+
+h2. Copy configuration files from the dispatcher (API server)
+
+The @slurm.conf@ and @/etc/munge/munge.key@ files need to be identical across the dispatcher and all compute nodes. Copy the files you created in the "Install the Crunch dispatcher":install-crunch-dispatch.html step to this compute node.
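+
+One way to do this, sketched below, is to pull both files from the dispatcher over SSH. The hostname @dispatcher.uuid_prefix.your.domain@ is illustrative, root SSH access from the compute node to the dispatcher is assumed, and Debian paths are shown; adjust all of these for your site.
+
+<notextile>
+<pre><code># Run on the compute node:
+~$ <span class="userinput">sudo scp root@dispatcher.uuid_prefix.your.domain:/etc/slurm-llnl/slurm.conf /etc/slurm-llnl/slurm.conf</span>
+~$ <span class="userinput">sudo scp root@dispatcher.uuid_prefix.your.domain:/etc/munge/munge.key /etc/munge/munge.key</span>
+# The munge key must stay readable only by the munge user:
+~$ <span class="userinput">sudo chown munge:munge /etc/munge/munge.key</span>
+~$ <span class="userinput">sudo chmod 0400 /etc/munge/munge.key</span>
+</code></pre>
+</notextile>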
diff --git a/doc/install/crunch2-slurm/install-slurm.html.textile.liquid b/doc/install/crunch2-slurm/install-slurm.html.textile.liquid
new file mode 100644
index 0000000..aa7e648
--- /dev/null
+++ b/doc/install/crunch2-slurm/install-slurm.html.textile.liquid
@@ -0,0 +1,110 @@
+---
+layout: default
+navsection: installguide
+title: Set up SLURM
+...
+
+h2(#slurm). Set up SLURM
+
+On the API server, install SLURM and munge, and generate a munge key.
+
+On Debian-based systems:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo /usr/bin/apt-get install slurm-llnl munge</span>
+~$ <span class="userinput">sudo /usr/sbin/create-munge-key</span>
+</code></pre>
+</notextile>
+
+On Red Hat-based systems:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo yum install slurm munge slurm-munge</span>
+</code></pre>
+</notextile>
+
+Now we need to give SLURM a configuration file.  On Debian-based systems, this is installed at @/etc/slurm-llnl/slurm.conf@.  On Red Hat-based systems, this is installed at @/etc/slurm/slurm.conf@.  Here's an example @slurm.conf@:
+
+<notextile>
+<pre>
+ControlMachine=uuid_prefix.your.domain
+SlurmctldPort=6817
+SlurmdPort=6818
+AuthType=auth/munge
+StateSaveLocation=/tmp
+SlurmdSpoolDir=/tmp/slurmd
+SwitchType=switch/none
+MpiDefault=none
+SlurmctldPidFile=/var/run/slurmctld.pid
+SlurmdPidFile=/var/run/slurmd.pid
+ProctrackType=proctrack/pgid
+CacheGroups=0
+ReturnToService=2
+TaskPlugin=task/affinity
+#
+# TIMERS
+SlurmctldTimeout=300
+SlurmdTimeout=300
+InactiveLimit=0
+MinJobAge=300
+KillWait=30
+Waittime=0
+#
+# SCHEDULING
+SchedulerType=sched/backfill
+SchedulerPort=7321
+SelectType=select/linear
+FastSchedule=0
+#
+# LOGGING
+SlurmctldDebug=3
+#SlurmctldLogFile=
+SlurmdDebug=3
+#SlurmdLogFile=
+JobCompType=jobcomp/none
+#JobCompLoc=
+JobAcctGatherType=jobacct_gather/none
+#
+# COMPUTE NODES
+NodeName=DEFAULT
+PartitionName=DEFAULT MaxTime=INFINITE State=UP
+
+NodeName=compute[0-255]
+PartitionName=compute Nodes=compute[0-255] Default=YES Shared=YES
+</pre>
+</notextile>
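+
+Once the configuration file is in place, you will generally need to start (or restart) the munge and SLURM daemons on the controller. Exact service names depend on your distribution and SLURM packaging (older Debian packages use a single @slurm-llnl@ service, while others ship separate @slurmctld@ and @slurmd@ units); a sketch for a systemd-based controller node might be:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo systemctl enable munge && sudo systemctl start munge</span>
+~$ <span class="userinput">sudo systemctl enable slurmctld && sudo systemctl start slurmctld</span>
+</code></pre>
+</notextile>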
+
+h3. SLURM configuration essentials
+
+Whenever you change this file, you will need to update the copy _on every compute node_ as well as the controller node, and then run @sudo scontrol reconfigure@.
+
+*@ControlMachine@* should be a DNS name that resolves to the SLURM controller (dispatch/API server). This must resolve correctly on all SLURM worker nodes as well as the controller itself. In general SLURM is very sensitive about all of the nodes being able to communicate with the controller _and one another_, all using the same DNS names.
+
+*@NodeName=compute[0-255]@* establishes that the hostnames of the worker nodes will be compute0, compute1, etc. through compute255.
+* There are several ways to compress sequences of names, like @compute[0-9,80,100-110]@. See the "hostlist" discussion in the @slurm.conf(5)@ and @scontrol(1)@ man pages for more information.
+* It is not necessary for all of the nodes listed here to be alive in order for SLURM to work, although you should make sure the DNS entries exist. It is easiest to define many hostnames up front, then assign them to real nodes and update your DNS records as the nodes appear. This minimizes the frequency of @slurm.conf@ updates and use of @scontrol reconfigure@.
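+
+If you want to see exactly which hostnames a given hostlist expression covers, @scontrol show hostnames@ will expand it for you (the expression below is the illustrative one from the list above):
+
+<notextile>
+<pre><code>~$ <span class="userinput">scontrol show hostnames 'compute[0-9,80,100-110]'</span>
+</code></pre>
+</notextile>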
+
+Each hostname in @slurm.conf@ must also resolve correctly on all SLURM worker nodes as well as the controller itself. Furthermore, the hostnames used in the configuration file must match the hostnames reported by @hostname@ or @hostname -s@ on the nodes themselves. This applies to the @ControlMachine@ as well as the worker nodes.
+
+For example:
+* In @slurm.conf@ on control and worker nodes: @ControlMachine=uuid_prefix.your.domain@
+* In @slurm.conf@ on control and worker nodes: @NodeName=compute[0-255]@
+* In @/etc/resolv.conf@ on control and worker nodes: @search uuid_prefix.your.domain@
+* On the control node: @hostname@ reports @uuid_prefix.your.domain@
+* On worker node 123: @hostname@ reports @compute123.uuid_prefix.your.domain@
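+
+A quick way to spot-check this consistency (using the illustrative names above) is to compare what each machine reports with what DNS returns:
+
+<notextile>
+<pre><code># On the control node:
+~$ <span class="userinput">hostname</span>
+# On any node, confirm that a worker hostname resolves:
+~$ <span class="userinput">getent hosts compute123.uuid_prefix.your.domain</span>
+</code></pre>
+</notextile>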
+
+h3. Automatic hostname assignment
+
+The API server will choose an unused hostname from the set given in @application.yml@, which defaults to @compute[0-255]@.
+
+If it is not feasible to give your compute nodes hostnames like compute0, compute1, etc., you can accommodate other naming schemes with a bit of extra configuration.
+
+If you want Arvados to assign names to your nodes with a different consecutive numeric series like @{worker1-0000, worker1-0001, worker1-0002}@, add an entry to @application.yml@; see @/var/www/arvados-api/current/config/application.default.yml@ for details. Example:
+* In @application.yml@: <code>assign_node_hostname: worker1-%<slot_number>04d</code>
+* In @slurm.conf@: <code>NodeName=worker1-[0000-0255]</code>
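+
+As a concrete sketch, the first line above might appear in @application.yml@ like this, assuming the usual layout where site-wide settings live under the @common@ section (check @application.default.yml@ for the authoritative structure):
+
+<notextile>
+<pre><code>common:
+  assign_node_hostname: worker1-%<slot_number>04d
+</code></pre>
+</notextile>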
+
+If your worker hostnames are already assigned by other means, and the full set of names is known in advance, have your worker node bootstrapping script (see "Installing a compute node":install-compute-node.html) send its current hostname, rather than expect Arvados to assign one.
+* In @application.yml@: <code>assign_node_hostname: false</code>
+* In @slurm.conf@: <code>NodeName=alice,bob,clay,darlene</code>
+
+If your worker hostnames are already assigned by other means, but the full set of names is _not_ known in advance, you can use the @slurm.conf@ and @application.yml@ settings in the previous example, but you must also update @slurm.conf@ (both on the controller and on all worker nodes) and run @sudo scontrol reconfigure@ whenever a new node comes online.
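+
+For example, if a previously unknown node named @eve@ (a hypothetical name) comes online under this scheme, you would add it to the relevant lines of @slurm.conf@ on the controller and every worker, then reconfigure:
+
+<notextile>
+<pre><code># In slurm.conf on the controller and all workers:
+NodeName=alice,bob,clay,darlene,eve
+PartitionName=compute Nodes=alice,bob,clay,darlene,eve Default=YES Shared=YES
+
+# Then, on the controller:
+~$ <span class="userinput">sudo scontrol reconfigure</span>
+</code></pre>
+</notextile>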
diff --git a/doc/install/install-nodemanager.html.textile.liquid b/doc/install/install-nodemanager.html.textile.liquid
index 12e2c02..d8ead76 100644
--- a/doc/install/install-nodemanager.html.textile.liquid
+++ b/doc/install/install-nodemanager.html.textile.liquid
@@ -6,6 +6,8 @@ title: Install Node Manager
 
 Arvados Node Manager provides elastic computing for Arvados and SLURM by creating and destroying virtual machines on demand.  Node Manager currently supports Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure.
 
+Note: Node Manager is only required for elastic cloud computing environments.  Fixed-size clusters do not require Node Manager.
+
 h2. Install
 
 Node manager may run anywhere.  It must be able to communicate with the cluster's SLURM controller using the command line tools @sinfo@, @squeue@ and @scontrol@.
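
 A quick way to confirm this before configuring anything is to run those tools on the Node Manager host; they should complete without errors and show the cluster's partitions and job queue:

 <notextile>
 <pre><code>~$ <span class="userinput">sinfo</span>
 ~$ <span class="userinput">squeue</span>
 ~$ <span class="userinput">scontrol ping</span>
 </code></pre>
 </notextile>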
@@ -24,7 +26,15 @@ On Red Hat-based systems:
 </code></pre>
 </notextile>
 
-h2. Configure
+h2. Create compute image
+
+First, create a virtual machine operating system image that will be used to
+initialize new compute nodes.  This must have @slurm@ installed with a correct
+@slurm.conf@.
+
+Configure Node Manager as described below to use this compute image.
+
+h2. Configure node manager
 
 The configuration file is located at @/etc/arvados-node-manager/config.ini@.  Some configuration details are specific to the cloud provider you are using:
 
@@ -528,3 +538,7 @@ price = 1.12
 </pre>
 
 h2. Running
+
+<notextile>
+<pre><code>~$ <span class="userinput">arvados-node-manager --config /etc/arvados-node-manager/config.ini</span>
+</code></pre>
+</notextile>
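+
+If you prefer to have the daemon supervised rather than run by hand, a minimal systemd unit along these lines could work. This is an illustration, not a file shipped by the package, and the @/usr/bin@ path is an assumption; check where your package installed the executable.
+
+<notextile>
+<pre><code>[Unit]
+Description=Arvados Node Manager
+After=network.target
+
+[Service]
+ExecStart=/usr/bin/arvados-node-manager --config /etc/arvados-node-manager/config.ini
+Restart=always
+
+[Install]
+WantedBy=multi-user.target
+</code></pre>
+</notextile>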

-----------------------------------------------------------------------


hooks/post-receive
-- 



