[ARVADOS] updated: 825b1f013631e58d3612d48dae70be8b4b709af3

Git user git at public.curoverse.com
Tue Aug 9 13:59:19 EDT 2016


Summary of changes:
 doc/_includes/_install_compute_docker.liquid       |  2 +-
 doc/_includes/_install_docker_cleaner.liquid       | 46 +++++++++++-----------
 .../install-dispatch.html.textile.liquid           |  9 +++++
 .../crunch2-slurm/install-test.html.textile.liquid |  4 +-
 4 files changed, 36 insertions(+), 25 deletions(-)

       via  825b1f013631e58d3612d48dae70be8b4b709af3 (commit)
       via  2b89981afe484af4335a079580a0619b8997a27e (commit)
      from  368fce7c8f4db5cd32427acb62dcc1ce146d0c37 (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.


commit 825b1f013631e58d3612d48dae70be8b4b709af3
Author: Brett Smith <brett at curoverse.com>
Date:   Tue Aug 9 13:58:41 2016 -0400

    9705: Add docker-cleaner unit file to Install Guide.

diff --git a/doc/_includes/_install_docker_cleaner.liquid b/doc/_includes/_install_docker_cleaner.liquid
index e26b2be..3745e45 100644
--- a/doc/_includes/_install_docker_cleaner.liquid
+++ b/doc/_includes/_install_docker_cleaner.liquid
@@ -6,34 +6,36 @@ The arvados-docker-cleaner program removes least recently used Docker images as
 This also removes all containers as soon as they exit, as if they were run with @docker run --rm@. If you need to debug or inspect containers after they stop, temporarily stop arvados-docker-cleaner or run it with @--remove-stopped-containers never@.
 {% include 'notebox_end' %}
 
-Install runit to supervise the Docker cleaner daemon.  {% include 'install_runit' %}
-
-Configure runit to run the image cleaner using a suitable quota for your compute nodes and workload:
+Create a file @/etc/systemd/system/arvados-docker-cleaner.service@ with the contents below.  Make sure to edit the @ExecStart@ line appropriately for your compute node.
 
 <notextile>
-<pre><code>~$ <span class="userinput">sudo mkdir -p /etc/sv</span>
-~$ <span class="userinput">cd /etc/sv</span>
-/etc/sv$ <span class="userinput">sudo mkdir arvados-docker-cleaner; cd arvados-docker-cleaner</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo mkdir log log/main</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo sh -c 'cat >log/run' <<'EOF'
-#!/bin/sh
-exec svlogd -tt main
-EOF</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo sh -c 'cat >run' <<'EOF'
-#!/bin/sh
-if [ -d /opt/rh/python33 ]; then
-  source scl_source enable python33
-fi
-exec python3 -m arvados_docker.cleaner --quota <b>50G</b>
-EOF</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo chmod +x run log/run</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo ln -s "$(pwd)" /etc/service/</span>
+<pre><code>[Install]
+WantedBy=default.target
+
+[Unit]
+After=docker.service
+
+[Service]
+# Most deployments will want a quota that's at least 10G.  From there,
+# a larger quota can help reduce compute overhead by preventing reloading
+# the same Docker image repeatedly, but will leave less space for other
+# files on the same storage (usually Docker volumes).  Make sure the quota
+# is less than the total space available for Docker images.
+# If your deployment uses a Python 3 Software Collection, uncomment the
+# ExecStart line below, and delete the following one:
+# ExecStart=scl enable python33 "python3 -m arvados_docker.cleaner --quota <span class="userinput">20G</span>"
+ExecStart=python3 -m arvados_docker.cleaner --quota <span class="userinput">20G</span>
+Restart=always
+RestartPreventExitStatus=2
 </code></pre>
 </notextile>
 
-If you are using a different daemon supervisor, or if you want to test the daemon in a terminal window, an equivalent shell command to run arvados-docker-cleaner is:
+Then enable and start the service:
 
 <notextile>
-<pre><code><span class="userinput">python3 -m arvados_docker.cleaner --quota <b>50G</b></span>
+<pre><code>~$ <span class="userinput">sudo systemctl enable arvados-docker-cleaner.service</span>
+~$ <span class="userinput">sudo systemctl start arvados-docker-cleaner.service</span>
 </code></pre>
 </notextile>
+
+If you are using a different daemon supervisor, or if you want to test the daemon in a terminal window, use the command on the @ExecStart@ line above.
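
For a quick sanity check, the terminal invocation is the same command as the @ExecStart@ line; the second variant below is only a sketch of how you might combine it with @--remove-stopped-containers never@ (mentioned in the note box above) to keep exited containers around while debugging, and the 20G quota is just the example value from the unit file:

<notextile>
<pre><code>~$ <span class="userinput">python3 -m arvados_docker.cleaner --quota 20G</span>
~$ <span class="userinput">python3 -m arvados_docker.cleaner --quota 20G --remove-stopped-containers never</span>
</code></pre>
</notextile>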

commit 2b89981afe484af4335a079580a0619b8997a27e
Author: Brett Smith <brett at curoverse.com>
Date:   Tue Aug 9 13:28:15 2016 -0400

    9705: Fixups from review.

diff --git a/doc/_includes/_install_compute_docker.liquid b/doc/_includes/_install_compute_docker.liquid
index 915db02..ff21d8f 100644
--- a/doc/_includes/_install_compute_docker.liquid
+++ b/doc/_includes/_install_compute_docker.liquid
@@ -4,7 +4,7 @@ Compute nodes must have Docker installed to run containers.  This requires a rel
 
 For Debian-based systems, the Arvados package repository includes a backported @docker.io@ package with a known-good version you can install.
 
-h2. Configure the Docker daemon
+h2(#configure_docker_daemon). Configure the Docker daemon
 
 Crunch runs Docker containers with relatively little configuration.  You may need to start the Docker daemon with specific options to make sure these jobs run smoothly in your environment.  This section highlights options that are useful to most installations.  Refer to the "Docker daemon reference":https://docs.docker.com/reference/commandline/daemon/ for complete information about all available options.
 
diff --git a/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid b/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
index 3135411..d8dd1fa 100644
--- a/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
@@ -15,6 +15,7 @@ On Red Hat-based systems:
 
 <notextile>
 <pre><code>$ <span class="userinput">sudo yum install crunch-dispatch-slurm</span>
+$ <span class="userinput">sudo systemctl enable crunch-dispatch-slurm</span>
 </code></pre>
 </notextile>
 
@@ -89,6 +90,14 @@ If your SLURM cluster uses the @task/cgroup@ TaskPlugin, you can configure Crunc
 
 The choice of subsystem ("memory" in this example) must correspond to one of the resource types enabled in SLURM's @cgroup.conf@. Limits for other resource types will also be respected.  The specified subsystem is singled out only to let Crunch determine the name of the cgroup provided by SLURM.
 
+{% include 'notebox_begin' %}
+
+Some versions of Docker (at least 1.9), when run under systemd, require the cgroup parent to be specified as a systemd slice.  This causes an error when specifying a cgroup parent created outside systemd, such as one created by SLURM.
+
+You can work around this issue by disabling the Docker daemon's systemd integration.  This makes it more difficult to manage Docker services with systemd, but Crunch does not require that functionality, and it will be able to use SLURM's cgroups as container parents.  To do this, "configure the Docker daemon on all compute nodes":install-compute-node.html#configure_docker_daemon to run with the option @--exec-opt native.cgroupdriver=cgroupfs@.
+
+{% include 'notebox_end' %}
+
 h2. Restart the dispatcher
 
 {% include 'notebox_begin' %}
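
The linked install-compute-node page has the supported way to set that option; purely as an illustration of what it amounts to on a systemd-managed node, a drop-in for docker.service might look like the following (the drop-in path and the @docker daemon@ invocation are assumptions based on Docker 1.9-era packaging, not taken from the docs):

<notextile>
<pre><code>~$ <span class="userinput">sudo mkdir -p /etc/systemd/system/docker.service.d</span>
~$ <span class="userinput">sudo sh -c 'cat >/etc/systemd/system/docker.service.d/cgroupfs.conf' <<'EOF'
[Service]
# Clear the packaged ExecStart, then start the daemon with the cgroupfs driver.
ExecStart=
ExecStart=/usr/bin/docker daemon --exec-opt native.cgroupdriver=cgroupfs
EOF</span>
~$ <span class="userinput">sudo systemctl daemon-reload</span>
~$ <span class="userinput">sudo systemctl restart docker</span>
</code></pre>
</notextile>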
diff --git a/doc/install/crunch2-slurm/install-test.html.textile.liquid b/doc/install/crunch2-slurm/install-test.html.textile.liquid
index 82df496..5dec020 100644
--- a/doc/install/crunch2-slurm/install-test.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-test.html.textile.liquid
@@ -24,7 +24,7 @@ On the dispatch node, start monitoring the crunch-dispatch-slurm logs:
 </code></pre>
 </notextile>
 
-On a shell VM, submit a simple container request:
+*On your shell server*, submit a simple container request:
 
 <notextile>
 <pre><code>shell:~$ <span class="userinput">arv container_request create --container-request '{
@@ -79,7 +79,7 @@ When the container finishes, the dispatcher will log that, with the final result
 </code></pre>
 </notextile>
 
-After the container finishes, you can get the container record by UUID from a shell VM to see its results:
+After the container finishes, you can get the container record by UUID *from a shell server* to see its results:
 
 <notextile>
 <pre><code>shell:~$ <span class="userinput">arv get <b>zzzzz-dz642-hdp2vpu9nq14tx0</b></span>
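
As an aside, if crunch-dispatch-slurm runs under systemd as enabled earlier in this guide, one plain way to follow its logs while the test runs is journalctl (an assumption about the deployment; the guide's own monitoring command is not shown in this hunk):

<notextile>
<pre><code>~$ <span class="userinput">sudo journalctl -f -u crunch-dispatch-slurm</span>
</code></pre>
</notextile>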

-----------------------------------------------------------------------


hooks/post-receive
-- 