[ARVADOS] created: 368fce7c8f4db5cd32427acb62dcc1ce146d0c37

Git user git@public.curoverse.com
Mon Aug 8 13:56:57 EDT 2016


        at  368fce7c8f4db5cd32427acb62dcc1ce146d0c37 (commit)


commit 368fce7c8f4db5cd32427acb62dcc1ce146d0c37
Author: Brett Smith <brett@curoverse.com>
Date:   Mon Aug 8 13:56:08 2016 -0400

    9705: Add crunch-dispatch-slurm to the Install Guide.

diff --git a/doc/_config.yml b/doc/_config.yml
index b3b213b..8fb2ff7 100644
--- a/doc/_config.yml
+++ b/doc/_config.yml
@@ -162,6 +162,12 @@ navbar:
       - install/configure-azure-blob-storage.html.textile.liquid
       - install/install-keepproxy.html.textile.liquid
       - install/install-keep-web.html.textile.liquid
+    - Install Crunch v2 on SLURM:
+      - install/crunch2-slurm/install-prerequisites.html.textile.liquid
+      - install/crunch2-slurm/install-compute-node.html.textile.liquid
+      - install/crunch2-slurm/install-dispatch.html.textile.liquid
+      - install/crunch2-slurm/install-test.html.textile.liquid
+    - Install Crunch v1:
       - install/install-crunch-dispatch.html.textile.liquid
       - install/install-compute-node.html.textile.liquid
     - Helpful hints:
diff --git a/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid b/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
new file mode 100644
index 0000000..19f8662
--- /dev/null
+++ b/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
@@ -0,0 +1,39 @@
+---
+layout: default
+navsection: installguide
+title: Set up a compute node
+...
+
+h2. Install dependencies
+
+First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+
+{% include 'note_python_sc' %}
+
+On CentOS 6 and RHEL 6:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo yum install python27-python-arvados-fuse crunch-run arvados-docker-cleaner</span>
+</code></pre>
+</notextile>
+
+On other Red Hat-based systems:
+
+<notextile>
+<pre><code>~$ <span class="userinput">echo 'exclude=python2-llfuse' | sudo tee -a /etc/yum.conf</span>
+~$ <span class="userinput">sudo yum install python-arvados-fuse crunch-run arvados-docker-cleaner</span>
+</code></pre>
+</notextile>
+
+On Debian-based systems:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo apt-get install python-arvados-python-client crunch-run arvados-docker-cleaner</span>
+</code></pre>
+</notextile>
+
+{% include 'install_compute_docker' %}
+
+{% include 'install_compute_fuse' %}
+
+{% include 'install_docker_cleaner' %}
diff --git a/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid b/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
new file mode 100644
index 0000000..3135411
--- /dev/null
+++ b/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
@@ -0,0 +1,105 @@
+---
+layout: default
+navsection: installguide
+title: Install the SLURM dispatcher
+...
+
+The SLURM dispatcher can run on any node that can submit requests to both the Arvados API server and the SLURM controller.  It is not resource-intensive, so you can run it on the API server node.
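+
+As a quick sanity check that a candidate node can reach the SLURM controller, @squeue@ should run without errors there (assuming the SLURM client tools are installed):
+
+<notextile>
+<pre><code>$ <span class="userinput">squeue</span>
+</code></pre>
+</notextile>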
+
+h2. Install the dispatcher
+
+First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+
+On Red Hat-based systems:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo yum install crunch-dispatch-slurm</span>
+</code></pre>
+</notextile>
+
+On Debian-based systems:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo apt-get install crunch-dispatch-slurm</span>
+</code></pre>
+</notextile>
+
+h2. Create a dispatcher token
+
+Create a privileged Arvados API token for use by the dispatcher. If you have multiple dispatch processes, you should give each one a different token.  On the API server, run:
+
+<notextile>
+<pre><code>apiserver:~$ <span class="userinput">cd /var/www/arvados-api/current</span>
+apiserver:/var/www/arvados-api/current$ <span class="userinput">sudo -u <b>webserver-user</b> RAILS_ENV=production bundle exec script/create_superuser_token.rb</span>
+zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
+</code></pre>
+</notextile>
+
+h2. Configure the dispatcher
+
+Set up crunch-dispatch-slurm's configuration directory:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo mkdir -p /etc/arvados</span>
+$ <span class="userinput">sudo install -d -o root -g <b>crunch</b> -m 0750 /etc/arvados/crunch-dispatch-slurm</span>
+</code></pre>
+</notextile>
+
+Edit @/etc/arvados/crunch-dispatch-slurm/config.json@ to authenticate to your Arvados API server, using the token you generated in the previous step.  Follow this JSON format:
+
+<notextile>
+<pre><code class="userinput">{
+  "Client": {
+    "APIHost": <b>"zzzzz.arvadosapi.com"</b>,
+    "AuthToken": <b>"zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"</b>
+  }
+}
+</code></pre>
+</notextile>
+
+This is the only configuration required by crunch-dispatch-slurm.  The subsections below describe optional configuration flags you can set inside the main configuration object.
+
+h3. PollPeriod
+
+crunch-dispatch-slurm polls the API server periodically for new containers to run.  The @PollPeriod@ option controls how often this poll happens.  Set this to a string of numbers suffixed with one of the time units @ns@, @us@, @ms@, @s@, @m@, or @h@.  For example:
+
+<notextile>
+<pre><code class="userinput">"PollPeriod": "3m30s"
+</code></pre>
+</notextile>
+
+h3. SbatchArguments
+
+When crunch-dispatch-slurm invokes @sbatch@, you can add switches to the command by specifying @SbatchArguments@.  You can use this to send the jobs to specific cluster partitions or add resource requests.  Set @SbatchArguments@ to an array of strings.  For example:
+
+<notextile>
+<pre><code class="userinput">"SbatchArguments": ["--partition=PartitionName"]
+</code></pre>
+</notextile>
+
+h3. CrunchRunCommand: Dispatch to SLURM cgroups
+
+If your SLURM cluster uses the @task/cgroup@ TaskPlugin, you can configure Crunch's Docker containers to be dispatched inside SLURM's cgroups.  This provides consistent enforcement of resource constraints.  To do this, add the following to your crunch-dispatch-slurm configuration:
+
+<notextile>
+<pre><code class="userinput">"CrunchRunCommand": ["crunch-run", "-cgroup-parent-subsystem=<b>memory</b>"]
+</code></pre>
+</notextile>
+
+The choice of subsystem ("memory" in this example) must correspond to one of the resource types enabled in SLURM's @cgroup.conf@.  Limits for other resource types will also be respected.  The specified subsystem is singled out only to let Crunch determine the name of the cgroup provided by SLURM.
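+
+Putting these options together, a complete configuration using all of them might look like the following sketch (every value shown is illustrative):
+
+<notextile>
+<pre><code class="userinput">{
+  "Client": {
+    "APIHost": "zzzzz.arvadosapi.com",
+    "AuthToken": "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"
+  },
+  "PollPeriod": "10s",
+  "SbatchArguments": ["--partition=PartitionName"],
+  "CrunchRunCommand": ["crunch-run", "-cgroup-parent-subsystem=memory"]
+}
+</code></pre>
+</notextile>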
+
+h2. Restart the dispatcher
+
+{% include 'notebox_begin' %}
+
+The crunch-dispatch-slurm package includes configuration files for systemd.  If you're using a different init system, you'll need to configure a service to start and stop a @crunch-dispatch-slurm@ process as desired.  The process should run from a directory where the @crunch@ user has write permission on all compute nodes, such as its home directory or @/tmp@.  You do not need to specify any additional switches or environment variables.
+
+{% include 'notebox_end' %}
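+
+As a sketch of what that can look like under runit (the supervisor this guide uses for arvados-docker-cleaner), a @run@ script for the dispatcher might be the following; the use of @/tmp@ as the working directory is just one choice satisfying the note above:
+
+<notextile>
+<pre><code>#!/bin/sh
+# Run crunch-dispatch-slurm as the crunch user, from a directory
+# where that user has write permission on all compute nodes.
+cd /tmp
+exec chpst -u <b>crunch</b> crunch-dispatch-slurm
+</code></pre>
+</notextile>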
+
+Restart the dispatcher to run with your new configuration:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo systemctl restart crunch-dispatch-slurm</span>
+</code></pre>
+</notextile>
diff --git a/doc/install/crunch2-slurm/install-prerequisites.html.textile.liquid b/doc/install/crunch2-slurm/install-prerequisites.html.textile.liquid
new file mode 100644
index 0000000..c4dc929
--- /dev/null
+++ b/doc/install/crunch2-slurm/install-prerequisites.html.textile.liquid
@@ -0,0 +1,9 @@
+---
+layout: default
+navsection: installguide
+title: Crunch v2 SLURM prerequisites
+...
+
+Crunch v2 containers can be dispatched to a SLURM cluster.  The dispatcher sends work to the cluster using SLURM's @sbatch@ command, so it works in a variety of SLURM configurations.
+
+To run containers, the dispatcher must run as a user with permission to set up FUSE mounts and run Docker containers on each compute node.  This install guide refers to this user as the @crunch@ user.  We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions.  However, you can run the dispatcher under any account with sufficient permissions across the cluster.
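+
+For example, a minimal sketch of creating such an account on a compute node (the UID/GID of 1100 is an arbitrary illustrative choice, and the @fuse@ and @docker@ groups must already exist, typically created by the FUSE and Docker packages):
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo groupadd --gid <b>1100</b> crunch</span>
+~$ <span class="userinput">sudo useradd --uid <b>1100</b> --gid crunch --groups fuse,docker crunch</span>
+</code></pre>
+</notextile>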
diff --git a/doc/install/crunch2-slurm/install-test.html.textile.liquid b/doc/install/crunch2-slurm/install-test.html.textile.liquid
new file mode 100644
index 0000000..82df496
--- /dev/null
+++ b/doc/install/crunch2-slurm/install-test.html.textile.liquid
@@ -0,0 +1,109 @@
+---
+layout: default
+navsection: installguide
+title: Test SLURM dispatch
+...
+
+h2. Test compute node setup
+
+You should now be able to submit SLURM jobs that run in Docker containers.  On the node where you're running the dispatcher, you can test this by running:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo -u <b>crunch</b> srun -N1 docker run busybox echo OK
+</code></pre>
+</notextile>
+
+If it works, this command should print @OK@ (it may also show some status messages from SLURM and/or Docker).  If it does not print @OK@, double-check your compute node setup, and that the @crunch@ user can submit SLURM jobs.
+
+h2. Test the dispatcher
+
+On the dispatch node, start monitoring the crunch-dispatch-slurm logs:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo journalctl -o cat -fu crunch-dispatch-slurm.service</span>
+</code></pre>
+</notextile>
+
+On a shell VM, submit a simple container request:
+
+<notextile>
+<pre><code>shell:~$ <span class="userinput">arv container_request create --container-request '{
+  "name":            "test",
+  "state":           "Committed",
+  "priority":        1,
+  "container_image": "arvados/jobs:latest",
+  "command":         ["echo", "Hello, Crunch!"],
+  "output_path":     "/out",
+  "mounts": {
+    "/out": {
+      "kind":        "tmp",
+      "capacity":    1000
+    }
+  },
+  "runtime_constraints": {
+    "vcpus": 1,
+    "ram": 8388608
+  }
+}'</span>
+</code></pre>
+</notextile>
+
+This command should return a record with a @container_uuid@ field.  Once crunch-dispatch-slurm polls the API server for new containers to run, you should see it dispatch that same container.  It will log messages like:
+
+<notextile>
+<pre><code>2016/08/05 13:52:54 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 started
+2016/08/05 13:53:04 About to submit queued container zzzzz-dz642-hdp2vpu9nq14tx0
+2016/08/05 13:53:04 sbatch succeeded: Submitted batch job 8102
+</code></pre>
+</notextile>
+
+If you do not see crunch-dispatch-slurm try to dispatch the container, double-check that it is running and that the API hostname and token in @/etc/arvados/crunch-dispatch-slurm/config.json@ are correct.
+
+Before the container finishes, SLURM's @squeue@ command will show the new job in the list of queued and running jobs.  For example, you might see:
+
+<notextile>
+<pre><code>$ <span class="userinput">squeue --long</span>
+Fri Aug  5 13:57:50 2016
+  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)
+   8103   compute zzzzz-dz   crunch  RUNNING       1:56 UNLIMITED      1 compute0
+</code></pre>
+</notextile>
+
+The job's name corresponds to the container's UUID.  You can get more information about it by running, e.g., <notextile><code>scontrol show job Name=<b>UUID</b></code></notextile>.
+
+When the container finishes, the dispatcher will log that, with the final result:
+
+<notextile>
+<pre><code>2016/08/05 13:53:14 Container zzzzz-dz642-hdp2vpu9nq14tx0 now in state "Complete" with locked_by_uuid ""
+2016/08/05 13:53:14 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 finished
+</code></pre>
+</notextile>
+
+After the container finishes, you can get the container record by UUID from a shell VM to see its results:
+
+<notextile>
+<pre><code>shell:~$ <span class="userinput">arv get <b>zzzzz-dz642-hdp2vpu9nq14tx0</b></span>
+{
+ ...
+ "exit_code":0,
+ "log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166",
+ "output":"d41d8cd98f00b204e9800998ecf8427e+0",
+ "state":"Complete",
+ ...
+}
+</code></pre>
+</notextile>
+
+You can use standard Keep tools to view the container's output and logs from their corresponding fields.  For example, to see the logs from the collection referenced in the @log@ field:
+
+<notextile>
+<pre><code>$ <span class="userinput">arv keep ls <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b></span>
+./crunch-run.txt
+./stderr.txt
+./stdout.txt
+$ <span class="userinput">arv keep get <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b>/stdout.txt</span>
+2016-08-05T13:53:06.201011Z Hello, Crunch!
+</code></pre>
+</notextile>
+
+If the container does not dispatch successfully, refer to the crunch-dispatch-slurm logs for information about why it failed.

commit f9ffb421cddf3a1bfcb6d79b31958b4d54ed5906
Author: Brett Smith <brett@curoverse.com>
Date:   Mon Aug 8 10:35:55 2016 -0400

    9705: Refactor out partials from compute node install guide.
    
    These will be reused in the Crunch2 install guide.

diff --git a/doc/_includes/_install_compute_docker.liquid b/doc/_includes/_install_compute_docker.liquid
new file mode 100644
index 0000000..915db02
--- /dev/null
+++ b/doc/_includes/_install_compute_docker.liquid
@@ -0,0 +1,45 @@
+h2. Install Docker
+
+Compute nodes must have Docker installed to run containers.  This requires a relatively recent version of Linux (at least upstream version 3.10, or a distribution version with the appropriate patches backported).  Follow the "Docker Engine installation documentation":https://docs.docker.com/ for your distribution.
+
+For Debian-based systems, the Arvados package repository includes a backported @docker.io@ package with a known-good version you can install.
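+
+For example, on a Debian-based system with the Arvados package repository already configured, one way to install that backported package is:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo apt-get install docker.io</span>
+</code></pre>
+</notextile>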
+
+h2. Configure the Docker daemon
+
+Crunch runs Docker containers with relatively little configuration.  You may need to start the Docker daemon with specific options to make sure these jobs run smoothly in your environment.  This section highlights options that are useful to most installations.  Refer to the "Docker daemon reference":https://docs.docker.com/reference/commandline/daemon/ for complete information about all available options.
+
+The best way to configure these options varies by distribution.
+
+* If you're using our backported @docker.io@ package, you can list these options in the @DOCKER_OPTS@ setting in @/etc/default/docker.io@.
+* If you're using another Debian-based package, you can list these options in the @DOCKER_OPTS@ setting in @/etc/default/docker@.
+* On Red Hat-based distributions, you can list these options in the @other_args@ setting in @/etc/sysconfig/docker@.
+
+h3. Default ulimits
+
+Docker containers inherit ulimits from the Docker daemon.  However, the ulimits for a single Unix daemon may not accommodate a long-running Crunch job.  You may want to increase default limits for compute containers by passing @--default-ulimit@ options to the Docker daemon.  For example, to allow containers to open 10,000 files, set @--default-ulimit nofile=10000:10000@.
+
+h3. DNS
+
+Your containers must be able to resolve the hostname of your API server and any hostnames returned in Keep service records.  If these names are not in public DNS records, you may need to specify a DNS resolver for the containers by setting @--dns@ to the IP address of an appropriate nameserver.  You may specify this option more than once to use multiple nameservers.
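+
+For example, on a Debian-based system, combining the two options described above in @/etc/default/docker@ might look like this (the nameserver address is a placeholder for one appropriate to your network):
+
+<notextile>
+<pre><code>DOCKER_OPTS="--default-ulimit nofile=10000:10000 --dns <b>10.0.0.53</b>"
+</code></pre>
+</notextile>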
+
+h2. Configure Linux cgroups accounting
+
+Linux can report what compute resources are used by processes in a specific cgroup or Docker container.  Crunch can use these reports to share that information with users running compute work.  This can help pipeline authors debug and optimize their workflows.
+
+To enable cgroups accounting, you must boot Linux with the command line parameters @cgroup_enable=memory swapaccount=1@.
+
+On Debian-based systems, open the file @/etc/default/grub@ in an editor.  Find where the string @GRUB_CMDLINE_LINUX@ is set.  Add @cgroup_enable=memory swapaccount=1@ to that string, so the line looks something like this (keep whatever parameters are already present; @quiet@ here is illustrative):
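+
+<notextile>
+<pre><code>GRUB_CMDLINE_LINUX="quiet cgroup_enable=memory swapaccount=1"
+</code></pre>
+</notextile>
+
+Save the file and exit the editor.  Then run: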
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo update-grub</span>
+</code></pre>
+</notextile>
+
+On Red Hat-based systems, run:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo grubby --update-kernel=ALL --args='cgroup_enable=memory swapaccount=1'</span>
+</code></pre>
+</notextile>
+
+Finally, reboot the system to make these changes effective.
diff --git a/doc/_includes/_install_compute_fuse.liquid b/doc/_includes/_install_compute_fuse.liquid
new file mode 100644
index 0000000..2bf3152
--- /dev/null
+++ b/doc/_includes/_install_compute_fuse.liquid
@@ -0,0 +1,17 @@
+h2. Configure FUSE
+
+FUSE must be configured with the @user_allow_other@ option enabled for Crunch to set up Keep mounts that are readable by containers.  Install this file as @/etc/fuse.conf@:
+
+<notextile>
+<pre>
+# Set the maximum number of FUSE mounts allowed to non-root users.
+# The default is 1000.
+#
+#mount_max = 1000
+
+# Allow non-root users to specify the 'allow_other' or 'allow_root'
+# mount options.
+#
+user_allow_other
+</pre>
+</notextile>
diff --git a/doc/_includes/_install_docker_cleaner.liquid b/doc/_includes/_install_docker_cleaner.liquid
new file mode 100644
index 0000000..e26b2be
--- /dev/null
+++ b/doc/_includes/_install_docker_cleaner.liquid
@@ -0,0 +1,39 @@
+h2. Configure the Docker cleaner
+
+The arvados-docker-cleaner program removes least recently used Docker images as needed to keep disk usage below a configured limit.
+
+{% include 'notebox_begin' %}
+This also removes all containers as soon as they exit, as if they were run with @docker run --rm@. If you need to debug or inspect containers after they stop, temporarily stop arvados-docker-cleaner or run it with @--remove-stopped-containers never@.
+{% include 'notebox_end' %}
+
+Install runit to supervise the Docker cleaner daemon.  {% include 'install_runit' %}
+
+Configure runit to run the image cleaner using a suitable quota for your compute nodes and workload:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo mkdir -p /etc/sv</span>
+~$ <span class="userinput">cd /etc/sv</span>
+/etc/sv$ <span class="userinput">sudo mkdir arvados-docker-cleaner; cd arvados-docker-cleaner</span>
+/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo mkdir log log/main</span>
+/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo sh -c 'cat >log/run' <<'EOF'
+#!/bin/sh
+exec svlogd -tt main
+EOF</span>
+/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo sh -c 'cat >run' <<'EOF'
+#!/bin/sh
+if [ -d /opt/rh/python33 ]; then
+  source scl_source enable python33
+fi
+exec python3 -m arvados_docker.cleaner --quota <b>50G</b>
+EOF</span>
+/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo chmod +x run log/run</span>
+/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo ln -s "$(pwd)" /etc/service/</span>
+</code></pre>
+</notextile>
+
+If you are using a different daemon supervisor, or if you want to test the daemon in a terminal window, an equivalent shell command to run arvados-docker-cleaner is:
+
+<notextile>
+<pre><code><span class="userinput">python3 -m arvados_docker.cleaner --quota <b>50G</b></span>
+</code></pre>
+</notextile>
diff --git a/doc/install/install-compute-node.html.textile.liquid b/doc/install/install-compute-node.html.textile.liquid
index f55bceb..b4d0d59 100644
--- a/doc/install/install-compute-node.html.textile.liquid
+++ b/doc/install/install-compute-node.html.textile.liquid
@@ -32,29 +32,7 @@ On Debian-based systems:
 </code></pre>
 </notextile>
 
-h2. Install Docker
-
-Compute nodes must have Docker installed to run jobs inside containers.  This requires a relatively recent version of Linux (at least upstream version 3.10, or a distribution version with the appropriate patches backported).  Follow the "Docker Engine installation documentation":https://docs.docker.com/ for your distribution.
-
-For Debian-based systems, the Arvados package repository includes a backported @docker.io@ package with a known-good version you can install.
-
-h2. Configure Docker
-
-Crunch runs jobs in Docker containers with relatively little configuration.  You may need to start the Docker daemon with specific options to make sure these jobs run smoothly in your environment.  This section highlights options that are useful to most installations.  Refer to the "Docker daemon reference":https://docs.docker.com/reference/commandline/daemon/ for complete information about all available options.
-
-The best way to configure these options varies by distribution.
-
-* If you're using our backported @docker.io@ package, you can list these options in the @DOCKER_OPTS@ setting in @/etc/default/docker.io@.
-* If you're using another Debian-based package, you can list these options in the @DOCKER_OPTS@ setting in @/etc/default/docker@.
-* On Red Hat-based distributions, you can list these options in the @other_args@ setting in @/etc/sysconfig/docker@.
-
-h3. Default ulimits
-
-Docker containers inherit ulimits from the Docker daemon.  However, the ulimits for a single Unix daemon may not accommodate a long-running Crunch job.  You may want to increase default limits for compute jobs by passing @--default-ulimit@ options to the Docker daemon.  For example, to allow jobs to open 10,000 files, set @--default-ulimit nofile=10000:10000@.
-
-h3. DNS
-
-Your containers must be able to resolve the hostname in the ARVADOS_API_HOST environment variable (provided by the Crunch dispatcher) and any hostnames returned in Keep service records.  If these names are not in public DNS records, you may need to set a DNS resolver for the containers by specifying the @--dns@ address with the IP address of an appropriate nameserver.  You may specify this option more than once to use multiple nameservers.
+{% include 'install_compute_docker' %}
 
 h2. Set up SLURM
 
@@ -64,63 +42,9 @@ h2. Copy configuration files from the dispatcher (API server)
 
 The @slurm.conf@ and @/etc/munge/munge.key@ files need to be identical across the dispatcher and all compute nodes. Copy the files you created in the "Install the Crunch dispatcher":install-crunch-dispatch.html step to this compute node.
 
-h2. Configure FUSE
-
-Install this file as @/etc/fuse.conf@:
-
-<notextile>
-<pre>
-# Set the maximum number of FUSE mounts allowed to non-root users.
-# The default is 1000.
-#
-#mount_max = 1000
-
-# Allow non-root users to specify the 'allow_other' or 'allow_root'
-# mount options.
-#
-user_allow_other
-</pre>
-</notextile>
+{% include 'install_compute_fuse' %}
 
-h2. Configure the Docker cleaner
-
-The arvados-docker-cleaner program removes least recently used docker images as needed to keep disk usage below a configured limit.
-
-{% include 'notebox_begin' %}
-This also removes all containers as soon as they exit, as if they were run with @docker run --rm@. If you need to debug or inspect containers after they stop, temporarily stop arvados-docker-cleaner or run it with @--remove-stopped-containers never@.
-{% include 'notebox_end' %}
-
-Install runit to supervise the Docker cleaner daemon.  {% include 'install_runit' %}
-
-Configure runit to run the image cleaner using a suitable quota for your compute nodes and workload:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo mkdir -p /etc/sv</span>
-~$ <span class="userinput">cd /etc/sv</span>
-/etc/sv$ <span class="userinput">sudo mkdir arvados-docker-cleaner; cd arvados-docker-cleaner</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo mkdir log log/main</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo sh -c 'cat >log/run' <<'EOF'
-#!/bin/sh
-exec svlogd -tt main
-EOF</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo sh -c 'cat >run' <<'EOF'
-#!/bin/sh
-if [ -d /opt/rh/python33 ]; then
-  source scl_source enable python33
-fi
-exec python3 -m arvados_docker.cleaner --quota <b>50G</b>
-EOF</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo chmod +x run log/run</span>
-/etc/sv/arvados-docker-cleaner$ <span class="userinput">sudo ln -s "$(pwd)" /etc/service/</span>
-</code></pre>
-</notextile>
-
-If you are using a different daemon supervisor, or if you want to test the daemon in a terminal window, an equivalent shell command to run arvados-docker-cleaner is:
-
-<notextile>
-<pre><code><span class="userinput">python3 -m arvados_docker.cleaner --quota <b>50G</b></span>
-</code></pre>
-</notextile>
+{% include 'install_docker_cleaner' %}
 
 h2. Add a Crunch user account
 

-----------------------------------------------------------------------


hooks/post-receive
-- 



