[ARVADOS] updated: 1.3.0-1989-g102df1945

Git user git@public.arvados.org
Thu Dec 12 22:24:38 UTC 2019


Summary of changes:
 doc/_config.yml                                    | 15 ++--
 doc/_includes/_install_compute_docker.liquid       | 70 ++++++-----------
 doc/_includes/_install_compute_fuse.liquid         |  3 +-
 doc/_includes/_install_docker_cleaner.liquid       | 17 +----
 doc/_includes/_install_git_curl.liquid             | 19 -----
 doc/_includes/_install_packages.liquid             | 24 ++++++
 doc/_includes/_restart_api.liquid                  |  8 ++
 doc/_includes/_start_service.liquid                | 15 ++++
 .../install-compute-node.html.textile.liquid       | 35 ++++-----
 .../install-dispatch.html.textile.liquid           | 84 ++++++---------------
 .../install-prerequisites.html.textile.liquid      |  4 -
 .../install-slurm.html.textile.liquid              |  5 ++
 .../crunch2-slurm/install-test.html.textile.liquid |  8 +-
 doc/install/index.html.textile.liquid              |  2 +-
 doc/install/install-api-server.html.textile.liquid | 20 ++---
 doc/install/install-composer.html.textile.liquid   | 25 +------
 .../install-dispatch-cloud.html.textile.liquid     | 82 ++++++++++++++++----
 ...e.liquid => install-docker.html.textile.liquid} |  4 +-
 doc/install/install-jobs-image.html.textile.liquid | 38 ++++++++++
 .../install-keep-balance.html.textile.liquid       | 46 ++----------
 doc/install/install-keep-web.html.textile.liquid   | 45 +----------
 doc/install/install-keepproxy.html.textile.liquid  | 47 ++----------
 doc/install/install-keepstore.html.textile.liquid  | 27 +------
 .../install-shell-server.html.textile.liquid       | 55 +++++++++-----
 .../install-workbench-app.html.textile.liquid      | 26 +------
 .../install-workbench2-app.html.textile.liquid     | 25 +------
 doc/install/install-ws.html.textile.liquid         | 54 ++------------
 doc/sdk/cli/install.html.textile.liquid            | 38 +++-------
 doc/sdk/go/example.html.textile.liquid             |  6 +-
 doc/sdk/go/index.html.textile.liquid               |  8 +-
 doc/sdk/index.html.textile.liquid                  |  2 +-
 doc/sdk/java/index.html.textile.liquid             |  2 +
 doc/sdk/perl/index.html.textile.liquid             |  5 +-
 doc/sdk/python/example.html.textile.liquid         | 14 ++++
 doc/sdk/python/sdk-python.html.textile.liquid      | 87 ++++------------------
 35 files changed, 363 insertions(+), 602 deletions(-)
 delete mode 100644 doc/_includes/_install_git_curl.liquid
 create mode 100644 doc/_includes/_install_packages.liquid
 create mode 100644 doc/_includes/_restart_api.liquid
 create mode 100644 doc/_includes/_start_service.liquid
 copy doc/install/{ruby.html.textile.liquid => install-docker.html.textile.liquid} (70%)
 create mode 100644 doc/install/install-jobs-image.html.textile.liquid

       via  102df19458ef2c97d1ef4ba0e571e3204d7073e6 (commit)
      from  4415cbfb461d331af78257838a91d1c1f3d9bb41 (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.


commit 102df19458ef2c97d1ef4ba0e571e3204d7073e6
Author: Peter Amstutz <peter.amstutz@curii.com>
Date:   Thu Dec 12 17:21:56 2019 -0500

    15572: Epic documentation commit
    
    * Revise dispatcher install
    * Revise shell and compute node install
    * Fix up Python and CLI SDK install docs
    * Refactor boilerplate sections into parameterized liquid template include
    
    Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz@curii.com>

diff --git a/doc/_config.yml b/doc/_config.yml
index d9c5ece74..8ae6430ec 100644
--- a/doc/_config.yml
+++ b/doc/_config.yml
@@ -88,9 +88,6 @@ navbar:
     - R:
       - sdk/R/index.html.md
       - sdk/R/arvados/index.html.textile.liquid
-    - Perl:
-      - sdk/perl/index.html.textile.liquid
-      - sdk/perl/example.html.textile.liquid
     - Ruby:
       - sdk/ruby/index.html.textile.liquid
       - sdk/ruby/example.html.textile.liquid
@@ -101,6 +98,9 @@ navbar:
     - Java v1:
       - sdk/java/index.html.textile.liquid
       - sdk/java/example.html.textile.liquid
+    - Perl:
+      - sdk/perl/index.html.textile.liquid
+      - sdk/perl/example.html.textile.liquid
   api:
     - Concepts:
       - api/index.html.textile.liquid
@@ -213,12 +213,10 @@ navbar:
       - install/install-ws.html.textile.liquid
       - install/install-arv-git-httpd.html.textile.liquid
       - install/install-shell-server.html.textile.liquid
-    - Containers API support on cloud:
-      - install/install-dispatch-cloud.html.textile.liquid      
-    - Containers API support on SLURM:
-      - install/crunch2-slurm/install-prerequisites.html.textile.liquid
-      - install/crunch2-slurm/install-slurm.html.textile.liquid
+    - Containers API:
       - install/crunch2-slurm/install-compute-node.html.textile.liquid
+      - install/install-jobs-image.html.textile.liquid
+      - install/install-dispatch-cloud.html.textile.liquid
       - install/crunch2-slurm/install-dispatch.html.textile.liquid
       - install/crunch2-slurm/install-test.html.textile.liquid
     - External dependencies:
@@ -226,3 +224,4 @@ navbar:
       - install/ruby.html.textile.liquid
       - install/nginx.html.textile.liquid
       - install/google-auth.html.textile.liquid
+      - install/install-docker.html.textile.liquid
diff --git a/doc/_includes/_install_compute_docker.liquid b/doc/_includes/_install_compute_docker.liquid
index 5a3efee74..63c54aed7 100644
--- a/doc/_includes/_install_compute_docker.liquid
+++ b/doc/_includes/_install_compute_docker.liquid
@@ -4,73 +4,47 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-h2. Install Docker
+h2(#cgroups). Configure Linux cgroups accounting
 
-Compute nodes must have Docker installed to run containers.  This requires a relatively recent version of Linux (at least upstream version 3.10, or a distribution version with the appropriate patches backported).  Follow the "Docker Engine installation documentation":https://docs.docker.com/ for your distribution.
-
-h2(#configure_docker_daemon). Configure the Docker daemon
-
-Crunch runs Docker containers with relatively little configuration.  You may need to start the Docker daemon with specific options to make sure these jobs run smoothly in your environment.  This section highlights options that are useful to most installations.  Refer to the "Docker daemon reference":https://docs.docker.com/reference/commandline/daemon/ for complete information about all available options.
-
-The best way to configure these options varies by distribution.
-
-* If you're using the Debian package, you can list these options in the @DOCKER_OPTS@ setting in @/etc/default/docker@.
-* On Red Hat-based distributions, you can list these options in the @other_args@ setting in @/etc/sysconfig/docker@.
-
-h3. Default ulimits
-
-Docker containers inherit ulimits from the Docker daemon.  However, the ulimits for a single Unix daemon may not accommodate a long-running Crunch job.  You may want to increase default limits for compute containers by passing @--default-ulimit@ options to the Docker daemon.  For example, to allow containers to open 10,000 files, set @--default-ulimit nofile=10000:10000@.
+Linux can report what compute resources are used by processes in a specific cgroup or Docker container.  Crunch can use these reports to share that information with users running compute work.  This can help pipeline authors debug and optimize their workflows.
 
-h3. DNS
+To enable cgroups accounting, you must boot Linux with the command line parameters @cgroup_enable=memory swapaccount=1@.
 
-Your containers must be able to resolve the hostname of your API server and any hostnames returned in Keep service records.  If these names are not in public DNS records, you may need to specify a DNS resolver for the containers by setting the @--dns@ address to an IP address of an appropriate nameserver.  You may specify this option more than once to use multiple nameservers.
+After making changes, reboot the system to make these changes effective.
 
-h2. Configure Linux cgroups accounting
+h3. Red Hat and CentOS
 
-Linux can report what compute resources are used by processes in a specific cgroup or Docker container.  Crunch can use these reports to share that information with users running compute work.  This can help pipeline authors debug and optimize their workflows.
+<notextile>
+<pre><code>~$ <span class="userinput">sudo grubby --update-kernel=ALL --args='cgroup_enable=memory swapaccount=1'</span>
+</code></pre>
+</notextile>
 
-To enable cgroups accounting, you must boot Linux with the command line parameters @cgroup_enable=memory swapaccount=1@.
+h3. Debian and Ubuntu
 
-On Debian-based systems, open the file @/etc/default/grub@ in an editor.  Find where the string @GRUB_CMDLINE_LINUX@ is set.  Add @cgroup_enable=memory swapaccount=1@ to that string.  Save the file and exit the editor.  Then run:
+Open the file @/etc/default/grub@ in an editor.  Find where the string @GRUB_CMDLINE_LINUX@ is set.  Add @cgroup_enable=memory swapaccount=1@ to that string.  Save the file and exit the editor.  Then run:
 
 <notextile>
 <pre><code>~$ <span class="userinput">sudo update-grub</span>
 </code></pre>
 </notextile>
 
-On Red Hat-based systems, run:
+h2(#install_docker). Install Docker
+
+Compute nodes must have Docker installed to run containers.  This requires a relatively recent version of Linux (at least upstream version 3.10, or a distribution version with the appropriate patches backported).  Follow the "Docker Engine installation documentation":https://docs.docker.com/install/ for your distribution.
+
+Make sure Docker is enabled to start on boot:
 
 <notextile>
-<pre><code>~$ <span class="userinput">sudo grubby --update-kernel=ALL --args='cgroup_enable=memory swapaccount=1'</span>
+<pre><code># <span class="userinput">systemctl enable --now docker</span>
 </code></pre>
 </notextile>
 
-Finally, reboot the system to make these changes effective.
-
-h2. Create a project for Docker images
+h2(#configure_docker_daemon). Configure the Docker daemon
 
-Here we create a default project for the standard Arvados Docker images, and give all users read access to it. The project is owned by the system user.
+Depending on your anticipated workload or cluster configuration, you may need to tweak Docker options.
 
-<notextile>
-<pre><code>~$ <span class="userinput">uuid_prefix=`arv --format=uuid user current | cut -d- -f1`</span>
-~$ <span class="userinput">project_uuid=`arv --format=uuid group create --group "{\"owner_uuid\":\"$uuid_prefix-tpzed-000000000000000\", \"group_class\":\"project\", \"name\":\"Arvados Standard Docker Images\"}"`</span>
-~$ <span class="userinput">echo "Arvados project uuid is '$project_uuid'"</span>
-~$ <span class="userinput">read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"</span>
-<span class="userinput">{
- "tail_uuid":"$all_users_group_uuid",
- "head_uuid":"$project_uuid",
- "link_class":"permission",
- "name":"can_read"
-}
-EOF</span>
-</code></pre></notextile>
-
-h2. Download and tag the latest arvados/jobs docker image
-
-In order to start workflows from workbench, there needs to be a Docker image tagged @arvados/jobs:latest@.  The following command downloads the latest arvados/jobs image from Docker Hub, loads it into Keep, and tags it as 'latest'.  In this example @$project_uuid@ should be the UUID of the "Arvados Standard Docker Images" project.
+For information about how to set configuration options for the Docker daemon, see https://docs.docker.com/config/daemon/systemd/
 
-<notextile>
-<pre><code>~$ <span class="userinput">arv-keepdocker --pull arvados/jobs latest --project-uuid $project_uuid</span>
-</code></pre></notextile>
+h3. Changing ulimits
 
-If the image needs to be downloaded from Docker Hub, the command can take a few minutes to complete, depending on available network bandwidth.
+Docker containers inherit ulimits from the Docker daemon.  However, the ulimits for a single Unix daemon may not accommodate a long-running Crunch job.  You may want to increase default limits for compute containers by passing @--default-ulimit@ options to the Docker daemon.  For example, to allow containers to open 10,000 files, set @--default-ulimit nofile=10000:10000@.
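The same default ulimits can also be expressed in Docker's @daemon.json@ file instead of daemon command-line flags. A minimal sketch of the equivalent configuration (the path @/etc/docker/daemon.json@ and the @default-ulimits@ key are Docker's documented mechanism; the values here are illustrative):

```python
import json

# Equivalent of passing --default-ulimit nofile=10000:10000 to dockerd,
# expressed as the "default-ulimits" key of /etc/docker/daemon.json.
daemon_config = {
    "default-ulimits": {
        "nofile": {"Name": "nofile", "Soft": 10000, "Hard": 10000}
    }
}

rendered = json.dumps(daemon_config, indent=2)
print(rendered)
```

Writing this JSON to @/etc/docker/daemon.json@ and restarting the daemon has the same effect as the command-line flag.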
diff --git a/doc/_includes/_install_compute_fuse.liquid b/doc/_includes/_install_compute_fuse.liquid
index b72536272..95679f3fa 100644
--- a/doc/_includes/_install_compute_fuse.liquid
+++ b/doc/_includes/_install_compute_fuse.liquid
@@ -4,7 +4,7 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-h2. Configure FUSE
+h2(#fuse). Update fuse.conf
 
 FUSE must be configured with the @user_allow_other@ option enabled for Crunch to set up Keep mounts that are readable by containers.  Install this file as @/etc/fuse.conf@:
 
@@ -12,7 +12,6 @@ FUSE must be configured with the @user_allow_other@ option enabled for Crunch to
 <pre>
 # Allow non-root users to specify the 'allow_other' or 'allow_root'
 # mount options.
-#
 user_allow_other
 </pre>
 </notextile>
diff --git a/doc/_includes/_install_docker_cleaner.liquid b/doc/_includes/_install_docker_cleaner.liquid
index ecd78390e..f8e9e049d 100644
--- a/doc/_includes/_install_docker_cleaner.liquid
+++ b/doc/_includes/_install_docker_cleaner.liquid
@@ -4,14 +4,10 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-h2. Configure the Docker cleaner
+h2(#docker-cleaner). Update docker-cleaner.json
 
 The @arvados-docker-cleaner@ program removes least recently used Docker images as needed to keep disk usage below a configured limit.
 
-{% include 'notebox_begin' %}
-This also removes all containers as soon as they exit, as if they were run with @docker run --rm@. If you need to debug or inspect containers after they stop, temporarily stop arvados-docker-cleaner or configure it with @"RemoveStoppedContainers":"never"@.
-{% include 'notebox_end' %}
-
 Create a file @/etc/arvados/docker-cleaner/docker-cleaner.json@ in an editor, with the following contents.
 
 <notextile>
@@ -24,11 +20,6 @@ Create a file @/etc/arvados/docker-cleaner/docker-cleaner.json@ in an editor, wi
 
 *Choosing a quota:* Most deployments will want a quota that's at least 10G.  From there, a larger quota can help reduce compute overhead by preventing reloading the same Docker image repeatedly, but will leave less space for other files on the same storage (usually Docker volumes).  Make sure the quota is less than the total space available for Docker images.
 
-Restart the service after updating the configuration file.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart arvados-docker-cleaner</span>
-</code></pre>
-</notextile>
-
-*If you are using a different daemon supervisor,* or if you want to test the daemon in a terminal window, run @arvados-docker-cleaner@. Run @arvados-docker-cleaner --help@ for more configuration options.
+{% include 'notebox_begin' %}
+This also removes all containers as soon as they exit, as if they were run with @docker run --rm@. If you need to debug or inspect containers after they stop, temporarily stop arvados-docker-cleaner or configure it with @"RemoveStoppedContainers":"never"@.
+{% include 'notebox_end' %}
diff --git a/doc/_includes/_install_git_curl.liquid b/doc/_includes/_install_git_curl.liquid
deleted file mode 100644
index 40b95d314..000000000
--- a/doc/_includes/_install_git_curl.liquid
+++ /dev/null
@@ -1,19 +0,0 @@
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-On a Debian-based system, install the following packages:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install git curl</span>
-</code></pre>
-</notextile>
-
-On a Red Hat-based system, install the following packages:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install git curl</span>
-</code></pre>
-</notextile>
diff --git a/doc/_includes/_install_packages.liquid b/doc/_includes/_install_packages.liquid
new file mode 100644
index 000000000..bfac32d83
--- /dev/null
+++ b/doc/_includes/_install_packages.liquid
@@ -0,0 +1,24 @@
+{% comment %}
+packages_to_install should be a list
+fallback on arvados_component if not defined
+{% endcomment %}
+
+{% if packages_to_install == nil %}
+  {% assign packages_to_install = arvados_component | split: " " %}
+{% endif %}
+
+h2(#install-packages). Install {{packages_to_install | join: " and " }}
+
+h3. Red Hat and CentOS
+
+<notextile>
+<pre><code># <span class="userinput">yum install {{packages_to_install | join: " "}}</span>
+</code></pre>
+</notextile>
+
+h3. Debian and Ubuntu
+
+<notextile>
+<pre><code># <span class="userinput">apt-get install {{packages_to_install | join: " "}}</span>
+</code></pre>
+</notextile>
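The include's rendering logic can be sketched outside Liquid. Assuming @arvados_component@ holds a space-separated package list, the @split@ and @join@ filters behave like this Python sketch (illustrative only, not part of the docs build):

```python
# Mimic the Liquid include: arvados_component is a space-separated
# string, split into a list, then joined for the heading and the
# install command lines.
arvados_component = "python-arvados-fuse crunch-run arvados-docker-cleaner"
packages_to_install = arvados_component.split(" ")

heading = "Install " + " and ".join(packages_to_install)
apt_cmd = "apt-get install " + " ".join(packages_to_install)

print(heading)
print(apt_cmd)
```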
diff --git a/doc/_includes/_restart_api.liquid b/doc/_includes/_restart_api.liquid
new file mode 100644
index 000000000..c3e0330b8
--- /dev/null
+++ b/doc/_includes/_restart_api.liquid
@@ -0,0 +1,8 @@
+h2(#restart-api). Restart the API server and controller
+
+*Make sure the cluster config file is up to date on the API server host*, then restart the API server and controller processes to ensure the configuration changes are visible to the whole cluster.
+
+<notextile>
+<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
+</code></pre>
+</notextile>
diff --git a/doc/_includes/_start_service.liquid b/doc/_includes/_start_service.liquid
new file mode 100644
index 000000000..27c42c94c
--- /dev/null
+++ b/doc/_includes/_start_service.liquid
@@ -0,0 +1,15 @@
+h2(#start-service). Start the service
+
+<notextile>
+<pre><code># <span class="userinput">systemctl enable --now {{arvados_component}}</span>
+# <span class="userinput">systemctl status {{arvados_component}}</span>
+[...]
+</code></pre>
+</notextile>
+
+If @systemctl status@ indicates it is not running, use @journalctl@ to check logs for errors:
+
+<notextile>
+<pre><code># <span class="userinput">journalctl -n12 --unit {{arvados_component}}</span>
+</code></pre>
+</notextile>
diff --git a/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid b/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
index 6c691f29a..e93332c92 100644
--- a/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-compute-node.html.textile.liquid
@@ -9,33 +9,34 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-h2. Install dependencies
+# "Introduction":#introduction
+# "Set up Docker":#docker
+# "Update fuse.conf":#fuse
+# "Update docker-cleaner.json":#docker-cleaner
+# "Configure Linux cgroups accounting":#cgroups
+# "Install Docker":#install_docker
+# "Configure the Docker daemon":#configure_docker_daemon
+# "Install'python-arvados-fuse and crunch-run and arvados-docker-cleaner":#install-packages
 
-h3. Centos 7
+h2(#introduction). Introduction
 
-{% include 'note_python_sc' %}
+This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados.
 
-<notextile>
-<pre><code>~$ <span class="userinput">echo 'exclude=python2-llfuse' | sudo tee -a /etc/yum.conf</span>
-~$ <span class="userinput">sudo yum install python-arvados-fuse crunch-run arvados-docker-cleaner</span>
-</code></pre>
-</notextile>
+* If you are using the cloud dispatcher, apply these steps and then save a compute node virtual machine image.  The virtual machine image ID will go in @config.yml@.
+* If you are using SLURM on a static cluster, these steps must be duplicated on every compute node, preferably using a devops tool such as Puppet.
 
-h3. Debian and Ubuntu
+h2(#docker). Set up Docker
 
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install python-arvados-fuse crunch-run arvados-docker-cleaner</span>
-</code></pre>
-</notextile>
+See "Set up Docker":../install-docker.html
 
-{% include 'install_compute_docker' %}
+{% assign arvados_component = 'python-arvados-fuse crunch-run arvados-docker-cleaner' %}
 
 {% include 'install_compute_fuse' %}
 
 {% include 'install_docker_cleaner' %}
 
-h2. Set up SLURM
+{% include 'install_packages' %}
 
-Install SLURM on the compute node using the same process you used on the API server in the "previous step":install-slurm.html.
+{% assign arvados_component = 'arvados-docker-cleaner' %}
 
-The @slurm.conf@ and @/etc/munge/munge.key@ files must be identical on all SLURM nodes. Copy the files you created on the API server in the "previous step":install-slurm.html to each compute node.
+{% include 'start_service' %}
diff --git a/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid b/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
index fccc28b72..d55ca2fca 100644
--- a/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-dispatch.html.textile.liquid
@@ -10,40 +10,30 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-The SLURM dispatcher can run on any node that can submit requests to both the Arvados API server and the SLURM controller.  It is not resource-intensive, so you can run it on the API server node.
+# "Introduction":#introduction
+# "Update config.yml":#update-config
+# "Install crunch-dispatch-slurm":#install-packages
+# "Start the service":#start-service
+# "Restart the API server and controller":#restart-api
 
-h2. Install the dispatcher
+h2(#introduction). Introduction
 
-First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+This page assumes you already have a SLURM cluster and have "set up your compute nodes":install-compute-node.html .  For information on installing SLURM, see https://slurm.schedmd.com/quickstart_admin.html
 
-On Red Hat-based systems:
+The Arvados SLURM dispatcher can run on any node that can submit requests to both the Arvados API server and the SLURM controller (via @sbatch@).  It is not resource-intensive, so you can run it on the API server node.
 
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install crunch-dispatch-slurm</span>
-~$ <span class="userinput">sudo systemctl enable crunch-dispatch-slurm</span>
-</code></pre>
-</notextile>
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install crunch-dispatch-slurm</span>
-</code></pre>
-</notextile>
+h2(#update-config). Update config.yml (optional)
 
-h2. Configure the dispatcher (optional)
+Crunch-dispatch-slurm reads the common configuration file at @/etc/arvados/config.yml@.
 
-Crunch-dispatch-slurm reads the common configuration file at @/etc/arvados/config.yml@.  The essential configuration parameters will already be set by previous install steps, so no additional configuration is required.  The following sections describe optional configuration parameters.
+The following configuration parameters are optional.
 
 h3(#PollPeriod). Containers.PollInterval
 
 crunch-dispatch-slurm polls the API server periodically for new containers to run.  The @PollInterval@ option controls how often this poll happens.  Set this to a string of numbers suffixed with one of the time units @ns@, @us@, @ms@, @s@, @m@, or @h@.  For example:
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       <code class="userinput">PollInterval: <b>3m30s</b>
 </code></pre>
 </notextile>
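@PollInterval@ values use Go-style duration strings, which in Go would typically be handled by @time.ParseDuration@. A simplified Python sketch of how such a string decomposes into seconds (not the dispatcher's actual parser):

```python
import re

# Very simplified take on Go duration strings such as "3m30s".
UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}

def parse_duration(text):
    # Longer unit names must be matched before their one-letter suffixes.
    parts = re.findall(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", text)
    return sum(float(value) * UNITS[unit] for value, unit in parts)

print(parse_duration("3m30s"))  # 210.0 seconds
```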
@@ -55,10 +45,7 @@ Extra RAM to reserve (in bytes) on each SLURM job submitted by Arvados, which is
 Supports suffixes @KB@, @KiB@, @MB@, @MiB@, @GB@, @GiB@, @TB@, @TiB@, @PB@, @PiB@, @EB@, @EiB@ (where @KB@ is 10[^3^], @KiB@ is 2[^10^], @MB@ is 10[^6^], @MiB@ is 2[^20^] and so forth).
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       <code class="userinput">ReserveExtraRAM: <b>256MiB</b></code>
 </pre>
 </notextile>
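The distinction between decimal and binary suffixes matters when picking a reserve value; as plain arithmetic (not Arvados code):

```python
# ReserveExtraRAM: 256MiB means 256 * 2^20 bytes, not 256 * 10^6.
MiB = 2 ** 20
MB = 10 ** 6

print(256 * MiB)  # 268435456
print(256 * MB)   # 256000000
```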
@@ -68,10 +55,7 @@ h3(#MinRetryPeriod). Containers.MinRetryPeriod: Rate-limit repeated attempts to
 If SLURM is unable to run a container, the dispatcher will submit it again after the next PollPeriod. If PollPeriod is very short, this can be excessive. If MinRetryPeriod is set, the dispatcher will avoid submitting the same container to SLURM more than once in the given time span.
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       <code class="userinput">MinRetryPeriod: <b>30s</b></code>
 </pre>
 </notextile>
@@ -81,10 +65,7 @@ h3(#KeepServiceURIs). Containers.SLURM.SbatchEnvironmentVariables
 Some Arvados installations run a local keepstore on each compute node to handle all Keep traffic.  To override Keep service discovery and access the local keep server instead of the global servers, set ARVADOS_KEEP_SERVICES in SbatchEnvironmentVariables:
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       SLURM:
         <span class="userinput">SbatchEnvironmentVariables:
           ARVADOS_KEEP_SERVICES: "http://127.0.0.1:25107"</span>
@@ -101,10 +82,7 @@ crunch-dispatch-slurm adjusts the "nice" values of its SLURM jobs to ensure cont
 The smallest usable value is @1@. The default value of @10@ is used if this option is zero or negative. Example:
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       SLURM:
         <code class="userinput">PrioritySpread: <b>1000</b></code></pre>
 </notextile>
@@ -114,10 +92,7 @@ h3(#SbatchArguments). Containers.SLURM.SbatchArgumentsList
 When crunch-dispatch-slurm invokes @sbatch@, you can add arguments to the command by specifying @SbatchArguments@.  You can use this to send the jobs to specific cluster partitions or add resource requests.  Set @SbatchArguments@ to an array of strings.  For example:
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       SLURM:
         <code class="userinput">SbatchArgumentsList:
           - <b>"--partition=PartitionName"</b></code>
@@ -131,10 +106,7 @@ h3(#CrunchRunCommand-cgroups). Containers.CrunchRunArgumentList: Dispatch to SLU
 If your SLURM cluster uses the @task/cgroup@ TaskPlugin, you can configure Crunch's Docker containers to be dispatched inside SLURM's cgroups.  This provides consistent enforcement of resource constraints.  To do this, use a crunch-dispatch-slurm configuration like the following:
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       <code class="userinput">CrunchRunArgumentsList:
         - <b>"-cgroup-parent-subsystem=memory"</b></code>
 </pre>
@@ -155,27 +127,17 @@ h3(#CrunchRunCommand-network). Containers.CrunchRunArgumentList: Using host netw
 Older Linux kernels (prior to 3.18) have bugs in network namespace handling which can lead to compute node lockups.  This is indicated by blocked kernel tasks in "Workqueue: netns cleanup_net".   If you are experiencing this problem, as a workaround you can disable use of network namespaces by Docker across the cluster.  Be aware this reduces container isolation, which may be a security risk.
 
 <notextile>
-<pre>
-Clusters:
-  zzzzz:
-    Containers:
+<pre>    Containers:
       <code class="userinput">CrunchRunArgumentsList:
         - <b>"-container-enable-networking=always"</b>
         - <b>"-container-network-mode=host"</b></code>
 </pre>
 </notextile>
 
-h2. Restart the dispatcher
+{% assign arvados_component = 'crunch-dispatch-slurm' %}
 
-{% include 'notebox_begin' %}
-
-The crunch-dispatch-slurm package includes configuration files for systemd.  If you're using a different init system, you'll need to configure a service to start and stop a @crunch-dispatch-slurm@ process as desired.  The process should run from a directory where the @crunch@ user has write permission on all compute nodes, such as its home directory or @/tmp@.  You do not need to specify any additional switches or environment variables.
-
-{% include 'notebox_end' %}
+{% include 'install_packages' %}
 
-Restart the dispatcher to run with your new configuration:
+{% include 'start_service' %}
 
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart crunch-dispatch-slurm</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
diff --git a/doc/install/crunch2-slurm/install-prerequisites.html.textile.liquid b/doc/install/crunch2-slurm/install-prerequisites.html.textile.liquid
index eceeefaf9..39f1b7258 100644
--- a/doc/install/crunch2-slurm/install-prerequisites.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-prerequisites.html.textile.liquid
@@ -8,7 +8,3 @@ Copyright (C) The Arvados Authors. All rights reserved.
 
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
-
-Containers can be dispatched to a SLURM cluster.  The dispatcher sends work to the cluster using SLURM's @sbatch@ command, so it works in a variety of SLURM configurations.
-
-In order to run containers, you must run the dispatcher as a user that has permission to set up FUSE mounts and run Docker containers on each compute node.  This install guide refers to this user as the @crunch@ user.  We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions.  However, you can run the dispatcher under any account with sufficient permissions across the cluster.
diff --git a/doc/install/crunch2-slurm/install-slurm.html.textile.liquid b/doc/install/crunch2-slurm/install-slurm.html.textile.liquid
index e1593a430..547d24e73 100644
--- a/doc/install/crunch2-slurm/install-slurm.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-slurm.html.textile.liquid
@@ -9,6 +9,11 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
+Containers can be dispatched to a SLURM cluster.  The dispatcher sends work to the cluster using SLURM's @sbatch@ command, so it works in a variety of SLURM configurations.
+
+In order to run containers, you must run the dispatcher as a user that has permission to set up FUSE mounts and run Docker containers on each compute node.  This install guide refers to this user as the @crunch@ user.  We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions.  However, you can run the dispatcher under any account with sufficient permissions across the cluster.
+
+
 On the API server, install SLURM and munge, and generate a munge key.
 
 On Debian-based systems:
diff --git a/doc/install/crunch2-slurm/install-test.html.textile.liquid b/doc/install/crunch2-slurm/install-test.html.textile.liquid
index 03a5d18b4..ce5aceee6 100644
--- a/doc/install/crunch2-slurm/install-test.html.textile.liquid
+++ b/doc/install/crunch2-slurm/install-test.html.textile.liquid
@@ -29,7 +29,7 @@ On the dispatch node, start monitoring the crunch-dispatch-slurm logs:
 </code></pre>
 </notextile>
 
-*On your shell server*, submit a simple container request:
+Submit a simple container request:
 
 <notextile>
 <pre><code>shell:~$ <span class="userinput">arv container_request create --container-request '{
@@ -53,7 +53,7 @@ On the dispatch node, start monitoring the crunch-dispatch-slurm logs:
 </code></pre>
 </notextile>
 
-This command should return a record with a @container_uuid@ field.  Once crunch-dispatch-slurm polls the API server for new containers to run, you should see it dispatch that same container.  It will log messages like:
+This command should return a record with a @container_uuid@ field.  Once @crunch-dispatch-slurm@ polls the API server for new containers to run, you should see it dispatch that same container.  It will log messages like:
 
 <notextile>
 <pre><code>2016/08/05 13:52:54 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 started
@@ -62,8 +62,6 @@ This command should return a record with a @container_uuid@ field.  Once crunch-
 </code></pre>
 </notextile>
 
-If you do not see crunch-dispatch-slurm try to dispatch the container, double-check that it is running and that the API hostname and token in @/etc/arvados/crunch-dispatch-slurm/crunch-dispatch-slurm.yml@ are correct.
-
 Before the container finishes, SLURM's @squeue@ command will show the new job in the list of queued and running jobs.  For example, you might see:
 
 <notextile>
@@ -111,4 +109,4 @@ You can use standard Keep tools to view the container's output and logs from the
 </code></pre>
 </notextile>
 
-If the container does not dispatch successfully, refer to the crunch-dispatch-slurm logs for information about why it failed.
+If the container does not dispatch successfully, refer to the @crunch-dispatch-slurm@ logs for information about why it failed.
diff --git a/doc/install/index.html.textile.liquid b/doc/install/index.html.textile.liquid
index efa671db8..506ff4d4b 100644
--- a/doc/install/index.html.textile.liquid
+++ b/doc/install/index.html.textile.liquid
@@ -16,7 +16,7 @@ Arvados components can be installed and configured in a number of different ways
 <div class="offset1">
 table(table table-bordered table-condensed).
 |||\5=. Appropriate for|
-||_. Ease of setup|_. Multiuser/networked access|_. Workflow Development and Testing|_. Large Scale Production|_. Development of Arvados|_. Arvados System Testing|
+||_. Ease of setup|_. Multiuser/networked access|_. Workflow Development and Testing|_. Large Scale Production|_. Development of Arvados|_. Arvados Evaluation|
 |"Arvados-in-a-box":arvbox.html (arvbox)|Easy|no|yes|no|yes|yes|
 |"Arvados on Kubernetes":arvados-on-kubernetes.html|Easy ^1^|yes|yes ^2^|no ^2^|no|yes|
 |"Manual installation":install-manual-prerequisites.html|Complicated|yes|yes|yes|no|no|
diff --git a/doc/install/install-api-server.html.textile.liquid b/doc/install/install-api-server.html.textile.liquid
index 1f885f909..cc3dc3b43 100644
--- a/doc/install/install-api-server.html.textile.liquid
+++ b/doc/install/install-api-server.html.textile.liquid
@@ -188,34 +188,24 @@ server {
 </code></pre>
 </notextile>
 
-h2(#install-packages). Install arvados-api-server and arvados-controller
+{% assign arvados_component = 'arvados-api-server arvados-controller' %}
 
-h3. Centos 7
+{% include 'install_packages' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install arvados-api-server arvados-controller</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-api-server arvados-controller</span>
-</code></pre>
-</notextile>
+{% include 'start_service' %}
 
 h2(#confirm-working). Confirm working installation
 
 Confirm working controller:
 
 <pre>
-$ curl https://xxxxx.example.com/arvados/v1/config
+$ curl https://ClusterID.example.com/arvados/v1/config
 </pre>
 
 Confirm working Rails API server:
 
 <pre>
-$ curl https://xxxxx.example.com/discovery/v1/apis/arvados/v1/rest
+$ curl https://ClusterID.example.com/discovery/v1/apis/arvados/v1/rest
 </pre>
 
 Confirm that you can use the system root token to act as the system root user:
diff --git a/doc/install/install-composer.html.textile.liquid b/doc/install/install-composer.html.textile.liquid
index d27db4d4e..9f37f0e62 100644
--- a/doc/install/install-composer.html.textile.liquid
+++ b/doc/install/install-composer.html.textile.liquid
@@ -50,30 +50,11 @@ location /composer.yml {
 </code></pre>
 </notextile>
 
-h2(#install-packages). Install arvados-composer
+{% assign arvados_component = 'arvados-composer' %}
 
-h3. Centos 7
+{% include 'install_packages' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install arvados-composer</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-composer</span>
-</code></pre>
-</notextile>
-
-h2(#restart-api). Restart the API server and controller
-
-After adding Workbench to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
 
 h2(#confirm-working). Confirm working installation
 
diff --git a/doc/install/install-dispatch-cloud.html.textile.liquid b/doc/install/install-dispatch-cloud.html.textile.liquid
index 772ba548c..4cc5a954d 100644
--- a/doc/install/install-dispatch-cloud.html.textile.liquid
+++ b/doc/install/install-dispatch-cloud.html.textile.liquid
@@ -9,10 +9,15 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
+# "Introduction":#introduction
 # "Update config.yml":#update-config
 # "Install arvados-dispatch-cloud":#install-packages
+# "Start the service":#start-service
+# "Restart the API server and controller":#restart-api
 # "Confirm working installation":#confirm-working
 
+h2(#introduction). Introduction
+
 The cloud dispatch service is for running containers on cloud VMs. It works with Microsoft Azure and Amazon EC2; future versions will also support Google Compute Engine.
 
 The cloud dispatch service can run on any node that can connect to the Arvados API service, the cloud provider's API, and the SSH service on cloud VMs.  It is not resource-intensive, so you can run it on the API server node.
@@ -153,38 +158,83 @@ Run the @cloudtest@ tool to verify that your configuration works. This creates a
 
 Refer to the "cloudtest tool documentation":../admin/cloudtest.html for more information.
 
-h2. Install arvados-dispatch-cloud
+{% assign arvados_component = 'arvados-dispatch-cloud' %}
+
+{% include 'install_packages' %}
 
-{% include 'notebox_begin' %}
+{% include 'start_service' %}
 
-The arvados-dispatch-cloud package includes configuration files for systemd. If you're using a different init system, configure a service to start and stop an @arvados-dispatch-cloud@ process as desired.
+{% include 'restart_api' %}
 
-{% include 'notebox_end' %}
+h2(#confirm-working). Confirm working installation
 
-h3. Centos 7
+On the dispatch node, start monitoring the arvados-dispatch-cloud logs:
 
 <notextile>
-<pre><code># <span class="userinput">yum install arvados-dispatch-cloud</span>
+<pre><code>~$ <span class="userinput">sudo journalctl -o cat -fu arvados-dispatch-cloud.service</span>
 </code></pre>
 </notextile>
 
-h3. Debian and Ubuntu
+Submit a simple container request:
 
 <notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-dispatch-cloud</span>
+<pre><code>shell:~$ <span class="userinput">arv container_request create --container-request '{
+  "name":            "test",
+  "state":           "Committed",
+  "priority":        1,
+  "container_image": "arvados/jobs:latest",
+  "command":         ["echo", "Hello, Crunch!"],
+  "output_path":     "/out",
+  "mounts": {
+    "/out": {
+      "kind":        "tmp",
+      "capacity":    1000
+    }
+  },
+  "runtime_constraints": {
+    "vcpus": 1,
+    "ram": 8388608
+  }
+}'</span>
 </code></pre>
 </notextile>
 
-h2(#confirm-working). Confirm working installation
+This command should return a record with a @container_uuid@ field.  Once @arvados-dispatch-cloud@ polls the API server for new containers to run, you should see it dispatch that same container.
 
-Use your @ManagementToken@ to test the dispatcher's metrics endpoint.
+The @arvados-dispatch-cloud@ API provides a list of queued and running jobs.  For example:
 
 <notextile>
-<pre><code>~$ <span class="userinput">token="xyzzy"</span>
-~$ <span class="userinput">curl -H "Authorization: Bearer $token" http://localhost:9006/metrics</span>
-# HELP arvados_dispatchcloud_containers_running Number of containers reported running by cloud VMs.
-# TYPE arvados_dispatchcloud_containers_running gauge
-arvados_dispatchcloud_containers_running 0
-[...]
+<pre><code>~$ <span class="userinput">curl ...</span>
 </code></pre>
 </notextile>
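+As a sketch (assumptions: the dispatcher's management API listens on localhost port 9006, and @ManagementToken@ is set in your @config.yml@), the dispatcher's queue and cloud instances can be inspected like this:
+
+```shell
+# Assumptions: management API on localhost:9006; "xyzzy" stands in
+# for the ManagementToken value from your config.yml.
+token="xyzzy"
+curl -sH "Authorization: Bearer $token" http://localhost:9006/arvados/v1/dispatch/containers
+curl -sH "Authorization: Bearer $token" http://localhost:9006/arvados/v1/dispatch/instances
+```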
+
+When the container finishes, the dispatcher will log it.
+
+After the container finishes, you can get the container record by UUID *from a shell server* to see its results:
+
+<notextile>
+<pre><code>shell:~$ <span class="userinput">arv get <b>zzzzz-dz642-hdp2vpu9nq14tx0</b></span>
+{
+ ...
+ "exit_code":0,
+ "log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166",
+ "output":"d41d8cd98f00b204e9800998ecf8427e+0",
+ "state":"Complete",
+ ...
+}
+</code></pre>
+</notextile>
+
+You can use standard Keep tools to view the container's output and logs from their corresponding fields.  For example, to see the logs from the collection referenced in the @log@ field:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv keep ls <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b></span>
+./crunch-run.txt
+./stderr.txt
+./stdout.txt
+~$ <span class="userinput">arv-get <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b>/stdout.txt</span>
+2016-08-05T13:53:06.201011Z Hello, Crunch!
+</code></pre>
+</notextile>
+
+If the container does not dispatch successfully, refer to the @arvados-dispatch-cloud@ logs for information about why it failed.
diff --git a/doc/install/install-docker.html.textile.liquid b/doc/install/install-docker.html.textile.liquid
new file mode 100644
index 000000000..34a1993ca
--- /dev/null
+++ b/doc/install/install-docker.html.textile.liquid
@@ -0,0 +1,12 @@
+---
+layout: default
+navsection: installguide
+title: Set up Docker
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'install_compute_docker' %}
diff --git a/doc/install/install-jobs-image.html.textile.liquid b/doc/install/install-jobs-image.html.textile.liquid
new file mode 100644
index 000000000..4b8dd6ce2
--- /dev/null
+++ b/doc/install/install-jobs-image.html.textile.liquid
@@ -0,0 +1,38 @@
+---
+layout: default
+navsection: installguide
+title: Install arvados/jobs image
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+h2. Create a project for Docker images
+
+Here we create a default project for the standard Arvados Docker images, and give all users read access to it. The project is owned by the system user.
+
+<notextile>
+<pre><code>~$ <span class="userinput">uuid_prefix=$(arv --format=uuid user current | cut -d- -f1)</span>
+~$ <span class="userinput">all_users_group_uuid="$uuid_prefix-j7d0g-fffffffffffffff"</span>
+~$ <span class="userinput">project_uuid=$(arv --format=uuid group create --group "{\"owner_uuid\":\"$uuid_prefix-tpzed-000000000000000\", \"group_class\":\"project\", \"name\":\"Arvados Standard Docker Images\"}")</span>
+~$ <span class="userinput">echo "Arvados project uuid is '$project_uuid'"</span>
+~$ <span class="userinput">read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"</span>
+<span class="userinput">{
+ "tail_uuid":"$all_users_group_uuid",
+ "head_uuid":"$project_uuid",
+ "link_class":"permission",
+ "name":"can_read"
+}
+EOF</span>
+</code></pre></notextile>
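+As an aside, the @read -rd $'\000'@ idiom above captures a whole heredoc (newlines included) into a shell variable while still expanding the variables inside it.  A standalone bash sketch:
+
+```shell
+# Capture a multi-line heredoc, with variable expansion, into $doc.
+# read exits nonzero at end-of-input (no NUL delimiter found), hence "|| true".
+name="example"
+read -rd $'\000' doc <<EOF || true
+{
+ "name":"$name"
+}
+EOF
+echo "$doc"
+```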
+
+h2. Import the arvados/jobs docker image
+
+In order to start workflows from Workbench, there needs to be a Docker image @arvados/jobs@ tagged with the version of Arvados you are installing.  The following command downloads the latest arvados/jobs image from Docker Hub and loads it into Keep.  In this example, @$project_uuid@ should be the UUID of the "Arvados Standard Docker Images" project.
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv-keepdocker --pull arvados/jobs latest --project-uuid $project_uuid</span>
+</code></pre></notextile>
+
+If the image needs to be downloaded from Docker Hub, the command can take a few minutes to complete, depending on available network bandwidth.
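+Once it completes, you can confirm the upload by listing the Docker images registered in Arvados; @arv-keepdocker@ with no arguments prints the list:
+
+```shell
+# Lists Docker images registered in Arvados, including arvados/jobs.
+arv-keepdocker
+```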
diff --git a/doc/install/install-keep-balance.html.textile.liquid b/doc/install/install-keep-balance.html.textile.liquid
index 09bc04114..072d2b9b5 100644
--- a/doc/install/install-keep-balance.html.textile.liquid
+++ b/doc/install/install-keep-balance.html.textile.liquid
@@ -18,6 +18,10 @@ h2(#introduction). Introduction
 
 Keep-balance deletes unreferenced and overreplicated blocks from Keep servers, makes additional copies of underreplicated blocks, and moves blocks into optimal locations as needed (e.g., after adding new servers). See "Balancing Keep servers":{{site.baseurl}}/admin/keep-balance.html for usage details.
 
+Keep-balance can be installed anywhere with network access to Keep services. Typically it runs on the same host as keepproxy.
+
+*A cluster should have only one instance of keep-balance running at a time.*
+
 {% include 'notebox_begin' %}
 
 If you are installing keep-balance on an existing system with valuable data, you can run keep-balance in "dry run" mode first and review its logs as a precaution. To do this, edit your keep-balance startup script to use the flags @-commit-pulls=false -commit-trash=false@.
@@ -46,44 +50,8 @@ Ensure your cluster configuration has @Collections.BlobTrash: true@ (this is the
 
 If BlobTrash is false, unneeded blocks will be counted and logged by keep-balance, but they will not be deleted.
 
-h2(#install-packages). Install keep-balance package
-
-Keep-balance can be installed anywhere with network access to Keep services. Typically it runs on the same host as keepproxy.
-
-*A cluster should have only one instance of keep-balance running at a time.*
-
-h3. Centos 7
-
-<notextile>
-<pre><code># <span class="userinput">yum install keep-balance</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
+{% assign arvados_component = 'keep-balance' %}
 
-<notextile>
-<pre><code># <span class="userinput">apt-get install keep-balance</span>
-</code></pre>
-</notextile>
+{% include 'install_packages' %}
 
-h2(#start-service). Start the service
-
-If your system uses systemd, the keep-balance service should already be set up. Start it and check its status:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart keep-balance</span>
-~$ <span class="userinput">sudo systemctl status keep-balance</span>
-&#x25cf; keep-balance.service - Arvados Keep Balance
-   Loaded: loaded (/lib/systemd/system/keep-balance.service; enabled)
-   Active: active (running) since Sat 2017-02-14 18:46:01 UTC; 3 days ago
-     Docs: https://doc.arvados.org/
- Main PID: 541 (keep-balance)
-   CGroup: /system.slice/keep-balance.service
-           └─541 /usr/bin/keep-balance -commit-pulls -commit-trash
-
-Feb 14 18:46:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:46:01 starting up: will scan every 10m0s and on SIGUSR1
-Feb 14 18:56:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:56:01 Run: start
-Feb 14 18:56:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:56:01 skipping zzzzz-bi6l4-rbtrws2jxul6i4t with service type "proxy"
-Feb 14 18:56:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:56:01 clearing existing trash lists, in case the new rendezvous order differs from previous run
-</code></pre>
-</notextile>
+{% include 'start_service' %}
diff --git a/doc/install/install-keep-web.html.textile.liquid b/doc/install/install-keep-web.html.textile.liquid
index f949d0a6c..9f1188831 100644
--- a/doc/install/install-keep-web.html.textile.liquid
+++ b/doc/install/install-keep-web.html.textile.liquid
@@ -147,50 +147,13 @@ Normally, Keep-web accepts requests for multiple collections using the same host
 In such cases -- for example, a site which is not reachable from the internet, where some data is world-readable from Arvados's perspective but is intended to be available only to users within the local network -- the downstream proxy should be configured to return 401 for all paths beginning with "/c="
 {% include 'notebox_end' %}
 
-h2. Install Keep-web package
+{% assign arvados_component = 'keep-web' %}
 
-Typically Keep-web runs on the same host as Keepproxy.
+{% include 'install_packages' %}
 
-h3. Centos 7
+{% include 'start_service' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install keep-web</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get install keep-web</span>
-</code></pre>
-</notextile>
-
-h2(#start-service). Start the service
-
-If your system uses systemd, the keep-web service should already be set up. Start it and check its status:
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart keep-web</span>
-# <span class="userinput">systemctl status keep-web</span>
-&#x25cf; keep-web.service - Arvados Keep web gateway
-   Loaded: loaded (/lib/systemd/system/keep-web.service; enabled)
-   Active: active (running) since Sat 2019-08-10 10:33:21 UTC; 3 days ago
-     Docs: https://doc.arvados.org/
- Main PID: 4242 (keep-web)
-   CGroup: /system.slice/keep-web.service
-           └─4242 /usr/bin/keep-web
-[...]
-</code></pre>
-</notextile>
-
-h2(#restart-api). Restart the API server and controller
-
-After adding WebDAV to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
 
 h2(#confirm-working). Confirm working installation
 
diff --git a/doc/install/install-keepproxy.html.textile.liquid b/doc/install/install-keepproxy.html.textile.liquid
index 458ee9690..c0d18d396 100644
--- a/doc/install/install-keepproxy.html.textile.liquid
+++ b/doc/install/install-keepproxy.html.textile.liquid
@@ -29,7 +29,7 @@ By convention, we use the following hostname for the Keepproxy server:
 <div class="offset1">
 table(table table-bordered table-condensed).
 |_. Hostname|
-|keep.@uuid_prefix@.your.domain|
+|keep.@ClusterID@.your.domain|
 </div>
 
 This hostname should resolve from anywhere on the internet.
@@ -82,50 +82,13 @@ server {
 
 Note: if the Web uploader is failing to upload data and there are no logs from keepproxy, be sure to check the nginx proxy logs.  In addition to "GET" and "PUT", the nginx proxy must pass "OPTIONS" requests to keepproxy, which should respond with appropriate Cross-origin resource sharing headers.  If the CORS headers are not present, browser security policy will cause the upload request to silently fail.  The CORS headers are generated by keepproxy and should not be set in nginx.
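 A quick way to check the preflight behavior from outside (hypothetical hostname; requires the proxy to be up) is to send an @OPTIONS@ request and look for @Access-Control-Allow-*@ headers in the response:

```shell
# Hypothetical hostname; expect Access-Control-Allow-* headers in the output.
curl -s -D - -o /dev/null -X OPTIONS \
  -H 'Origin: https://workbench.ClusterID.example.com' \
  -H 'Access-Control-Request-Method: PUT' \
  https://keep.ClusterID.example.com/
```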
 
-h2(#install-packages). Install Keepproxy package
+{% assign arvados_component = 'keepproxy' %}
 
-h3. Centos 7
+{% include 'install_packages' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install keepproxy</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get install keepproxy</span>
-</code></pre>
-</notextile>
-
-h2(#start-service). Start the service
-
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
+{% include 'start_service' %}
 
-If your system uses systemd, the keepproxy service should already be set up. Start it and check its status:
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart keepproxy</span>
-# <span class="userinput">systemctl status keepproxy</span>
-&#x25cf; keepproxy.service - Arvados Keep Proxy
-   Loaded: loaded (/lib/systemd/system/keepproxy.service; enabled)
-   Active: active (running) since Tue 2019-07-23 09:33:47 EDT; 3 weeks 1 days ago
-     Docs: https://doc.arvados.org/
- Main PID: 1150 (Keepproxy)
-   CGroup: /system.slice/keepproxy.service
-           └─1150 /usr/bin/keepproxy
-[...]
-</code></pre>
-</notextile>
-
-h2(#restart-api). Restart the API server and controller
-
-After adding keeproxy to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
 
 h2(#confirm-working). Confirm working installation
 
diff --git a/doc/install/install-keepstore.html.textile.liquid b/doc/install/install-keepstore.html.textile.liquid
index 70c69c761..d6a529d38 100644
--- a/doc/install/install-keepstore.html.textile.liquid
+++ b/doc/install/install-keepstore.html.textile.liquid
@@ -66,32 +66,13 @@ Add each keepstore server to the @Services.Keepstore@ section of @/etc/arvados/c
 </code></pre>
 </notextile>
 
-h2(#install-packages). Install keepstore package
+{% assign arvados_component = 'keepstore' %}
 
-On each host that will run keepstore, install the @keepstore@ package.
+{% include 'install_packages' %}
 
-h3. Centos 7
+{% include 'start_service' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install keepstore</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get install keepstore</span>
-</code></pre>
-</notextile>
-
-h2(#restart-api). Restart the API server and controller
-
-After adding all of your keepstore servers to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
 
 h2(#confirm-working). Confirm working installation
 
diff --git a/doc/install/install-shell-server.html.textile.liquid b/doc/install/install-shell-server.html.textile.liquid
index 5b35aeb89..99d618b37 100644
--- a/doc/install/install-shell-server.html.textile.liquid
+++ b/doc/install/install-shell-server.html.textile.liquid
@@ -1,7 +1,7 @@
 ---
 layout: default
 navsection: installguide
-title: Install a shell node
+title: Set up a shell node
 ...
 {% comment %}
 Copyright (C) The Arvados Authors. All rights reserved.
@@ -9,31 +9,47 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-Arvados support for shell nodes enables using Arvados permissions to grant shell accounts to users.
+# "Introduction":#introduction
+# "Install Dependencies and SDKs":#dependencies
+# "Install git and curl":#install-packages
+# "Update Git Config":#config-git
+# "Create record for VM":#vm-record
+# "Create scoped token":#scoped-token
+# "Install arvados-login-sync":#arvados-login-sync
+# "Confirm working installation":#confirm-working
+
+h2(#introduction). Introduction
+
+Arvados support for shell nodes allows you to use Arvados permissions to grant Linux shell accounts to users.
 
 A shell node runs the @arvados-login-sync@ service, and has some additional configuration to make it convenient for users to use Arvados utilities and SDKs.  Users are allowed to log in and run arbitrary programs.  For optimal performance, the Arvados shell server should be on the same LAN as the Arvados cluster.
 
-h2. Install Dependecies and SDKs
+Because the complete @config.yml@ _contains secrets_, shell nodes should *not* have a copy of it.  For example, if users have access to the @docker@ daemon, it is trivial to gain *root* access to any file on the system.  Users sharing a shell node should be implicitly trusted, or not given access to Docker.  In more secure environments, the admin should allocate a separate VM for each user.
 
+h2(#dependencies). Install Dependencies and SDKs
+
+# "Install Ruby and Bundler":ruby.html
+# "Install the Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html
 # "Install the CLI":{{site.baseurl}}/sdk/cli/install.html
-# "Install the R SDK":{{site.baseurl}}/sdk/R/index.html (optional)
+# "Install the R SDK":{{site.baseurl}}/sdk/R/install.html (optional)
+# "Install Docker":install-docker.html (optional)
 
-h2. Install Git and curl
+{% assign arvados_component = 'git curl' %}
 
-{% include 'install_git_curl' %}
+{% include 'install_packages' %}
 
-h2. Update Git Config
+h2(#config-git). Update Git Config
 
 Configure git to use the ARVADOS_API_TOKEN environment variable to authenticate to arv-git-httpd. We use the @--system@ flag so it takes effect for all current and future user accounts. It does not affect git's behavior when connecting to other git servers.
 
 <notextile>
 <pre>
-<code>~$ <span class="userinput">sudo git config --system 'credential.https://git.<b>uuid_prefix.your.domain</b>/.username' none</span></code>
-<code>~$ <span class="userinput">sudo git config --system 'credential.https://git.<b>uuid_prefix.your.domain</b>/.helper' '!cred(){ cat >/dev/null; if [ "$1" = get ]; then echo password=$ARVADOS_API_TOKEN; fi; };cred'</span></code>
+<code># <span class="userinput">git config --system 'credential.https://git.<b>ClusterID.example.com</b>/.username' none</span></code>
+<code># <span class="userinput">git config --system 'credential.https://git.<b>ClusterID.example.com</b>/.helper' '!cred(){ cat >/dev/null; if [ "$1" = get ]; then echo password=$ARVADOS_API_TOKEN; fi; };cred'</span></code>
 </pre>
 </notextile>
 
-h2. Create database entry for VM
+h2(#vm-record). Create record for VM
 
 This program makes it possible for Arvados users to log in to the shell server -- subject to permissions assigned by the Arvados administrator -- using the SSH keys they upload to Workbench. It sets up login accounts, updates group membership, and adds users' public keys to the appropriate @authorized_keys@ files.
 
@@ -46,9 +62,9 @@ zzzzz-2x53u-zzzzzzzzzzzzzzz</code>
 </pre>
 </notextile>
 
-h2. Create token
+h2(#scoped-token). Create scoped token
 
-As an admin arvados user (such as the system root user), create a token that is allowed to read login information for this VM.
+As an admin arvados user (such as the system root user), create a token that is restricted to only reading login information for this VM.
 
 <notextile>
 <pre>
@@ -63,9 +79,9 @@ As an admin arvados user (such as the system root user), create a token that is
 
 Note the UUID and the API token output by the above commands: you will need them in a minute.
 
-h2. Install arvados-login-sync
+h2(#arvados-login-sync). Install arvados-login-sync
 
-Install the arvados-login-sync program.
+Install the arvados-login-sync program from RubyGems.
 
 <notextile>
 <pre>
@@ -78,7 +94,7 @@ Configure cron to run the @arvados-login-sync@ program every 2 minutes.
 <notextile>
 <pre>
 <code>shellserver:# <span class="userinput">umask 077; tee /etc/cron.d/arvados-login-sync <<EOF
-ARVADOS_API_HOST="<strong>uuid_prefix.your.domain</strong>"
+ARVADOS_API_HOST="<strong>ClusterID.example.com</strong>"
 ARVADOS_API_TOKEN="<strong>the_token_you_created_above</strong>"
 ARVADOS_VIRTUAL_MACHINE_UUID="<strong>zzzzz-2x53u-zzzzzzzzzzzzzzz</strong>"
 */2 * * * * root arvados-login-sync
@@ -86,9 +102,12 @@ EOF</span></code>
 </pre>
 </notextile>
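Rather than waiting for the cron job, you can run the sync once by hand with the same environment to check for configuration errors (values as configured above):

```shell
# One-off run with the same environment the cron job uses.
env ARVADOS_API_HOST="ClusterID.example.com" \
    ARVADOS_API_TOKEN="the_token_you_created_above" \
    ARVADOS_VIRTUAL_MACHINE_UUID="zzzzz-2x53u-zzzzzzzzzzzzzzz" \
    arvados-login-sync
```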
 
+h2(#confirm-working). Confirm working installation
+
 A user should be able to log in to the shell server when the following conditions are satisfied:
-* The user has uploaded an SSH public key: Workbench → Account menu → "SSH keys" item → "Add new SSH key" button.
-* As an admin user, you have given the user permission to log in: Workbench → Admin menu → "Users" item → "Show" button → "Admin" tab → "Setup shell account" button.
-* Two minutes have elapsed since the above conditions were satisfied, and the cron job has had a chance to run.
 
+# The user has uploaded an SSH public key: Workbench → Account menu → "SSH keys" item → "Add new SSH key" button.
+# As an admin user, you have given the user permission to log in using the Workbench → Admin menu → "Users" item → "Show" button → "Admin" tab → "Setup account" button.
+# The cron job has run.
 
+See also "how to add a VM login permission link at the command line":{{site.baseurl}}/admin/user-management-cli.html.
diff --git a/doc/install/install-workbench-app.html.textile.liquid b/doc/install/install-workbench-app.html.textile.liquid
index cf33cca35..8650ecc14 100644
--- a/doc/install/install-workbench-app.html.textile.liquid
+++ b/doc/install/install-workbench-app.html.textile.liquid
@@ -77,30 +77,11 @@ Use a text editor to create a new file @/etc/nginx/conf.d/arvados-workbench.conf
 </code></pre>
 </notextile>
 
-h2(#install-packages). Install arvados-workbench
+{% assign arvados_component = 'arvados-workbench' %}
 
-h3. Centos 7
+{% include 'install_packages' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install arvados-workbench</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-workbench</span>
-</code></pre>
-</notextile>
-
-h2(#restart-api). Restart the API server and controller
-
-After adding Workbench to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
 
 h2(#confirm-working). Confirm working installation
 
@@ -122,4 +103,3 @@ irb(main):003:0> <span class="userinput">act_as_system_user do wb.update_attr
 => true
 </code></pre>
 </notextile>
-
diff --git a/doc/install/install-workbench2-app.html.textile.liquid b/doc/install/install-workbench2-app.html.textile.liquid
index 566d87878..e873ad1e7 100644
--- a/doc/install/install-workbench2-app.html.textile.liquid
+++ b/doc/install/install-workbench2-app.html.textile.liquid
@@ -71,30 +71,11 @@ Use a text editor to create a new file @/etc/nginx/conf.d/arvados-workbench2.con
 </code></pre>
 </notextile>
 
-h2(#install-packages). Install arvados-workbench2
+{% assign arvados_component = 'arvados-workbench2' %}
 
-h3. Centos 7
+{% include 'install_packages' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install arvados-workbench2</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-workbench2</span>
-</code></pre>
-</notextile>
-
-h2(#restart-api). Restart the API server and controller
-
-After adding Workbench to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
-
-<notextile>
-<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
 
 h2(#confirm-working). Confirm working installation
 
diff --git a/doc/install/install-ws.html.textile.liquid b/doc/install/install-ws.html.textile.liquid
index 2a0baa750..f3319fa4c 100644
--- a/doc/install/install-ws.html.textile.liquid
+++ b/doc/install/install-ws.html.textile.liquid
@@ -26,7 +26,7 @@ Edit the cluster config at @/etc/arvados/config.yml@ and set @Services.Websocket
 <pre><code>    Services:
       Websocket:
         InternalURLs:
-	  <span class="userinput">"http://ws.ClusterID.example.com:8005"</span>: {}      
+	  <span class="userinput">"http://ws.ClusterID.example.com:8005"</span>: {}
         ExternalURL: <span class="userinput">wss://ws.ClusterID.example.com/websocket</span>
 </span></code></pre>
 </notextile>
@@ -63,55 +63,13 @@ server {
 }
 </pre></notextile>
 
-h2(#install-packages). Install arvados-ws
+{% assign arvados_component = 'arvados-ws' %}
 
-h3. Centos 7
+{% include 'install_packages' %}
 
-<notextile>
-<pre><code># <span class="userinput">yum install arvados-ws</span>
-</code></pre>
-</notextile>
-
-h3. Debian and Ubuntu
-
-<notextile>
-<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-ws</span>
-</code></pre>
-</notextile>
-
-h3. Start the service
-
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
+{% include 'start_service' %}
 
-If your system uses systemd, the arvados-ws service should already be set up. Start it and check its status:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart arvados-ws</span>
-~$ <span class="userinput">sudo systemctl status arvados-ws</span>
-&#x25cf; arvados-ws.service - Arvados websocket server
-   Loaded: loaded (/lib/systemd/system/arvados-ws.service; enabled)
-   Active: active (running) since Tue 2016-12-06 11:20:48 EST; 10s ago
-     Docs: https://doc.arvados.org/
- Main PID: 9421 (arvados-ws)
-   CGroup: /system.slice/arvados-ws.service
-           └─9421 /usr/bin/arvados-ws
-
-Dec 06 11:20:48 zzzzz arvados-ws[9421]: {"level":"info","msg":"started","time":"2016-12-06T11:20:48.207617188-05:00"}
-Dec 06 11:20:48 zzzzz arvados-ws[9421]: {"Listen":":9003","level":"info","msg":"listening","time":"2016-12-06T11:20:48.244956506-05:00"}
-Dec 06 11:20:48 zzzzz systemd[1]: Started Arvados websocket server.
-</code></pre>
-</notextile>
-
-If it is not running, use @journalctl@ to check logs for errors:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo journalctl -n10 -u arvados-ws</span>
-...
-Dec 06 11:12:48 zzzzz systemd[1]: Starting Arvados websocket server...
-Dec 06 11:12:48 zzzzz arvados-ws[8918]: {"level":"info","msg":"started","time":"2016-12-06T11:12:48.030496636-05:00"}
-Dec 06 11:12:48 zzzzz arvados-ws[8918]: {"error":"pq: password authentication failed for user \"arvados\"","level":"fatal","msg":"db.Ping failed","time":"2016-12-06T11:12:48.058206400-05:00"}
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
 
 h2(#restart-api). Restart the API server and controller
 
@@ -122,7 +80,7 @@ After adding the SSO server to the Services section, make sure the cluster confi
 </code></pre>
 </notextile>
 
-h3(#confirm). Confirm working installation
+h2(#confirm). Confirm working installation
 
 Confirm the service is listening on its assigned port and responding to requests.
 
diff --git a/doc/sdk/cli/install.html.textile.liquid b/doc/sdk/cli/install.html.textile.liquid
index e72dc673a..207809780 100644
--- a/doc/sdk/cli/install.html.textile.liquid
+++ b/doc/sdk/cli/install.html.textile.liquid
@@ -12,43 +12,23 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 Arvados CLI tools are written in Ruby and Python.  To use the @arv@ command, you can either install the @arvados-cli@ gem via RubyGems or build and install the package from source.  The @arv@ command also relies on other Arvados tools.  To get those, install the @arvados-python-client@ and @arvados-cwl-runner@ packages, either from PyPI or source.
 
-h3. Prerequisites: Ruby, Bundler, and curl libraries
+h2. Prerequisites
 
-{% include 'install_ruby_and_bundler' %}
+# "Install Ruby and Bundler":../../install/ruby.html
+# "Install the Python SDK":../python/sdk-python.html
 
-Install curl libraries with your system's package manager. For example, on Debian or Ubuntu:
+h2. Option 1: Install distribution package
 
-<notextile>
-<pre>
-~$ <code class="userinput">sudo apt-get install libcurl3 libcurl3-gnutls libcurl4-openssl-dev</code>
-</pre>
-</notextile>
+First, configure the "Arvados package repositories":../../install/packages.html.
 
-h3. Option 1: Install from RubyGems and PyPI
+{% assign arvados_component = 'arvados-cli' %}
 
-<notextile>
-<pre>
-~$ <code class="userinput">sudo -i gem install arvados-cli</code>
-</pre>
-</notextile>
+{% include 'install_packages' %}
 
-<notextile>
-<pre>
-~$ <code class="userinput">pip install arvados-python-client arvados-cwl-runner</code>
-</pre>
-</notextile>
-
-h3. Option 2: Build and install from source
+h2. Option 2: Install from RubyGems
 
 <notextile>
 <pre>
-~$ <code class="userinput">git clone https://github.com/curoverse/arvados.git</code>
-~$ <code class="userinput">cd arvados/sdk/cli</code>
-~/arvados/sdk/cli$ <code class="userinput">gem build arvados-cli.gemspec</code>
-~/arvados/sdk/cli$ <code class="userinput">sudo -i gem install arvados-cli-*.gem</code>
-~/arvados/sdk/cli$ <code class="userinput">cd ../python</code>
-~/arvados/sdk/python$ <code class="userinput">python setup.py install</code>
-~/arvados/sdk/python$ <code class="userinput">cd ../cwl</code>
-~/arvados/sdk/cwl$ <code class="userinput">python setup.py install</code>
+~$ <code class="userinput">sudo -i gem install arvados-cli</code>
 </pre>
 </notextile>
diff --git a/doc/sdk/go/example.html.textile.liquid b/doc/sdk/go/example.html.textile.liquid
index a5a109b85..fd62bb67e 100644
--- a/doc/sdk/go/example.html.textile.liquid
+++ b/doc/sdk/go/example.html.textile.liquid
@@ -10,7 +10,7 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
-See "Arvados GoDoc":https://godoc.org/git.curoverse.com/arvados.git/sdk/go for detailed documentation.
+See "Arvados GoDoc":https://godoc.org/git.arvados.org/arvados.git/sdk/go for detailed documentation.
 
 In these examples, the site prefix is @aaaaa@.
 
@@ -18,8 +18,8 @@ h2.  Initialize SDK
 
 {% codeblock as go %}
 import (
-  "git.curoverse.com/arvados.git/sdk/go/arvados"
-  "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+  "git.arvados.org/arvados.git/sdk/go/arvados"
+  "git.arvados.org/arvados.git/sdk/go/arvadosclient"
 )
 
 func main() {
diff --git a/doc/sdk/go/index.html.textile.liquid b/doc/sdk/go/index.html.textile.liquid
index a06d51866..709b0d524 100644
--- a/doc/sdk/go/index.html.textile.liquid
+++ b/doc/sdk/go/index.html.textile.liquid
@@ -12,16 +12,16 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 The Go ("Golang":http://golang.org) SDK provides a generic set of wrappers so you can make API calls easily.
 
-See "Arvados GoDoc":https://godoc.org/git.curoverse.com/arvados.git/sdk/go for detailed documentation.
+See "Arvados GoDoc":https://godoc.org/git.arvados.org/arvados.git/sdk/go for detailed documentation.
 
 h3. Installation
 
-Use @go get git.curoverse.com/arvados.git/sdk/go/arvadosclient@.  The go tools will fetch the relevant code and dependencies for you.
+Use @go get git.arvados.org/arvados.git/sdk/go/arvadosclient@.  The go tools will fetch the relevant code and dependencies for you.
 
 {% codeblock as go %}
 import (
-	"git.curoverse.com/arvados.git/sdk/go/arvadosclient"
-	"git.curoverse.com/arvados.git/sdk/go/keepclient"
+	"git.arvados.org/arvados.git/sdk/go/arvadosclient"
+	"git.arvados.org/arvados.git/sdk/go/keepclient"
 )
 {% endcodeblock %}
 
diff --git a/doc/sdk/index.html.textile.liquid b/doc/sdk/index.html.textile.liquid
index 5fbc3d5dd..9c964a553 100644
--- a/doc/sdk/index.html.textile.liquid
+++ b/doc/sdk/index.html.textile.liquid
@@ -15,9 +15,9 @@ This section documents language bindings for the "Arvados API":{{site.baseurl}}/
 * "Command line SDK":{{site.baseurl}}/sdk/cli/install.html ("arv")
 * "Go SDK":{{site.baseurl}}/sdk/go/index.html
 * "R SDK":{{site.baseurl}}/sdk/R/index.html
-* "Perl SDK":{{site.baseurl}}/sdk/perl/index.html
 * "Ruby SDK":{{site.baseurl}}/sdk/ruby/index.html
 * "Java SDK v2":{{site.baseurl}}/sdk/java-v2/index.html
 * "Java SDK v1":{{site.baseurl}}/sdk/java/index.html
+* "Perl SDK":{{site.baseurl}}/sdk/perl/index.html
 
 Many Arvados Workbench pages, under the *Advanced* tab, provide examples of API and SDK use for accessing the current resource.
diff --git a/doc/sdk/java/index.html.textile.liquid b/doc/sdk/java/index.html.textile.liquid
index 111c0631d..25b705754 100644
--- a/doc/sdk/java/index.html.textile.liquid
+++ b/doc/sdk/java/index.html.textile.liquid
@@ -12,6 +12,8 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 The Java SDK v1 provides a low level API to call Arvados from Java.
 
+This is a legacy SDK.  It is deprecated and no longer regularly maintained.  Use the "Arvados Java SDK v2":../java-v2/index.html instead.
+
 h3. Introduction
 
 * The Java SDK requires Java 6 or later
diff --git a/doc/sdk/perl/index.html.textile.liquid b/doc/sdk/perl/index.html.textile.liquid
index 4ee29c00c..828aab781 100644
--- a/doc/sdk/perl/index.html.textile.liquid
+++ b/doc/sdk/perl/index.html.textile.liquid
@@ -12,10 +12,7 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 The Perl SDK provides a generic set of wrappers so you can make API calls easily.
 
-It should be treated as alpha/experimental. Currently, limitations include:
-* Verbose syntax.
-* No native Keep client.
-* No CPAN package.
+This is a legacy SDK.  It is deprecated and no longer regularly maintained.
 
 h3. Installation
 
diff --git a/doc/sdk/python/example.html.textile.liquid b/doc/sdk/python/example.html.textile.liquid
index 504d0784f..edcdba549 100644
--- a/doc/sdk/python/example.html.textile.liquid
+++ b/doc/sdk/python/example.html.textile.liquid
@@ -12,6 +12,8 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 In these examples, the site prefix is @aaaaa@.
 
+See also the "cookbook":cookbook.html for more complex examples.
+
 h2.  Initialize SDK
 
 {% codeblock as python %}
@@ -54,3 +56,15 @@ h2. Get current user
 {% codeblock as python %}
 result = api.users().current().execute()
 {% endcodeblock %}
+
+h2. Get the User object for the current user
+
+{% codeblock as python %}
+current_user = arvados.api('v1').users().current().execute()
+{% endcodeblock %}
+
+h2. Get the UUID of an object that was retrieved using the SDK
+
+{% codeblock as python %}
+my_uuid = current_user['uuid']
+{% endcodeblock %}
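[Editor's note: the two examples added above both rely on @execute()@ returning a plain Python dict. A minimal offline sketch of that field-access pattern, using the document's stated example site prefix @aaaaa@; the sample response below is an assumption for illustration only, standing in for a live @arvados.api('v1').users().current().execute()@ call:]

```python
# Offline sketch: execute() returns a plain Python dict, so fields such
# as the UUID are read with ordinary dict access.  This sample response
# is hypothetical; a live call would come from
# arvados.api('v1').users().current().execute().
sample_user = {
    'uuid': 'aaaaa-tpzed-000000000000000',
    'kind': 'arvados#user',
    'full_name': 'Example User',
}

my_uuid = sample_user['uuid']
print(my_uuid)
```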
diff --git a/doc/sdk/python/sdk-python.html.textile.liquid b/doc/sdk/python/sdk-python.html.textile.liquid
index c8b2b67b1..381c01d53 100644
--- a/doc/sdk/python/sdk-python.html.textile.liquid
+++ b/doc/sdk/python/sdk-python.html.textile.liquid
@@ -20,9 +20,19 @@ To use the Python SDK elsewhere, you can install from PyPI or a distribution pac
 
 The Python SDK supports Python 2.7 and 3.4+.
 
-h3. Option 1: Install with pip
+h2. Option 1: Install from a distribution package
 
-This installation method is recommended to make the SDK available for use in your own Python programs. It can coexist with the system-wide installation method from a distribution package (option 2, below).
+This installation method is recommended to make the CLI tools available system-wide. It can coexist with the installation method described in option 2, below.
+
+First, configure the "Arvados package repositories":../../install/packages.html.
+
+{% assign arvados_component = 'python-arvados-python-client' %}
+
+{% include 'install_packages' %}
+
+h2. Option 2: Install with pip
+
+This installation method is recommended for using the SDK in your own Python programs. It can coexist with the system-wide installation method from a distribution package (option 1, above).
 
 Run @pip install arvados-python-client@ in an appropriate installation environment, such as a @virtualenv@.
 
@@ -34,27 +44,7 @@ $ apt-get install git build-essential python3-dev libcurl4-openssl-dev libssl1.0
 
 If your version of @pip@ is 1.4 or newer, the @pip install@ command might give an error: "Could not find a version that satisfies the requirement arvados-python-client". If this happens, try @pip install --pre arvados-python-client@.
 
-h3. Option 2: Install from a distribution package
-
-This installation method is recommended to make the CLI tools available system-wide. It can coexist with the installation method described in option 1, above.
-
-First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
-
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install python-arvados-python-client</code>
-</code></pre>
-</notextile>
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install python-arvados-python-client</code>
-</code></pre>
-</notextile>
-
-h3. Test installation
+h2. Test installation
 
 If the SDK is installed and your @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ environment variables are set up correctly (see "api-tokens":{{site.baseurl}}/user/reference/api-tokens.html for details), @import arvados@ should produce no errors.
 
@@ -98,56 +88,9 @@ Type "help", "copyright", "credits" or "license" for more information.
 </pre>
 </notextile>
 
-h3. Examples
-
-Get the User object for the current user:
-
-<notextile>
-<pre><code class="userinput">current_user = arvados.api('v1').users().current().execute()
-</code></pre>
-</notextile>
-
-Get the UUID of an object that was retrieved using the SDK:
+h2. Usage
 
-<notextile>
-<pre><code class="userinput">my_uuid = current_user['uuid']
-</code></pre>
-</notextile>
-
-Retrieve an object by ID:
-
-<notextile>
-<pre><code class="userinput">some_user = arvados.api('v1').users().get(uuid=my_uuid).execute()
-</code></pre>
-</notextile>
-
-Create an object:
-
-<notextile>
-<pre><code class="userinput">test_link = arvados.api('v1').links().create(
-    body={'link_class':'test','name':'test'}).execute()
-</code></pre>
-</notextile>
-
-Update an object:
-
-<notextile>
-<pre><code class="userinput">arvados.api('v1').links().update(
-    uuid=test_link['uuid'],
-    body={'properties':{'foo':'bar'}}).execute()
-</code></pre>
-</notextile>
-
-Get a list of objects:
-
-<notextile>
-<pre><code class="userinput">repos = arvados.api('v1').repositories().list().execute()
-len(repos['items'])</code>
-2
-<code class="userinput">repos['items'][0]['uuid']</code>
-u'qr1hi-s0uqq-kg8cawglrf74bmw'
-</code></pre>
-</notextile>
+See the "examples":example.html and the "cookbook":cookbook.html.
 
 h3. Notes
 

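[Editor's note: the "Get a list of objects" example removed above showed that list calls return a dict with an @items@ array. A hedged offline sketch of iterating such paged list responses; the @fake_list@ stub and its UUID values are hypothetical stand-ins for a live @arvados.api('v1').repositories().list()@ call, with field names (@items@, @items_available@, @offset@, @limit@) mirroring the shape shown in the removed example:]

```python
# Offline sketch of paging through an Arvados-style list response.
# fake_list stands in for a live list() call; it returns pages of a
# fixed 5-item collection.
def fake_list(limit=2, offset=0):
    all_items = [{'uuid': 'zzzzz-s0uqq-%015d' % i} for i in range(5)]
    return {
        'items': all_items[offset:offset + limit],
        'items_available': len(all_items),
        'offset': offset,
        'limit': limit,
    }

def iter_all(list_fn, limit=2):
    # Request successive pages until an empty page is returned.
    offset = 0
    while True:
        page = list_fn(limit=limit, offset=offset)
        if not page['items']:
            break
        for item in page['items']:
            yield item
        offset += len(page['items'])

uuids = [item['uuid'] for item in iter_all(fake_list)]
print(len(uuids))  # 5
```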