[ARVADOS] created: 7ff586c2f32f7cd652381ccb7211691cbe66e3a4

git at public.curoverse.com git at public.curoverse.com
Mon Mar 31 09:58:41 EDT 2014


        at  7ff586c2f32f7cd652381ccb7211691cbe66e3a4 (commit)


commit 7ff586c2f32f7cd652381ccb7211691cbe66e3a4
Author: Brett Smith <brett at curoverse.com>
Date:   Mon Mar 31 09:46:50 2014 -0400

    doc: tutorial-job1 updates and style tweaks.
    
    The major change here is that we set no_reuse on the job, to avoid
    confusing people by using results from previous tutorial runs.  These
    updates also reflect some minor changes in the Workbench UI.

diff --git a/doc/user/topics/tutorial-job1.html.textile.liquid b/doc/user/topics/tutorial-job1.html.textile.liquid
index 58fe329..6cbcb01 100644
--- a/doc/user/topics/tutorial-job1.html.textile.liquid
+++ b/doc/user/topics/tutorial-job1.html.textile.liquid
@@ -8,40 +8,42 @@ This tutorial introduces how to run individual Crunch jobs using the @arv@ comma
 
 *This tutorial assumes that you are "logged into an Arvados VM instance":{{site.baseurl}}/user/getting_started/ssh-access.html#login, and have a "working environment.":{{site.baseurl}}/user/getting_started/check-environment.html*
 
-You will create a job to run the "hash" crunch script.  The "hash" script computes the md5 hash of each file in a collection.
+You will create a job to run the "hash" Crunch script.  The "hash" script computes the md5 hash of each file in a collection.
 
 h2. Jobs
 
-Crunch pipelines consist of one or more jobs.  A "job" is a single run of a specific version of a crunch script with a specific input.  You an also run jobs individually.
+Crunch pipelines consist of one or more jobs.  A "job" is a single run of a specific version of a Crunch script with a specific input.  You can also run jobs individually.
 
-A request to run a crunch job are is described using a JSON object.  For example:
+A request to run a Crunch job is described using a JSON object.  For example:
 
 <notextile>
-<pre><code>~$ <span class="userinput">cat >the_job <<EOF
+<pre><code>~$ <span class="userinput">cat >~/the_job <<EOF
 {
  "script": "hash",
  "repository": "arvados",
  "script_version": "master",
  "script_parameters": {
   "input": "c1bad4b39ca5a924e481008009d94e32+210"
- }
+ },
+ "no_reuse": "true"
 }
 EOF
 </code></pre>
 </notextile>
 
-* @cat@ is a standard Unix utility that simply copies standard input to standard output
-* @<<EOF@ tells the shell to direct the following lines into the standard input for @cat@ up until it sees the line @EOF@
-* @>the_job@ redirects standard output to a file called @the_job@
-* @"script"@ specifies the name of the script to run.  The script is searched for in the "crunch_scripts/" subdirectory of the @git@ checkout specified by @"script_version"@.
-* @"repository"@ is the git repository to search for the script version.  You can access a list of available @git@ repositories on the Arvados workbench under "Compute %(rarr)→% Code repositories":https://{{site.arvados_workbench_host}}//repositories .
-* @"script_version"@ specifies the version of the script that you wish to run.  This can be in the form of an explicit @git@ revision hash, a tag, or a branch (in which case it will take the HEAD of the specified branch).  Arvados logs the script version that was used in the run, enabling you to go back and re-run any past job with the guarantee that the exact same code will be used as was used in the previous run.
-* @"script_parameters"@ are provided to the script.  In this case, the input is the locator for the collection that we inspected in the previous section.
+* @cat@ is a standard Unix utility that writes a sequence of input to standard output.
+* @<<EOF@ tells the shell to direct the following lines into the standard input for @cat@ up until it sees the line @EOF@.
+* @>~/the_job@ redirects standard output to a file called @~/the_job@.
+* @"repository"@ is the name of a git repository to search for the script version.  You can access a list of available git repositories on the Arvados Workbench under "*Compute* %(rarr)→% *Code repositories*":https://{{site.arvados_workbench_host}}/repositories.
+* @"script_version"@ specifies the version of the script that you wish to run.  This can be in the form of an explicit git revision hash, a tag, or a branch (in which case it will use the HEAD of the specified branch).  Arvados logs the script version that was used in the run, enabling you to go back and re-run any past job with the guarantee that the exact same code will be used as was used in the previous run.
+* @"script"@ specifies the name of the script to run.  The script is searched for in the @crunch_scripts/@ subdirectory of the git repository.
+* @"script_parameters"@ are provided to the script.  In this case, the input is the PGP data Collection that we "put in Keep earlier":/user/tutorials/tutorial-keep.html.
+* Setting the @"no_reuse"@ flag tells Crunch not to reuse work from past jobs.  Using this lets you create and watch your own job.  If you didn't set this, Crunch would immediately return the output from someone else's past tutorial run.  (Feel free to try it!)
 
 Use @arv job create@ to actually submit the job.  It should print out a JSON object which describes the newly created job:
 
 <notextile>
-<pre><code>~$ <span class="userinput">arv job create --job "$(cat the_job)"</span>
+<pre><code>~$ <span class="userinput">arv job create --job "$(cat ~/the_job)"</span>
 {
  "href":"https://qr1hi.arvadosapi.com/arvados/v1/jobs/qr1hi-8i9sb-1pm1t02dezhupss",
  "kind":"arvados#job",
@@ -82,12 +84,12 @@ Use @arv job create@ to actually submit the job.  It should print out a JSON obj
 
 The job is now queued and will start running as soon as it reaches the front of the queue.  Fields to pay attention to include:
 
- * @"uuid"@ is the unique identifier for this specific job
+ * @"uuid"@ is the unique identifier for this specific job.
  * @"script_version"@ is the actual revision of the script used.  This is useful if the version was described using the "repository:branch" format.
 
 h2. Monitor job progress
 
-Go to the "Workbench dashboard":https://{{site.arvados_workbench_host}}.  Your job should be at the top of the "Recent jobs" table.  This table refreshes automatically.  When the job has completed successfully, it will show <span class="label label-success">finished</span> in the *Status* column.
+Go to the "Workbench dashboard":https://{{site.arvados_workbench_host}} and visit *Activity* %(rarr)→% *Recent jobs*.  Your job should be near the top of the table.  This table refreshes automatically.  When the job has completed successfully, it will show <span class="label label-success">finished</span> in the *Status* column.
 
 On the command line, you can access log messages while the job runs using @arv job log_tail_follow@:
 
@@ -97,7 +99,7 @@ This will print out the last several lines of the log for that job.
 
 h2. Inspect the job output
 
-On the "Workbench dashboard":https://{{site.arvados_workbench_host}}, look for the *Output* column of the *Recent jobs* table.  Click on the link under *Output* for your job to go to the files page with the job output.  The files page lists all the files that were output by the job.  Click on the link under the *files* column to view a file, or click on the download icon <span class="glyphicon glyphicon-download-alt"></span> to download the output file.
+On the "Workbench dashboard":https://{{site.arvados_workbench_host}}, look for the *Output* column of the *Recent jobs* table.  Click on the link under *Output* for your job to go to the files page with the job output.  The files page lists all the files that were output by the job.  Click on the link under the *file* column to view a file, or click on the download icon <span class="glyphicon glyphicon-download-alt"></span> to download the output file.
 
 On the command line, you can use @arv job get@ to access a JSON object describing the output:
 
@@ -152,7 +154,7 @@ Now you can list the files in the collection:
 
 <notextile>
 <pre><code>~$ <span class="userinput">arv keep ls dd755dbc8d49a67f4fe7dc843e4f10a6+54</span>
-md5sum.txt
+./md5sum.txt
 </code></pre>
 </notextile>
 
@@ -164,59 +166,58 @@ This collection consists of the @md5sum.txt@ file.  Use @arv keep get@ to show t
 </code></pre>
 </notextile>
 
-This md5 hash matches the md5 hash which we computed earlier.
+This md5 hash matches the md5 hash which we "computed earlier":/user/tutorials/tutorial-keep.html.
 
 h2. The job log
 
-When the job completes, you can access the job log.  On the workbench dashboard, this is the link under the *Log* column of the *Recent jobs* table.
+When the job completes, you can access the job log.  On the Workbench, visit *Activity* %(rarr)→% *Recent jobs* %(rarr)→% your job's UUID under the *uuid* column %(rarr)→% the collection link on the *log* row.
 
-On the command line, the keep identifier listed in the @"log"@ field from @arv job get@ specifies a collection.  You can list the files in the collection:
+On the command line, the Keep identifier listed in the @"log"@ field from @arv job get@ specifies a collection.  You can list the files in the collection:
 
 <notextile>
 <pre><code>~$ <span class="userinput">arv keep ls xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx+91</span>
-qr1hi-8i9sb-xxxxxxxxxxxxxxx.log.txt
+./qr1hi-8i9sb-xxxxxxxxxxxxxxx.log.txt
 </code></pre>
 </notextile>
 
-The log collection consists of one log file named with the job id.  You can access it using @arv keep get@:
+The log collection consists of one log file named with the job's UUID.  You can access it using @arv keep get@:
 
 <notextile>
 <pre><code>~$ <span class="userinput">arv keep get xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx+91/qr1hi-8i9sb-xxxxxxxxxxxxxxx.log.txt</span>
-2013-12-16_20:44:35 qr1hi-8i9sb-1pm1t02dezhupss 7575  check slurm allocation
-2013-12-16_20:44:35 qr1hi-8i9sb-1pm1t02dezhupss 7575  node compute13 - 8 slots
-2013-12-16_20:44:36 qr1hi-8i9sb-1pm1t02dezhupss 7575  start
-2013-12-16_20:44:36 qr1hi-8i9sb-1pm1t02dezhupss 7575  Install revision d9cd657b733d578ac0d2167dd75967aa4f22e0ac
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  Clean-work-dir exited 0
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  Install exited 0
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  script hash
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  script_version d9cd657b733d578ac0d2167dd75967aa4f22e0ac
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  script_parameters {"input":"c1bad4b39ca5a924e481008009d94e32+210"}
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  runtime_constraints {"max_tasks_per_node":0}
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  start level 0
-2013-12-16_20:44:37 qr1hi-8i9sb-1pm1t02dezhupss 7575  status: 0 done, 0 running, 1 todo
-2013-12-16_20:44:38 qr1hi-8i9sb-1pm1t02dezhupss 7575 0 job_task qr1hi-ot0gb-23c1k3kwrf8da62
-2013-12-16_20:44:38 qr1hi-8i9sb-1pm1t02dezhupss 7575 0 child 7681 started on compute13.1
-
-2013-12-16_20:44:38 qr1hi-8i9sb-1pm1t02dezhupss 7575  status: 0 done, 1 running, 0 todo
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575 0 child 7681 on compute13.1 exit 0 signal 0 success=true
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575 0 success in 1 seconds
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575 0 output
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575  wait for last 0 children to finish
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575  status: 1 done, 0 running, 1 todo
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575  start level 1
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575  status: 1 done, 0 running, 1 todo
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575 1 job_task qr1hi-ot0gb-iwr0o3unqothg28
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575 1 child 7716 started on compute13.1
-2013-12-16_20:44:39 qr1hi-8i9sb-1pm1t02dezhupss 7575  status: 1 done, 1 running, 0 todo
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575 1 child 7716 on compute13.1 exit 0 signal 0 success=true
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575 1 success in 13 seconds
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575 1 output dd755dbc8d49a67f4fe7dc843e4f10a6+54
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575  wait for last 0 children to finish
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575  status: 2 done, 0 running, 0 todo
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575  release job allocation
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575  Freeze not implemented
-2013-12-16_20:44:52 qr1hi-8i9sb-1pm1t02dezhupss 7575  collate
-2013-12-16_20:44:53 qr1hi-8i9sb-1pm1t02dezhupss 7575  output dd755dbc8d49a67f4fe7dc843e4f10a6+54+K@qr1hi
-2013-12-16_20:44:53 qr1hi-8i9sb-1pm1t02dezhupss 7575  finish
+2013-12-16_20:44:35 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  check slurm allocation
+2013-12-16_20:44:35 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  node compute13 - 8 slots
+2013-12-16_20:44:36 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  start
+2013-12-16_20:44:36 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Install revision d9cd657b733d578ac0d2167dd75967aa4f22e0ac
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Clean-work-dir exited 0
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Install exited 0
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  script hash
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  script_version d9cd657b733d578ac0d2167dd75967aa4f22e0ac
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  script_parameters {"input":"c1bad4b39ca5a924e481008009d94e32+210"}
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  runtime_constraints {"max_tasks_per_node":0}
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  start level 0
+2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 0 done, 0 running, 1 todo
+2013-12-16_20:44:38 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 job_task qr1hi-ot0gb-23c1k3kwrf8da62
+2013-12-16_20:44:38 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 child 7681 started on compute13.1
+2013-12-16_20:44:38 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 0 done, 1 running, 0 todo
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 child 7681 on compute13.1 exit 0 signal 0 success=true
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 success in 1 seconds
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 output
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  wait for last 0 children to finish
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 1 done, 0 running, 1 todo
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  start level 1
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 1 done, 0 running, 1 todo
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 job_task qr1hi-ot0gb-iwr0o3unqothg28
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 child 7716 started on compute13.1
+2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 1 done, 1 running, 0 todo
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 child 7716 on compute13.1 exit 0 signal 0 success=true
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 success in 13 seconds
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 output dd755dbc8d49a67f4fe7dc843e4f10a6+54
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  wait for last 0 children to finish
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 2 done, 0 running, 0 todo
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  release job allocation
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Freeze not implemented
+2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  collate
+2013-12-16_20:44:53 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  output dd755dbc8d49a67f4fe7dc843e4f10a6+54+K@qr1hi
+2013-12-16_20:44:53 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  finish
 </code></pre>
 </notextile>

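As context for the "hash" script this tutorial runs, here is a minimal local sketch of the same idea, assuming an ordinary directory of files in place of a Keep collection (the real crunch script reads its input through the Arvados SDK and task framework, which is not shown here):

    import hashlib
    import os
    import sys

    def md5_of_file(path, chunk_size=1 << 20):
        """Return the hex md5 digest of one file, reading it in chunks."""
        digest = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                digest.update(chunk)
        return digest.hexdigest()

    def main(directory):
        # Emit one "<md5> <filename>" line per file, like the md5sum.txt
        # output shown later in the tutorial.
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if os.path.isfile(path):
                print('%s %s' % (md5_of_file(path), name))

    if __name__ == '__main__':
        main(sys.argv[1] if len(sys.argv) > 1 else '.')
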
commit 4a389d125083663b5b58a3f4ab3d24a842962840
Author: Brett Smith <brett at curoverse.com>
Date:   Fri Mar 28 17:23:40 2014 -0400

    doc: job-and-pipeline-ref grammar fixes.

diff --git a/doc/user/reference/job-and-pipeline-reference.html.textile.liquid b/doc/user/reference/job-and-pipeline-reference.html.textile.liquid
index 9694798..e48b211 100644
--- a/doc/user/reference/job-and-pipeline-reference.html.textile.liquid
+++ b/doc/user/reference/job-and-pipeline-reference.html.textile.liquid
@@ -27,7 +27,7 @@ notextile. <div class="spaced-out">
 
 # If 'nondeterministic' or 'no_reuse' are true, always create a new job.
 # Find a list of acceptable values for 'script_version'.  If 'minimum_script_version' is specified, this is the set of all revisions in the git commit graph between 'minimum_script_version' and 'script_version' (inclusive) [2].  If 'minimum_script_version' is not specified, only 'script_version' is added to the list.  If 'exclude_script_versions' is specified, the listed versions are excluded from the list.
-# Select jobs have the same 'script' and 'script_parameters' attributes, and where the 'script_version' attribute is in the list of acceptable versions.  Exclude failed jobs or where 'nondeterministic' is true.
+# Select jobs that have the same 'script' and 'script_parameters' attributes, and where the 'script_version' attribute is in the list of acceptable versions.  Exclude jobs that failed or set 'nondeterministic' to true.
 # If there is more than one candidate job, check that all selected past jobs actually did produce the same output.
 # If everything passed, re-use one of the selected past jobs (if there is more than one match, which job will be returned is undefined).  Otherwise create a new job.
 
@@ -39,7 +39,7 @@ fn2. This may include parallel branches if there is more than one path between '
 
 h3. Examples
 
-Run the script "crunch_scripts/hash.py" in the repository "you" using the "master" branch head.  Arvados is allowed to re-use a previous job if the script_version of the past job is the same as the "master" branch head (i.e. there have not been any subsequent commits to "master").
+Run the script "crunch_scripts/hash.py" in the repository "you" using the "master" branch head.  Arvados is allowed to re-use a previous job if the script_version of the past job is the same as the "master" branch head (i.e., there have not been any subsequent commits to "master").
 
 <notextile><pre>
 {
@@ -52,7 +52,7 @@ Run the script "crunch_scripts/hash.py" in the repository "you" using the "maste
 }
 </pre></notextile>
 
-Run using exactly the version "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5". Arvados is allowed to re-use a previous job if the script_version of that job is also "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5".
+Run using exactly the version "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5". Arvados is allowed to re-use a previous job if the "script_version" of that job is also "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5".
 
 <notextile><pre>
 {
@@ -65,7 +65,7 @@ Run using exactly the version "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5". Arvado
 }
 </pre></notextile>
 
-Arvados is allowed to re-use a previous job if the script_version of the past job is between "earlier_version_tag" and the head of the "master" branch (inclusive), but not "blacklisted_version_tag".  If there are no previous jobs, run the job using the head of the "master" branch as specified in "script_version".
+Arvados is allowed to re-use a previous job if the "script_version" of the past job is between "earlier_version_tag" and the head of the "master" branch (inclusive), but not "blacklisted_version_tag".  If there are no previous jobs, run the job using the head of the "master" branch as specified in "script_version".
 
 <notextile><pre>
 {
@@ -120,7 +120,7 @@ fn3. The 'File' type refers to a specific file within a Keep collection in the f
 
 h3. Examples
 
-This a pipeline named "Filter md5 hash values" with two components, "do_hash" and "filter".  The "input" script parameter of the "do_hash" component is required to be filled in by the user, and the expected data type is "Collection".  This also specifies that the "input" script parameter of the "filter" component is the output of "do_hash", so "filter" will not run until "do_hash" completes successfully.  When the pipeline runs, past jobs that meet the criteria described above may be substituted for either or both components to avoid redundant computation.
+This is a pipeline named "Filter md5 hash values" with two components, "do_hash" and "filter".  The "input" script parameter of the "do_hash" component is required to be filled in by the user, and the expected data type is "Collection".  This also specifies that the "input" script parameter of the "filter" component is the output of "do_hash", so "filter" will not run until "do_hash" completes successfully.  When the pipeline runs, past jobs that meet the criteria described above may be substituted for either or both components to avoid redundant computation.
 
 <notextile><pre>
 {

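A simplified, self-contained sketch of the reuse decision listed in the steps above.  It assumes the acceptable script_version list (step 2) has already been computed from the git graph, and it uses illustrative dictionary keys ('success', 'output') rather than the real API schema:

    def pick_reusable_job(request, past_jobs, acceptable_versions):
        """Return a past job to reuse, or None to create a new job."""
        # Step 1: never reuse when the request opts out.
        if request.get('nondeterministic') or request.get('no_reuse'):
            return None
        # Step 3: same script and parameters, acceptable version, and
        # exclude past jobs that failed or were nondeterministic.
        candidates = [
            job for job in past_jobs
            if job['script'] == request['script']
            and job['script_parameters'] == request['script_parameters']
            and job['script_version'] in acceptable_versions
            and job.get('success') and not job.get('nondeterministic')
        ]
        if not candidates:
            return None
        # Step 4: with multiple candidates, they must all agree on the output.
        outputs = set(job['output'] for job in candidates)
        if len(outputs) > 1:
            return None
        # Step 5: reuse one of the matching jobs (which one is undefined).
        return candidates[0]
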
commit ec4d769ca8aa94d3406ece303afc3c6fd4f6c9b3
Author: Brett Smith <brett at curoverse.com>
Date:   Fri Mar 28 17:06:19 2014 -0400

    doc: Keep topic style tweaks.

diff --git a/doc/user/topics/keep.html.textile.liquid b/doc/user/topics/keep.html.textile.liquid
index c4778cd..1b5e62a 100644
--- a/doc/user/topics/keep.html.textile.liquid
+++ b/doc/user/topics/keep.html.textile.liquid
@@ -16,7 +16,7 @@ In this example we will use @c1bad4b39ca5a924e481008009d94e32+210@ which we adde
 </code></pre>
 </notextile>
 
-The command @arv keep get@ fetches the contents of the locator @c1bad4b39ca5a924e481008009d94e32+210@.  This is a locator for a collection data block, so it fetches the contents of the collection.  In this example, this collection consists of a single file @var-GS000016015-ASM.tsv.bz2@ which is 227212247 bytes long, and is stored using four sequential data blocks, <code>204e43b8a1185621ca55a94839582e6f+67108864</code>, <code>b9677abbac956bd3e86b1deb28dfac03+67108864</code>, <code>fc15aff2a762b13f521baf042140acec+67108864</code>, <code>323d2a3ce20370c4ca1d3462a344f8fd+25885655</code>.
+The command @arv keep get@ fetches the contents of the locator @c1bad4b39ca5a924e481008009d94e32+210@.  This is a locator for a collection data block, so it fetches the contents of the collection.  In this example, this collection consists of a single file @var-GS000016015-ASM.tsv.bz2@ which is 227212247 bytes long, and is stored using four sequential data blocks, @204e43b8a1185621ca55a94839582e6f+67108864@, @b9677abbac956bd3e86b1deb28dfac03+67108864@, @fc15aff2a762b13f521baf042140acec+67108864@, and @323d2a3ce20370c4ca1d3462a344f8fd+25885655@.
 
 Let's use @arv keep get@ to download the first datablock:
 
@@ -44,5 +44,5 @@ Let's look at the size and compute the md5 hash of @block1@:
 </notextile>
 
 Notice that the block identifier <code>204e43b8a1185621ca55a94839582e6f+67108864</code> consists of:
-* the md5 hash @204e43b8a1185621ca55a94839582e6f@ which matches the md5 hash of @block1@
-* a size hint @67108864@ which matches the size of @block1@
+* the md5 hash of @block1@, @204e43b8a1185621ca55a94839582e6f@, plus
+* the size of @block1@, @67108864@.

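A short sketch of the check described above, assuming the block has already been saved locally as block1 with arv keep get: the two parts of the locator should match the file's md5 hash and its size.

    import hashlib
    import os

    def check_block(locator, path):
        """Compare a Keep block locator's md5 and size hint against a local copy."""
        expected_md5, expected_size = locator.split('+')[:2]
        with open(path, 'rb') as f:
            actual_md5 = hashlib.md5(f.read()).hexdigest()
        actual_size = os.path.getsize(path)
        return actual_md5 == expected_md5 and actual_size == int(expected_size)

    if __name__ == '__main__':
        print(check_block('204e43b8a1185621ca55a94839582e6f+67108864', 'block1'))
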
commit 9c1787c9c626b3d9ca24463410d7bc7d963584d4
Author: Brett Smith <brett at curoverse.com>
Date:   Fri Mar 28 16:57:09 2014 -0400

    doc: Fix typos in tutorial-parallel.

diff --git a/doc/user/topics/tutorial-parallel.html.textile.liquid b/doc/user/topics/tutorial-parallel.html.textile.liquid
index bbb506f..021d736 100644
--- a/doc/user/topics/tutorial-parallel.html.textile.liquid
+++ b/doc/user/topics/tutorial-parallel.html.textile.liquid
@@ -17,7 +17,7 @@ Next, using @nano@ or your favorite Unix text editor, create a new file called @
 
 notextile. <pre>~/<b>you</b>/crunch_scripts$ <code class="userinput">nano parallel-hash.py</code></pre>
 
-Add the following code to compute the md5 hash of each file in a
+Add the following code to compute the md5 hash of each file in a collection:
 
 <notextile> {% code 'parallel_hash_script_py' as python %} </notextile>
 
@@ -69,9 +69,7 @@ Because the job ran in parallel, each instance of parallel-hash creates a separa
 
 <notextile>
 <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">arv keep ls e2ccd204bca37c77c0ba59fc470cd0f7+162</span>
-md5sum.txt
-md5sum.txt
-md5sum.txt
+./md5sum.txt
 ~/<b>you</b>/crunch_scripts$ <span class="userinput">arv keep get e2ccd204bca37c77c0ba59fc470cd0f7+162/md5sum.txt</span>
 0f1d6bcf55c34bed7f92a805d2d89bbf alice.txt
 504938460ef369cd275e4ef58994cffe bob.txt

commit 180890198f08267989ab33ad55776999ed66d273
Author: Brett Smith <brett at curoverse.com>
Date:   Fri Mar 28 16:51:36 2014 -0400

    doc: tutorial-job-debug update and style.
    
    * Jobs now require a "repository" parameter.
    * Use environment variables to specify the job script.

diff --git a/doc/user/topics/tutorial-job-debug.html.textile.liquid b/doc/user/topics/tutorial-job-debug.html.textile.liquid
index 0974e51..104391a 100644
--- a/doc/user/topics/tutorial-job-debug.html.textile.liquid
+++ b/doc/user/topics/tutorial-job-debug.html.textile.liquid
@@ -12,7 +12,7 @@ This tutorial uses *@you@* to denote your username.  Replace *@you@* with your u
 
 h2. Create a new script
 
-Change to your git directory and create a new script in "crunch_scripts/".
+Change to your git directory and create a new script in @crunch_scripts/@.
 
 <notextile>
 <pre><code>~$ <span class="userinput">cd <b>you</b>/crunch_scripts</span>
@@ -27,17 +27,24 @@ EOF</span>
 
 h2. Using arv-crunch-job to run the job in your VM
 
-Instead of a git commit hash, we provide the path to the directory in the "script_version" parameter.  The script specified in "script" will actually be searched for in the "crunch_scripts/" subdirectory of the directory specified "script_version".  Although we are running the script locally, the script still requires access to the Arvados API server and Keep storage service. The job will be recorded in the Arvados job history, and visible in Workbench.
+Instead of a git commit hash, we provide the path to the directory in the "script_version" parameter.  The script specified in "script" will actually be searched for in the @crunch_scripts/@ subdirectory of the directory specified by "script_version".  Although we are running the script locally, the script still requires access to the Arvados API server and Keep storage service. The job will be recorded in the Arvados job history and will be visible in Workbench.
 
 <notextile>
 <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">cat >~/the_job <<EOF
 {
+ "repository":"",
  "script":"hello-world.py",
- "script_version":"/home/<b>you</b>/<b>you</b>",
+ "script_version":"$HOME/$USER",
  "script_parameters":{}
 }
 EOF</span>
-~/<b>you</b>/crunch_scripts</span>$ <span class="userinput">arv-crunch-job --job "$(cat ~/the_job)"</span>
+</code></pre>
+</notextile>
+
+Your shell should fill in values for @$HOME@ and @$USER@ so that the saved JSON points "script_version" at the directory with your checkout.  Now you can run that job:
+
+<notextile>
+<pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">arv-crunch-job --job "$(cat ~/the_job)"</span>
 2013-12-12_21:36:42 qr1hi-8i9sb-okzukfzkpbrnhst 29827  check slurm allocation
 2013-12-12_21:36:42 qr1hi-8i9sb-okzukfzkpbrnhst 29827  node localhost - 1 slots
 2013-12-12_21:36:42 qr1hi-8i9sb-okzukfzkpbrnhst 29827  start
@@ -53,7 +60,7 @@ EOF</span>
 2013-12-12_21:36:42 qr1hi-8i9sb-okzukfzkpbrnhst 29827 0 stderr hello world
 2013-12-12_21:36:43 qr1hi-8i9sb-okzukfzkpbrnhst 29827 0 child 29834 on localhost.1 exit 0 signal 0 success=
 2013-12-12_21:36:43 qr1hi-8i9sb-okzukfzkpbrnhst 29827 0 failure (#1, permanent) after 0 seconds
-2013-12-12_21:36:43 qr1hi-8i9sb-okzukfzkpbrnhst 29827 0 output 
+2013-12-12_21:36:43 qr1hi-8i9sb-okzukfzkpbrnhst 29827 0 output
 2013-12-12_21:36:43 qr1hi-8i9sb-okzukfzkpbrnhst 29827  Every node has failed -- giving up on this round
 2013-12-12_21:36:43 qr1hi-8i9sb-okzukfzkpbrnhst 29827  wait for last 0 children to finish
 2013-12-12_21:36:43 qr1hi-8i9sb-okzukfzkpbrnhst 29827  status: 0 done, 0 running, 0 todo
@@ -96,8 +103,9 @@ EOF</span>
 ~/<b>you</b>/crunch_scripts$ <span class="userinput">chmod +x hello-world-fixed.py</span>
 ~/<b>you</b>/crunch_scripts$ <span class="userinput">cat >~/the_job <<EOF
 {
+ "repository":"",
  "script":"hello-world-fixed.py",
- "script_version":"/home/<b>you</b>/<b>you</b>",
+ "script_version":"$HOME/$USER",
  "script_parameters":{}
 }
 EOF</span>

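A small sketch of the lookup rule described in this patch: when "script_version" is a local directory rather than a git commit, the script named by "script" is expected under that directory's crunch_scripts/ subdirectory and must be executable.  The helper below is only an illustration for checking a saved ~/the_job file, not part of arv-crunch-job:

    import json
    import os

    def resolve_local_script(job_file):
        """Report where a locally run job would find its script."""
        with open(os.path.expanduser(job_file)) as f:
            job = json.load(f)
        script_path = os.path.join(job['script_version'], 'crunch_scripts', job['script'])
        if not os.path.isfile(script_path):
            raise IOError('script not found: %s' % script_path)
        if not os.access(script_path, os.X_OK):
            raise IOError('script is not executable: %s' % script_path)
        return script_path

    if __name__ == '__main__':
        print(resolve_local_script('~/the_job'))
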
commit 075688b8e7bea4ae33b9c949d8e8f8733ec9c6db
Author: Brett Smith <brett at curoverse.com>
Date:   Fri Mar 28 15:47:52 2014 -0400

    doc: Make running-pipeline-command up-to-date.
    
    * @arv pipeline run@ now requires @--run-here@.
    * Pipelines are now under Activity in Workbench navigation.
    * The format of the example output has changed, along with all the
      Keep hashes.

diff --git a/doc/user/topics/running-pipeline-command-line.html.textile.liquid b/doc/user/topics/running-pipeline-command-line.html.textile.liquid
index 79e122d..1dc69e7 100644
--- a/doc/user/topics/running-pipeline-command-line.html.textile.liquid
+++ b/doc/user/topics/running-pipeline-command-line.html.textile.liquid
@@ -40,34 +40,34 @@ EOF</span>
 You can run this pipeline from the command line using @arv pipeline run@, filling in the UUID that you received from @arv pipeline_template create@:
 
 <notextile>
-<pre><code>~$ <span class="userinput">arv pipeline run --template qr1hi-p5p6p-xxxxxxxxxxxxxxx</span>
+<pre><code>~$ <span class="userinput">arv pipeline run --run-here --template qr1hi-p5p6p-xxxxxxxxxxxxxxx</span>
 2013-12-16 14:08:40 +0000 -- pipeline_instance qr1hi-d1hrv-vxzkp38nlde9yyr
 do_hash qr1hi-8i9sb-hoyc2u964ecv1s6 queued 2013-12-16T14:08:40Z
 filter  -                           -
 
 2013-12-16 14:08:51 +0000 -- pipeline_instance qr1hi-d1hrv-vxzkp38nlde9yyr
-do_hash qr1hi-8i9sb-hoyc2u964ecv1s6 8e1b6acdd3f2f1da722538127c5c6202+56
+do_hash qr1hi-8i9sb-hoyc2u964ecv1s6 1ed9ed18ef31ad21bcabcfeff7777bae+162
 filter  qr1hi-8i9sb-w5k40fztqgg9i2x queued 2013-12-16T14:08:50Z
 
 2013-12-16 14:09:01 +0000 -- pipeline_instance qr1hi-d1hrv-vxzkp38nlde9yyr
-do_hash qr1hi-8i9sb-hoyc2u964ecv1s6 8e1b6acdd3f2f1da722538127c5c6202+56
-filter  qr1hi-8i9sb-w5k40fztqgg9i2x 735ac35adf430126cf836547731f3af6+56
+do_hash qr1hi-8i9sb-hoyc2u964ecv1s6 1ed9ed18ef31ad21bcabcfeff7777bae+162
+filter  qr1hi-8i9sb-w5k40fztqgg9i2x d3bcc2ee0f0ea31049000c721c0f3a2a+56
 </code></pre>
 </notextile>
 
-This instantiates your pipeline and displays a live feed of its status.  The new pipeline instance will also show up on the Workbench %(rarr)→% Compute %(rarr)→% Pipeline instances page.
+This instantiates your pipeline and displays a live feed of its status.  The new pipeline instance will also show up on the Workbench *Activity* %(rarr)→% *Recent pipeline instances* page.
 
 Arvados adds each pipeline component to the job queue as its dependencies are satisfied (or immediately if it has no dependencies) and finishes when all components are completed or failed and there is no more work left to do.
 
-The Keep locators of the output of each of @"do_hash"@ and @"filter"@ component are available from the output log shown above.  The output is also available on the Workbench by navigating to %(rarr)→% Compute %(rarr)→% Pipeline instances %(rarr)→% pipeline uuid under the *id* column %(rarr)→% components.
+The Keep locators of the output of each of the @"do_hash"@ and @"filter"@ components are available from the output log shown above.  The output is also available on the Workbench by navigating to *Activity* %(rarr)→% *Recent pipeline instances* %(rarr)→% the pipeline UUID under the *Instance* column %(rarr)→% the *output* column.
 
 <notextile>
-<pre><code>~$ <span class="userinput">arv keep get 8e1b6acdd3f2f1da722538127c5c6202+56/md5sum.txt</span>
-0f1d6bcf55c34bed7f92a805d2d89bbf alice.txt
-504938460ef369cd275e4ef58994cffe bob.txt
-8f3b36aff310e06f3c5b9e95678ff77a carol.txt
-~$ <span class="userinput">arv keep get 735ac35adf430126cf836547731f3af6+56/0-filter.txt</span>
-0f1d6bcf55c34bed7f92a805d2d89bbf alice.txt
+<pre><code>~$ <span class="userinput">arv keep get 1ed9ed18ef31ad21bcabcfeff7777bae+162/md5sum.txt</span>
+0f1d6bcf55c34bed7f92a805d2d89bbf 887cd41e9c613463eab2f0d885c6dd96+83/./alice.txt
+504938460ef369cd275e4ef58994cffe 887cd41e9c613463eab2f0d885c6dd96+83/./bob.txt
+8f3b36aff310e06f3c5b9e95678ff77a 887cd41e9c613463eab2f0d885c6dd96+83/./carol.txt
+~$ <span class="userinput">arv keep get d3bcc2ee0f0ea31049000c721c0f3a2a+56/0-filter.txt</span>
+0f1d6bcf55c34bed7f92a805d2d89bbf 887cd41e9c613463eab2f0d885c6dd96+83/./alice.txt
 </code></pre>
 </notextile>
 
@@ -91,7 +91,7 @@ Notice that the pipeline template explicitly specifies the Keep locator for the
 You can specify values for pipeline component script_parameters like this:
 
 <notextile>
-<pre><code>~$ <span class="userinput">arv pipeline run --template qr1hi-p5p6p-xxxxxxxxxxxxxxx do_hash::input=c1bad4b39ca5a924e481008009d94e32+210</span>
+<pre><code>~$ <span class="userinput">arv pipeline run --run-here --template qr1hi-p5p6p-xxxxxxxxxxxxxxx do_hash::input=c1bad4b39ca5a924e481008009d94e32+210</span>
 2013-12-17 20:31:24 +0000 -- pipeline_instance qr1hi-d1hrv-tlkq20687akys8e
 do_hash qr1hi-8i9sb-rffhuay4jryl2n2 queued 2013-12-17T20:31:24Z
 filter  -                           -
@@ -101,11 +101,11 @@ do_hash qr1hi-8i9sb-rffhuay4jryl2n2 {:done=>1, :running=>1, :failed=>0, :todo=>0
 filter  -                           -
 
 2013-12-17 20:31:55 +0000 -- pipeline_instance qr1hi-d1hrv-tlkq20687akys8e
-do_hash qr1hi-8i9sb-rffhuay4jryl2n2 880b55fb4470b148a447ff38cacdd952+54
+do_hash qr1hi-8i9sb-rffhuay4jryl2n2 50cafdb29cc21dd6eaec85ba9e0c6134+56
 filter  qr1hi-8i9sb-j347g1sqovdh0op queued 2013-12-17T20:31:55Z
 
 2013-12-17 20:32:05 +0000 -- pipeline_instance qr1hi-d1hrv-tlkq20687akys8e
-do_hash qr1hi-8i9sb-rffhuay4jryl2n2 880b55fb4470b148a447ff38cacdd952+54
+do_hash qr1hi-8i9sb-rffhuay4jryl2n2 50cafdb29cc21dd6eaec85ba9e0c6134+56
 filter  qr1hi-8i9sb-j347g1sqovdh0op 490cd451c8108824b8a17e3723e1f236+19
 </code></pre>
 </notextile>
@@ -113,10 +113,10 @@ filter  qr1hi-8i9sb-j347g1sqovdh0op 490cd451c8108824b8a17e3723e1f236+19
 Now check the output:
 
 <notextile>
-<pre><code>~$ <span class="userinput">arv keep get 880b55fb4470b148a447ff38cacdd952+54/md5sum.txt</span>
-44b8ae3fde7a8a88d2f7ebd237625b4f var-GS000016015-ASM.tsv.bz2
+<pre><code>~$ <span class="userinput">arv keep get 50cafdb29cc21dd6eaec85ba9e0c6134+56/md5sum.txt</span>
+44b8ae3fde7a8a88d2f7ebd237625b4f c1bad4b39ca5a924e481008009d94e32+210/./var-GS000016015-ASM.tsv.bz2
 ~$ <span class="userinput">arv keep get 490cd451c8108824b8a17e3723e1f236+19/0-filter.txt</span>
 </code></pre>
 </notextile>
 
-Since none of the files in the collection have hash code that start with 0, output of the filter component is empty.
+Since none of the files in the collection have a hash code that starts with 0, the output of the filter component is empty.

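To make the last point concrete, here is a hedged local sketch of what the "filter" component does with the do_hash output (an illustration only, not the actual 0-filter.py crunch script): it keeps only the md5sum.txt lines whose hash begins with "0", which is why the second run above produces an empty 0-filter.txt.

    import sys

    def zero_filter(lines):
        # Keep only "<md5> <filename>" lines whose hash starts with "0".
        return [line for line in lines if line.split(' ', 1)[0].startswith('0')]

    if __name__ == '__main__':
        # Example usage: arv keep get <locator>/md5sum.txt | python this_sketch.py
        for line in zero_filter(sys.stdin.read().splitlines()):
            print(line)
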
commit 91cdce018ef0906d43452c1b0079c110e7babb60
Author: Brett Smith <brett at curoverse.com>
Date:   Fri Mar 28 10:31:59 2014 -0400

    doc: Highlight "you" in reference JSON.
    
    Since this reference provides users with raw example JSON, and doesn't
    tell them to write it with @cat@, we can't rely on shell variable
    expansion here.

diff --git a/doc/user/reference/job-and-pipeline-reference.html.textile.liquid b/doc/user/reference/job-and-pipeline-reference.html.textile.liquid
index 56f4aec..9694798 100644
--- a/doc/user/reference/job-and-pipeline-reference.html.textile.liquid
+++ b/doc/user/reference/job-and-pipeline-reference.html.textile.liquid
@@ -41,36 +41,36 @@ h3. Examples
 
 Run the script "crunch_scripts/hash.py" in the repository "you" using the "master" branch head.  Arvados is allowed to re-use a previous job if the script_version of the past job is the same as the "master" branch head (i.e. there have not been any subsequent commits to "master").
 
-<pre>
+<notextile><pre>
 {
   "script": "hash.py",
-  "repository": "you",
+  "repository": "<b>you</b>",
   "script_version": "master",
   "script_parameters": {
     "input": "c1bad4b39ca5a924e481008009d94e32+210"
   }
 }
-</pre>
+</pre></notextile>
 
 Run using exactly the version "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5". Arvados is allowed to re-use a previous job if the script_version of that job is also "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5".
 
-<pre>
+<notextile><pre>
 {
   "script": "hash.py",
-  "repository": "you",
+  "repository": "<b>you</b>",
   "script_version": "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5",
   "script_parameters": {
     "input": "c1bad4b39ca5a924e481008009d94e32+210"
   }
 }
-</pre>
+</pre></notextile>
 
 Arvados is allowed to re-use a previous job if the script_version of the past job is between "earlier_version_tag" and the head of the "master" branch (inclusive), but not "blacklisted_version_tag".  If there are no previous jobs, run the job using the head of the "master" branch as specified in "script_version".
 
-<pre>
+<notextile><pre>
 {
   "script": "hash.py",
-  "repository": "you",
+  "repository": "<b>you</b>",
   "minimum_script_version": "earlier_version_tag",
   "script_version": "master",
   "exclude_script_versions", ["blacklisted_version_tag"],
@@ -78,21 +78,21 @@ Arvados is allowed to re-use a previous job if the script_version of the past jo
     "input": "c1bad4b39ca5a924e481008009d94e32+210"
   }
 }
-</pre>
+</pre></notextile>
 
 Run the script "crunch_scripts/monte-carlo.py" in the repository "you" using the "master" branch head.  Because it is marked as "nondeterministic", never re-use previous jobs, and never re-use this job.
 
-<pre>
+<notextile><pre>
 {
   "script": "monte-carlo.py",
-  "repository": "you",
+  "repository": "<b>you</b>",
   "script_version": "master",
   "nondeterministic": true,
   "script_parameters": {
     "input": "c1bad4b39ca5a924e481008009d94e32+210"
   }
 }
-</pre>
+</pre></notextile>
 
 h2. Pipelines
 
@@ -122,13 +122,13 @@ h3. Examples
 
 This a pipeline named "Filter md5 hash values" with two components, "do_hash" and "filter".  The "input" script parameter of the "do_hash" component is required to be filled in by the user, and the expected data type is "Collection".  This also specifies that the "input" script parameter of the "filter" component is the output of "do_hash", so "filter" will not run until "do_hash" completes successfully.  When the pipeline runs, past jobs that meet the criteria described above may be substituted for either or both components to avoid redundant computation.
 
-<pre>
+<notextile><pre>
 {
   "name": "Filter md5 hash values",
   "components": {
     "do_hash": {
       "script": "hash.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": {
         "input": {
@@ -139,7 +139,7 @@ This a pipeline named "Filter md5 hash values" with two components, "do_hash" an
     },
     "filter": {
       "script": "0-filter.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": {
         "input": {
@@ -149,23 +149,23 @@ This a pipeline named "Filter md5 hash values" with two components, "do_hash" an
     }
   }
 }
-</pre>
+</pre></notextile>
 
 This pipeline consists of three components.  The components "thing1" and "thing2" both depend on "cat_in_the_hat".  Once the "cat_in_the_hat" job is complete, both "thing1" and "thing2" can run in parallel, because they do not depend on each other.
 
-<pre>
+<notextile><pre>
 {
   "name": "Wreck the house",
   "components": {
     "cat_in_the_hat": {
       "script": "cat.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": { }
     },
     "thing1": {
       "script": "thing1.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": {
         "input": {
@@ -175,7 +175,7 @@ This pipeline consists of three components.  The components "thing1" and "thing2
     },
     "thing2": {
       "script": "thing2.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": {
         "input": {
@@ -185,29 +185,29 @@ This pipeline consists of three components.  The components "thing1" and "thing2
     },
   }
 }
-</pre>
+</pre></notextile>
 
 This pipeline consists of three components.  The component "cleanup" depends on "thing1" and "thing2".  Both "thing1" and "thing2" are started immediately and can run in parallel, because they do not depend on each other, but "cleanup" cannot begin until both "thing1" and "thing2" have completed.
 
-<pre>
+<notextile><pre>
 {
   "name": "Clean the house",
   "components": {
     "thing1": {
       "script": "thing1.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": { }
     },
     "thing2": {
       "script": "thing2.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": { }
     },
     "cleanup": {
       "script": "cleanup.py",
-      "repository": "you",
+      "repository": "<b>you</b>",
       "script_version": "master",
       "script_parameters": {
         "mess1": {
@@ -220,4 +220,4 @@ This pipeline consists of three components.  The component "cleanup" depends on
     }
   }
 }
-</pre>
+</pre></notextile>

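To illustrate the scheduling described in these examples, here is a small sketch (not Arvados code) that groups a pipeline's components into run levels from their "output_of" links; components in the same level have no dependency on each other and could run in parallel:

    def run_levels(components):
        """Group component names into levels by their "output_of" dependencies."""
        deps = {}
        for name, component in components.items():
            params = component.get('script_parameters', {})
            deps[name] = set(
                value['output_of']
                for value in params.values()
                if isinstance(value, dict) and 'output_of' in value
            )
        levels, done = [], set()
        while len(done) < len(deps):
            ready = sorted(n for n in deps if n not in done and deps[n] <= done)
            if not ready:
                raise ValueError('circular dependency among components')
            levels.append(ready)
            done.update(ready)
        return levels

    # The "Wreck the house" example: thing1 and thing2 both wait on cat_in_the_hat.
    example = {
        'cat_in_the_hat': {'script_parameters': {}},
        'thing1': {'script_parameters': {'input': {'output_of': 'cat_in_the_hat'}}},
        'thing2': {'script_parameters': {'input': {'output_of': 'cat_in_the_hat'}}},
    }
    print(run_levels(example))  # [['cat_in_the_hat'], ['thing1', 'thing2']]
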
commit 31c82149c41a5c2c1377b617bfedc0dfbb08faf4
Author: Brett Smith <brett at curoverse.com>
Date:   Mon Mar 31 09:51:42 2014 -0400

    doc: Use $USER in tutorial JSON.
    
    The tutorial encourages users to copy and paste command text.
    However, most of the JSON needs to have a user-specific value for
    "repository", which is easy to overlook.  Since we're telling users to
    save JSON with @cat@, we can take advantage of shell variable
    expansion to make the right thing happen automatically.
    
    I added accompanying notes to help explain what's going on for people
    who aren't copying instructions so literally.
    
    Conflicts:
    	doc/user/tutorials/tutorial-firstscript.html.textile.liquid
    	doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid

diff --git a/doc/user/topics/running-pipeline-command-line.html.textile.liquid b/doc/user/topics/running-pipeline-command-line.html.textile.liquid
index 7940348..79e122d 100644
--- a/doc/user/topics/running-pipeline-command-line.html.textile.liquid
+++ b/doc/user/topics/running-pipeline-command-line.html.textile.liquid
@@ -16,7 +16,7 @@ In "Writing a pipeline":{{ site.baseurl }}/user/tutorials/tutorial-firstscript.h
       "script_parameters":{
         "input": "887cd41e9c613463eab2f0d885c6dd96+83"
       },
-      "repository":"<b>you</b>",
+      "repository":"$USER",
       "script_version":"master"
     },
     "filter":{
@@ -26,7 +26,7 @@ In "Writing a pipeline":{{ site.baseurl }}/user/tutorials/tutorial-firstscript.h
           "output_of":"do_hash"
         }
       },
-      "repository":"<b>you</b>",
+      "repository":"$USER",
       "script_version":"master"
     }
   }
@@ -35,6 +35,8 @@ EOF</span>
 ~$ <span class="userinput">arv pipeline_template create --pipeline-template "$(cat the_pipeline)"</span></code></pre>
 </notextile>
 
+(Your shell should automatically fill in @$USER@ with your login name.  The JSON that gets saved should have @"repository"@ pointed at your personal git repository.)
+
 You can run this pipeline from the command line using @arv pipeline run@, filling in the UUID that you received from @arv pipeline_template create@:
 
 <notextile>
diff --git a/doc/user/topics/tutorial-parallel.html.textile.liquid b/doc/user/topics/tutorial-parallel.html.textile.liquid
index 6dbdb8a..bbb506f 100644
--- a/doc/user/topics/tutorial-parallel.html.textile.liquid
+++ b/doc/user/topics/tutorial-parallel.html.textile.liquid
@@ -17,7 +17,7 @@ Next, using @nano@ or your favorite Unix text editor, create a new file called @
 
 notextile. <pre>~/<b>you</b>/crunch_scripts$ <code class="userinput">nano parallel-hash.py</code></pre>
 
-Add the following code to compute the md5 hash of each file in a 
+Add the following code to compute the md5 hash of each file in a
 
 <notextile> {% code 'parallel_hash_script_py' as python %} </notextile>
 
@@ -40,7 +40,7 @@ You should now be able to run your new script using Crunch, with "script" referr
 <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">cat >~/the_job <<EOF
 {
  "script": "parallel-hash.py",
- "repository": "<b>you</b>",
+ "repository": "$USER",
  "script_version": "master",
  "script_parameters":
  {
@@ -63,6 +63,8 @@ EOF</span>
 </code></pre>
 </notextile>
 
+(Your shell should automatically fill in @$USER@ with your login name.  The job JSON that gets saved should have @"repository"@ pointed at your personal git repository.)
+
 Because the job ran in parallel, each instance of parallel-hash creates a separate @md5sum.txt@ as output.  Arvados automatically collates these files into a single collection, which is the output of the job:
 
 <notextile>
diff --git a/doc/user/tutorials/running-external-program.html.textile.liquid b/doc/user/tutorials/running-external-program.html.textile.liquid
index c46630d..56b71c0 100644
--- a/doc/user/tutorials/running-external-program.html.textile.liquid
+++ b/doc/user/tutorials/running-external-program.html.textile.liquid
@@ -53,7 +53,7 @@ You should now be able to run your new script using Crunch, with @"script"@ refe
           "dataclass": "Collection"
         }
       },
-      "repository":"<b>you</b>",
+      "repository":"$USER",
       "script_version":"master"
     }
   }
@@ -63,4 +63,6 @@ EOF
 </code></pre>
 </notextile>
 
+(Your shell should automatically fill in @$USER@ with your login name.  The JSON that gets saved should have @"repository"@ pointed at your personal git repository.)
+
 Your new pipeline template will appear on the Workbench "Compute %(rarr)→% Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_instances page.  You can run the "pipeline using Workbench":tutorial-pipeline-workbench.html.
diff --git a/doc/user/tutorials/tutorial-firstscript.html.textile.liquid b/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
index 5cec9c1..36187d2 100644
--- a/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
@@ -109,7 +109,7 @@ Next, create a file that contains the pipeline definition:
           "dataclass": "Collection"
         }
       },
-      "repository":"<b>you</b>",
+      "repository":"$USER",
       "script_version":"master",
       "output_is_persistent":true
     }
@@ -125,7 +125,7 @@ EOF
 * @"name"@ is a human-readable name for the pipeline.
 * @"components"@ is a set of scripts that make up the pipeline.
 * The component is listed with a human-readable name (@"do_hash"@ in this example).
-* @"repository"@ is the name of a git repository to search for the script version.  You can access a list of available git repositories on the Arvados Workbench under "Compute %(rarr)→% Code repositories":https://{{site.arvados_workbench_host}}/repositories.
+* @"repository"@ is the name of a git repository to search for the script version.  You can access a list of available git repositories on the Arvados Workbench under "Compute %(rarr)→% Code repositories":https://{{site.arvados_workbench_host}}/repositories.  Your shell should automatically fill in @$USER@ with your login name, so that the final JSON has @"repository"@ pointed at your personal git repository.
 * @"script_version"@ specifies the version of the script that you wish to run.  This can be in the form of an explicit git revision hash, a tag, or a branch (in which case it will use the HEAD of the specified branch).  Arvados logs the script version that was used in the run, enabling you to go back and re-run any past job with the guarantee that the exact same code will be used as was used in the previous run.
 * @"script"@ specifies the filename of the script to run.  Crunch expects to find this in the @crunch_scripts/@ subdirectory of the git repository.
 * @"script_parameters"@ describes the parameters for the script.  In this example, there is one parameter called @input@ which is @required@ and is a @Collection at .
diff --git a/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid b/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
index b0025c7..fa63588 100644
--- a/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
@@ -43,7 +43,7 @@ Next, create a file that contains the pipeline definition:
           "dataclass": "Collection"
         }
       },
-      "repository":"<b>you</b>",
+      "repository":"$USER",
       "script_version":"master",
       "output_is_persistent":false
     },
@@ -54,7 +54,7 @@ Next, create a file that contains the pipeline definition:
           "output_of":"do_hash"
         }
       },
-      "repository":"<b>you</b>",
+      "repository":"$USER",
       "script_version":"master",
       "output_is_persistent":true
     }
@@ -66,6 +66,8 @@ EOF
 
 * @"output_of"@ indicates that the @output@ of the @do_hash@ component is connected to the @"input"@ of @do_filter at .  This is a _dependency_.  Arvados uses the dependencies between jobs to automatically determine the correct order to run the jobs.
 
+(Your shell should automatically fill in @$USER@ with your login name.  The JSON that gets saved should have @"repository"@ pointed at your personal git repository.)
+
 Now, use @arv pipeline_template create@ to register your pipeline template in Arvados:
 
 <notextile>

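An optional sanity check for the notes added in this commit, assuming the pipeline was saved as ~/the_pipeline with cat: every "repository" field in the saved JSON should now hold your login name rather than the literal string $USER.

    import getpass
    import json
    import os

    def check_repositories(path):
        """Warn if any "repository" field still contains an unexpanded $USER."""
        with open(os.path.expanduser(path)) as f:
            doc = json.load(f)
        user = getpass.getuser()
        components = doc.get('components') or {'(job)': doc}
        for name, component in components.items():
            repo = component.get('repository', '')
            if '$' in repo:
                print('%s: "repository" was not expanded: %r' % (name, repo))
            elif repo != user:
                print('%s: repository is %r, expected your login %r' % (name, repo, user))
            else:
                print('%s: repository looks right (%r)' % (name, repo))

    if __name__ == '__main__':
        check_repositories('~/the_pipeline')
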
commit d5a749082fab8cb70a814ab9329cf5b8f9e36bb7
Author: Brett Smith <brett at curoverse.com>
Date:   Thu Mar 27 16:00:09 2014 -0400

    doc: Tweak running-external-program style.

diff --git a/doc/user/tutorials/running-external-program.html.textile.liquid b/doc/user/tutorials/running-external-program.html.textile.liquid
index 758faa5..c46630d 100644
--- a/doc/user/tutorials/running-external-program.html.textile.liquid
+++ b/doc/user/tutorials/running-external-program.html.textile.liquid
@@ -17,7 +17,7 @@ Start by entering the @crunch_scripts@ directory of your git repository:
 </code></pre>
 </notextile>
 
-Next, using @nano@ or your favorite Unix text editor, create a new file called @run-md5sum.py@ in the @crunch_scripts@ directory.  
+Next, using @nano@ or your favorite Unix text editor, create a new file called @run-md5sum.py@ in the @crunch_scripts@ directory.
 
 notextile. <pre>~/<b>you</b>/crunch_scripts$ <code class="userinput">nano run-md5sum.py</code></pre>
 
@@ -29,7 +29,7 @@ Make the file executable:
 
 notextile. <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">chmod +x run-md5sum.py</span></code></pre>
 
-Next, add the file to @git@ staging, commit and push:
+Next, use @git@ to stage the file, commit, and push:
 
 <notextile>
 <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">git add run-md5sum.py</span>
@@ -38,7 +38,7 @@ Next, add the file to @git@ staging, commit and push:
 </code></pre>
 </notextile>
 
-You should now be able to run your new script using Crunch, with "script" referring to our new "run-md5sum.py" script.
+You should now be able to run your new script using Crunch, with @"script"@ referring to our new @run-md5sum.py@ script.
 
 <notextile>
 <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">cat >~/the_pipeline <<EOF
@@ -63,4 +63,4 @@ EOF
 </code></pre>
 </notextile>
 
-Your new pipeline template will appear on the "Workbench %(rarr)→% Compute %(rarr)→% Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_instances page.  You can run the "pipeline using workbench":tutorial-pipeline-workbench.html
+Your new pipeline template will appear on the Workbench "Compute %(rarr)→% Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_instances page.  You can run the "pipeline using Workbench":tutorial-pipeline-workbench.html.

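The run-md5sum.py script referenced here calls an external program from inside a crunch script; its source is not included in this patch, so the following is only a hypothetical, stripped-down illustration of the idea: invoking an external tool with subprocess and capturing its output, without the Arvados task and Keep plumbing.

    import subprocess
    import sys

    def run_md5sum(path):
        """Run the external md5sum tool on one file and return its output line."""
        output = subprocess.check_output(['md5sum', path])
        return output.decode().strip()

    if __name__ == '__main__':
        for path in sys.argv[1:]:
            print(run_md5sum(path))
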
commit dbbf7f7e313392314139bbdb66a50b740b26d532
Author: Brett Smith <brett at curoverse.com>
Date:   Mon Mar 31 09:50:55 2014 -0400

    doc: tutorial-new-pipeline clarity.
    
    I renamed the "filter" job to "do_filter" to help disambiguate it a
    little from "0-filter.py".
    
    Conflicts:
    	doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid

diff --git a/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid b/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
index fc98d01..b0025c7 100644
--- a/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
@@ -47,7 +47,7 @@ Next, create a file that contains the pipeline definition:
       "script_version":"master",
       "output_is_persistent":false
     },
-    "filter":{
+    "do_filter":{
       "script":"0-filter.py",
       "script_parameters":{
         "input":{
@@ -64,13 +64,13 @@ EOF
 </span></code></pre>
 </notextile>
 
-* @"output_of"@ indicates that the @output@ of the @do_hash@ component should be used as the @"input"@ parameter for the @filter@ component. Arvados determines the correct order to run the jobs when such dependencies are present.
+* @"output_of"@ indicates that the @output@ of the @do_hash@ component is connected to the @"input"@ of @do_filter at .  This is a _dependency_.  Arvados uses the dependencies between jobs to automatically determine the correct order to run the jobs.
 
-Now, use @arv pipeline_template create@ tell Arvados about your pipeline template:
+Now, use @arv pipeline_template create@ to register your pipeline template in Arvados:
 
 <notextile>
 <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">arv pipeline_template create --pipeline-template "$(cat ~/the_pipeline)"</span>
 </code></pre>
 </notextile>
 
-Your new pipeline template will appear on the "Workbench %(rarr)→% Compute %(rarr)→% Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_instances page.
+Your new pipeline template will appear on the Workbench "Compute %(rarr)→% Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_instances page.

commit 1956ba70cdf1a367a61c8a8a5428db52fa02fc7c
Author: Brett Smith <brett at curoverse.com>
Date:   Mon Mar 31 09:50:01 2014 -0400

    doc: tutorial-firstscript style consistency.
    
    Conflicts:
    	doc/user/tutorials/tutorial-firstscript.html.textile.liquid

diff --git a/doc/user/tutorials/tutorial-firstscript.html.textile.liquid b/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
index 03c76f6..5cec9c1 100644
--- a/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
@@ -13,16 +13,14 @@ This tutorial uses *@you@* to denote your username.  Replace *@you@* with your u
 
 h2. Setting up Git
 
-As discussed in the previous tutorial, all Crunch scripts are managed through the @git@ revision control system.
-
-First, you should do some basic configuration for git (you only need to do this the first time):
+All Crunch scripts are managed through the @git@ revision control system.  Before you start using git, you should do some basic configuration (you only need to do this the first time):
 
 <notextile>
 <pre><code>~$ <span class="userinput">git config --global user.name "Your Name"</span>
 ~$ <span class="userinput">git config --global user.email <b>you</b>@example.com</span></code></pre>
 </notextile>
 
-On the Arvados Workbench, navigate to "Compute %(rarr)→% Code repositories":https://{{site.arvados_workbench_host}}/repositories .  You should see a repository with your user name listed in the *name* column.  Next to *name* is the column *push_url*.  Copy the *push_url* value associated with your repository.  This should look like <notextile><code>git@git.{{ site.arvados_api_host }}:<b>you</b>.git</code></notextile>.
+On the Arvados Workbench, navigate to "Compute %(rarr)→% Code repositories":https://{{site.arvados_workbench_host}}/repositories.  You should see a repository with your user name listed in the *name* column.  Next to *name* is the column *push_url*.  Copy the *push_url* value associated with your repository.  This should look like <notextile><code>git@git.{{ site.arvados_api_host }}:<b>you</b>.git</code></notextile>.
 
 Next, on the Arvados virtual machine, clone your git repository:
 
@@ -38,7 +36,7 @@ For more information about using @git@, try
 
 notextile. <pre><code>$ <span class="userinput">man gittutorial</span></code></pre>
 
-or <b>"click here to search Google for git tutorials":http://google.com/#q=git+tutorial</b>
+or *"search Google for git tutorials":http://google.com/#q=git+tutorial*.
 {% include 'notebox_end' %}
 
 h2. Creating a Crunch script
@@ -64,15 +62,15 @@ Make the file executable:
 notextile. <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">chmod +x hash.py</span></code></pre>
 
 {% include 'notebox_begin' %}
-The steps below describe how to execute the script after committing changes to git. To run a script locally for testing, please see "debugging a crunch script":{{site.baseurl}}/user/topics/tutorial-job-debug.html .
+The steps below describe how to execute the script after committing changes to git. To run a script locally for testing, please see "debugging a crunch script":{{site.baseurl}}/user/topics/tutorial-job-debug.html.
 
 {% include 'notebox_end' %}
 
-Next, add the file to @git@ staging.  This tells @git@ that the file should be included on the next commit.
+Next, add the file to git staging.  This tells @git@ that the file should be included on the next commit.
 
 notextile. <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">git add hash.py</span></code></pre>
 
-Next, commit your changes to git.  All staged changes are recorded into the local @git@ repository:
+Next, commit your changes to git.  All staged changes are recorded into the local git repository:
 
 <notextile>
 <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">git commit -m"my first script"</span>
@@ -121,23 +119,23 @@ EOF
 </span></code></pre>
 </notextile>
 
-* @cat@ is a standard Unix utility that simply copies standard input to standard output
-* @<<EOF@ tells the shell to direct the following lines into the standard input for @cat@ up until it sees the line @EOF@
-* @>the_pipeline@ redirects standard output to a file called @the_pipeline@
-* @"name"@ is a human-readable name for the pipeline
-* @"components"@ is a set of scripts that make up the pipeline
-* The component is listed with a human-readable name (@"do_hash"@ in this example)
-* @"script"@ specifies the name of the script to run.  The script is searched for in the "crunch_scripts/" subdirectory of the @git@ checkout specified by @"script_version"@.
-* @"repository"@ is the git repository to search for the script version.  You can access a list of available @git@ repositories on the Arvados workbench under "Compute %(rarr)→% Code repositories":https://{{site.arvados_workbench_host}}//repositories .
-* @"script_version"@ specifies the version of the script that you wish to run.  This can be in the form of an explicit @git@ revision hash, a tag, or a branch (in which case it will take the HEAD of the specified branch).  Arvados logs the script version that was used in the run, enabling you to go back and re-run any past job with the guarantee that the exact same code will be used as was used in the previous run.
+* @cat@ is a standard Unix utility that copies its input to standard output.
+* @<<EOF@ tells the shell to direct the following lines into the standard input for @cat@ up until it sees the line @EOF@.
+* @>the_pipeline@ redirects standard output to a file called @the_pipeline@.
+* @"name"@ is a human-readable name for the pipeline.
+* @"components"@ is a set of scripts that make up the pipeline.
+* The component is listed with a human-readable name (@"do_hash"@ in this example).
+* @"repository"@ is the name of a git repository to search for the script version.  You can access a list of available git repositories on the Arvados Workbench under "Compute %(rarr)→% Code repositories":https://{{site.arvados_workbench_host}}/repositories.
+* @"script_version"@ specifies the version of the script that you wish to run.  This can be in the form of an explicit git revision hash, a tag, or a branch (in which case it will use the HEAD of the specified branch).  Arvados logs the script version that was used in the run, enabling you to go back and re-run any past job with the guarantee that the exact same code will be used as was used in the previous run.
+* @"script"@ specifies the filename of the script to run.  Crunch expects to find this in the @crunch_scripts/@ subdirectory of the git repository.
 * @"script_parameters"@ describes the parameters for the script.  In this example, there is one parameter called @input@ which is @required@ and is a @Collection at .
 * @"output_is_persistent"@ indicates whether the output of the job is considered valuable. If this value is false (or not given), the output will be treated as intermediate data and eventually deleted to reclaim disk space.
 
-Now, use @arv pipeline_template create@ tell Arvados about your pipeline template:
+Now, use @arv pipeline_template create@ to register your pipeline template in Arvados:
 
 <notextile>
 <pre><code>~$ <span class="userinput">arv pipeline_template create --pipeline-template "$(cat the_pipeline)"</span>
 </code></pre>
 </notextile>
 
-Your new pipeline template will appear on the "Workbench %(rarr)→% Compute %(rarr)→% Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_instances page.  You can run the "pipeline using workbench":tutorial-pipeline-workbench.html
+Your new pipeline template will appear on the Workbench "Compute %(rarr)→% Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_instances page.  You can run the "pipeline using Workbench":tutorial-pipeline-workbench.html.

commit 477b2aa7dcbf5cd4e1f1a56d2275cc3ded5ad023
Author: Brett Smith <brett at curoverse.com>
Date:   Thu Mar 27 14:01:27 2014 -0400

    doc: tutorial-pipeline-workbench style+clarity.

diff --git a/doc/user/tutorials/tutorial-pipeline-workbench.html.textile.liquid b/doc/user/tutorials/tutorial-pipeline-workbench.html.textile.liquid
index 46aadd3..277b966 100644
--- a/doc/user/tutorials/tutorial-pipeline-workbench.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-pipeline-workbench.html.textile.liquid
@@ -6,20 +6,19 @@ title: "Running a pipeline using Workbench"
 
 notextile. <div class="spaced-out">
 
-# Go to "Collections":https://{{ site.arvados_workbench_host }}/collections .
-# On the collections page, go to the search box <span class="glyphicon glyphicon-search"></span> and search for "tutorial".
-# This should yield a collection with the contents "var-GS000016015-ASM.tsv.bz2"
-# Click on the check box to the left of "var-GS000016015-ASM.tsv.bz2".  This puts the collection in your persistent selection list.  Click on the paperclip <span class="glyphicon glyphicon-paperclip"></span> in the upper right to get a dropdown menu listing your current selections.
-# Go to "Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_templates .
-# Look for a pipeline named "Tutorial pipeline".
-# Click on the play button <span class="glyphicon glyphicon-play"></span> to the left of "Tutorial pipeline".  This will take you to a new page to configure the pipeline.
-# Under *parameter* look for "input".  Set the value of "input" by clicking on on "none" to get a editing popup.  At the top of the selection list in the editing popup will be the collection that you selected in step 4.
-# You can now click on "Run pipeline" in the upper right to start the pipeline.
-# This will reload the page with the pipeline queued to run.
+# Go to "Collections":https://{{ site.arvados_workbench_host }}/collections (*Data* %(rarr)→% *Collections (data files)*).
+# On the Collections page, go to the search box <span class="glyphicon glyphicon-search"></span> and search for "tutorial".
+# The results should include a collection with the contents *var-GS000016015-ASM.tsv.bz2*.
+# Click on the check box to the left of *var-GS000016015-ASM.tsv.bz2*.  This puts the collection in your persistent selection list.  You can click on the paperclip <span class="glyphicon glyphicon-paperclip"></span> in the upper right to review your current selections.
+# Go to "Pipeline templates":https://{{ site.arvados_workbench_host }}/pipeline_templates (*Compute* %(rarr)→% *Pipeline templates*).
+# Look for a pipeline named *Tutorial pipeline*.
+# Click on the play button <span class="glyphicon glyphicon-play"></span> to the left of *Tutorial pipeline*.  This will take you to a new page to configure the pipeline.
+# Under the *parameter* column, look for *input*.  Set the value of *input* by clicking on *none* to get a selection popup.  The collection that you selected in step 4 will be at the top of that pulldown menu; select it.
+# You can now click on the *Run pipeline* button in the upper right to start the pipeline.  A new page shows the pipeline status, queued to run.
 # The page refreshes automatically every 15 seconds.  You should see the pipeline running, and then finish successfully.
-# Once it is finished, click on the link under the *output* column.  This will take you to the collection page for the output of this pipeline.
-# Click on "md5sum.txt" to see the actual file that is the output of this pipeline.
-# On the collection page, click on the "Provenance graph" tab to see a graphical representation of the data elements and pipelines that were involved in generating this file.
+# Once the pipeline is finished, click on the link under the *output* column.  This will take you to the collection page for the output of this pipeline.
+# Click on *md5sum.txt* to see the actual file that is the output of this pipeline.
+# Go back to the collection page for the result.  Click on the *Provenance graph* tab to see a graph illustrating the collections and scripts that were used to generate this file.
 
 notextile. </div>
 

commit 4ac093dfacb82c270ef2536822ee4ab07715c88e
Author: Brett Smith <brett at curoverse.com>
Date:   Thu Mar 27 12:46:19 2014 -0400

    doc: Fix tutorial-keep typos and consistency.

diff --git a/doc/user/tutorials/tutorial-keep.html.textile.liquid b/doc/user/tutorials/tutorial-keep.html.textile.liquid
index 5a5e879..1f4c723 100644
--- a/doc/user/tutorials/tutorial-keep.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-keep.html.textile.liquid
@@ -11,17 +11,15 @@ This tutorial introduces you to the Arvados file storage system.
 
 The Arvados distributed file system is called *Keep*.  Keep is a content-addressable file system.  This means that files are managed using special unique identifiers derived from the _contents_ of the file, rather than human-assigned file names (specifically, the md5 hash).  This has a number of advantages:
 * Files can be stored and replicated across a cluster of servers without requiring a central name server.
-* Systematic validation of data integrity by both server and client because the checksum is built into the identifier.
-* Minimizes data duplication (two files with the same contents will result in the same identifier, and will not be stored twice.)
-* Avoids data race conditions (an identifier always points to the same data.)
+* Both the server and client systematically validate data integrity because the checksum is built into the identifier.
+* Data duplication is minimized: two files with the same contents will have the same identifier, and will not be stored twice.
+* It avoids data race conditions, since an identifier always points to the same data.
 
 h1. Putting Data into Keep
 
-We will start with downloading a freely available VCF file from the "Personal Genome Project (PGP)":http://www.personalgenomes.org subject "hu599905":https://my.personalgenomes.org/profile/hu599905 to a staging directory on the VM, and then add it to Keep.
+We will start by downloading a freely available VCF file from the "Personal Genome Project (PGP)":http://www.personalgenomes.org subject "hu599905":https://my.personalgenomes.org/profile/hu599905 to a staging directory on the VM, and adding it to Keep.  In the following commands, replace *@you@* with your login name.
 
-In the following tutorials, replace <b><code>you</code></b> with your user id.
-
-First, log into the Arvados VM instance and set up the staging area:
+First, log into your Arvados VM and set up the staging area:
 
 notextile. <pre><code>~$ <span class="userinput">mkdir /scratch/<b>you</b></span></code></pre>
 
@@ -65,7 +63,7 @@ You can also use @arv keep put@ to add an entire directory:
 /scratch/<b>you</b>$ <span class="userinput">echo "hello bob" > tmp/bob.txt</span>
 /scratch/<b>you</b>$ <span class="userinput">echo "hello carol" > tmp/carol.txt</span>
 /scratch/<b>you</b>$ <span class="userinput">arv keep put tmp</span>
-0M / 0M 100.0% 
+0M / 0M 100.0%
 887cd41e9c613463eab2f0d885c6dd96+83
 </code></pre>
 </notextile>
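
An aside that may help make content-addressing concrete: because the locator is derived from the data itself, putting the same file again should print the same locator.  This is a sketch under the same setup as the tutorial (the actual locator value depends on the file contents, so none is shown here):

<notextile>
<pre><code>/scratch/<b>you</b>$ <span class="userinput">arv keep put tmp/alice.txt</span>
/scratch/<b>you</b>$ <span class="userinput">arv keep put tmp/alice.txt</span>
</code></pre>
</notextile>

Both runs should report the same collection locator, and the second run does not store a second copy of the data.
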
@@ -76,12 +74,12 @@ h1. Getting Data from Keep
 
 h2. Using Workbench
 
-You may access collections through the "Collections section of Arvados Workbench":https://{{ site.arvados_workbench_host }}/collections located at "https://{{ site.arvados_workbench_host }}/collections":https://{{ site.arvados_workbench_host }}/collections .  You can also access individual collections and individual files within a collection.  Some examples:
+You may access collections through the "Collections section of Arvados Workbench":https://{{ site.arvados_workbench_host }}/collections at *Data* %(rarr)→% *Collections (data files)*.  You can also access individual files within a collection.  Some examples:
 
 * "https://{{ site.arvados_workbench_host }}/collections/c1bad4b39ca5a924e481008009d94e32+210":https://{{ site.arvados_workbench_host }}/collections/c1bad4b39ca5a924e481008009d94e32+210
 * "https://{{ site.arvados_workbench_host }}/collections/887cd41e9c613463eab2f0d885c6dd96+83/alice.txt":https://{{ site.arvados_workbench_host }}/collections/887cd41e9c613463eab2f0d885c6dd96+83/alice.txt
 
-h2(#arv-get). Using arv-get
+h2(#arv-get). Using the command line
 
 You can view the contents of a collection using @arv keep ls@:
 
@@ -109,6 +107,8 @@ Use @arv keep get@ to download the contents of a collection and place it in the
 
 <notextile>
 <pre><code>/scratch/<b>you</b>$ <span class="userinput">arv keep get c1bad4b39ca5a924e481008009d94e32+210/ .</span>
+/scratch/<b>you</b>$ <span class="userinput">ls var-GS000016015-ASM.tsv.bz2</span>
+var-GS000016015-ASM.tsv.bz2
 </code></pre>
 </notextile>
 
@@ -129,10 +129,10 @@ With a local copy of the file, we can do some computation, for example computing
 
 h2. Using arv-mount
 
-Use @arv-mount@ to take advantage of the "File System in User Space / FUSE":http://fuse.sourceforge.net/ feature of the Linux kernel to mount a Keep collection as if it were a regular directory tree.
+Use @arv-mount@ to mount a Keep collection and access it using traditional filesystem tools.
 
 <notextile>
-<pre><code>/scratch/<b>you</b>$ <span class="userinput">mkdir mnt</span>
+<pre><code>/scratch/<b>you</b>$ <span class="userinput">mkdir -p mnt</span>
 /scratch/<b>you</b>$ <span class="userinput">arv-mount --collection c1bad4b39ca5a924e481008009d94e32+210 mnt &</span>
 /scratch/<b>you</b>$ <span class="userinput">cd mnt</span>
 /scratch/<b>you</b>/mnt$ <span class="userinput">ls</span>
@@ -147,7 +147,7 @@ var-GS000016015-ASM.tsv.bz2
 You can also mount the entire Keep namespace in "magic directory" mode:
 
 <notextile>
-<pre><code>/scratch/<b>you</b>$ <span class="userinput">mkdir mnt</span>
+<pre><code>/scratch/<b>you</b>$ <span class="userinput">mkdir -p mnt</span>
 /scratch/<b>you</b>$ <span class="userinput">arv-mount mnt &</span>
 /scratch/<b>you</b>$ <span class="userinput">cd mnt/c1bad4b39ca5a924e481008009d94e32+210</span>
 /scratch/<b>you</b>/mnt/c1bad4b39ca5a924e481008009d94e32+210$ <span class="userinput">ls</span>
@@ -159,8 +159,8 @@ var-GS000016015-ASM.tsv.bz2
 </code></pre>
 </notextile>
 
-Using @arv-mount@ has several significant benefits:
+@arv-mount@ provides several features:
 
 * You can browse, open and read Keep entries as if they are regular files.
 * It is easy for existing tools to access files in Keep.
-* Data is downloaded on demand, it is not necessary to download an entire file or collection to start processing
+* Data is downloaded on demand.  It is not necessary to download an entire file or collection to start processing.
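
The tutorial does not show how to detach the mount when you are finished.  @arv-mount@ is a FUSE filesystem, so the standard FUSE unmount command should work once you have changed out of the mount point (a sketch; some systems may require different privileges):

<notextile>
<pre><code>/scratch/<b>you</b>$ <span class="userinput">fusermount -u mnt</span>
</code></pre>
</notextile>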

commit 4545c9039616be21017862a78f60b2b02540b613
Author: Brett Smith <brett at curoverse.com>
Date:   Thu Mar 27 12:15:51 2014 -0400

    doc: Fix api-tokens typos and consistency.

diff --git a/doc/user/reference/api-tokens.html.textile.liquid b/doc/user/reference/api-tokens.html.textile.liquid
index 018c71c..b5015d7 100644
--- a/doc/user/reference/api-tokens.html.textile.liquid
+++ b/doc/user/reference/api-tokens.html.textile.liquid
@@ -6,28 +6,25 @@ title: "Getting an API token"
 
 The Arvados API token is a secret key that enables the @arv@ command line client to access Arvados with the proper permissions.
 
-Access the Arvados workbench using this link: "https://{{ site.arvados_workbench_host }}/":https://{{ site.arvados_workbench_host }}/
+Access the Arvados Workbench using this link: "https://{{ site.arvados_workbench_host }}/":https://{{ site.arvados_workbench_host }}/  (Replace @{{ site.arvados_api_host }}@ with the hostname of your local Arvados instance if necessary.)
 
-(Replace @{{ site.arvados_api_host }}@ with the hostname of your local Arvados instance if necessary.)
+Open a shell on the system where you want to use the Arvados client. This may be your local workstation, or "an Arvados virtual machine accessed with ssh":{{site.baseurl}}/user/getting_started/ssh-access.html.
 
-First, open a shell on the system on which you intend to use the Arvados client (this may be your local workstation, or an Arvados VM, refer to "Accessing Arvados over ssh":{{site.baseurl}}/user/getting_started/ssh-access.html ) .
-
-Click on the user icon <span class="glyphicon glyphicon-user"></span> in the upper right corner to access the user settings menu, and click on the menu item _Manage API token_ to go to the "api client authorizations" page.  
+Click on the user icon <span class="glyphicon glyphicon-user"></span> in the upper right corner to access the user settings menu.  Click on the menu item *Manage API tokens* to go to the "Api client authorizations" page.
 
 h2. The easy way
 
-For your convenience, the "api client authorizations" page on Workbench provides a "Help" tab that provides a command you may copy and paste directly into the shell.  It will look something like this:
+For your convenience, the "Api client authorizations" page on Workbench provides a *Help* tab that includes a command you may copy and paste directly into the shell.  It will look something like this:
 
 bc. ### Pasting the following lines at a shell prompt will allow Arvados SDKs
-### to authenticate to your account, youraddress at example.com
+### to authenticate to your account, you@example.com
 read ARVADOS_API_TOKEN <<EOF
 2jv9346o396exampledonotuseexampledonotuseexes7j1ld
 EOF
 export ARVADOS_API_TOKEN ARVADOS_API_HOST={{ site.arvados_api_host }}
 
-* The @read@ command takes the contents of stdin and puts it into the shell variable named on the command line.
-* The @<<EOF@ notation means read each line on stdin and pipe it to the command, terminating on reading the line @EOF at .
-* The @export@ command puts a local shell variable into the environment that will be inherited by child processes (e.g. the @arv@ client).
+* The @read@ command reads text input until @EOF@ (designated by @<<EOF@) and stores it in the @ARVADOS_API_TOKEN@ shell variable.
+* The @export@ command puts a local shell variable into the environment that will be inherited by child processes such as the @arv@ client.
 
 h2. Setting the environment manually
 
@@ -39,8 +36,8 @@ $ <span class="userinput">export ARVADOS_API_TOKEN=2jv9346o3966345u7ueuim7a1zaao
 </code></pre>
 </notextile>
 
-* @ARVADOS_API_HOST@ tells @arv@ which host to connect to
-* @ARVADOS_API_TOKEN@ is the secret key used by the Arvados API server to authenticate access.
+* @ARVADOS_API_HOST@ tells @arv@ which host to connect to.
+* @ARVADOS_API_TOKEN@ is the secret key used by the Arvados API server to authenticate access.  Its value is the text you copied from the *api_token* column on the Workbench.
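
Once both variables are set, it is worth checking that the client can actually authenticate.  One possible check is to ask the API server for your own user record; the command below is an assumption based on the general @arv <resource> <method>@ pattern, so consult @arv --help@ if it is not available in your installation:

<notextile>
<pre><code>$ <span class="userinput">arv user current</span>
</code></pre>
</notextile>

If the token is valid, this should print a JSON record describing your account rather than an authentication error.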
 
 If you are connecting to a development instance with an unverified/self-signed SSL certificate, set this variable to skip SSL validation:
 
@@ -51,7 +48,7 @@ If you are connecting to a development instance with a unverified/self-signed SS
 
 h2. settings.conf
 
-Arvados tools will also look for the authentication information in @~/.config/arvados/settings.conf at . If you have already put the variables into the environment with instructions above, you can use these commands to create an Arvados configuration file:
+Arvados tools will also look for the authentication information in @~/.config/arvados/settings.conf@. If you have already put the variables into the environment following the instructions above, you can use these commands to create an Arvados configuration file:
 
 <notextile>
 <pre><code>$ <span class="userinput">echo "ARVADOS_API_HOST=$ARVADOS_API_HOST" > ~/.config/arvados/settings.conf</span>
@@ -61,7 +58,7 @@ $ <span class="userinput">echo "ARVADOS_API_TOKEN=$ARVADOS_API_TOKEN" >> ~/.conf
 
 h2. .bashrc
 
-Alternately, you may add the declarations of @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ to the @~/.bashrc@ file on the system on which you intend to use the Arvados client.  If you have already put the variables into the environment with instructions above, you can use these commands to append the environment variables to your @~/.bashrc@:
+Alternately, you may add the declarations of @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ to the @~/.bashrc@ file on the system on which you intend to use the Arvados client.  If you have already put the variables into the environment following the instructions above, you can use these commands to append the environment variables to your @~/.bashrc@:
 
 <notextile>
 <pre><code>$ <span class="userinput">echo "export ARVADOS_API_HOST=$ARVADOS_API_HOST" >> ~/.bashrc</span>
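
Keep in mind that @~/.bashrc@ is only read by new shells.  To pick up the variables in the shell you are currently using, source the file by hand (ordinary bash behavior, nothing Arvados-specific):

<notextile>
<pre><code>$ <span class="userinput">source ~/.bashrc</span>
</code></pre>
</notextile>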

commit d6f1334e74ce51269ee3d7f462b2ec17dc8ca3f8
Author: Brett Smith <brett at curoverse.com>
Date:   Thu Mar 27 11:56:55 2014 -0400

    doc: Fix ssh-access typos and consistency.

diff --git a/doc/user/getting_started/ssh-access.html.textile.liquid b/doc/user/getting_started/ssh-access.html.textile.liquid
index e4a2b9c..162c732 100644
--- a/doc/user/getting_started/ssh-access.html.textile.liquid
+++ b/doc/user/getting_started/ssh-access.html.textile.liquid
@@ -4,7 +4,7 @@ navsection: userguide
 title: Accessing an Arvados VM over ssh
 ...
 
-Arvados requires a public @ssh@ key in order to securely log in to an Arvados VM instance, or to access an Arvados @git@ repository.
+Arvados requires a public ssh key in order to securely log in to an Arvados VM instance, or to access an Arvados @git@ repository.
 
 This document is divided up into three sections.
 
@@ -23,7 +23,7 @@ Start by opening a terminal window.  Check if you have an existing public key:
 
 notextile. <pre><code>$ <span class="userinput">ls ~/.ssh/id_rsa.pub</span></code></pre>
 
-If the file @id_rsa.pub@ exists, then you may use your existing key.  Copy the contents of @~/.ssh/id_rsa.pub@ onto the clipboard (this is your public key).  Proceed to "adding your key to the Arvados Workbench.":#workbench
+If the file @id_rsa.pub@ exists, then you may use your existing key.  Copy the contents of @~/.ssh/id_rsa.pub@ onto the clipboard (this is your public key).  You can skip the key generation step below and proceed directly to "adding your key to the Arvados Workbench.":#workbench
 
 If there is no file @~/.ssh/id_rsa.pub@, you must generate a new key.  Use @ssh-keygen@ to do this:
 
@@ -49,7 +49,7 @@ ssh-rsa AAAAB3NzaC1ycEDoNotUseExampleKeyDoNotUseExampleKeyDoNotUseExampleKeyDoNo
 </code></pre>
 </notextile>
 
-Now you can set up @ssh-agent@ (next) or proceed to "adding your key to the Arvados Workbench.":#workbench
+Now you can set up @ssh-agent@ (next) or proceed with "adding your key to the Arvados Workbench.":#workbench
 
 h3. Setting up ssh-agent (recommended)
 
@@ -61,9 +61,9 @@ If you get the error "Could not open a connection to your authentication agent"
 
 notextile. <pre><code>$ <span class="userinput">eval $(ssh-agent -s)</span></code></pre>
 
-* @ssh-agent -s@ prints out values for environment variables SSH_AUTH_SOCK and SSH_AGENT_PID and then runs in the background.  Using "eval" on the output as shown here causes those variables to be set in the current shell environment so that subsequent calls to @ssh@ can discover how to access the @ssh-agent@ daemon.
+@ssh-agent -s@ prints out values for environment variables SSH_AUTH_SOCK and SSH_AGENT_PID and then runs in the background.  Using "eval" on the output as shown here causes those variables to be set in the current shell environment so that subsequent calls to @ssh@ can discover how to access the @ssh-agent@ daemon.
 
-After running @ssh-agent@, or if @ssh-add -l@ prints "The agent has no identities", then you will need to add your key using the following command.  The passphrase to decrypt the key is the same used to protect the key when it was created with @ssh-keygen@: 
+After running @ssh-agent@, or if @ssh-add -l@ prints "The agent has no identities", you will need to add your key using the following command.  The passphrase to decrypt the key is the same one used to protect the key when it was created with @ssh-keygen@:
 
 <notextile>
 <pre><code>$ <span class="userinput">ssh-add</span>
@@ -86,13 +86,9 @@ h2(#windows). Windows: Using PuTTY
 
 (Note: if you are using the @ssh@ client that comes with "Cygwin":http://cygwin.com you should follow the "Unix":#unix instructions).
 
-"PuTTY":http://www.putty.org/ is a free (MIT-licensed) Win32 Telnet and SSH client. PuTTy includes all the tools a windows user needs to set up Private Keys and to set up and use SSH connections to your virtual machines in the Arvados Cloud. 
+"PuTTY":http://www.putty.org/ is a free (MIT-licensed) Win32 Telnet and SSH client. PuTTY includes all the tools a Windows user needs to create private keys and make ssh connections to your virtual machines in the Arvados Cloud.
 
-You can use PuTTY to create public/private keys, which are how you’ll ensure that that access to Arvados cloud is secure. You can also use PuTTY as an SSH client to access your virtual machine in an Arvados cloud and work with the Arvados Command Line Interface (CLI) client. 
-
-You may download putty from "http://www.putty.org/":http://www.putty.org/ .
-
-Note that you should download the installer or .zip file with all of the PuTTY tools (PuTTYtel is not required).
+You can "download PuTTY from its Web site":http://www.putty.org/.  Note that you should download the installer or .zip file with all of the PuTTY tools (PuTTYtel is not required).
 
 h3. Step 1 - Adding PuTTY to the PATH
 
@@ -100,7 +96,9 @@ h3. Step 1 - Adding PuTTY to the PATH
 # Open the Control Panel.
 # Select _Advanced System Settings_, and choose _Environment Variables_.
 # Under system variables, find and edit @PATH@.
-# Add the following to the end of PATH (make sure to include semi colon and quotation marks):
+# If you installed PuTTY in @C:\Program Files\PuTTY\@, add the following to the end of PATH (make sure to include semicolon and quotation marks):
+<code>;\"C:\Program Files\PuTTY\"</code>
+If you installed PuTTY in @C:\Program Files (x86)\PuTTY\@, add the following to the end of PATH (make sure to include semicolon and quotation marks):
 <code>;\"C:\Program Files (x86)\PuTTY\"</code>
 # Click through the OKs to close all the dialogs you’ve opened.
 
@@ -110,72 +108,68 @@ h3. Step 2 - Creating a Public Key
 # At the bottom of the window, make sure the ‘Number of bits in a generated key’ field is set to 4096.
 # Click Generate and follow the instructions to generate a key.
 # Click to save the Public Key.
-# Click to save the Private Key (we recommend using a strong passphrase) .
+# Click to save the Private Key (we recommend using a strong passphrase).
 # Select the text of the Public Key and copy it to the clipboard.
 
 h3. Step 3 - Set up Pageant
 
-Note: Pageant is a PuTTY utility that manages your private keys so is not necessary to enter your private key passphrase every time you need to make a new ssh connection.
+Pageant is a PuTTY utility that manages your private keys, so it is not necessary to enter your private key passphrase every time you make a new ssh connection.
 
 # Start Pageant from the Start Menu or the folder where it was installed.
 # Pageant will now be running in the system tray. Click the Pageant icon to configure.
 # Choose _Add Key_ and add the private key which you created in the previous step.
 
-You are now ready to proceed to "adding your key to the Arvados Workbench":#workbench .
-
-_Note: We recommend you do not delete the “Default” Saved Session._
+You are now ready to proceed to "adding your key to the Arvados Workbench.":#workbench
 
 h1(#workbench). Adding your key to Arvados Workbench
 
-h3. From the workbench dashboard
+h3. From the Workbench dashboard
 
-If you have no @ssh@ keys registered, there should be a notification asking you to provide your @ssh@ public key.  On the Workbench dashboard (in this guide, this is "https://{{ site.arvados_workbench_host }}/":https://{{ site.arvados_workbench_host }}/ ), look for the envelope icon <span class="glyphicon glyphicon-envelope"></span> <span class="badge badge-alert">1</span> in upper right corner (the number indicates there are new notifications).  Click on this icon and a dropdown menu should appear with a message asking you to add your public key.  Paste your public key into the text area provided and click on the check button to submit the key.  You are now ready to "log into an Arvados VM":#login.
+If you have no ssh keys registered, there should be a notification asking you to provide your ssh public key.  On the Workbench dashboard, look for the envelope icon <span class="glyphicon glyphicon-envelope"></span> <span class="badge badge-alert">1</span> in the upper right corner (the number indicates there are new notifications).  Click on this icon and a dropdown menu should appear with a message asking you to add your public key.  Paste your public key into the text area provided and click on the check button to submit the key.  You are now ready to "log into an Arvados VM":#login.
 
 h3. Alternate way to add ssh keys
 
-If you want to add additional @ssh@ keys, click on the user icon <span class="glyphicon glyphicon-user"></span> in the upper right corner to access the user settings menu, and click on the menu item _Manage ssh keys_ to go to the Authorized keys page.
+If you want to add additional ssh keys, click on the user icon <span class="glyphicon glyphicon-user"></span> in the upper right corner to access the user settings menu, and click on the menu item *Manage ssh keys* to go to the Authorized keys page.
 
-On _Authorized keys_ page, the click on the button <span class="btn btn-primary disabled">Add a new authorized key</span> in the upper right corner.
+On the *Authorized keys* page, click on the button <span class="btn btn-primary disabled">Add a new authorized key</span> in the upper right corner.
 
-The page will reload with a new row of information.  Under the *public_key* column heading, click on the cell +none+ .  This will open an editing popup as shown in this screenshot:
+The page will reload with a new row of information.  Under the *public_key* column heading, click on the cell +none+.  This will open an editing popup as shown in this screenshot:
 
 !{{ site.baseurl }}/images/ssh-adding-public-key.png!
 
-Paste the public key from the previous section into the popup text box and click on the check mark to save it.  This should refresh the page with the public key that you just added now listed under the *public_key* column.  You are now ready to "log into an Arvados VM":#login.
+Paste the public key that you copied to the clipboard in the previous section into the popup text box, then click on the check mark to save it.  This should refresh the page with the public key that you just added now listed under the *public_key* column.  You are now ready to "log into an Arvados VM":#login.
 
 h1(#login). Using ssh to log into an Arvados VM
 
-To see a list of virtual machines that you have access to and determine the name and login information, click on Compute %(rarr)→% Virtual machines.  Once on the "virtual machines" page, The *hostname* columns lists the name of each available VM.  The *logins* column will have a value in the form of @["you"]@.  Ignore the square brackets and quotes to get your login name.  In this guide the hostname will be _shell_ and the login will be _you_.  Replace these with your hostname and login as appropriate.
+To see a list of virtual machines that you have access to and determine the name and login information, click on Compute %(rarr)→% Virtual machines.  Once on the *Virtual machines* page, the *hostname* column lists the name of each available VM.  The *logins* column will have a value in the form of @["you"]@.  Your login name is the text inside the quotes.  In this guide the hostname will be _shell_ and the login will be _you_.  Replace these with your hostname and login name as appropriate.
 
 This section consists of two sets of instructions, depending on whether you will be logging in using a "Unix":#unixvm (Linux, OS X, Cygwin) or "Windows":#windowsvm client.
 
 h2(#unixvm). Logging in using command line ssh (Unix)
 
-h3. Connecting to the VM
+h3. Connecting to the virtual machine
 
-Use the following command to connect to the "shell" VM instance as "you".  Replace *<code>you at shell</code>* at the end of the following command with your *login* and *hostname* from Workbench:
+Use the following command to connect to the _shell_ VM instance as _you_.  Replace *<code>you@shell</code>* at the end of the following command with your *login* and *hostname* from Workbench:
 
-notextile. <pre><code>$ <span class="userinput">ssh -o "ProxyCommand ssh -a -x -p2222 turnout at switchyard.{{ site.arvados_api_host }} shell" -A -x <b>you at shell</b></span></code></pre>
+notextile. <pre><code>$ <span class="userinput">ssh -o "ProxyCommand ssh -a -x -p2222 turnout@switchyard.{{ site.arvados_api_host }} <b>shell</b>" -A -x <b>you@shell</b></span></code></pre>
 
-There are several things going on here:
+This command does several things at once. You usually cannot log in directly to virtual machines over the public Internet.  Instead, you log into a "switchyard" server and then tell the switchyard which virtual machine you want to connect to.
 
-The VMs typically have addresses that are not globally routable, so you cannot log in directly.  Instead, you log into a "switchyard" server and then tell the switchyard which VM you want to connect to.
-
-* @-o "ProxyCommand ..."@ option instructs ssh to run the specified command and then tunnel your ssh connection over the proxy.
-* @-a@ tells ssh not to forward your ssh-agent credentials to the switchyard
-* @-x@ tells ssh not to forward your X session to the switchyard
-* @-p2222@ specifies that the switchyard is running on non-standard port 2222
-* <code>turnout at switchyard.{{ site.arvados_api_host }}</code> specifies the user (@turnout@) and hostname (@switchyard.{{ site.arvados_api_host }}@) of the switchboard server that will proxy our connection to the VM.
-* @shell@ is the name of the VM that we want to connect to.  This is sent to the switchyard server as if it were an ssh command, and the switchyard server connects to the VM on our behalf.
-* After the ProxyCommand section, the @-x@ must be repeated because it applies to the connection to VM instead of the switchyard.
+* @-o "ProxyCommand ..."@ configures ssh to run the specified command to create a proxy and route your connection through it.
+* @-a@ tells ssh not to forward your ssh-agent credentials to the switchyard.
+* @-x@ tells ssh not to forward your X session to the switchyard.
+* @-p2222@ specifies that the switchyard is running on non-standard port 2222.
+* <code>turnout@switchyard.{{ site.arvados_api_host }}</code> specifies the user (@turnout@) and hostname (@switchyard.{{ site.arvados_api_host }}@) of the switchyard server that will proxy our connection to the VM.
+* *@shell@* is the name of the VM that we want to connect to.  This is sent to the switchyard server as if it were an ssh command, and the switchyard server connects to the VM on our behalf.
+* After the ProxyCommand section, we repeat @-x@ to disable X session forwarding to the virtual machine.
 * @-A@ specifies that we want to forward access to @ssh-agent@ to the VM.
-* Finally, *<code>you at shell</code>* specifies your username and repeats the hostname of the VM.  The username can be found in the *logins* column in the VMs Workbench page, discussed above.
+* Finally, *<code>you@shell</code>* specifies your login name and repeats the hostname of the VM.  The username can be found in the *logins* column in the VMs Workbench page, discussed in the previous section.
 
 You should now be able to log into the Arvados VM and "check your environment.":check-environment.html
 
 h3. Configuration (recommended)
 
-Since the above command line is cumbersome, it can be greatly simplfied by adding the following section your @~/.ssh/config@ file:
+The command line above is cumbersome, but you can configure ssh to remember many of these settings.  Add this text to the file @.ssh/config@ in your home directory (create a new file if @.ssh/config@ doesn't exist):
 
 <notextile>
 <pre><code class="userinput">Host *.arvados
@@ -197,18 +191,20 @@ h3. Initial configuration
 # Open PuTTY from the Start Menu.
 # On the Session screen set the Host Name (or IP address) to “shell”.
 # On the Session screen set the Port to “22”.
-# On the Connection %(rarr)→% Data screen set the Auto-login username to the username listed in the *logins* column on the Arvados Workbench _Access %(rarr)→% VMs_ page.
+# On the Connection %(rarr)→% Data screen set the Auto-login username to the username listed in the *logins* column on the Arvados Workbench page _Compute %(rarr)→% Virtual machines_.
 # On the Connection %(rarr)→% Proxy screen set the Proxy Type to “Local”.
 # On the Connection %(rarr)→% Proxy screen in the “Telnet command, or local proxy command” box enter:
 <code>plink -P 2222 turnout@switchyard.qr1hi.arvadosapi.com %host</code>
 Make sure there is no newline at the end of the text entry.
-# Return to the Session screen. In the Saved Sessions box, enter a name for this configuration and hit Save. 
+# Return to the Session screen. In the Saved Sessions box, enter a name for this configuration and click Save.
+
+_Note: We recommend you do not delete the “Default” Saved Session._
 
 h3. Connecting to the VM
 
-# Open PuTTY 
+# Open PuTTY from the Start Menu.
 # Click on the Saved Session name you created in the previous section.
 # Click Load to load those saved session settings.
-# Click Open and that will open the SSH window at the command prompt. You will now be logged in to your virtual machine.
+# Click Open to open the SSH window at the command prompt. You will now be logged into your virtual machine.
 
 You should now be able to log into the Arvados VM and "check your environment.":check-environment.html

commit 5abb716e9209f26b29ba883feeaa8ded8f046aaf
Author: Brett Smith <brett at curoverse.com>
Date:   Thu Mar 27 11:09:00 2014 -0400

    doc: Style and consistency in user/index.

diff --git a/doc/user/index.html.textile.liquid b/doc/user/index.html.textile.liquid
index 03a9e60..ae06627 100644
--- a/doc/user/index.html.textile.liquid
+++ b/doc/user/index.html.textile.liquid
@@ -9,25 +9,25 @@ This guide is intended to introduce new users to the Arvados system.  It covers
 This user guide introduces how to use the major components of Arvados.  These are:
 
 * Keep: Content-addressable cluster file system designed for robust storage of very large files, such as whole genome sequences running in the hundreds of gigabytes
-* Crunch: Cluster compute engine designed for genomic analysis, e.g. alignment, variant calls
-* Metadata Database: Information about the genomic data stored in Keep, such as genomic traits, human subjects
-* Workbench: Web interface to Arvados components
+* Crunch: Cluster compute engine designed for genomic analysis, such as alignment and variant calls
+* Metadata Database: Information about the genomic data stored in Keep, such as genomic traits and human subjects
+* Workbench: Arvados' Web interface
 
 h2. Prerequisites
 
 To get the most value out of this guide, you should be comfortable with the following:
 
-# Using a secure shell client such as @ssh@ or @putty@ to log on to a remote server 
-# Using the unix command line shell @bash@
+# Using a secure shell client such as @ssh@ or @putty@ to log on to a remote server
+# Using the Unix command line shell @bash@
 # Viewing and editing files using a unix text editor such as @vi@, @emacs@, or @nano@
 # Programming in @python@
 # Revision control using @git@
 
 We also recommend you read the "Arvados Platform Overview":https://arvados.org/projects/arvados/wiki#Platform-Overview for an introduction and background information about Arvados.
 
-The examples in this guide uses the Arvados instance located at "https://{{ site.arvados_workbench_host }}/":https://{{ site.arvados_workbench_host }}/ .  If you are using a different Arvados instance replace @{{ site.arvados_workbench_host }}@ with your private instance in all of the examples in this guide.
+The examples in this guide use the Arvados instance located at "https://{{ site.arvados_workbench_host }}/":https://{{ site.arvados_workbench_host }}/.  If you are using a different Arvados instance, replace @{{ site.arvados_workbench_host }}@ with your private instance in all of the examples in this guide.
 
-The Arvados public beta instance is located at "https://workbench.qr1hi.arvadosapi.com/":https://workbench.qr1hi.arvadosapi.com/ .  You must have an account in order to use this service.  If you would like to request an account, please send an email to "arvados at curoverse.com":mailto:arvados at curoverse.com .
+The Arvados public beta instance is located at "https://workbench.qr1hi.arvadosapi.com/":https://workbench.qr1hi.arvadosapi.com/.  You must have an account in order to use this service.  If you would like to request an account, please send an email to "arvados@curoverse.com":mailto:arvados@curoverse.com.
 
 h2. Typographic conventions
 
@@ -35,15 +35,15 @@ This manual uses the following typographic conventions:
 
 <notextile>
 <ul>
-<li>Code blocks which are set aside from the text indicate user input to the system.  Commands that should be entered into a Unix shell are indicated by the directory where you should  enter the command ('~' indicates your home directory) followed by '$', followed by the highlighted <span class="userinput">command to enter</span> (do not enter the '$'), and possibly followed by example command output in black.  For example, the following block indicates that you should type "ls foo.*" while in your home directory and the expected output will be "foo.input" and "foo.output".
-<pre><code>~$ <span class="userinput">ls foo</span>
-foo
+<li>Code blocks which are set aside from the text indicate user input to the system.  Commands that should be entered into a Unix shell are indicated by the directory where you should  enter the command ('~' indicates your home directory) followed by '$', followed by the highlighted <span class="userinput">command to enter</span> (do not enter the '$'), and possibly followed by example command output in black.  For example, the following block indicates that you should type <code>ls foo.*</code> while in your home directory and the expected output will be "foo.input" and "foo.output".
+<pre><code>~$ <span class="userinput">ls foo.*</span>
+foo.input foo.output
 </code></pre>
 </li>
 
 <li>Code blocks inline with text emphasize specific <code>programs</code>, <code>files</code>, or <code>options</code> that are being discussed.</li>
-<li>Bold text emphasizes <b>specific items</b> to look when discussing Arvados Workbench pages.</li>
-<li>A sequence of steps separated by right arrows (<span class="rarr">→</span>) indicate a path the user should follow through the Arvados Workbench to access some piece of information under discussion.  The steps indicate a menu, hyperlink, column name, field name, or other label on the page that guide the user where to look or click.
+<li>Bold text emphasizes <b>specific items</b> to review on Arvados Workbench pages.</li>
+<li>A sequence of steps separated by right arrows (<span class="rarr">→</span>) indicates a path the user should follow through the Arvados Workbench.  The steps indicate a menu, hyperlink, column name, field name, or other label on the page that guides the user where to look or click.
 </li>
 </ul>
 </notextile>

-----------------------------------------------------------------------


hooks/post-receive
-- 



