[ARVADOS] created: 63511ce5f1dc6d6e38cfafe240f0c907ad11748e
git at public.curoverse.com
Mon Mar 3 15:19:07 EST 2014
at 63511ce5f1dc6d6e38cfafe240f0c907ad11748e (commit)
commit 63511ce5f1dc6d6e38cfafe240f0c907ad11748e
Author: radhika chippada <radhika at radhika.curoverse>
Date: Mon Mar 3 15:17:42 2014 -0500
minor updates to user guide
diff --git a/doc/_config.yml b/doc/_config.yml
index 9a95734..4ca1816 100644
--- a/doc/_config.yml
+++ b/doc/_config.yml
@@ -1,5 +1,5 @@
exclude: ["Rakefile", "tmp", "vendor"]
-baseurl: file:///home/tetron/work/arvados/doc/.site
+baseurl: /doc
arvados_api_host: qr1hi.arvadosapi.com
navbar:
@@ -15,8 +15,8 @@ navbar:
- user/tutorials/tutorial-job1.html.textile.liquid
- user/tutorials/tutorial-firstscript.html.textile.liquid
- user/tutorials/tutorial-job-debug.html.textile.liquid
- - user/tutorials/tutorial-new-pipeline.html.textile.liquid
- user/tutorials/tutorial-parallel.html.textile.liquid
+ - user/tutorials/tutorial-new-pipeline.html.textile.liquid
- user/tutorials/tutorial-trait-search.html.textile.liquid
- user/tutorials/tutorial-gatk-variantfiltration.html.textile.liquid
- user/tutorials/running-external-program.html.textile.liquid
diff --git a/doc/_includes/_parallel_hash_script_py.liquid b/doc/_includes/_parallel_hash_script_py.liquid
index 8e39c0b..a914e04 100644
--- a/doc/_includes/_parallel_hash_script_py.liquid
+++ b/doc/_includes/_parallel_hash_script_py.liquid
@@ -3,7 +3,7 @@
import hashlib
import arvados
-# Jobs consist of one of more tasks. A task is a single invocation of
+# Jobs consist of one or more tasks. A task is a single invocation of
# a crunch script.
# Get the current task
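The comment fixed above describes the Crunch model: a job consists of one or more tasks, each a single invocation of the script. A minimal pure-Python sketch of that pattern (standard-library `hashlib` only, not the Crunch SDK; the inputs and `run_task` helper are illustrative):

```python
import hashlib

# Illustrative stand-in for the parallel hash script: a "job" with one
# "task" per input, where each task hashes a single input.
inputs = [b"foo", b"bar"]

def run_task(data):
    # Each task is one invocation of the script body.
    return hashlib.md5(data).hexdigest()

results = {data.decode(): run_task(data) for data in inputs}
print(results["foo"])  # acbd18db4cc2f85cedef654fccc4a4d8
```

In real Crunch the scheduling is done through the SDK rather than a local loop; this only shows the one-task-per-input shape.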
diff --git a/doc/user/reference/sdk-cli.html.textile.liquid b/doc/user/reference/sdk-cli.html.textile.liquid
index 6d98957..c795631 100644
--- a/doc/user/reference/sdk-cli.html.textile.liquid
+++ b/doc/user/reference/sdk-cli.html.textile.liquid
@@ -15,8 +15,10 @@ h3. Usage
h4. Global options
-- @--format=json@ := Output response as JSON
+- @--format=json@ := Output response as JSON. This is the default format.
+
- @--format=yaml@ := Output response as YAML
+
- @--format=uuid@ := Output only the UUIDs of object(s) in the API response, one per line.
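To make the difference between the documented output formats concrete, here is a small sketch (hypothetical response data, standard-library `json` only) of what `--format=json` versus `--format=uuid` produce from the same API response:

```python
import json

# Hypothetical API list response, trimmed to two fields.
response = {"items": [{"uuid": "qr1hi-8i9sb-xxxxxxxxxxxxxxx",
                       "state": "Complete"}]}

# --format=json (the default): the whole response body.
as_json = json.dumps(response, indent=2)

# --format=uuid: only the UUIDs of the objects, one per line.
as_uuid = "\n".join(item["uuid"] for item in response["items"])
print(as_uuid)
```

The UUID form is convenient for piping object identifiers into follow-up shell commands.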
diff --git a/doc/user/tutorials/tutorial-firstscript.html.textile.liquid b/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
index 4c49d19..5c3d326 100644
--- a/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-firstscript.html.textile.liquid
@@ -66,6 +66,11 @@ Make the file executable:
notextile. <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">chmod +x hash.py</span></code></pre>
+{% include 'notebox_begin' %}
+The steps below describe how to execute the script after committing changes to git. To test the script locally, please see the "debugging a crunch script":tutorial-job-debug.html page.
+
+{% include 'notebox_end' %}
+
Next, add the file to @git@ staging. This tells @git@ that the file should be included on the next commit.
notextile. <pre><code>~/<b>you</b>/crunch_scripts$ <span class="userinput">git add hash.py</span></code></pre>
diff --git a/doc/user/tutorials/tutorial-gatk-variantfiltration.html.textile.liquid b/doc/user/tutorials/tutorial-gatk-variantfiltration.html.textile.liquid
index d69b122..3bf05a5 100644
--- a/doc/user/tutorials/tutorial-gatk-variantfiltration.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-gatk-variantfiltration.html.textile.liquid
@@ -8,7 +8,7 @@ title: "Using GATK with Arvados"
h1. Using GATK with Arvados
-This tutorials demonstrates how to use the Genome Analysis Toolkit (GATK) with Arvados. In this example we will install GATK and then create a VariantFiltration job to assign pass/fail scores to variants in a VCF file.
+This tutorial demonstrates how to use the Genome Analysis Toolkit (GATK) with Arvados. In this example we will install GATK and then create a VariantFiltration job to assign pass/fail scores to variants in a VCF file.
*This tutorial assumes that you are "logged into an Arvados VM instance":{{site.baseurl}}/user/getting_started/ssh-access.html#login, and have a "working environment.":{{site.baseurl}}/user/getting_started/check-environment.html*
diff --git a/doc/user/tutorials/tutorial-job1.html.textile.liquid b/doc/user/tutorials/tutorial-job1.html.textile.liquid
index b66d2e6..a0dd896 100644
--- a/doc/user/tutorials/tutorial-job1.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-job1.html.textile.liquid
@@ -91,7 +91,7 @@ Use @arv job create@ to actually submit the job. It should print out a JSON obj
</code></pre>
</notextile>
-The job is new queued and will start running as soon as it reaches the front of the queue. Fields to pay attention to include:
+The job is now queued and will start running as soon as it reaches the front of the queue. Fields to pay attention to include:
* @"uuid"@ is the unique identifier for this specific job
* @"script_version"@ is the actual revision of the script used. This is useful if the version was described using the "repository:branch" format.
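Since the tutorial tells the reader to pick fields out of the JSON object that `arv job create` prints, a short sketch of doing that programmatically (the response values are hypothetical placeholders):

```python
import json

# Hypothetical `arv job create` response, trimmed to the fields the
# tutorial calls out.
response_text = """
{
  "uuid": "qr1hi-8i9sb-0000000000000000",
  "script_version": "880b55fb4470b5f8c7d08046e66b6fad6eacf5fa"
}
"""
job = json.loads(response_text)
print(job["uuid"])           # the unique identifier for this job
print(job["script_version"]) # the resolved revision actually used
```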
diff --git a/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid b/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
index 4439641..d128b4b 100644
--- a/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-new-pipeline.html.textile.liquid
@@ -41,7 +41,7 @@ Next, create a file that contains the pipeline definition:
"script_parameters":{
"input": "887cd41e9c613463eab2f0d885c6dd96+83"
},
- "script_version":"tetron:master"
+ "script_version":"you:master"
},
"filter":{
"script":"0-filter.py",
@@ -50,7 +50,7 @@ Next, create a file that contains the pipeline definition:
"output_of":"do_hash"
}
},
- "script_version":"tetron:master"
+ "script_version":"you:master"
}
}
}
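The pipeline definition above chains components via `"output_of"`. A minimal sketch (plain Python, not the Arvados API; the `dependencies` helper is invented for illustration) of walking such a definition and checking that every `"output_of"` reference names a real component:

```python
# Simplified pipeline definition mirroring the tutorial's shape.
pipeline = {
    "components": {
        "do_hash": {"script": "hash.py", "script_version": "you:master"},
        "filter": {
            "script": "0-filter.py",
            "script_parameters": {"input": {"output_of": "do_hash"}},
            "script_version": "you:master",
        },
    }
}

def dependencies(components):
    # Map each component name to the components it consumes output from.
    deps = {}
    for name, comp in components.items():
        params = comp.get("script_parameters", {})
        deps[name] = [v["output_of"] for v in params.values()
                      if isinstance(v, dict) and "output_of" in v]
    return deps

deps = dependencies(pipeline["components"])
# Every referenced producer must exist in the pipeline.
assert all(d in pipeline["components"] for ds in deps.values() for d in ds)
print(deps["filter"])  # ['do_hash']
```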
diff --git a/doc/user/tutorials/tutorial-parallel.html.textile.liquid b/doc/user/tutorials/tutorial-parallel.html.textile.liquid
index 36ccbcb..be78506 100644
--- a/doc/user/tutorials/tutorial-parallel.html.textile.liquid
+++ b/doc/user/tutorials/tutorial-parallel.html.textile.liquid
@@ -83,3 +83,6 @@ md5sum.txt
h2. The one job per file pattern
This example demonstrates how to schedule a new task per file. Because this is a common pattern, the Crunch Python API contains a convenience function to "queue a task for each input file":{{site.baseurl}}/sdk/python/crunch-utility-libraries.html#one_task_per_input which reduces the amount of boilerplate code required to handle parallel jobs.
+
+Next, "Constructing a Crunch pipeline":tutorial-new-pipeline.html
+
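The "one job per file" pattern referenced in the last hunk can be sketched in plain Python (a simplified illustration, not the Crunch SDK's `one_task_per_input` convenience function; the local `task_queue` stands in for the real task API):

```python
# Simplified "one task per input file" pattern: a setup step enumerates
# the inputs and queues one new task per file.
input_files = ["a.txt", "b.txt", "c.txt"]
task_queue = []

def schedule_tasks(files):
    for f in files:
        # In Crunch this would go through the Arvados task API; here we
        # just record the per-file task parameters.
        task_queue.append({"parameters": {"input": f}})

schedule_tasks(input_files)
print(len(task_queue))  # 3
```

The SDK convenience function linked in the tutorial removes exactly this boilerplate.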
-----------------------------------------------------------------------
hooks/post-receive
--
More information about the arvados-commits mailing list