[ARVADOS] updated: 1.1.4-501-gab7bb79f2

Git user git at public.curoverse.com
Mon Jun 25 15:01:40 EDT 2018


Summary of changes:
 doc/install/index.html.textile.liquid | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

  discards  0c8e52226a7adfe721149f0469bdf23ce2b60860 (commit)
  discards  43d3cb886e29f48ca6bb5f701172e7606709631f (commit)
  discards  478e18103e2b50d409160373d3cdfeed45b1c360 (commit)
  discards  7790a4a4feaca6a79ff97920ec0d74b74340ba13 (commit)
  discards  eb7c723ec24da830d4bb1ad13532f34cb9d78645 (commit)
  discards  6eadf7871f00e2c31330cd9434e04265eefc5bb5 (commit)
       via  ab7bb79f2da9f44eae4b1fd910680ca472b9c5a7 (commit)
       via  ba908bbe90213f1d50422611052c92280eae0dcd (commit)
       via  945258f22c3c02d12e0dada049b8c37fa5139af2 (commit)
       via  a31c1accfc353ed6bb3c9982ee694f98f6c965ec (commit)
       via  42d62e3d140360a179293b8995aaf535e8c4c30c (commit)
       via  059d04053d1a7ac62c796ad5757191b9c5dd5aae (commit)
       via  93661ec76c6c1affcde86563dccda5843a879239 (commit)

This update added new revisions after undoing existing revisions.  That is
to say, the old revision is not a strict subset of the new revision.  This
situation occurs when you --force push a change and generate a repository
containing something like this:

 * -- * -- B -- O -- O -- O (0c8e52226a7adfe721149f0469bdf23ce2b60860)
            \
             N -- N -- N (ab7bb79f2da9f44eae4b1fd910680ca472b9c5a7)

When this happens we assume that you've already had alert emails for all
of the O revisions, and so we here report only the revisions in the N
branch from the common base, B.

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.


commit ab7bb79f2da9f44eae4b1fd910680ca472b9c5a7
Merge: ba908bbe9 a200bee21
Author: Ward Vandewege <wvandewege at veritasgenetics.com>
Date:   Mon Jun 25 15:01:18 2018 -0400

    13650: Merge branch 'master' into 13650-document-arvados-kubernetes
    
    Arvados-DCO-1.1-Signed-off-by: Ward Vandewege <wvandewege at veritasgenetics.com>


commit ba908bbe90213f1d50422611052c92280eae0dcd
Author: Ward Vandewege <wvandewege at veritasgenetics.com>
Date:   Mon Jun 25 14:56:44 2018 -0400

    A series of documentation improvements based on review feedback.
    
    refs #13650
    
    Arvados-DCO-1.1-Signed-off-by: Ward Vandewege <wvandewege at veritasgenetics.com>

diff --git a/doc/install/arvados-on-kubernetes-GKE.html.textile.liquid b/doc/install/arvados-on-kubernetes-GKE.html.textile.liquid
new file mode 100644
index 000000000..88b2d5730
--- /dev/null
+++ b/doc/install/arvados-on-kubernetes-GKE.html.textile.liquid
@@ -0,0 +1,62 @@
+---
+layout: default
+navsection: installguide
+title: Arvados on Kubernetes - Google Kubernetes Engine
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+This page documents the setup of the prerequisites to run the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Google Kubernetes Engine@ (GKE).
+
+h3. Install tooling
+
+Install @gcloud@:
+
+* Follow the instructions at "https://cloud.google.com/sdk/downloads":https://cloud.google.com/sdk/downloads
+
+Install @kubectl@:
+
+<pre>
+$ gcloud components install kubectl
+</pre>
+
+Install @helm@:
+
+* Follow the instructions at "https://docs.helm.sh/using_helm/#installing-helm":https://docs.helm.sh/using_helm/#installing-helm
+
+h3. Boot the GKE cluster
+
+This can be done via the "cloud console":https://console.cloud.google.com/kubernetes/ or via the command line:
+
+<pre>
+$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2 --cluster-version 1.10
+</pre>
+
+It takes a few minutes for the cluster to be initialized.
+
+h3. Reserve a static IP
+
+Reserve a "static IP":https://console.cloud.google.com/networking/addresses in GCE. Make sure the IP is in the same region as your GKE cluster, and is of the "Regional" type.
+
+h3. Connect to the GKE cluster
+
+Via the web:
+* Click the "Connect" button next to your "GKE cluster":https://console.cloud.google.com/kubernetes/.
+* Execute the "Command-line access" command on your development machine.
+
+Alternatively, use this command:
+
+<pre>
+$ gcloud container clusters get-credentials <CLUSTERNAME> --zone us-central1-a --project <YOUR-PROJECT>
+</pre>
+
+Test the connection:
+
+<pre>
+$ kubectl get nodes
+</pre>
+
+Now proceed to the "Initialize helm on the Kubernetes cluster":/install/arvados-on-kubernetes.html#helm section.
diff --git a/doc/install/arvados-on-kubernetes-minikube.html.textile.liquid b/doc/install/arvados-on-kubernetes-minikube.html.textile.liquid
new file mode 100644
index 000000000..132b443df
--- /dev/null
+++ b/doc/install/arvados-on-kubernetes-minikube.html.textile.liquid
@@ -0,0 +1,34 @@
+---
+layout: default
+navsection: installguide
+title: Arvados on Kubernetes - Minikube
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+This page documents the setup of the prerequisites to run the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Minikube@.
+
+h3. Install tooling
+
+Install @kubectl@:
+
+* Follow the instructions at "https://kubernetes.io/docs/tasks/tools/install-kubectl/":https://kubernetes.io/docs/tasks/tools/install-kubectl/
+
+Install @helm@:
+
+* Follow the instructions at "https://docs.helm.sh/using_helm/#installing-helm":https://docs.helm.sh/using_helm/#installing-helm
+
+h3. Install Minikube
+
+Follow the instructions at "https://kubernetes.io/docs/setup/minikube/":https://kubernetes.io/docs/setup/minikube/
+
+Test the connection:
+
+<pre>
+$ kubectl get nodes
+</pre>
+
+Now proceed to the "Initialize helm on the Kubernetes cluster":/install/arvados-on-kubernetes.html#helm section.
diff --git a/doc/install/arvados-on-kubernetes.html.textile.liquid b/doc/install/arvados-on-kubernetes.html.textile.liquid
index 581e14097..716b2b935 100644
--- a/doc/install/arvados-on-kubernetes.html.textile.liquid
+++ b/doc/install/arvados-on-kubernetes.html.textile.liquid
@@ -18,11 +18,11 @@ This Helm chart does not retain any state after it is deleted. An Arvados cluste
 h2. Requirements
 
 * Kubernetes 1.10+ cluster with at least 3 nodes, 2 or more cores per node
-* `kubectl` and `helm` installed locally, and able to connect to your Kubernetes cluster
+* @kubectl@ and @helm@ installed locally, and able to connect to your Kubernetes cluster
 
-If that does not describe your environment, please see the "GKE":#GKE or "Minikube":#Minikube sections below to get set up with Kubernetes.
+If you do not have a Kubernetes cluster already set up, you can use "Google Kubernetes Engine":/install/arvados-on-kubernetes-GKE.html for multi-node development and testing, "Minikube":/install/arvados-on-kubernetes-minikube.html for single-node development and testing, or "another Kubernetes solution":https://kubernetes.io/docs/setup/pick-right-solution/.
 
-h2(#helm). Install helm on the Kubernetes cluster
+h2(#helm). Initialize helm on the Kubernetes cluster
 
 If you already have helm running on the Kubernetes cluster, proceed directly to "Start the Arvados cluster":#Start below.
 
@@ -33,7 +33,7 @@ $ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-ad
 $ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
 </pre>
 
-Test `helm` by running
+Test @helm@ by running
 
 <pre>
 $ helm ls
@@ -51,8 +51,7 @@ $ cd arvados-kubernetes/charts/arvados
 $ ./cert-gen.sh <IP ADDRESS>
 </pre>
 
-The `values.yaml` file contains a number of variables that can be modified.
-Specifically, you probably want to modify the values for
+The @values.yaml@ file contains a number of variables that can be modified. At a minimum, review and/or modify the values for
 
 <pre>
   adminUserEmail
@@ -78,7 +77,7 @@ After a few minutes, you can access Arvados Workbench at the IP address specifie
 
 * https://<IP ADDRESS>
 
-with the username and password specified in the `values.yaml` file.
+with the username and password specified in the @values.yaml@ file.
 
 Alternatively, use the Arvados cli tools or SDKs:
 
@@ -98,7 +97,7 @@ $ arv user current
 
 h2. Reload
 
-If you make changes to the Helm chart (e.g. to `values.yaml`), you can reload Arvados with
+If you make changes to the Helm chart (e.g. to @values.yaml@), you can reload Arvados with
 
 <pre>
 $ helm upgrade arvados .
@@ -113,79 +112,3 @@ This Helm chart does not retain any state after it is deleted. An Arvados cluste
 <pre>
 $ helm del arvados --purge
 </pre>
-
-h2(#GKE). GKE
-
-h3. Install tooling
-
-Install `gcloud`:
-
-* Follow the instructions at "https://cloud.google.com/sdk/downloads":https://cloud.google.com/sdk/downloads
-
-Install `kubectl`:
-
-<pre>
-$ gcloud components install kubectl
-</pre>
-
-Install `helm`:
-
-* Follow the instructions at "https://docs.helm.sh/using_helm/#installing-helm":https://docs.helm.sh/using_helm/#installing-helm
-
-h3. Boot the GKE cluster
-
-This can be done via the "cloud console":https://console.cloud.google.com/kubernetes/ or via the command line:
-
-<pre>
-$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2 --cluster-version 1.10.2-gke.3
-</pre>
-
-It takes a few minutes for the cluster to be initialized.
-
-h3. Reserve a static IP
-
-Reserve a "static IP":https://console.cloud.google.com/networking/addresses in GCE. Make sure the IP is in the same region as your GKE cluster, and is of the "Regional" type.
-
-h3. Connect to the GKE cluster.
-
-Via the web:
-* Click the "Connect" button next to your "GKE cluster"https://console.cloud.google.com/kubernetes/.
-* Execute the "Command-line access" command on your development machine.
-
-Alternatively, use this command:
-
-<pre>
-$ gcloud container clusters get-credentials <CLUSTERNAME> --zone us-central1-a --project <YOUR-PROJECT>
-</pre>
-
-Test the connection:
-
-<pre>
-$ kubectl get nodes
-</pre>
-
-Now proceed to the "Install helm on the Kubernetes cluster":#helm section.
-
-h2(#Minikube). Minikube
-
-h3. Install tooling
-
-Install `kubectl`:
-
-* Follow the instructions at "https://kubernetes.io/docs/tasks/tools/install-kubectl/":https://kubernetes.io/docs/tasks/tools/install-kubectl/
-
-Install `helm`:
-
-* Follow the instructions at "https://docs.helm.sh/using_helm/#installing-helm":https://docs.helm.sh/using_helm/#installing-helm
-
-h3. Install Minikube
-
-Follow the instructions at "https://kubernetes.io/docs/setup/minikube/":https://kubernetes.io/docs/setup/minikube/
-
-Test the connection:
-
-<pre>
-$ kubectl get nodes
-</pre>
-
-Now proceed to the "Install helm on the Kubernetes cluster":#helm section.
diff --git a/doc/install/index.html.textile.liquid b/doc/install/index.html.textile.liquid
index f0b4901ab..a47d30ac1 100644
--- a/doc/install/index.html.textile.liquid
+++ b/doc/install/index.html.textile.liquid
@@ -16,14 +16,14 @@ Arvados components can be installed and configured in a number of different ways
 <div class="offset1">
 table(table table-bordered table-condensed).
 ||||\6=. _Appropriate for_|
-||_Ease of installation_|_Multiuser/Networked_|_Demo_|_Workflow Development_|_Workflow QA/QC_|_Large Scale Production_|_Arvados Software Development_|_Arvados Software Development Testing_|
-|"Arvados-in-a-box":arvbox.html (arvbox)|Easy|no|yes|no|no|no|yes|yes|
-|"Arvados on Kubernetes":arvados-on-kubernetes.html|Easy ^1^|yes|yes|no ^2^|no ^2^|no ^2^|no|yes|
-|"Manual installation":install-manual-prerequisites.html|Complex|yes|yes|yes|yes|yes|no|no|
-|"Cloud demo":https://cloud.curoverse.com by Curoverse|N/A ^3^|yes|yes|no|no|no|no|no|
-|"Cluster Operation Subscription":https://curoverse.com/products by Curoverse|N/A ^3^|yes|yes|yes|yes|yes|no|no|
+||_Ease of installation_|_Multiuser/Networked_|_Workflow Development_|_Workflow Testing_|_Large Scale Production_|_Developing Arvados_|_Arvados Software Development Testing_|
+|"Arvados-in-a-box":arvbox.html (arvbox)|Easy|no|no|no|no|yes|yes|
+|"Arvados on Kubernetes":arvados-on-kubernetes.html|Easy ^1^|yes|no ^2^|no ^2^|no ^2^|no|yes|
+|"Manual installation":install-manual-prerequisites.html|Complex|yes|yes|yes|yes|no|no|
+|"Cloud demo":https://cloud.curoverse.com by Veritas Genetics|N/A ^3^|yes|no|no|no|no|no|
+|"Cluster Operation Subscription":https://curoverse.com/products by Veritas Genetics|N/A ^3^|yes|yes|yes|yes|yes|yes|
 </div>
 
 * ^1^ Assumes a Kubernetes cluster is available
-* ^2^ While Arvados on Kubernetes is not yet ready for production use, it is being developed toward that goal
-* ^3^ No installation necessary, Curoverse run and managed
+* ^2^ Arvados on Kubernetes is under development and not yet ready for production use
+* ^3^ No installation necessary, Veritas Genetics run and managed

commit 945258f22c3c02d12e0dada049b8c37fa5139af2
Author: Ward Vandewege <wvandewege at veritasgenetics.com>
Date:   Thu Jun 21 14:44:40 2018 -0400

    Add basic documentation for the new Arvados on Kubernetes install option.
    
    refs #13650
    
    Arvados-DCO-1.1-Signed-off-by: Ward Vandewege <wvandewege at veritasgenetics.com>

diff --git a/doc/_config.yml b/doc/_config.yml
index a64ff8ace..6bfaea090 100644
--- a/doc/_config.yml
+++ b/doc/_config.yml
@@ -159,6 +159,8 @@ navbar:
       - install/index.html.textile.liquid
     - Docker quick start:
       - install/arvbox.html.textile.liquid
+    - Arvados on Kubernetes:
+      - install/arvados-on-kubernetes.html.textile.liquid
     - Manual installation:
       - install/install-manual-prerequisites.html.textile.liquid
       - install/install-postgresql.html.textile.liquid
diff --git a/doc/install/arvados-on-kubernetes.html.textile.liquid b/doc/install/arvados-on-kubernetes.html.textile.liquid
new file mode 100644
index 000000000..581e14097
--- /dev/null
+++ b/doc/install/arvados-on-kubernetes.html.textile.liquid
@@ -0,0 +1,191 @@
+---
+layout: default
+navsection: installguide
+title: Arvados on Kubernetes
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Arvados on Kubernetes is implemented as a Helm chart.
+
+{% include 'notebox_begin_warning' %}
+This Helm chart does not retain any state after it is deleted. An Arvados cluster spun up with this Helm Chart is entirely ephemeral, and all data stored on the cluster will be deleted when it is shut down. This will be fixed in a future version.
+{% include 'notebox_end' %}
+
+h2. Requirements
+
+* Kubernetes 1.10+ cluster with at least 3 nodes, 2 or more cores per node
+* `kubectl` and `helm` installed locally, and able to connect to your Kubernetes cluster
+
+If that does not describe your environment, please see the "GKE":#GKE or "Minikube":#Minikube sections below to get set up with Kubernetes.
+
+h2(#helm). Install helm on the Kubernetes cluster
+
+If you already have helm running on the Kubernetes cluster, proceed directly to "Start the Arvados cluster":#Start below.
+
+<pre>
+$ helm init
+$ kubectl create serviceaccount --namespace kube-system tiller
+$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
+$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
+</pre>
+
+Test `helm` by running
+
+<pre>
+$ helm ls
+</pre>
+
+There should be no errors. The command will return nothing.
+
+h2(#Start). Start the Arvados cluster
+
+First, determine the IP address that the Arvados cluster will use to expose its API, Workbench, etc. If you want this Arvados cluster to be reachable from places other than the local machine, the IP address will need to be routable as appropriate.
+
+<pre>
+$ git clone https://github.com/curoverse/arvados-kubernetes.git
+$ cd arvados-kubernetes/charts/arvados
+$ ./cert-gen.sh <IP ADDRESS>
+</pre>
+
+The `values.yaml` file contains a number of variables that can be modified.
+Specifically, you probably want to modify the values for
+
+<pre>
+  adminUserEmail
+  adminUserPassword
+  superUserSecret
+  anonymousUserSecret
+</pre>
+
+Now start the Arvados cluster:
+
+<pre>
+$ helm install --name arvados . --set externalIP=<IP ADDRESS>
+</pre>
+
+At this point, you can use kubectl to see the Arvados cluster boot:
+
+<pre>
+$ kubectl get pods
+$ kubectl get svc
+</pre>
+
+After a few minutes, you can access Arvados Workbench at the IP address specified
+
+* https://<IP ADDRESS>
+
+with the username and password specified in the `values.yaml` file.
+
+Alternatively, use the Arvados cli tools or SDKs:
+
+Set the environment variables:
+
+<pre>
+$ export ARVADOS_API_TOKEN=<superUserSecret from values.yaml>
+$ export ARVADOS_API_HOST=<STATIC IP>:444
+$ export ARVADOS_API_HOST_INSECURE=true
+</pre>
+
+Test access with:
+
+<pre>
+$ arv user current
+</pre>
+
+h2. Reload
+
+If you make changes to the Helm chart (e.g. to `values.yaml`), you can reload Arvados with
+
+<pre>
+$ helm upgrade arvados .
+</pre>
+
+h2. Shut down
+
+{% include 'notebox_begin_warning' %}
+This Helm chart does not retain any state after it is deleted. An Arvados cluster spun up with this Helm Chart is entirely ephemeral, and <strong>all data stored on the Arvados cluster will be deleted</strong> when it is shut down. This will be fixed in a future version.
+{% include 'notebox_end' %}
+
+<pre>
+$ helm del arvados --purge
+</pre>
+
+h2(#GKE). GKE
+
+h3. Install tooling
+
+Install `gcloud`:
+
+* Follow the instructions at "https://cloud.google.com/sdk/downloads":https://cloud.google.com/sdk/downloads
+
+Install `kubectl`:
+
+<pre>
+$ gcloud components install kubectl
+</pre>
+
+Install `helm`:
+
+* Follow the instructions at "https://docs.helm.sh/using_helm/#installing-helm":https://docs.helm.sh/using_helm/#installing-helm
+
+h3. Boot the GKE cluster
+
+This can be done via the "cloud console":https://console.cloud.google.com/kubernetes/ or via the command line:
+
+<pre>
+$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2 --cluster-version 1.10.2-gke.3
+</pre>
+
+It takes a few minutes for the cluster to be initialized.
+
+h3. Reserve a static IP
+
+Reserve a "static IP":https://console.cloud.google.com/networking/addresses in GCE. Make sure the IP is in the same region as your GKE cluster, and is of the "Regional" type.
+
+h3. Connect to the GKE cluster.
+
+Via the web:
+* Click the "Connect" button next to your "GKE cluster"https://console.cloud.google.com/kubernetes/.
+* Execute the "Command-line access" command on your development machine.
+
+Alternatively, use this command:
+
+<pre>
+$ gcloud container clusters get-credentials <CLUSTERNAME> --zone us-central1-a --project <YOUR-PROJECT>
+</pre>
+
+Test the connection:
+
+<pre>
+$ kubectl get nodes
+</pre>
+
+Now proceed to the "Install helm on the Kubernetes cluster":#helm section.
+
+h2(#Minikube). Minikube
+
+h3. Install tooling
+
+Install `kubectl`:
+
+* Follow the instructions at "https://kubernetes.io/docs/tasks/tools/install-kubectl/":https://kubernetes.io/docs/tasks/tools/install-kubectl/
+
+Install `helm`:
+
+* Follow the instructions at "https://docs.helm.sh/using_helm/#installing-helm":https://docs.helm.sh/using_helm/#installing-helm
+
+h3. Install Minikube
+
+Follow the instructions at "https://kubernetes.io/docs/setup/minikube/":https://kubernetes.io/docs/setup/minikube/
+
+Test the connection:
+
+<pre>
+$ kubectl get nodes
+</pre>
+
+Now proceed to the "Install helm on the Kubernetes cluster":#helm section.
diff --git a/doc/install/index.html.textile.liquid b/doc/install/index.html.textile.liquid
index a9b297108..f0b4901ab 100644
--- a/doc/install/index.html.textile.liquid
+++ b/doc/install/index.html.textile.liquid
@@ -1,7 +1,7 @@
 ---
 layout: default
 navsection: installguide
-title: Installation overview
+title: Installation options
 ...
 {% comment %}
 Copyright (C) The Arvados Authors. All rights reserved.
@@ -11,7 +11,19 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 Arvados components run on GNU/Linux systems, and do not depend on any particular cloud operating stack.  Arvados supports Debian and derivatives such as Ubuntu, as well as Red Hat and derivatives such as CentOS.
 
-Arvados components can be installed and configured in a number of different ways.  Step-by-step instructions are available to perform a production installation from packages with manual configuration.  This method assumes you have several (virtual) machines at your disposal for running the various Arvados components.
+Arvados components can be installed and configured in a number of different ways.
 
-* "Docker quick start":arvbox.html
-* "Manual installation":install-manual-prerequisites.html
+<div class="offset1">
+table(table table-bordered table-condensed).
+||||\6=. _Appropriate for_|
+||_Ease of installation_|_Multiuser/Networked_|_Demo_|_Workflow Development_|_Workflow QA/QC_|_Large Scale Production_|_Arvados Software Development_|_Arvados Software Development Testing_|
+|"Arvados-in-a-box":arvbox.html (arvbox)|Easy|no|yes|no|no|no|yes|yes|
+|"Arvados on Kubernetes":arvados-on-kubernetes.html|Easy ^1^|yes|yes|no ^2^|no ^2^|no ^2^|no|yes|
+|"Manual installation":install-manual-prerequisites.html|Complex|yes|yes|yes|yes|yes|no|no|
+|"Cloud demo":https://cloud.curoverse.com by Curoverse|N/A ^3^|yes|yes|no|no|no|no|no|
+|"Cluster Operation Subscription":https://curoverse.com/products by Curoverse|N/A ^3^|yes|yes|yes|yes|yes|no|no|
+</div>
+
+* ^1^ Assumes a Kubernetes cluster is available
+* ^2^ While Arvados on Kubernetes is not yet ready for production use, it is being developed toward that goal
+* ^3^ No installation necessary, Curoverse run and managed

commit a31c1accfc353ed6bb3c9982ee694f98f6c965ec
Author: Peter Amstutz <pamstutz at veritasgenetics.com>
Date:   Wed Jun 20 13:19:47 2018 -0400

    13627: Make sure work_api is set on runtimeContext.
    
    Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <pamstutz at veritasgenetics.com>

diff --git a/sdk/cwl/arvados_cwl/__init__.py b/sdk/cwl/arvados_cwl/__init__.py
index a7e698b6d..bf419dd9b 100644
--- a/sdk/cwl/arvados_cwl/__init__.py
+++ b/sdk/cwl/arvados_cwl/__init__.py
@@ -463,6 +463,7 @@ class ArvCwlRunner(object):
         runtimeContext = runtimeContext.copy()
         runtimeContext.use_container = True
         runtimeContext.tmpdir_prefix = "tmp"
+        runtimeContext.work_api = self.work_api
 
         if self.work_api == "containers":
             if self.ignore_docker_for_reuse:
diff --git a/sdk/cwl/arvados_cwl/arvtool.py b/sdk/cwl/arvados_cwl/arvtool.py
index 5b1806b35..119acc303 100644
--- a/sdk/cwl/arvados_cwl/arvtool.py
+++ b/sdk/cwl/arvados_cwl/arvtool.py
@@ -20,6 +20,8 @@ class ArvadosCommandTool(CommandLineTool):
             return partial(ArvadosContainer, self.arvrunner)
         elif runtimeContext.work_api == "jobs":
             return partial(ArvadosJob, self.arvrunner)
+        else:
+            raise Exception("Unsupported work_api %s" % runtimeContext.work_api)
 
     def make_path_mapper(self, reffiles, stagedir, runtimeContext, separateDirs):
         if runtimeContext.work_api == "containers":
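The arvtool.py change above adds an explicit failure branch to the @work_api@ dispatch, so an unrecognized value fails loudly instead of silently returning @None@. A minimal standalone sketch of that pattern (the @ArvadosContainer@/@ArvadosJob@ classes here are simplified stand-ins, not the real runner classes):

```python
from functools import partial


class ArvadosContainer:
    """Stand-in for the real containers-API runner class."""
    def __init__(self, arvrunner):
        self.arvrunner = arvrunner


class ArvadosJob:
    """Stand-in for the real jobs-API runner class."""
    def __init__(self, arvrunner):
        self.arvrunner = arvrunner


def make_tool_factory(arvrunner, work_api):
    # Dispatch on work_api; raising on unknown values surfaces
    # misconfiguration immediately rather than deferring a TypeError
    # to the first call site that uses the missing factory.
    if work_api == "containers":
        return partial(ArvadosContainer, arvrunner)
    elif work_api == "jobs":
        return partial(ArvadosJob, arvrunner)
    else:
        raise Exception("Unsupported work_api %s" % work_api)
```

Without the final @else@, a typo such as @work_api="container"@ would make the function fall off the end and return @None@, exactly the failure mode the commit closes off.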

commit 42d62e3d140360a179293b8995aaf535e8c4c30c
Author: Tom Clegg <tclegg at veritasgenetics.com>
Date:   Mon Jun 18 10:40:26 2018 -0400

    13164: Fix priority of containers that have priority=0 due to races.
    
    Arvados-DCO-1.1-Signed-off-by: Tom Clegg <tclegg at veritasgenetics.com>

diff --git a/services/api/lib/update_priority.rb b/services/api/lib/update_priority.rb
index 724d2b20a..21cd74bae 100644
--- a/services/api/lib/update_priority.rb
+++ b/services/api/lib/update_priority.rb
@@ -3,8 +3,15 @@
 # SPDX-License-Identifier: AGPL-3.0
 
 module UpdatePriority
-  # Clean up after races: if container priority>0 but there are no
-  # committed container requests for it, reset priority to 0.
+  extend CurrentApiClient
+
+  # Clean up after races.
+  #
+  # If container priority>0 but there are no committed container
+  # requests for it, reset priority to 0.
+  #
+  # If container priority=0 but there are committed container requests
+  # for it with priority>0, update priority.
   def self.update_priority
     if !File.owned?(Rails.root.join('tmp'))
       Rails.logger.warn("UpdatePriority: not owner of #{Rails.root}/tmp, skipping")
@@ -13,7 +20,19 @@ module UpdatePriority
     lockfile = Rails.root.join('tmp', 'update_priority.lock')
     File.open(lockfile, File::RDWR|File::CREAT, 0600) do |f|
       return unless f.flock(File::LOCK_NB|File::LOCK_EX)
-      ActiveRecord::Base.connection.execute("UPDATE containers AS c SET priority=0 WHERE state='Queued' AND priority>0 AND uuid NOT IN (SELECT container_uuid FROM container_requests WHERE priority>0);")
+
+      # priority>0 but should be 0:
+      ActiveRecord::Base.connection.
+        exec_query("UPDATE containers AS c SET priority=0 WHERE state IN ('Queued', 'Locked', 'Running') AND priority>0 AND uuid NOT IN (SELECT container_uuid FROM container_requests WHERE priority>0 AND state='Committed');", 'UpdatePriority')
+
+      # priority==0 but should be >0:
+      act_as_system_user do
+        Container.
+          joins("JOIN container_requests ON container_requests.container_uuid=containers.uuid AND container_requests.state=#{Container.sanitize(ContainerRequest::Committed)} AND container_requests.priority>0").
+          where('containers.state IN (?) AND containers.priority=0 AND container_requests.uuid IS NOT NULL',
+                [Container::Queued, Container::Locked, Container::Running]).
+          map(&:update_priority!)
+      end
     end
   end
 
diff --git a/services/api/test/unit/update_priority_test.rb b/services/api/test/unit/update_priority_test.rb
new file mode 100644
index 000000000..2d28d3fb6
--- /dev/null
+++ b/services/api/test/unit/update_priority_test.rb
@@ -0,0 +1,30 @@
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+require 'test_helper'
+require 'update_priority'
+
+class UpdatePriorityTest < ActiveSupport::TestCase
+  test 'priority 0 but should be >0' do
+    uuid = containers(:running).uuid
+    ActiveRecord::Base.connection.exec_query('UPDATE containers SET priority=0 WHERE uuid=$1', 'test-setup', [[nil, uuid]])
+    assert_equal 0, Container.find_by_uuid(uuid).priority
+    UpdatePriority.update_priority
+    assert_operator 0, :<, Container.find_by_uuid(uuid).priority
+
+    uuid = containers(:queued).uuid
+    ActiveRecord::Base.connection.exec_query('UPDATE containers SET priority=0 WHERE uuid=$1', 'test-setup', [[nil, uuid]])
+    assert_equal 0, Container.find_by_uuid(uuid).priority
+    UpdatePriority.update_priority
+    assert_operator 0, :<, Container.find_by_uuid(uuid).priority
+  end
+
+  test 'priority>0 but should be 0' do
+    uuid = containers(:running).uuid
+    ActiveRecord::Base.connection.exec_query('DELETE FROM container_requests WHERE container_uuid=$1', 'test-setup', [[nil, uuid]])
+    assert_operator 0, :<, Container.find_by_uuid(uuid).priority
+    UpdatePriority.update_priority
+    assert_equal 0, Container.find_by_uuid(uuid).priority
+  end
+end
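The update_priority change above reconciles a container's priority with the committed container requests that point at it, in both directions. A sketch of the two rules using plain dicts (field names here mirror the Rails models informally; the real code works through SQL and the ORM, and the max-of-request-priorities step is an assumption about what @update_priority!@ computes):

```python
def update_priority(containers, requests):
    """Reconcile container priorities after races.

    containers: dict uuid -> {"state": str, "priority": int}
    requests:   list of {"container_uuid": str, "state": str, "priority": int}
    """
    active = {"Queued", "Locked", "Running"}
    for uuid, c in containers.items():
        if c["state"] not in active:
            continue
        committed = [r["priority"] for r in requests
                     if r["container_uuid"] == uuid
                     and r["state"] == "Committed"
                     and r["priority"] > 0]
        if c["priority"] > 0 and not committed:
            # Rule 1: priority>0 but no committed request wants it -> reset to 0.
            c["priority"] = 0
        elif c["priority"] == 0 and committed:
            # Rule 2: priority==0 but committed requests want it -> raise it.
            c["priority"] = max(committed)
    return containers
```

Note how the commit widens rule 1 from @Queued@ only to @Queued@/@Locked@/@Running@, and adds rule 2, which is the new case the unit tests above exercise.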

commit 059d04053d1a7ac62c796ad5757191b9c5dd5aae
Author: Lucas Di Pentima <ldipentima at veritasgenetics.com>
Date:   Tue Jun 19 14:02:07 2018 -0300

    7478: Replaces term 'preemptable' with 'preemptible'
    
    Also added config & documentation on EC2 example config file.
    
    Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <ldipentima at veritasgenetics.com>

diff --git a/lib/dispatchcloud/node_size.go b/lib/dispatchcloud/node_size.go
index b5fd0262a..4329f4f13 100644
--- a/lib/dispatchcloud/node_size.go
+++ b/lib/dispatchcloud/node_size.go
@@ -62,7 +62,7 @@ func ChooseInstanceType(cc *arvados.Cluster, ctr *arvados.Container) (best arvad
 		case it.Scratch < needScratch:
 		case it.RAM < needRAM:
 		case it.VCPUs < needVCPUs:
-		case it.Preemptable != ctr.SchedulingParameters.Preemptable:
+		case it.Preemptible != ctr.SchedulingParameters.Preemptible:
 		case it.Price == best.Price && (it.RAM < best.RAM || it.VCPUs < best.VCPUs):
 			// Equal price, but worse specs
 		default:
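The @ChooseInstanceType@ hunk above shows that a candidate type only qualifies when its preemptible flag matches the container's request exactly; among qualifying types, the cheapest wins. A Python sketch of that selection (hypothetical field names; the equal-price better-specs tie-break from the Go code is omitted for brevity):

```python
def choose_instance_type(menu, need_ram, need_vcpus, need_scratch, preemptible):
    """Return the cheapest instance type meeting the container's needs.

    menu: list of dicts with keys name, price, ram, vcpus, scratch, preemptible.
    A type qualifies only if its preemptible flag equals the requested one,
    mirroring the exact-match case in the Go switch statement.
    """
    best = None
    for it in menu:
        if it["scratch"] < need_scratch:
            continue  # not enough scratch space
        if it["ram"] < need_ram:
            continue  # not enough memory
        if it["vcpus"] < need_vcpus:
            continue  # not enough cores
        if it["preemptible"] != preemptible:
            continue  # preemptible flag must match the request exactly
        if best is None or it["price"] < best["price"]:
            best = it
    if best is None:
        raise ValueError("constraints not satisfiable by any instance type")
    return best
```

The test data in node_size_test.go follows the same shape: with @Preemptible: true@ requested, the cheap preemptible "best" type beats the equally-priced non-preemptible "almost best".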
diff --git a/lib/dispatchcloud/node_size_test.go b/lib/dispatchcloud/node_size_test.go
index d6b7c6bf9..1484f07a2 100644
--- a/lib/dispatchcloud/node_size_test.go
+++ b/lib/dispatchcloud/node_size_test.go
@@ -92,12 +92,12 @@ func (*NodeSizeSuite) TestChoose(c *check.C) {
 	}
 }
 
-func (*NodeSizeSuite) TestChoosePreemptable(c *check.C) {
+func (*NodeSizeSuite) TestChoosePreemptible(c *check.C) {
 	menu := []arvados.InstanceType{
-		{Price: 4.4, RAM: 4000000000, VCPUs: 8, Scratch: 2 * GiB, Preemptable: true, Name: "costly"},
+		{Price: 4.4, RAM: 4000000000, VCPUs: 8, Scratch: 2 * GiB, Preemptible: true, Name: "costly"},
 		{Price: 2.2, RAM: 2000000000, VCPUs: 4, Scratch: 2 * GiB, Name: "almost best"},
-		{Price: 2.2, RAM: 2000000000, VCPUs: 4, Scratch: 2 * GiB, Preemptable: true, Name: "best"},
-		{Price: 1.1, RAM: 1000000000, VCPUs: 2, Scratch: 2 * GiB, Preemptable: true, Name: "small"},
+		{Price: 2.2, RAM: 2000000000, VCPUs: 4, Scratch: 2 * GiB, Preemptible: true, Name: "best"},
+		{Price: 1.1, RAM: 1000000000, VCPUs: 2, Scratch: 2 * GiB, Preemptible: true, Name: "small"},
 	}
 	best, err := ChooseInstanceType(&arvados.Cluster{InstanceTypes: menu}, &arvados.Container{
 		Mounts: map[string]arvados.Mount{
@@ -109,7 +109,7 @@ func (*NodeSizeSuite) TestChoosePreemptable(c *check.C) {
 			KeepCacheRAM: 123456789,
 		},
 		SchedulingParameters: arvados.SchedulingParameters{
-			Preemptable: true,
+			Preemptible: true,
 		},
 	})
 	c.Check(err, check.IsNil)
@@ -117,5 +117,5 @@ func (*NodeSizeSuite) TestChoosePreemptable(c *check.C) {
 	c.Check(best.RAM >= 1234567890, check.Equals, true)
 	c.Check(best.VCPUs >= 2, check.Equals, true)
 	c.Check(best.Scratch >= 2*GiB, check.Equals, true)
-	c.Check(best.Preemptable, check.Equals, true)
+	c.Check(best.Preemptible, check.Equals, true)
 }
diff --git a/sdk/go/arvados/config.go b/sdk/go/arvados/config.go
index 841f95281..182cf8433 100644
--- a/sdk/go/arvados/config.go
+++ b/sdk/go/arvados/config.go
@@ -63,7 +63,7 @@ type InstanceType struct {
 	RAM          int64
 	Scratch      int64
 	Price        float64
-	Preemptable  bool
+	Preemptible  bool
 }
 
 // GetNodeProfile returns a NodeProfile for the given hostname. An
diff --git a/sdk/go/arvados/container.go b/sdk/go/arvados/container.go
index e71bcd5d0..5398d9d74 100644
--- a/sdk/go/arvados/container.go
+++ b/sdk/go/arvados/container.go
@@ -53,7 +53,7 @@ type RuntimeConstraints struct {
 // such as Partitions
 type SchedulingParameters struct {
 	Partitions  []string `json:"partitions"`
-	Preemptable bool     `json:"preemptable"`
+	Preemptible bool     `json:"preemptible"`
 }
 
 // ContainerList is an arvados#containerList resource.
diff --git a/services/api/app/models/container_request.rb b/services/api/app/models/container_request.rb
index 42fd247f7..799aa430f 100644
--- a/services/api/app/models/container_request.rb
+++ b/services/api/app/models/container_request.rb
@@ -29,7 +29,7 @@ class ContainerRequest < ArvadosModel
   before_validation :fill_field_defaults, :if => :new_record?
   before_validation :validate_runtime_constraints
   before_validation :set_container
-  before_validation :set_default_preemptable_scheduling_parameter
+  before_validation :set_default_preemptible_scheduling_parameter
   validates :command, :container_image, :output_path, :cwd, :presence => true
   validates :output_ttl, numericality: { only_integer: true, greater_than_or_equal_to: 0 }
   validates :priority, numericality: { only_integer: true, greater_than_or_equal_to: 0, less_than_or_equal_to: 1000 }
@@ -198,14 +198,14 @@ class ContainerRequest < ArvadosModel
     end
   end
 
-  def set_default_preemptable_scheduling_parameter
+  def set_default_preemptible_scheduling_parameter
     if self.state == Committed
-      # If preemptable instances (eg: AWS Spot Instances) are allowed,
+      # If preemptible instances (eg: AWS Spot Instances) are allowed,
       # ask them on child containers by default.
-      if Rails.configuration.preemptable_instances and
+      if Rails.configuration.preemptible_instances and
         !self.requesting_container_uuid.nil? and
-        self.scheduling_parameters['preemptable'].nil?
-          self.scheduling_parameters['preemptable'] = true
+        self.scheduling_parameters['preemptible'].nil?
+          self.scheduling_parameters['preemptible'] = true
       end
     end
   end
@@ -236,8 +236,8 @@ class ContainerRequest < ArvadosModel
             scheduling_parameters['partitions'].size)
             errors.add :scheduling_parameters, "partitions must be an array of strings"
       end
-      if !Rails.configuration.preemptable_instances and scheduling_parameters['preemptable']
-        errors.add :scheduling_parameters, "preemptable instances are not allowed"
+      if !Rails.configuration.preemptible_instances and scheduling_parameters['preemptible']
+        errors.add :scheduling_parameters, "preemptible instances are not allowed"
       end
     end
   end
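The Ruby hunks above implement two rules: an explicit preemptible request is rejected when the site configuration disallows it, and child container requests that leave the flag unset default to preemptible when the site allows it. A minimal Python sketch of that decision logic (function and parameter names are illustrative, not the Rails implementation):

```python
def resolve_preemptible(site_allows, requesting_container_uuid, params):
    """Mirror of the container_request.rb rules, as a plain function.

    site_allows: the preemptible_instances site configuration flag.
    requesting_container_uuid: None for a top-level (non-child) request.
    params: the request's scheduling_parameters dict (mutated in place).
    """
    # validate_scheduling_parameters: reject preemptible when disallowed.
    if not site_allows and params.get('preemptible'):
        raise ValueError("preemptible instances are not allowed")
    # set_default_preemptible_scheduling_parameter: default applies only
    # to child requests that did not set the flag themselves.
    if (site_allows and requesting_container_uuid is not None
            and params.get('preemptible') is None):
        params['preemptible'] = True
    return params
```

Note that an explicit `preemptible: false` on a child request survives, matching the test cases further below.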
diff --git a/services/api/config/application.default.yml b/services/api/config/application.default.yml
index 19b6f9b25..f51679135 100644
--- a/services/api/config/application.default.yml
+++ b/services/api/config/application.default.yml
@@ -289,10 +289,10 @@ common:
   ### Crunch, DNS & compute node management
   ###
 
-  # Preemptable instance support (e.g. AWS Spot Instances)
-  # When true, child containers will get created with the preemptable
+  # Preemptible instance support (e.g. AWS Spot Instances)
+  # When true, child containers will get created with the preemptible
   # scheduling parameter set.
-  preemptable_instances: false
+  preemptible_instances: false
 
   # Docker image to be used when none found in runtime_constraints of a job
   default_docker_image_for_jobs: false
diff --git a/services/api/test/unit/container_request_test.rb b/services/api/test/unit/container_request_test.rb
index b36ff06bb..8071e05ce 100644
--- a/services/api/test/unit/container_request_test.rb
+++ b/services/api/test/unit/container_request_test.rb
@@ -760,16 +760,16 @@ class ContainerRequestTest < ActiveSupport::TestCase
   [
     [false, ActiveRecord::RecordInvalid],
     [true, nil],
-  ].each do |preemptable_conf, expected|
-    test "having Rails.configuration.preemptable_instances=#{preemptable_conf}, create preemptable container request and verify #{expected}" do
-      sp = {"preemptable" => true}
+  ].each do |preemptible_conf, expected|
+    test "having Rails.configuration.preemptible_instances=#{preemptible_conf}, create preemptible container request and verify #{expected}" do
+      sp = {"preemptible" => true}
       common_attrs = {cwd: "test",
                       priority: 1,
                       command: ["echo", "hello"],
                       output_path: "test",
                       scheduling_parameters: sp,
                       mounts: {"test" => {"kind" => "json"}}}
-      Rails.configuration.preemptable_instances = preemptable_conf
+      Rails.configuration.preemptible_instances = preemptible_conf
       set_user_from_auth :active
 
       cr = create_minimal_req!(common_attrs)
@@ -790,15 +790,15 @@ class ContainerRequestTest < ActiveSupport::TestCase
     'zzzzz-dz642-runningcontainr',
     nil,
   ].each do |requesting_c|
-    test "having preemptable instances active on the API server, a committed #{requesting_c.nil? ? 'non-':''}child CR should not ask for preemptable instance if parameter already set to false" do
+    test "having preemptible instances active on the API server, a committed #{requesting_c.nil? ? 'non-':''}child CR should not ask for preemptible instance if parameter already set to false" do
       common_attrs = {cwd: "test",
                       priority: 1,
                       command: ["echo", "hello"],
                       output_path: "test",
-                      scheduling_parameters: {"preemptable" => false},
+                      scheduling_parameters: {"preemptible" => false},
                       mounts: {"test" => {"kind" => "json"}}}
 
-      Rails.configuration.preemptable_instances = true
+      Rails.configuration.preemptible_instances = true
       set_user_from_auth :active
 
       if requesting_c
@@ -813,7 +813,7 @@ class ContainerRequestTest < ActiveSupport::TestCase
       cr.state = ContainerRequest::Committed
       cr.save!
 
-      assert_equal false, cr.scheduling_parameters['preemptable']
+      assert_equal false, cr.scheduling_parameters['preemptible']
     end
   end
 
@@ -822,15 +822,15 @@ class ContainerRequestTest < ActiveSupport::TestCase
     [true, nil, nil],
     [false, 'zzzzz-dz642-runningcontainr', nil],
     [false, nil, nil],
-  ].each do |preemptable_conf, requesting_c, schedule_preemptable|
-    test "having Rails.configuration.preemptable_instances=#{preemptable_conf}, #{requesting_c.nil? ? 'non-':''}child CR should #{schedule_preemptable ? '':'not'} ask for preemptable instance by default" do
+  ].each do |preemptible_conf, requesting_c, schedule_preemptible|
+    test "having Rails.configuration.preemptible_instances=#{preemptible_conf}, #{requesting_c.nil? ? 'non-':''}child CR should #{schedule_preemptible ? '':'not'} ask for preemptible instance by default" do
       common_attrs = {cwd: "test",
                       priority: 1,
                       command: ["echo", "hello"],
                       output_path: "test",
                       mounts: {"test" => {"kind" => "json"}}}
 
-      Rails.configuration.preemptable_instances = preemptable_conf
+      Rails.configuration.preemptible_instances = preemptible_conf
       set_user_from_auth :active
 
       if requesting_c
@@ -845,7 +845,7 @@ class ContainerRequestTest < ActiveSupport::TestCase
       cr.state = ContainerRequest::Committed
       cr.save!
 
-      assert_equal schedule_preemptable, cr.scheduling_parameters['preemptable']
+      assert_equal schedule_preemptible, cr.scheduling_parameters['preemptible']
     end
   end
 
diff --git a/services/nodemanager/arvnodeman/computenode/driver/ec2.py b/services/nodemanager/arvnodeman/computenode/driver/ec2.py
index c453b91cc..2b1564279 100644
--- a/services/nodemanager/arvnodeman/computenode/driver/ec2.py
+++ b/services/nodemanager/arvnodeman/computenode/driver/ec2.py
@@ -91,7 +91,7 @@ class ComputeNodeDriver(BaseComputeNodeDriver):
                     "VolumeSize": volsize,
                     "VolumeType": "gp2"
                 }}]
-        if size.preemptable:
+        if size.preemptible:
             # Request a Spot instance for this node
             kw['ex_spot_market'] = True
         return kw
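In the EC2 driver, the renamed flag simply toggles libcloud's spot-market keyword when building the `create_node` arguments. A trimmed-down sketch of that branch (`ex_spot_market` is the real libcloud EC2 keyword; the surrounding kwargs and the helper name are illustrative):

```python
def create_node_kwargs(size, base_kwargs):
    """Add the spot-market flag when the configured size is preemptible."""
    kw = dict(base_kwargs)
    if size.preemptible:
        # Request a Spot instance instead of an on-demand one.
        kw['ex_spot_market'] = True
    return kw
```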
diff --git a/services/nodemanager/arvnodeman/config.py b/services/nodemanager/arvnodeman/config.py
index f9724a8fc..8c6757e51 100644
--- a/services/nodemanager/arvnodeman/config.py
+++ b/services/nodemanager/arvnodeman/config.py
@@ -151,15 +151,15 @@ class NodeManagerConfig(ConfigParser.SafeConfigParser):
         section_types = {
             'instance_type': str,
             'price': float,
-            'preemptable': bool,
+            'preemptible': bool,
         }
         for sec_name in self.sections():
             sec_words = sec_name.split(None, 2)
             if sec_words[0] != 'Size':
                 continue
             size_spec = self.get_section(sec_name, section_types, int)
-            if 'preemptable' not in size_spec:
-                size_spec['preemptable'] = False
+            if 'preemptible' not in size_spec:
+                size_spec['preemptible'] = False
             if 'instance_type' not in size_spec:
                 # Assume instance type is Size name if missing
                 size_spec['instance_type'] = sec_words[1]
diff --git a/services/nodemanager/arvnodeman/jobqueue.py b/services/nodemanager/arvnodeman/jobqueue.py
index 99064b398..e91764474 100644
--- a/services/nodemanager/arvnodeman/jobqueue.py
+++ b/services/nodemanager/arvnodeman/jobqueue.py
@@ -38,7 +38,7 @@ class ServerCalculator(object):
             self.cores = 0
             self.bandwidth = 0
             self.price = 9999999
-            self.preemptable = False
+            self.preemptible = False
             self.extra = {}
 
         def meets_constraints(self, **kwargs):
@@ -58,7 +58,7 @@ class ServerCalculator(object):
                 self.disk = 0
             self.scratch = self.disk * 1000
             self.ram = int(self.ram * node_mem_scaling)
-            self.preemptable = False
+            self.preemptible = False
             for name, override in kwargs.iteritems():
                 if name == 'instance_type': continue
                 if not hasattr(self, name):
diff --git a/services/nodemanager/doc/ec2.example.cfg b/services/nodemanager/doc/ec2.example.cfg
index a1fa2dc32..117f9b224 100644
--- a/services/nodemanager/doc/ec2.example.cfg
+++ b/services/nodemanager/doc/ec2.example.cfg
@@ -169,12 +169,24 @@ security_groups = idstring1, idstring2
 # You may also want to define the amount of scratch space (expressed
 # in GB) for Crunch jobs.  You can also override Amazon's provided
 # data fields (such as price per hour) by setting them here.
+#
+# Additionally, you can ask for a preemptible instance (AWS's spot instance)
+# by adding the appropriate boolean configuration flag. If you want to have
+# both spot & reserved versions of the same size, you can do so by renaming
+# the Size section and specifying the instance type inside it.
 
 [Size m4.large]
 cores = 2
 price = 0.126
 scratch = 100
 
+[Size m4.large.spot]
+instance_type = m4.large
+preemptible = true
+cores = 2
+price = 0.126
+scratch = 100
+
 [Size m4.xlarge]
 cores = 4
 price = 0.252
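The comment added to the example config can be exercised directly: `[Size m4.large]` and `[Size m4.large.spot]` resolve to the same EC2 instance type and differ only in the preemptible flag. A small sketch of that parsing, applying the same defaults as nodemanager's config.py (written against Python 3's `configparser` for brevity; nodemanager itself uses Python 2's `SafeConfigParser`):

```python
import configparser

EXAMPLE_CFG = """
[Size m4.large]
cores = 2
price = 0.126

[Size m4.large.spot]
instance_type = m4.large
preemptible = true
cores = 2
price = 0.126
"""

def load_sizes(text):
    """Parse [Size ...] sections, defaulting instance_type and preemptible."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    sizes = {}
    for sec in cp.sections():
        words = sec.split(None, 2)
        if words[0] != 'Size':
            continue
        sizes[words[1]] = {
            # Assume instance type is the Size name if missing.
            'instance_type': cp.get(sec, 'instance_type', fallback=words[1]),
            # preemptible is False by default.
            'preemptible': cp.getboolean(sec, 'preemptible', fallback=False),
            'price': cp.getfloat(sec, 'price'),
        }
    return sizes
```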
diff --git a/services/nodemanager/tests/test_computenode_driver_ec2.py b/services/nodemanager/tests/test_computenode_driver_ec2.py
index 9442a8c24..520c0dc0c 100644
--- a/services/nodemanager/tests/test_computenode_driver_ec2.py
+++ b/services/nodemanager/tests/test_computenode_driver_ec2.py
@@ -73,10 +73,10 @@ class EC2ComputeNodeDriverTestCase(testutil.DriverTestMixin, unittest.TestCase):
             create_method.call_args[1].get('ex_metadata', {'arg': 'missing'}).items()
         )
 
-    def test_create_preemptable_instance(self):
+    def test_create_preemptible_instance(self):
         arv_node = testutil.arvados_node_mock()
         driver = self.new_driver()
-        driver.create_node(testutil.MockSize(1, preemptable=True), arv_node)
+        driver.create_node(testutil.MockSize(1, preemptible=True), arv_node)
         create_method = self.driver_mock().create_node
         self.assertTrue(create_method.called)
         self.assertEqual(
diff --git a/services/nodemanager/tests/test_config.py b/services/nodemanager/tests/test_config.py
index 9a48c7cda..8002b3b92 100644
--- a/services/nodemanager/tests/test_config.py
+++ b/services/nodemanager/tests/test_config.py
@@ -29,9 +29,9 @@ creds = dummy_creds
 cores = 1
 price = 0.8
 
-[Size 1.preemptable]
+[Size 1.preemptible]
 instance_type = 1
-preemptable = true
+preemptible = true
 cores = 1
 price = 0.8
 
@@ -65,17 +65,17 @@ testlogger = INFO
         self.assertEqual('Small', size.name)
         self.assertEqual(1, kwargs['cores'])
         self.assertEqual(0.8, kwargs['price'])
-        # preemptable is False by default
-        self.assertEqual(False, kwargs['preemptable'])
+        # preemptible is False by default
+        self.assertEqual(False, kwargs['preemptible'])
         # instance_type == arvados node size id by default
         self.assertEqual(kwargs['id'], kwargs['instance_type'])
-        # Now retrieve the preemptable version
+        # Now retrieve the preemptible version
         size, kwargs = sizes[1]
         self.assertEqual('Small', size.name)
-        self.assertEqual('1.preemptable', kwargs['id'])
+        self.assertEqual('1.preemptible', kwargs['id'])
         self.assertEqual(1, kwargs['cores'])
         self.assertEqual(0.8, kwargs['price'])
-        self.assertEqual(True, kwargs['preemptable'])
+        self.assertEqual(True, kwargs['preemptible'])
         self.assertEqual('1', kwargs['instance_type'])
 
 
diff --git a/services/nodemanager/tests/testutil.py b/services/nodemanager/tests/testutil.py
index 2ec13c0b8..ee475efe7 100644
--- a/services/nodemanager/tests/testutil.py
+++ b/services/nodemanager/tests/testutil.py
@@ -78,7 +78,7 @@ class MockShutdownTimer(object):
 
 
 class MockSize(object):
-    def __init__(self, factor, preemptable=False):
+    def __init__(self, factor, preemptible=False):
         self.id = 'z{}.test'.format(factor)
         self.name = 'test size '+self.id
         self.ram = 128 * factor
@@ -88,7 +88,7 @@ class MockSize(object):
         self.price = float(factor)
         self.extra = {}
         self.real = self
-        self.preemptable = preemptable
+        self.preemptible = preemptible
 
     def __eq__(self, other):
         return self.id == other.id

commit 93661ec76c6c1affcde86563dccda5843a879239
Author: Peter Amstutz <pamstutz at veritasgenetics.com>
Date:   Tue Jun 19 16:29:25 2018 -0400

    Add mypy-extensions to build.list refs #13627
    
    Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <pamstutz at veritasgenetics.com>

diff --git a/build/build.list b/build/build.list
index fa1a260c3..ef6407031 100644
--- a/build/build.list
+++ b/build/build.list
@@ -49,3 +49,4 @@ all|rdflib-jsonld|0.4.0|2|python|all
 all|futures|3.0.5|2|python|all
 all|future|0.16.0|2|python|all
 all|future|0.16.0|2|python3|all
+all|mypy-extensions|0.3.0|1|python|all
