ODA Kubernetes Deployment

Deploying ODA using Helm charts

Deploying the ODA umbrella Helm chart will deploy the ODA REST client, an instance of PostgreSQL, and pgAdmin.

To install the umbrella chart, navigate to the root directory of the repository and run

$ make k8s-install-chart

To uninstall the chart

$ make k8s-uninstall-chart

Inspect the deployment state with

$ make k8s-watch

Backend Storage

Note

These instructions assume you are using the standard SKA Minikube installation that is configured and installed via the ska-cicd-deploy-minikube project.

There are different implementations of the ODA - memory, filesystem and postgres - which can be imported and used as required.

The ODA REST API can be configured to use any of these at runtime, using the Helm values.
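For example, the in-memory implementation is useful for quick, throwaway testing, as nothing survives a pod restart. A minimal sketch, assuming the in-memory backend is selected with `type: memory` in the same values layout as the filesystem and postgres examples below:

```yaml
rest:
  backend:
    # assumed backend type name; no further configuration is needed
    # as entities are held only in process memory
    type: memory
```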

Filesystem

To configure a filesystem backend, with a Kubernetes persistent volume which provides persistence that survives Kubernetes redeployments and pod restarts, set the following values for the Helm chart:

rest:
  ...
  backend:
    type: filesystem
    filesystem:
      # true to mount persistent volume, false for non-persistent storage
      use_pv: true
      # path on Kubernetes host to use for entity storage
      pv_hostpath: /mnt/ska-db-oda-persistent-storage
  ...

For a default installation with no Helm value overrides, access the PersistentVolume on the minikube node as follows:

$ # SSH to minikube cluster
$ minikube ssh
$ # navigate to default ODA storage directory
$ cd /mnt/ska-db-oda-persistent-storage/

Files can also be stored outside minikube by making pv_hostpath match the MOUNT_FROM and MOUNT_TO values set when rebuilding minikube with the ska-cicd-deploy-minikube project. For example, if minikube is rebuilt with

$ make MOUNT_FROM=$HOME/oda MOUNT_TO=$HOME/oda all

and pv_hostpath is set to match $MOUNT_TO, entities will be stored directly on your local filesystem in the $HOME/oda directory. See the bottom of this page for a full example.

Postgres

To use PostgreSQL as a backend, a running instance of PostgreSQL must be available, and the Helm values set as below:

rest:
  ...
  backend:
    type: postgres
    postgres:
      host:
      port: 5432
      db:
        name: postgres
        table:
          sbd: tab_oda_sbd
  ...

If using the local PostgreSQL instance deployed as part of the umbrella chart, the host will be set at deploy time in the Makefile.

There are also relevant values in the postgresql dependency chart (see the 'PostgreSQL deployment' section below for more details). By default, the make k8s-install-chart target sets the postgres values to those required to connect to the PostgreSQL instance also deployed by the chart.
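To point the ODA at an existing PostgreSQL instance outside the chart instead, the same values can be overridden. A sketch with a hypothetical hostname (the host below is illustrative, not a real service):

```yaml
rest:
  backend:
    type: postgres
    postgres:
      # hypothetical external host; replace with your own instance
      host: postgres.example.org
      port: 5432
      db:
        name: postgres
        table:
          sbd: tab_oda_sbd
```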

Enabling ingress for local testing

The ODA REST server can be exposed to allow local testing. This is achieved by setting ingress.enabled to true when deploying the chart. For example,

Note

The command below to set ingress differs from the usual K8S_CHART_PARAMS="--set ska-db-oda.ingress.enabled=true", as there are Postgres parameters also set in the Makefile which should not be overwritten.

$ # from the ODA project directory, install the ODA including the custom values
$ make k8s-install-chart

$ # capture the minikube IP address in an environment variable
$ export MINIKUBE_IP=`minikube ip`

$ # capture the ODA deployment namespace in an environment variable
$ export ODA_NAMESPACE=`make k8s-vars | grep 'Selected Namespace' | awk '{ print $3 }'`

$ # construct full ODA endpoint URL
$ export ODA_ENDPOINT=http://$MINIKUBE_IP/$ODA_NAMESPACE/api/v1/sbds

$ # Try an invalid ODA request. The error message shows the ODA is accessible and working.
$ curl $ODA_ENDPOINT
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for the requested URL.</p>
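The endpoint URL assembled above is plain string concatenation, so it can be sanity-checked without a running cluster. A minimal sketch with hypothetical stand-ins for the values returned by `minikube ip` and `make k8s-vars`:

```shell
# hypothetical stand-ins for `minikube ip` and the deployment namespace
MINIKUBE_IP=192.168.49.2
ODA_NAMESPACE=ska-db-oda

# construct the full ODA endpoint URL, as in the session above
ODA_ENDPOINT=http://$MINIKUBE_IP/$ODA_NAMESPACE/api/v1/sbds
echo $ODA_ENDPOINT
```

With a real deployment, the two variables are captured from the cluster as shown in the session above.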

Example: ODA deployment using a local directory for backend storage

Caution

This will delete any existing Minikube deployment!

In this example, the user tango wants to deploy the ODA so that the ODA stores and retrieves SBs from the local directory /home/tango/oda. We’ll set an environment variable to hold this location.

$ export ODA_DIR=$HOME/oda

Minikube needs to be deployed with a persistent volume that makes $ODA_DIR available inside the Kubernetes cluster. This is achieved by redeploying Kubernetes using the ska-cicd-deploy-minikube project. Checkout the project and (re)deploy Minikube like so:

$ # checkout the ska-cicd-deploy-minikube project
$ git clone --recursive https://gitlab.com/ska-telescope/sdi/ska-cicd-deploy-minikube
$ cd ska-cicd-deploy-minikube

$ # redeploy Minikube. Caution! This will delete any existing deployment!
$ make MOUNT_FROM=$ODA_DIR MOUNT_TO=$ODA_DIR clean all

The ODA chart can now be installed. For a local installation, it can be useful to expose the ODA ingress so that the ODA deployment can be exercised from outside Minikube, i.e., from your host machine. We also want to configure the ODA to use the directory exposed at $ODA_DIR. These aspects are configured by setting the relevant Helm chart values. These values could be set individually using K8S_CHART_PARAMS="--set parameter1=foo --set parameter2=bar" etc., but as there are several values to set we will define them in a setting file (overrides.yaml) to be included when deploying the ODA.

$ # navigate to the directory containing the ska-db-oda project
$ cd path/to/ska-db-oda

$ # inspect contents of our Helm chart overrides. This example enables ODA
$ # ingress and configures the backend to use the persistent volume
$ # exposed at $ODA_DIR. Create this file if required.
$ cat overrides.yaml
rest:
  ingress:
    enabled: true
  backend:
    type: filesystem
    filesystem:
      use_pv: true
      pv_hostpath: /home/tango/oda   # replace with the value of $ODA_DIR

$ # install the ODA including the custom values
$ make K8S_CHART_PARAMS="--values overrides.yaml" k8s-install-chart

The state of the deployment can be inspected with make k8s-watch. The output for a successful deployment should look similar to below:

$ make k8s-watch

NAME                         READY   STATUS    RESTARTS   AGE
pod/ska-db-oda-rest-test-0   1/1     Running   0          24s

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/ska-db-oda-rest-test   ClusterIP   10.98.75.197   <none>        5000/TCP   24s

NAME                                    READY   AGE
statefulset.apps/ska-db-oda-rest-test   1/1     24s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                STORAGECLASS   REASON   AGE
persistentvolume/pvc-b44332d8-e8b8-472b-a407-2080c850dee0   1Gi        RWO            Delete           Bound       ska-db-oda/ska-db-oda-persistent-volume-claim-test   standard                24s
persistentvolume/ska-db-oda-persistent-volume-test          1Gi        RWO            Delete           Available                                                        standard                24s

NAME                                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/ska-db-oda-persistent-volume-claim-test   Bound    pvc-b44332d8-e8b8-472b-a407-2080c850dee0   1Gi        RWO            standard       24s

NAME                                             CLASS    HOSTS   ADDRESS   PORTS   AGE
ingress.networking.k8s.io/ska-db-oda-rest-test   <none>   *                 80      24s

SBs uploaded to the ODA will be stored in $ODA_DIR. Any SB JSON files stored in the $ODA_DIR directory will be retrievable via the ODA REST API, e.g.,

$ # get the ODA endpoint, which is a combination of Minikube IP address and
$ # deployment namespace
$ MINIKUBE_IP=`minikube ip`
$ ODA_NAMESPACE=`make k8s-vars | grep 'Selected Namespace' | awk '{ print $3 }'`
$ ODA_ENDPOINT=http://$MINIKUBE_IP/$ODA_NAMESPACE/api/v1/sbds

$ # get the SBD ID from the SB used for unit tests. We'll upload the SB to this URL.
$ SBD_ID=`grep sbd_id tests/unit/testfile_sample_low_sb.json \
         | awk '{ gsub("\"", ""); gsub(",", ""); print $2 }'`

$ # upload the unit test SB to the ODA
$ curl -iX PUT -H "Content-Type: application/json"  \
       -d @tests/unit/testfile_sample_low_sb.json   \
       $ODA_ENDPOINT/$SBD_ID
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Date: Mon, 21 Feb 2022 11:24:25 GMT
Content-Type: application/json
Content-Length: 76
Connection: keep-alive

{"message":"Created. A new SB definition with UID sbi-mvp01-20200325-00001."}

$ # list contents of $ODA_DIR. The uploaded SB should be stored there.
$ ls $ODA_DIR
sbi-mvp01-20200325-00001.json
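The grep/awk pipeline used above to pull the sbd_id out of the SB JSON file can be tried in isolation. A self-contained sketch using an inline sample file (the path and sbd_id value here are illustrative):

```shell
# write a minimal sample SB definition (illustrative content only)
printf '{\n  "sbd_id": "sbi-mvp01-20200325-00001"\n}\n' > /tmp/sample_sb.json

# strip quotes and commas, then take the second field: the sbd_id value
SBD_ID=`grep sbd_id /tmp/sample_sb.json \
        | awk '{ gsub("\"", ""); gsub(",", ""); print $2 }'`
echo $SBD_ID
```

The same pipeline applied to tests/unit/testfile_sample_low_sb.json yields the ID used in the PUT request above.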

PostgreSQL deployment

By default, installing the ska-db-oda chart with the command make k8s-install-chart will install both the ODA client and the ODA PostgreSQL DB. The installation of the DB is based on the Bitnami Helm chart.

The parameters that can be set for the DB deployment are:

ADMIN_POSTGRES_PASSWORD        secretpassword   ## Password for the "postgres" admin user
ENABLE_POSTGRES                true             ## Enable or disable the PostgreSQL deployment

The DB will be installed only if the variable ENABLE_POSTGRES is true. The service type is set to LoadBalancer.

This means the database can be reached directly at the external IP address returned by the command kubectl get svc -n ska-db-oda | grep LoadBalancer | awk '{print $4}'.

Note

If using Minikube, the LoadBalancer needs to be exposed via minikube tunnel. See the Minikube documentation for more details.
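The awk extraction of the external IP can be exercised against canned output. A sketch using simulated `kubectl get svc` output (the service names and IP addresses are illustrative):

```shell
# simulated output of `kubectl get svc -n ska-db-oda` (illustrative)
SVC_LIST='ska-db-oda-rest-test    ClusterIP      10.98.75.197   <none>      5000/TCP   24s
ska-db-oda-postgresql   LoadBalancer   10.98.75.200   10.0.0.15   5432/TCP   24s'

# keep the LoadBalancer line and print its EXTERNAL-IP column
DB_IP=`echo "$SVC_LIST" | grep LoadBalancer | awk '{print $4}'`
echo $DB_IP
```

Against a real cluster, the resulting address can then be used directly, e.g. psql -h $DB_IP -p 5432 -U postgres.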

There are other parameters that can only be changed via the values file of the chart. They are the following:

postgresql:
  commonLabels:
    app: ska-db-oda

  enabled: true

  image:
    debug: true

  auth:
    postgresPassword: "secretpassword"

  primary:
    service:
      type: LoadBalancer

    initdb:
      scriptsConfigMap: ska-db-oda-initdb
      user: "postgres"

    persistence:
      enabled: true
      ## @param primary.persistence.mountPath The path the volume will be mounted at
      ## Note: useful when using custom PostgreSQL images
      ##
      mountPath: /bitnami/postgresql
      ## @param primary.persistence.storageClass PVC Storage Class for PostgreSQL Primary data volume
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner. (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      storageClass: "nfss1"
      ## @param primary.persistence.accessModes PVC Access Mode for PostgreSQL volume
      ##
      accessModes:
      - ReadWriteMany
      ## @param primary.persistence.size PVC Storage Request for PostgreSQL volume
      ##
      size: 12Gi

More parameters can be found in the Bitnami Helm chart for PostgreSQL.