Deployment
1 - ALB Ingress Controller
Creating an Application Load Balancer to connect to the AIS Helm chart XNAT Implementation
We will be following this AWS Guide:
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
Before we begin
One thing you need to know when creating a new ALB from EKS is that the service spec type can only be LoadBalancer or NodePort - it won't support ClusterIP. The Charts repo has the service defined as ClusterIP, so some changes need to be made to make this work. We will get to that later, after we have created the ALB and policies.
In this document we create a Cluster called xnat in ap-southeast-2. Please update these details for your environment.
Create an IAM OIDC provider and associate with cluster:
eksctl utils associate-iam-oidc-provider --region ap-southeast-2 --cluster xnat --approve
Download the IAM Policy:
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
Create the IAM policy and take a note of the ARN:
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam-policy.json
Create the service account using ARN from the previous command (substitute your ARN for the XXX):
eksctl create iamserviceaccount --cluster=xnat --namespace=kube-system --name=aws-load-balancer-controller --attach-policy-arn=arn:aws:iam::XXXXXXXXX:policy/AWSLoadBalancerControllerIAMPolicy --override-existing-serviceaccounts --approve
Install the TargetGroupBinding custom resource definitions:
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
Download the EKS Helm Chart and update repo information:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
Install the AWS Load Balancer Controller:
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller --set clusterName=xnat --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller -n kube-system
Confirm it is installed:
kubectl get deployment -n kube-system aws-load-balancer-controller
You should see READY 1/1 if it is installed properly.
In order to apply this to the XNAT Charts Helm template, update the charts/xnat/values.yaml file to remove the Nginx ingress parts and add the ALB ingress parts.
Added to values file:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/group.name: xnat
alb.ingress.kubernetes.io/target-type: ip
For more ALB annotations / options, please see article at the bottom of the page.
Commented out / removed:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/whitelist-source-range: "130.95.0.0/16 127.0.0.0/8"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "150"
nginx.ingress.kubernetes.io/proxy-send-timeout: "100"
nginx.ingress.kubernetes.io/proxy-read-timeout: "100"
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
As pointed out above, ClusterIP as the service type does not work with ALB, so you will have to make some further changes to charts/xnat/charts/xnat-web/values.yaml:
Change:
service:
  type: ClusterIP
  port: 80
to:
service:
  type: NodePort
  port: 80
In xnat/charts/xnat-web/templates/service.yaml, remove the line clusterIP: None.
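For reference, the relevant part of spec in that service.yaml should then look roughly like this (a sketch based on the chart template shown later in the Kustomize section of this document; only the affected lines are shown):
spec:
  type: {{ .Values.service.type }}   # resolves to NodePort via values.yaml
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 8080
      protocol: TCP
      name: http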
Then create the Helm chart with the usual command (after building dependencies - just follow the README.md). If you are updating an existing XNAT installation it will fail, so you will need to create a new application.
helm upgrade xnat . -nxnat
It should now create a Target Group and an Application Load Balancer in AWS EC2. I had to make a further change to get this to work.
On the Target Group I had to change the health check success code from 200 to 302 to get a healthy instance, because it redirects.
You can fix this by adding the following lines to the values file:
# Specify Health Checks
alb.ingress.kubernetes.io/healthcheck-path: "/"
alb.ingress.kubernetes.io/success-codes: "302"
Troubleshooting and making sure the ALB is created:
watch kubectl -n kube-system get all
Find the controller pod name - in this case, pod/aws-load-balancer-controller-98f66dcb8-zkz8k - and make sure all resources are up.
Check logs:
kubectl logs -n kube-system aws-load-balancer-controller-98f66dcb8-zkz8k
When updating, the ALB often doesn't update properly, so you will need to delete and recreate the controller:
kubectl delete deployment -n kube-system aws-load-balancer-controller
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller --set clusterName=xnat --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller -n kube-system
Change the stickiness of the Load Balancer:
It is important to set a stickiness time on the load balancer, or you can hit an issue where the database thinks you have logged in but the pod you connect to doesn't, so you can't log in. Setting stickiness reasonably high - say 30 minutes - gets around this.
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=1800
Change the Load Balancing Algorithm:
alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests
Increase the idle timeout from 1 minute to 5. When using the Compressed Image Uploader you can sometimes get a 504 Gateway Timeout error message; this will fix that issue.
You can read more about it here:
https://aws.amazon.com/premiumsupport/knowledge-center/eks-http-504-errors/
alb.ingress.kubernetes.io/load-balancer-attributes: "idle_timeout.timeout_seconds=300"
Add SSL encryption to your Application Load Balancer
Firstly, you need to add an SSL certificate to your ALB annotations. Kubernetes has a built-in module, cert-manager, to deal with this across clouds / infrastructure:
https://cert-manager.io/docs/installation/kubernetes/
However, AWS has its own Certificate Manager that creates and renews SSL certificates for free, so we will be using that instead. This assumes you have a valid certificate created through AWS Certificate Manager and that you know its ARN.
These are additional annotations to add to the values file, each with an explanation above it:
Listen on port 80 and 443:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
Specify the ARN of your SSL certificate from AWS Certificate Manager (change for your actual ARN):
alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:XXXXXXX:certificate/XXXXXX"
Specify AWS SSL Policy:
alb.ingress.kubernetes.io/ssl-policy: "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
For more details on SSL policy options, see:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
Finally, for this to successfully work you need to change the host path to allow any path or the Tomcat URL will be sent to a 404 by the Load Balancer. Put a wildcard in the paths to allow any eventual URL (starting with xnat.example.com in this case):
hosts:
  - host: xnat.example.com
    paths: [ "/*" ]
Redirect HTTP to HTTPS:
This does not work on Kubernetes 1.19 or above, as the "use-annotation" value is not supported there, and there is seemingly no documentation on the required annotations to make it work.
Add the following annotation to your values file below the ports to listen on (see above):
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
You must then update the rules section of ingress.yaml, found within the releases/xnat/charts/xnat-web/templates directory, to look like this when using the Ingress apiVersion networking.k8s.io/v1beta1 on Kubernetes versions prior to v1.22:
rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .host | quote }}
    http:
      paths:
        {{- range .paths }}
        - path: {{ .path }}
          backend:
            serviceName: {{ $fullName }}
            servicePort: {{ $svcPort }}
        {{- end }}
  {{- end }}
For Ingress apiVersion of networking.k8s.io/v1 on Kubernetes version >= v1.22:
rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .host | quote }}
    http:
      paths:
        {{- range .paths }}
        - path: {{ .path }}
          pathType: ImplementationSpecific
          backend:
            service:
              name: {{ $fullName }}
              port:
                number: {{ $svcPort }}
        {{- end }}
  {{- end }}
This will redirect HTTP to HTTPS on Kubernetes 1.18 and below.
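If you are running a newer release of the AWS Load Balancer Controller (v2.2 or later), it also ships a dedicated redirect annotation that avoids the use-annotation action and the rules edits above. This is offered as a hedged alternative rather than what was used in this deployment:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'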
Full values.yaml file ingress section:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "302"
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:XXXXXXX:certificate/XXXXXX"
    alb.ingress.kubernetes.io/ssl-policy: "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
    alb.ingress.kubernetes.io/target-group-attributes: "stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=1800,load_balancing.algorithm.type=least_outstanding_requests"
    alb.ingress.kubernetes.io/load-balancer-attributes: "idle_timeout.timeout_seconds=300"
Further Reading:
Troubleshooting EKS Load Balancers:
- https://aws.amazon.com/premiumsupport/knowledge-center/eks-load-balancers-troubleshooting/
- https://medium.com/@ManagedKube/kubernetes-troubleshooting-ingress-and-services-traffic-flows-547ea867b120
ALB annotations:
2 - Azure Setup Full
Create an AKS Cluster
One of the great things about Azure is the Azure CLI. Select Bash and you can run all commands through your web browser; all tools and kubectl / az commands are already installed and available, without having to install them on your workstation or spin up a VM instance for the sole purpose of controlling the cluster.
You can do this via the console if you want; to use the Azure CLI, see below. Create a resource group first.
Specify your resource group, cluster name (in our case xnat, but please update this if your cluster is named differently), node count and VM instance size:
az aks create \
--resource-group <Resource Group Name> \
--name xnat \
--node-count 3 \
--generate-ssh-keys \
--node-vm-size Standard_B2s \
--enable-managed-identity
Get AZ AKS credentials to run kubectl commands against your Cluster
az aks get-credentials --name xnat --resource-group <Resource Group Name>
Confirm everything is setup correctly:
kubectl get nodes -o wide
kubectl cluster-info
Download and install AIS Chart
git clone https://github.com/Australian-Imaging-Service/charts.git
Add the AIS repo and update Helm:
helm repo add ais https://australian-imaging-service.github.io/charts
helm repo update
Change to the correct directory and update dependencies. This will download and install the Postgresql Helm Chart. You don’t need to do this if you want to connect to an external Postgresql DB.
cd ~/charts/releases/xnat
helm dependency update
Create the namespace and install the chart, then watch it be created.
kubectl create namespace xnat
helm upgrade xnat ais/xnat --install -nxnat
watch kubectl -nxnat get all
It will complain that the Postgresql password is empty and needs updating. Create an override values file (in this case values-aks.yaml but feel free to call it what you wish) and add the following inserting your own desired values:
xnat-web:
  postgresql:
    postgresqlDatabase: <your database>
    postgresqlUsername: <your username>
    postgresqlPassword: <your password>
Update volume / persistence information
It turns out that there is an issue with Storage classes that means that the volumes are not created automatically. We need to make a small change to the storageClass configuration for the ReadWriteOnce volumes and create new external volumes for the ReadWriteMany ones.
Firstly, we create our own Azure Files volumes for archive, prearchive and build, make a slight adjustment to the values configuration, and apply it as an override.
Follow this document for the details of how to do that:
https://docs.microsoft.com/en-us/azure/aks/azure-files-volume
Firstly, export some values that will be used to create the Azure files volumes. Please substitute the details of your environment here.
AKS_PERS_STORAGE_ACCOUNT_NAME=<your storage account name>
AKS_PERS_RESOURCE_GROUP=<your resource group>
AKS_PERS_LOCATION=<your region>
AKS_PERS_SHARE_NAME=xnat-xnat-web-archive
The share name xnat-xnat-web-archive must be used or the Helm chart won't be able to find the mount.
Create a Resource Group:
az group create --name $AKS_PERS_RESOURCE_GROUP --location $AKS_PERS_LOCATION
Create a storage account:
az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -l $AKS_PERS_LOCATION --sku Standard_LRS
Export the connection string as an environment variable; this is used when creating the Azure file share:
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -o tsv)
Create the file share:
az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
Get storage account key:
STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
Echo storage account name and key:
echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY
Make a note of the Storage account name and key as you will need them.
Now repeat this process but update the Share name to xnat-xnat-web-prearchive and then again with xnat-xnat-web-build. Run this first and then repeat the rest of the commands:
AKS_PERS_SHARE_NAME=xnat-xnat-web-prearchive
and then update Share name and repeat the process again:
AKS_PERS_SHARE_NAME=xnat-xnat-web-build
Create a Kubernetes Secret
In order to mount the volumes, you need to create a secret. As we have created our Helm chart in the xnat namespace, we need to make sure that is added into the following command (not in the original Microsoft guide):
kubectl -nxnat create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
Create Kubernetes Volumes
Now we need to create three persistent volumes outside of the Helm Chart which the Chart can mount - hence requiring the exact name.
Create a file pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xnat-xnat-web-archive
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    name: xnat-xnat-web-archive
    namespace: xnat
  azureFile:
    secretName: azure-secret
    shareName: xnat-xnat-web-archive
    readOnly: false
  mountOptions:
    - dir_mode=0755
    - file_mode=0755
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xnat-xnat-web-prearchive
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    name: xnat-xnat-web-prearchive
    namespace: xnat
  azureFile:
    secretName: azure-secret
    shareName: xnat-xnat-web-prearchive
    readOnly: false
  mountOptions:
    - dir_mode=0755
    - file_mode=0755
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xnat-xnat-web-build
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    name: xnat-xnat-web-build
    namespace: xnat
  azureFile:
    secretName: azure-secret
    shareName: xnat-xnat-web-build
    readOnly: false
  mountOptions:
    - dir_mode=0755
    - file_mode=0755
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
Size doesn't really matter; like EFS, Azure Files is completely scalable. Just make sure it matches what is in your values file for those volumes.
Apply the volumes
kubectl apply -f pv.yaml
We should now have three newly created volumes our Helm chart can mount.
Update our override values file for our Helm chart. Edit your values-aks.yaml file from above and add the following (the postgresql entries are already included):
xnat-web:
  persistence:
    cache:
      accessMode: ReadWriteOnce
      mountPath: /data/xnat/cache
      storageClassName: ""
      size: 10Gi
  volumes:
    archive:
      accessMode: ReadWriteMany
      mountPath: /data/xnat/archive
      storageClassName: ""
      size: 10Gi
    prearchive:
      accessMode: ReadWriteMany
      mountPath: /data/xnat/prearchive
      storageClassName: ""
      size: 10Gi
    build:
      accessMode: ReadWriteMany
      mountPath: /data/xnat/build
      storageClassName: ""
      size: 10Gi
  postgresql:
    postgresqlDatabase: <your database>
    postgresqlUsername: <your username>
    postgresqlPassword: <your password>
You can now apply the helm chart with your override and all the volumes will mount.
helm upgrade xnat ais/xnat -i -f values-aks.yaml -nxnat
Congratulations! You should now have a working XNAT environment with properly mounted volumes.
You can check everything is working:
kubectl -nxnat get ev
kubectl -nxnat get all
kubectl -nxnat get pvc,pv
Check that the XNAT service comes up:
kubectl -nxnat logs xnat-xnat-web-0 -f
Create a static public IP, an ingress controller, LetsEncrypt certificates and point it to our Helm chart
OK, so all good so far, but we can't actually access our XNAT environment from outside the cluster, so we need to create an Ingress Controller.
You can follow the URL here from Microsoft for more detailed information:
https://docs.microsoft.com/en-us/azure/aks/ingress-static-ip
First, find out the node resource group of the AKS cluster:
az aks show --resource-group <your resource group> --name <your cluster name> --query nodeResourceGroup -o tsv
The output is used in the next command, which creates the static public IP:
az network public-ip create --resource-group <output from previous command> --name <a name for your public IP> --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv
Point your FQDN to the public IP address you created
For the Letsencrypt certificate issuer to work it needs to be based on a working FQDN (fully qualified domain name), so in whatever DNS manager you use, create a new A record and point your xnat FQDN (xnat.example.com for example) to the IP address you just created.
Add the ingress-nginx repo:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Now create the ingress controller with a DNS label (it doesn't need to be the FQDN here) and the IP created by the previous command:
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace xnat --set controller.replicaCount=2 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux --set controller.service.loadBalancerIP="1.2.3.4" --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="xnat-aks"
Please ensure to update the details above to suit your environment - including namespace.
Install Cert-Manager and attach to the Helm chart and Ingress Controller
kubectl label namespace xnat cert-manager.io/disable-validation=true
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager --namespace xnat --version v1.3.1 --set installCRDs=true --set nodeSelector."beta\.kubernetes\.io/os"=linux jetstack/cert-manager
You can find a write up of these commands and what they do in the Microsoft article.
Create a cluster-issuer.yaml to issue the Letsencrypt certificates
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@emailaddress.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
            podTemplate:
              spec:
                nodeSelector:
                  "kubernetes.io/os": linux
In our case, we want production Letsencrypt certificates hence letsencrypt-prod (mentioned twice here and in values-aks.yaml). If you are doing testing you can use letsencrypt-staging. See Microsoft article for more details.
Please do not forget to use your email address here.
Apply the yaml file:
kubectl apply -f cluster-issuer.yaml -nxnat
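If you would rather test against Letsencrypt-staging first (to avoid production rate limits), a staging variant of the issuer is sketched below. It simply swaps the name and the ACME server URL; again, remember to use your own email address:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: your@emailaddress.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
            podTemplate:
              spec:
                nodeSelector:
                  "kubernetes.io/os": linux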
NB. To allow large uploads via the Compressed Uploader tool, you need to specify a value in the Nginx annotations or you will get a "413 Request Entity Too Large" error. This needs to go in the annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 1024m
This is included in the example below.
Update your override values file to point to your ingress controller and Letsencrypt cluster issuer. Add the following to your values-aks.yaml file (I have added the volume and postgresql details as well to give the complete values file):
xnat-web:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt-prod
      nginx.ingress.kubernetes.io/proxy-body-size: 1024m
    tls:
      - hosts:
          - "yourxnat.example.com"
        secretName: tls-secret
    hosts:
      - "yourxnat.example.com"
    rules:
      - host: "yourxnat.example.com"
        http:
          paths:
            - path: "/"
              backend:
                serviceName: "xnat-xnat-web"
                servicePort: 80
  persistence:
    cache:
      accessMode: ReadWriteOnce
      mountPath: /data/xnat/cache
      storageClassName: ""
      size: 10Gi
  volumes:
    archive:
      accessMode: ReadWriteMany
      mountPath: /data/xnat/archive
      storageClassName: ""
      size: 10Gi
    prearchive:
      accessMode: ReadWriteMany
      mountPath: /data/xnat/prearchive
      storageClassName: ""
      size: 10Gi
    build:
      accessMode: ReadWriteMany
      mountPath: /data/xnat/build
      storageClassName: ""
      size: 10Gi
  postgresql:
    postgresqlDatabase: <your database>
    postgresqlUsername: <your username>
    postgresqlPassword: <your password>
Change yourxnat.example.com to whatever you want your XNAT FQDN to be. If you are using Letsencrypt-staging, update the cert-manager.io annotation accordingly.
Now update your helm chart and you should now have a fully working Azure XNAT installation with HTTPS redirection enabled, working volumes and fully automated certificates with automatic renewal.
helm upgrade xnat ais/xnat -i -f values-aks.yaml -nxnat
3 - Deploying Istio Service Mesh for our XNAT environment
What is a Service Mesh?
From this article:
https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh
“A service mesh, like the open source project Istio, is a way to control how different parts of an application share data with one another. Unlike other systems for managing this communication, a service mesh is a dedicated infrastructure layer built right into an app. This visible infrastructure layer can document how well (or not) different parts of an app interact, so it becomes easier to optimize communication and avoid downtime as an app grows.”
OK so a service mesh helps secure our environment and the communication between different namespaces and apps in our cluster (or clusters).
Istio is one of the most popular Service Mesh software providers so we will deploy and configure this for our environment.
OK so let’s get to work.
There are several different ways to install Istio - with the Istio Operator, with istioctl, even on virtual machines - but we will install via Helm, as AIS uses a Helm deployment and it seems nice and neat.
Following this guide to perform the helm install:
https://istio.io/latest/docs/setup/install/helm/
For our installation we won't be installing the Istio Ingress Gateway or Istio Egress Gateway controller in our AWS environment. This is because AWS Cluster Autoscaler requires the Application Load Balancer target type to be IP, whereas the Ingress Gateway controller does not work with that target type - only with target type Instance. This catch-22 forces us to use only the istio base and istiod charts to provide the service mesh, and to keep our existing AWS ALB Ingress Controller. The standard install of Istio is to create an Istio Ingress Gateway, point it to a VirtualService, and then that VirtualService points to your actual service.
For more information on how to install and configure the Istio Ingress Gateway please follow this guide:
https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/
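For comparison only, the standard flow described above would use a Gateway and VirtualService along these lines - an illustrative sketch, since we are not deploying the Istio Ingress Gateway in this AWS setup, and xnat.example.com is a placeholder host:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: xnat-gateway
  namespace: xnat
spec:
  selector:
    istio: ingressgateway   # requires the istio-ingressgateway to be installed
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "xnat.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: xnat
  namespace: xnat
spec:
  hosts:
    - "xnat.example.com"
  gateways:
    - xnat-gateway
  http:
    - route:
        - destination:
            host: xnat-xnat-web.xnat.svc.cluster.local
            port:
              number: 80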
Install Istio
Download the latest version of istioctl:
curl -L https://istio.io/downloadIstio | sh -
Copy the binary to /usr/local/bin (change to the Istio install directory first - i.e. istio-1.11.X):
sudo cp bin/istioctl /usr/local/bin/
Confirm it is working:
istioctl version
Create namespace:
kubectl create ns istio-system
Install the Helm repo:
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
Install Istio base (must be in istio install directory):
helm install istio-base istio/base -n istio-system
Install istiod:
helm install istiod istio/istiod -n istio-system --wait
Now Istio is installed, we need to apply the configuration to our XNAT namespace to add the Istio sidecars - this is how Istio applies the policies.
https://istio.io/latest/docs/reference/config/networking/sidecar/
Label the namespaces you want the Istio sidecars to install into - in our case XNAT:
kubectl label namespace xnat istio-injection=enabled
Confirm it has been successfully applied:
kubectl get ns xnat --show-labels
At this point you may need to redeploy your pods if there are no sidecars present. When Istio is properly deployed, instead of the xnat pods saying 1/1 they will say 2/2 - for example:
kubectl get -nxnat all
NAME READY STATUS RESTARTS AGE
pod/xnat-postgresql-0 2/2 Running 0 160m
pod/xnat-xnat-web-0 2/2 Running 0 160m
Note about Cluster Autoscaler / Horizontal Pod Autoscaler as it applies to Istio
When using Kubernetes Horizontal Pod Autoscaling (HPA) to scale out pods automatically, you need to make adjustments for Istio. After enabling Istio for some deployments, HPA wasn't scaling as expected, and in some cases not at all.
It turns out that, when scaling on CPU metrics, HPA uses the sum of all CPU requests for a pod to determine when to scale. Adding an istio-proxy sidecar to a pod changes the total CPU and memory requests, thereby effectively skewing the scale-out point. For example, if you have HPA configured with a targetCPUUtilizationPercentage of 70% and your application requests 100m, you scale out at 70m. When Istio comes into the picture it requests 100m by default as well, so with istio-proxy injected your scale-out point is now 140m ((100m + 100m) * 70%), which you may never reach. We found that istio-proxy consumes about 10m in our environment, so even that extra 10m combined with the previous scale-up trigger of 70m on the application container (10m + 70m) falls well short of the new target of 140m. We solved this by calculating the correct scale-out point and setting targetAverageValue to it; a sketch is shown after the reference link below.
Referenced from this article:
https://engineering.hellofresh.com/everything-we-learned-running-istio-in-production-part-2-ff4c26844bfb
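A minimal sketch of that fix, assuming the HPA targets the xnat-xnat-web StatefulSet and the autoscaling/v2beta2 API that was current at the time of writing (where the equivalent of targetAverageValue is a target of type AverageValue); the replica counts and the 80m figure are illustrative only:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: xnat-xnat-web
  namespace: xnat
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: xnat-xnat-web
  minReplicas: 1          # illustrative
  maxReplicas: 5          # illustrative
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: AverageValue
          averageValue: 80m   # recalculated scale-out point: 70m app trigger + ~10m istio-proxy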
Apply our Istio Policies
mTLS
We are going to enable Mutual TLS for the entire mesh.
This policy will do that - call it istio-mtls.yaml:
# istio-mtls.yaml
#
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
Now apply the policy:
kubectl apply -f istio-mtls.yaml
Check that mTLS is enabled for all namespaces:
kubectl get peerauthentication --all-namespaces
NAMESPACE NAME MODE AGE
default default STRICT 16h
istio-system default STRICT 28m
xnat default STRICT 16h
Now if we try to access our XNAT server we will get 502 Bad Gateway, as the XNAT app can't perform mTLS. Please substitute your XNAT URL below:
curl -X GET https://xnat.example.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
So next we want to allow traffic on port 8080 to our xnat-xnat-web app only, and apply mTLS for everything else, so amend istio-mtls.yaml:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: xnat
spec:
  selector:
    matchLabels:
      app: xnat-web
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: DISABLE
Now apply again:
kubectl apply -f istio-mtls.yaml
If we now run our curl command again:
curl -X GET https://xnat.example.com
It completes successfully.
Authorization Policy
You can also specify which commands can be run against our xnat-xnat-web app with Authorization Policies, and even filter by source - from specific namespaces and even specific apps. This gives you the ability to completely lock down the environment. You can, for instance, allow a certain source POST access whilst another source only has GET and HEAD access.
Let's create the following Authorization Policy, called istio-auth-policy.yaml, to allow all GET, HEAD, PUT, DELETE and OPTIONS commands to our xnat-web app:
# istio-auth-policy.yaml
#
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "xnat-all"
  namespace: xnat
spec:
  selector:
    matchLabels:
      app: xnat-web
  rules:
    - to:
        - operation:
            methods: ["GET", "HEAD", "PUT", "DELETE", "OPTIONS"]
If you wanted to specify a source, you would add a from block with a source under rules; a sketch is shown after the link below.
Please follow this guide for more details:
https://istio.io/latest/docs/tasks/security/authorization/authz-http/
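For example, a sketch of a policy that only allows GET and HEAD requests from workloads in a hypothetical monitoring namespace might look like this:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "xnat-get-from-monitoring"
  namespace: xnat
spec:
  selector:
    matchLabels:
      app: xnat-web
  rules:
    - from:
        - source:
            namespaces: ["monitoring"]   # hypothetical source namespace
      to:
        - operation:
            methods: ["GET", "HEAD"]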
Before you apply the policy, we need to add a destination rule to allow the traffic out. Create a file called istio-destination.yaml:
# istio-destination.yaml
#
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "xnat-xnat-web"
spec:
  host: xnat-xnat-web.xnat.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
      - port:
          number: 8080
        tls:
          mode: DISABLE
Apply both policies:
kubectl apply -f istio-auth-policy.yaml
kubectl apply -f istio-destination.yaml
Now let’s see it in action.
curl -X GET https://xnat.example.com
This completes fine. Now let's try with a POST command, which is not included in the authorization policy:
curl -X POST https://xnat.example.com
RBAC: access denied
So our policy is working correctly. However, as XNAT relies rather heavily on POST, we will add it to the policy and try again. Amend the yaml file to this:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "xnat-all"
  namespace: xnat
spec:
  selector:
    matchLabels:
      app: xnat-web
  rules:
    - to:
        - operation:
            methods: ["GET", "POST", "HEAD", "PUT", "DELETE", "OPTIONS"]
Now re-apply the policy:
kubectl apply -f istio-auth-policy.yaml
And curl again:
curl -X POST https://xnat.example.com
This time it works. OK so we have a working Istio service mesh with correctly applied Mutual TLS and Authorization Policies.
This is only a tiny fraction of what Istio can do, so please go to their website for more information. You should try to lock down permissions further than specified above.
Kiali Installation
Kiali is a fantastic visualisation tool for Istio that helps you see at a glance what your namespaces are up to, if they are protected and allows you to add and update Istio configuration policies right through the web GUI.
In combination with Prometheus and Jaeger, it can show traffic metrics, tracing and much more.
You can read more about it here:
https://kiali.io/
There are several ways of installing it with authentication (which for production workloads is a must). We are going to use the token method and using the AWS Classic Load Balancer to access.
Once you have installed Istio and istiod, follow this guide to install Kiali via Helm:
https://kiali.io/docs/installation/installation-guide/example-install/
Install the Operator via Helm and create Namespace:
helm repo add kiali https://kiali.org/helm-charts
helm repo update kiali
helm repo update
helm install --namespace kiali-operator --create-namespace kiali-operator kiali/kiali-operator
Check everything came up properly:
kubectl get -nkiali-operator all
Install Prometheus and Jaeger into Istio-System namespace to show metrics and tracing. From your Istio installation directory (i.e. istio-1.11.X):
kubectl apply -f samples/addons/jaeger.yaml
kubectl apply -f samples/addons/prometheus.yaml
Check they are correctly installed:
kubectl get -nistio-system all
Create the Kiali CR with the authentication strategy set to token and the service type set to LoadBalancer, so that it can be accessed from outside the cluster:
# kiali_cr.yaml
#
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  auth:
    strategy: "token"
  deployment:
    service_type: "LoadBalancer"
    view_only_mode: false
  server:
    web_root: "/kiali"
https://kiali.io/docs/configuration/rbac/
Apply the file:
kubectl apply -f kiali_cr.yaml
Watch it complete setup:
kubectl get kiali kiali -n istio-system -o jsonpath='{.status}' | jq
and:
kubectl get -nistio-system all
To find the ELB address, run:
kubectl get -nistio-system svc kiali
In your browser, enter the copied ELB address - for example:
http://example-elb.ap-southeast-2.elb.amazonaws.com
Then add :20001/kiali to the end:
http://example-elb.ap-southeast-2.elb.amazonaws.com:20001/kiali
It will then ask you for a token for the service account to be able to log in. Find it with the following command, copy and paste it in, and you now have a fully running Kiali installation:
kubectl get secret -n istio-system \
$(kubectl get sa kiali-service-account -n istio-system -o jsonpath='{.secrets[0].name}') \
-o jsonpath='{.data.token}' | base64 -d
https://kiali.io/docs/installation/installation-guide/accessing-kiali/
At this point I tried to set the AWS Elastic Load Balancer to use SSL and a proper certificate, but after four hours of investigation it turned out that the Kiali ingress requires a "class_name" and the AWS ELB doesn't have one, so that doesn't work. Rather frustratingly, I ended up manually updating the Load Balancer listener details to be SSL over TCP and specifying the SSL cipher policy and the certificate from Certificate Manager. You should also point your FQDN to this Load Balancer to work with your custom certificate. No doubt an integration of Nginx and the AWS ELB would fix this - Nginx being Kiali's default ingress method.
Troubleshooting Istio
Use these commands for our XNAT environment to help debugging:
istioctl proxy-status
istioctl x describe pod xnat-xnat-web-0.xnat
istioctl proxy-config listeners xnat-xnat-web-0.xnat
istioctl x authz check xnat-xnat-web-0.xnat
kubectl logs pod/xnat-xnat-web-0 -c istio-proxy -nxnat
kubectl get peerauthentication --all-namespaces
kubectl get destinationrule --all-namespaces
More Articles on Troubleshooting Istio:
https://www.istioworkshop.io/12-debugging/01-istioctl-debug-command/
https://istio.io/latest/docs/ops/common-problems/security-issues/
Further Reading
Istio AuthorizationPolicy testing / config:
https://istiobyexample.dev/authorization/
Istio mTLS status using Kiali:
https://kiali.io/docs/features/security/
Istio Workshop:
https://www.istioworkshop.io
Istio mTLS Example Setup:
https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/
4 - Using Kustomize as a Post renderer for the AIS XNAT Helm Chart
Kustomize
Using a Helm Chart is a pretty awesome way to deploy Kubernetes infrastructure in a neatly packaged, release versioned way.
They can be updated from the upstream repo with a single line of code and for any customisations you want to add into the deployment you specify it in a values.yaml file.
Or at least that’s how it should work. As Helm is based on templates, sometimes a value is hardcoded into the template and you can’t change it in the values file.
Your only option would have been to download the git repo that the Helm chart is based on, edit the template file in question and run it locally.
The problem with this approach is that when a new Helm Chart is released, you have to download the chart again and then apply all of your updates.
This becomes cumbersome and negates the advantages of Helm.
Enter Kustomize. Kustomize can work in several ways but in this guide I will show you how to apply Kustomize as a post-renderer to update the template files to fit our environment.
This allows you to continue to use the Helm Charts from the repo AND customise the Helm Chart templates to allow successful deployment.
https://kustomize.io/
Install Kustomize
Kustomize can be run as its own program using the kustomize build command, or built into kubectl using kubectl kustomize. We are going to use the standalone kustomize binary.
Go here to install:
https://kubectl.docs.kubernetes.io/installation/kustomize/binaries/
Direct install:
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
This downloads the binary for your operating system into whatever directory you are in. Copy it to /usr/local/bin to use it system-wide:
sudo cp kustomize /usr/local/bin
How Kustomize works
When used as a post-renderer, Kustomize takes all of the configuration data for a particular Helm chart, in conjunction with the values file containing your cluster-specific details, then amends the templates and applies them on the fly afterwards. This is why it is called a post-renderer.
Let’s break this down.
1. Helm template
In order to extract all of the Helm chart information, you can use the helm template command. In the case of our XNAT/AIS Helm chart, to extract all of this data into a file called all.yaml (it can be any filename), you would run this command:
helm template xnat ais/xnat > all.yaml
You now have the complete configuration of your Helm chart, including all template files, in one file - all.yaml.
2. kustomization.yaml
The next step is a kustomization.yaml file. This file must be called kustomization.yaml or Kustomize doesn't work.
You create this file and in it specify your resources (inputs) - in our example, the resource will be all.yaml. The fantastic thing about Kustomize is that you can add more resources in as well, which combine with the Helm chart to streamline deployment.
For instance, in my kustomization.yaml file I also specify a pv.yaml as another resource. This has the information for creating Persistent Volumes for the XNAT deployment and creates the volumes with the deployment, so I don't have to apply them separately. You can do this for any resources you want to add to your deployment that are not included in the Helm chart.
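For illustration only, a minimal pv.yaml of that kind might look like the sketch below - here assuming statically provisioned storage through the AWS EFS CSI driver, with a placeholder filesystem ID. The claim name must match what the chart expects (xnat-xnat-web-archive), as covered in the Azure section earlier:
# pv.yaml (illustrative sketch; fs-0123456789abcdef0 is a placeholder)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: xnat-xnat-web-archive
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    name: xnat-xnat-web-archive
    namespace: xnat
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0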
Example using all.yaml and pv.yaml in the kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- all.yaml
- pv.yaml
The second part of the kustomization.yaml file is where you specify the files that patch the templates you need to change. You need to specify the filename and path, the name of the original template, its kind and version. It should be pointed out that there are a lot of other ways to use Kustomize - you can read about them in some of the articles included at the end of this guide.
Example:
patches:
  - path: service-patch.yaml
    target:
      kind: Service
      name: xnat-xnat-web
      version: v1
In the above example, the file is service-patch.yaml and it is in the same directory as kustomization.yaml; the name is xnat-xnat-web, the kind is Service and the version is v1.
Now let's look at the original service.yaml file to get a better idea. It is located at charts/releases/xnat/charts/xnat-web/templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "xnat-web.fullname" . }}
  labels:
    {{- include "xnat-web.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  #clusterIP: None
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    {{- include "xnat-web.selectorLabels" . | nindent 4 }}
  sessionAffinity: "ClientIP"
{{- if .Values.dicom_scp.recievers }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "xnat-web.fullname" . }}-dicom-scp
  labels:
    {{- include "xnat-web.labels" . | nindent 4 }}
  {{- with .Values.dicom_scp.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.dicom_scp.serviceType | quote }}
  ports:
    {{- $serviceType := .Values.dicom_scp.serviceType }}
    {{- range .Values.dicom_scp.recievers }}
    - port: {{ .port }}
      targetPort: {{ .port }}
      {{- if and (eq $serviceType "NodePort") .nodePort }}
      nodePort: {{ .nodePort }}
      {{- end }}
      {{- if and (eq $serviceType "LoadBalancer") .loadBalancerIP }}
      loadBalancerIP: {{ .loadBalancerIP }}
      {{- end }}
    {{- end }}
  selector:
    {{- include "xnat-web.selectorLabels" . | nindent 4 }}
  sessionAffinity: "ClientIP"
{{- end }}
3. The Patch file
OK, so let’s have a look at our patch file and see what it is actually doing.
- op: remove
  path: "/spec/sessionAffinity"
Pretty simple really. - op: remove just removes whatever we tell it to in our service.yaml file. If we look through that file, we find spec, then under that we find sessionAffinity, and remove it.
In this case, if we remove all the other code to simplify things, you get this:
spec:
  sessionAffinity: "ClientIP"
As sessionAffinity sits under spec by indentation, the patch will remove the line:
sessionAffinity: "ClientIP"
In this particular case my AWS cluster needs the service type to be NodePort, and this line causes the XNAT deployment to fail, hence the requirement to remove it.
OK, so far so good. You can also use add and replace operations, so let's try an add command example as that is slightly more complicated.
Add and Replace commands example
OK, continuing with our AWS NodePort example, we will add a redirect from port 80 to 443 in the Ingress and replace the existing entry. In order to do that we need to add a second host path to charts/releases/xnat/charts/xnat-web/templates/ingress.yaml. Let's look at the original file:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "xnat-web.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "xnat-web.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
This is what we need in our values file to be reflected in the ingress.yaml file:
hosts:
  - host: "xnat.example.com"
    paths:
      - path: "/*"
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: "/*"
        backend:
          serviceName: "xnat-xnat-web"
          servicePort: 80
And this is what we have at the moment in that file:
rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .host | quote }}
    http:
      paths:
        {{- range .paths }}
        - path: {{ .path }}
          backend:
            serviceName: {{ $fullName }}
            servicePort: {{ $svcPort }}
        {{- end }}
As you can see, we are missing a second backend to allow the redirection from http to https.
In kustomization.yaml add the following:
- path: ingress-patch.yaml
  target:
    group: networking.k8s.io
    kind: Ingress
    name: xnat-xnat-web
    version: v1beta1
# ingress-patch.yaml
#
- op: replace
  path: /spec/rules/0/http/paths/0/backend/serviceName
  value: 'ssl-redirect'
- op: replace
  path: /spec/rules/0/http/paths/0/backend/servicePort
  value: 'use-annotation'
- op: add
  path: /spec/rules/0/http/paths/-
  value:
    path: '/*'
    backend:
      serviceName: 'xnat-xnat-web'
      servicePort: 80
OK, so let's break this down. The top command replaces this:
serviceName: {{ $fullName }}
in this path:
rules:
  http:
    paths:
      backend:
with a hardcoded serviceName value:
serviceName: 'ssl-redirect'
I removed the extra lines to show you only the relevant section.
The second command replaces:
servicePort: {{ $svcPort }}
in the same path with the hardcoded value:
servicePort: 'use-annotation'
Now for the add command:
- op: add
  path: /spec/rules/0/http/paths/-
This will add the values, in normal YAML syntax, here:
spec:
  rules:
    http:
      paths:
        -
NB. I have removed irrelevant lines to simplify the example. If there were already two sets of path directives, replacing or adding to the second one would require this path:
path: /spec/rules/1/http/paths/-
OK, so the resultant transformation of the ingress.yaml file will change it to look like this:
spec:
  rules:
    http:
      paths:
        - backend:
            serviceName: ssl-redirect
            servicePort: use-annotation
        - path: '/*'
          backend:
            serviceName: 'xnat-xnat-web'
            servicePort: 80
Let's look at our full kustomization.yaml file, with resources and the service and ingress patches:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml
  - pv.yaml
patches:
  - path: service-patch.yaml
    target:
      kind: Service
      name: xnat-xnat-web
      version: v1
  - path: ingress-patch.yaml
    target:
      group: networking.k8s.io
      kind: Ingress
      name: xnat-xnat-web
      version: v1beta1
We are now ready to apply our kustomizations!
4. Bringing it all together
Create a new file called whatever you like - in my case we will call it hook.sh - and make it executable:
vi hook.sh
chmod 755 hook.sh
#!/bin/bash
# hook.sh
#
cat <&0 > all.yaml
kustomize build && rm all.yaml
This takes the contents of all.yaml and kustomizes it using the kustomization.yaml file, with the resources and patches I have previously described. Finally, it deletes all.yaml.
When you run kustomize build, it will look for a file called kustomization.yaml to apply the transformations. As the kustomization.yaml file is in the same directory as hook.sh, only the kustomize build command is needed; no further directive is required.
5. Deploy the Helm Chart with Kustomize post-renderer
OK, to bring it all together and upgrade the XNAT AIS Helm chart, with your values file as values.yaml and in the namespace xnat, run this command:
helm template xnat ais/xnat > all.yaml && \
helm upgrade xnat ais/xnat -i -f values.yaml -nxnat --post-renderer=./hook.sh
In this case, you need to make sure that the following files are in the same directory:
values.yaml
hook.sh
kustomization.yaml
ingress-patch.yaml
service-patch.yaml
pv.yaml
Further Reading
There are a lot of configuration options for Kustomize and this just touched on the basics.
Kustomize is also really useful for creating dev, staging and production implementations using the same chart. See these articles:
- https://austindewey.com/2020/07/27/patch-any-helm-chart-template-using-a-kustomize-post-renderer/
- https://learnk8s.io/templating-yaml-with-code#using-templates-with-search-and-replace
Nice Tutorial:
5 - Linode setup
List of steps to be followed to deploy XNAT in Linode LKE using Helm charts
1. LKE Cluster Setup
Set up the Linode LKE cluster using the link https://www.linode.com/docs/guides/how-to-deploy-an-lke-cluster-using-terraform/
2. Preparing for Tweaks pertaining to Linode
As we are tweaking XNAT Values related to PV access modes, let us check out the charts repo rather than using the AIS helm chart repository.
git clone https://github.com/Australian-Imaging-Service/charts.git
3. Actual Tweaks
Replace the access mode of all volumes from ReadWriteMany to ReadWriteOnce in charts/releases/xnat/charts/xnat-web, as sketched below. This is because Linode storage only supports ReadWriteOnce at this point in time.
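A sketch of what the edited volumes section of charts/releases/xnat/charts/xnat-web/values.yaml might then look like (field layout borrowed from the Azure example earlier in this document; sizes are examples only):
volumes:
  archive:
    accessMode: ReadWriteOnce
    mountPath: /data/xnat/archive
    size: 10Gi
  prearchive:
    accessMode: ReadWriteOnce
    mountPath: /data/xnat/prearchive
    size: 10Gi
  build:
    accessMode: ReadWriteOnce
    mountPath: /data/xnat/build
    size: 10Gi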
4. Dependency Update
Update the dependencies by switching to charts/releases/xnat and executing the following:
helm dependency update
5. XNAT Initial Installation
Go to charts/releases and install XNAT using Helm:
kubectl create namespace xnat
helm install xnat-deployment xnat --values YOUR-VALUES-FILE --namespace=xnat
The XNAT and Postgres services should be up and running fine. The Linode storage class linode-block-storage-retain should have automatically come into place, and PVs will be auto-created to be consumed by the PVCs we defined. (A minimal example of YOUR-VALUES-FILE is sketched below.)
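For reference, a minimal YOUR-VALUES-FILE can be as small as the following sketch, borrowed from the Quick Start guide later in this document (use your own password):
---
global:
  postgresql:
    postgresqlPassword: "xnat"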
6. Ingress Controller/Load balancer Installation
Install the Ingress Controller and provision a load balancer (a NodeBalancer in Linode) by executing these commands:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
You may see an output like below
>NAME: ingress-nginx
LAST DEPLOYED: Mon Aug 2 11:51:32 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
7. Domain Mapping
Get the external IP address of the load balancer by running the command below and assign it to any domain or subdomain. cloud.neura.edu.au is the subdomain to which the load balancer IP is assigned in my case; please replace it with your domain in this and all upcoming steps.
kubectl --namespace default get services -o wide -w ingress-nginx-controller
8. HTTP Traffic Routing via Ingress
It is time to create an Ingress object that directs traffic, based on the host/domain, to the already available XNAT service. Get the XNAT service name by issuing the command below and choose the service whose TYPE is ClusterIP:
kubectl get svc -nxnat -l "app.kubernetes.io/name=xnat-web"
Example: xnat-deployment-xnat-web
Using the above service name, write an ingress object to route the external traffic based on the domain name.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xnat-ingress
  namespace: xnat
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: cloud.neura.edu.au
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: xnat-deployment-xnat-web
                port:
                  number: 80
9. Delete the HTTP Ingress object
After the creation of this Ingress object, make sure cloud.neura.edu.au is routed to the XNAT application over HTTP successfully. After checking, let us delete the Ingress object, because we will be creating another one with TLS to use HTTPS.
kubectl delete ingress xnat-ingress -nxnat
10. Install cert-manager for Secure Connection HTTPS
Install cert-manager's CRDs.
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.crds.yaml
Create a cert-manager namespace.
kubectl create namespace cert-manager
Add the Helm repository which contains the cert-manager Helm chart.
helm repo add jetstack https://charts.jetstack.io
Update your Helm repositories.
helm repo update
Install the cert-manager Helm chart.
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.3.1
Verify that the corresponding cert-manager pods are now running.
kubectl get pods --namespace cert-manager
You should see a similar output:
>NAME READY STATUS RESTARTS AGE
cert-manager-579d48dff8-84nw9 1/1 Running 3 1m
cert-manager-cainjector-789955d9b7-jfskr 1/1 Running 3 1m
cert-manager-webhook-64869c4997-hnx6n 1/1 Running 0 1m
11. Creation of ClusterIssuer to Issue certificates
Create a manifest file named acme-issuer-prod.yaml that will be used to create a ClusterIssuer resource on your cluster. Ensure you replace user@example.com with your own email address.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: xnat
spec:
  acme:
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-secret-prod
    solvers:
      - http01:
          ingress:
            class: nginx
12. HTTPS Routing with Ingress object leveraging ClusterIssuer
Provision a new Ingress object that uses the ClusterIssuer to generate the certificate, and then uses it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xnat-ingress-https
  namespace: xnat
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - cloud.neura.edu.au
      secretName: xnat-tls
  rules:
    - host: cloud.neura.edu.au
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: xnat-deployment-xnat-web
                port:
                  number: 80
After the creation of the above Ingress, https://cloud.neura.edu.au/ should bring up the XNAT application in the web browser.
Reference Links
- LKE set up using Cloud Manager: https://www.linode.com/docs/guides/deploy-and-manage-a-cluster-with-linode-kubernetes-engine-a-tutorial/
- LKE set up using Terraform: https://www.linode.com/docs/guides/how-to-deploy-an-lke-cluster-using-terraform/
- Linode Storage Class: https://www.linode.com/docs/guides/deploy-volumes-with-the-linode-block-storage-csi-driver/
- Ingress Controller & Loadbalancer: https://www.linode.com/docs/guides/how-to-deploy-nginx-ingress-on-linode-kubernetes-engine/
- HTTP to HTTPS using cert-manager: https://www.linode.com/docs/guides/how-to-configure-load-balancing-with-tls-encryption-on-a-kubernetes-cluster
6 - Deployments of AIS released services
The /docs/Deployment folder is a dump directory for any documentation related to deployment of the AIS released services. This includes, but is not limited to, deployment examples:
- from different AIS sites
- utilising alternate Cloud services or on-prem deployments
- configuration snippets
Jekyll is used to render these documents, and any Markdown files with the appropriate FrontMatter tags will appear in the Deployment drop-down menu item.
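For example, the front matter of a new deployment document might look like the following sketch; the exact keys depend on the site configuration, so treat these field names as placeholders:
---
title: "Example: Site X deployment"
weight: 10
---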
7 - XNAT Quick Start Guide
This quick start guide will follow a progression starting from the most basic single instance XNAT deployment up to a full XNAT service.
Please be aware that this is a guide and not considered a production ready service.
Prerequisites
- a Kubernetes service. You can use Microk8s on your workstation if you do not have access to a cloud service.
- Kubectl client installed and configured to access your Kubernetes service
- Helm client installed
What settings can be modified and where?
helm show values ais/xnat
Just XNAT
Create minimal helm values file ~/values.yaml
---
global:
  postgresql:
    postgresqlPassword: "xnat"
# Setup AIS Helm charts
helm repo add ais https://australian-imaging-service.github.io/charts
helm repo update
# Deploy minimal XNAT
# This command is also used to action changes to the `values.yaml` file
helm upgrade xnat ais/xnat --install --values ~/values.yaml --namespace xnat-demo --create-namespace
# From another terminal you can run the following command to watch deployment of resources
watch kubectl -nxnat-demo get all,pv,pvc
# From another terminal run the following command and
# access XNAT web UI from a browser with address `http://localhost:8080`
kubectl -nxnat-demo port-forward service/xnat-xnat-web 8080:80
Things to watch out for.
- This deployment will utilise the default storage class configured for your Kubernetes service. If there is no storage class set as default this deployment will not have any persistent volume(s) provisioned and will not complete. Out of scope for this document is how to manually create a Persistent Volume and bind to a Persistent Volume Claim.
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
microk8s-hostpath (default) microk8s.io/hostpath Delete Immediate false 145d
You can see that Microk8s has a default storage class. However, if this was not the case, or another storage class was to be used, the following would need to be added to your values.yaml file.
---
global:
  postgresql:
    postgresqlPassword: "xnat"
  storageClass: "microk8s-hostpath"
You should be seeing something similar to the following
$ kubectl -nxnat-demo get all,pvc
NAME READY STATUS RESTARTS AGE
pod/xnat-postgresql-0 1/1 Running 30 27d
pod/xnat-xnat-web-0 1/1 Running 30 27d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/xnat-xnat-web-headless ClusterIP None <none> 80/TCP 27d
service/xnat-postgresql-headless ClusterIP None <none> 5432/TCP 27d
service/xnat-postgresql ClusterIP 10.152.183.17 <none> 5432/TCP 27d
service/xnat-xnat-web ClusterIP 10.152.183.193 <none> 80/TCP 27d
service/xnat-xnat-web-dicom-scp NodePort 10.152.183.187 <none> 8104:31002/TCP 27d
NAME READY AGE
statefulset.apps/xnat-postgresql 1/1 27d
statefulset.apps/xnat-xnat-web 1/1 27d
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/xnat-xnat-web-archive Bound pvc-81a7308c-fb64-4acd-9a04-f54dbc6e1e0b 1Ti RWX microk8s-hostpath 27d
persistentvolumeclaim/xnat-xnat-web-prearchive Bound pvc-357f45aa-79af-4958-a3fe-ec3714e6db13 1Ti RWX microk8s-hostpath 27d
persistentvolumeclaim/data-xnat-postgresql-0 Bound pvc-45d917d7-8660-4183-92cb-0e07c59d9fa7 8Gi RWO microk8s-hostpath 27d
persistentvolumeclaim/cache-xnat-xnat-web-0 Bound pvc-f868215d-0962-4e99-95f5-0cf09440525f 10Gi RWO microk8s-hostpath 27d