#+TITLE: Connecting clouds the easy way, introducing Skupper
#+AUTHOR: James Blair
#+DATE: <2023-02-10 Fri 17:00>

Exciting open source project [[https://skupper.io/][Skupper]] opens up new opportunities for hybrid cloud and application migration, solving all manner of tricky multi-cluster and traditional infrastructure integration challenges.

In this session we will explore Skupper together, with live demos focused on overcoming the business challenges many of us encounter along our cloud native journeys.

[[./images/skupper-overview.png]]

* Demo one - progressive migration

For our first demo we will highlight the possibility of progressive migration, using Skupper's virtual application network to join two Kubernetes clusters together. This lets us migrate some application components to a new cluster while the remaining components continue to run in the old cluster.

** Install skupper cli

The ~skupper~ command-line tool is the primary entry point for installing and configuring Skupper. You only need to install the CLI once for each development environment.

We can use the provided install script to install it:

#+NAME: Install skupper client and check version
#+begin_src tmate :socket /tmp/james.tmate.tmate
curl https://skupper.io/install.sh | sh && skupper version
#+end_src

** Deploy demo workload on premises

Before we get into deploying Skupper, let's get familiar with our demo workload: a traditional three-tier containerised application for a medical clinic, consisting of a PostgreSQL database, a Java backend service, and a web frontend.

#+NAME: Deploy demo workload on premises
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Set kubeconfig
export KUBECONFIG=$HOME/.kube/config

# Ensure namespace exists & set context
kubectl create namespace demo-onprem --dry-run=client -o yaml | kubectl apply -f -
kubectl config set-context --current --namespace demo-onprem

# Create deployments and services
kubectl create -f 1-progressive-migration/database.yaml
kubectl rollout status deployment/database

kubectl create -f 1-progressive-migration/backend.yaml
kubectl rollout status deployment/payment-processor

kubectl create -f 1-progressive-migration/frontend.yaml
kubectl rollout status deployment/frontend

# Start port forward
kubectl port-forward --pod-running-timeout=10s deployment/frontend 8080 &

# Launch application in browser
flatpak run org.chromium.Chromium --new-window "http://localhost:8080"
#+end_src

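If the rollout commands above succeed but you want one final sanity check, a quick loop over the three deployments confirms each tier reports ready replicas. This is a sketch: the deployment names match the manifests used above, and ~${ready:-0}~ simply prints ~0~ if the field is empty.

#+NAME: Sanity check demo workload
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Confirm each tier of the demo application reports at least one ready replica
for deploy in database payment-processor frontend; do
  ready=$(kubectl get deployment "$deploy" -o jsonpath='{.status.readyReplicas}')
  echo "${deploy}: ${ready:-0} ready"
done
#+end_src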
** Initialise skupper on premises

With the Skupper client installed and a workload running, let's initialise Skupper in the Kubernetes cluster running on our local machine. This will be our "private" / "on premises" cluster for the purposes of the demo.

#+NAME: Initialise skupper on local cluster
#+begin_src tmate :socket /tmp/james.tmate.tmate
skupper init && skupper status
#+end_src

With Skupper initialised, let's take a look at the included web console:

#+NAME: Open skupper web interface
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Retrieve skupper credentials
export password=$(kubectl get secret skupper-console-users -o json | jq -r '.data.admin' | base64 --decode)

# Retrieve console url
export console=$(kubectl get service skupper --output jsonpath="{.status.loadBalancer.ingress[0].ip}")

# Open skupper console
flatpak run org.chromium.Chromium --new-window "https://admin:${password}@${console}:8080"
#+end_src

** Initialise skupper in public cluster

We've been tasked with migrating this application to the public cloud. Rather than doing a big-bang migration, let's use Skupper to perform a progressive migration. Our first step is to set up Skupper in our public cloud cluster.

#+NAME: Initialise skupper on public cluster
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Ensure namespace exists
kubectl --kubeconfig=$HOME/.kube/rosa create namespace demo-public --dry-run=client -o yaml | kubectl --kubeconfig=$HOME/.kube/rosa apply -f -

# Initialise skupper
skupper --kubeconfig=$HOME/.kube/rosa --namespace demo-public init
#+end_src

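Before moving on, it's worth confirming the public site came up cleanly. The same ~skupper status~ command works against the remote cluster (assuming the rosa kubeconfig path used above):

#+NAME: Check skupper status on public cluster
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Verify the skupper site in the public cluster is ready
skupper --kubeconfig=$HOME/.kube/rosa --namespace demo-public status
#+end_src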
** Link public and private clusters

Creating a link requires the use of two skupper commands in conjunction: ~skupper token create~ and ~skupper link create~.

The ~skupper token create~ command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote namespace, the ~skupper link create~ command uses the token to create a link to the namespace that generated it.

First, use ~skupper token create~ in one namespace to generate the token. Then, use ~skupper link create~ in the other to create the link.

#+NAME: Establish link between clusters
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Create the token on public
skupper --kubeconfig=$HOME/.kube/rosa --namespace demo-public token create 1-progressive-migration/secret.token

# Initiate the link from private
skupper link create --name "van" 1-progressive-migration/secret.token
#+end_src

Now that we have linked our clusters, let's review the skupper interface to confirm the new link is present.

#+NAME: Review skupper console
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Open skupper console
flatpak run org.chromium.Chromium --new-window "https://admin:${password}@${console}:8080"
#+end_src

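The link can also be verified without the console: ~skupper link status~, run from the private cluster where the link was created, reports whether each link is active.

#+NAME: Check link status from the command line
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Confirm the "van" link to the public cluster is active
skupper link status
#+end_src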
** Expose backend service to public cluster

With a virtual application network in place, let's use it to expose our backend services to our public cluster.

#+NAME: Expose payment-processor service
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Show list of services on public cluster
kubectl get svc --kubeconfig $HOME/.kube/rosa --namespace demo-public

# Expose the services to the skupper network
skupper expose deployment/payment-processor --port 8080
skupper expose deployment/database --port 5432

# Show list of services after expose
kubectl get svc --kubeconfig $HOME/.kube/rosa --namespace demo-public

# Describe the new service
kubectl describe svc --kubeconfig $HOME/.kube/rosa --namespace demo-public payment-processor
#+end_src

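To prove the exposed service is genuinely reachable from the public cluster, we can run a throwaway curl pod there and hit the ~payment-processor~ service directly. This is a sketch: the ~curl-test~ pod name is arbitrary, and the assumption that the backend answers on its root path is mine, so substitute a real endpoint if the backend expects one.

#+NAME: Verify cross-cluster connectivity
#+begin_src tmate :socket /tmp/james.tmate.tmate
# From the public cluster, hit the payment-processor service skupper projected across the link
kubectl --kubeconfig $HOME/.kube/rosa --namespace demo-public run curl-test \
  --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://payment-processor:8080/
#+end_src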
** Migrate frontend to public cluster

Our backend services are now available in our public cluster thanks to our Skupper virtual application network, so let's proceed with the cloud migration of our frontend.

#+NAME: Migrate frontend to the public cluster
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Deploy a fresh set of frontend replicas on public cluster
kubectl --kubeconfig $HOME/.kube/rosa --namespace demo-public create -f 1-progressive-migration/frontend.yaml
kubectl --kubeconfig $HOME/.kube/rosa --namespace demo-public rollout status deployment/frontend

# Tear down the old frontend on premises
kubectl delete -f 1-progressive-migration/frontend.yaml
#+end_src
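With the old frontend gone, the earlier port-forward against the on-premises cluster stops working. To keep demoing the application we can port-forward against the public cluster instead (a sketch, reusing the rosa kubeconfig and port 8080 from earlier):

#+NAME: Reach the migrated frontend
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Forward the migrated frontend from the public cluster to localhost
kubectl --kubeconfig $HOME/.kube/rosa --namespace demo-public \
  port-forward --pod-running-timeout=10s deployment/frontend 8080 &

# Reload the application in the browser
flatpak run org.chromium.Chromium --new-window "http://localhost:8080"
#+end_src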