Tidy up before demo.

Before we get into deploying skupper, let's get familiar with our demo workload…

#+NAME: Deploy demo workload on premises
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Set kubeconfig
clear && export KUBECONFIG=$HOME/.kube/config

# Ensure namespace exists & set context
kubectl create namespace demo-onprem --dry-run=client --output yaml | kubectl apply --filename -
kubectl config set-context --current --namespace demo-onprem

# Create deployments and services
kubectl create --filename 1-progressive-migration/database.yaml
kubectl rollout status deployment/database

kubectl create --filename 1-progressive-migration/backend.yaml
kubectl rollout status deployment/payment-processor

kubectl create --filename 1-progressive-migration/frontend.yaml
kubectl rollout status deployment/frontend

# Start port forward
kubectl port-forward deployment/frontend 9090:8080 &

# Launch application in browser
firefox --new-window "http://localhost:9090"
#+end_src
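
The port forward is backgrounded with ~&~, so the first page load can race the tunnel coming up. A small helper can gate the browser launch until the frontend answers (a sketch, assuming ~curl~ is installed; the ~wait_for_url~ name is invented for illustration):

```shell
# wait_for_url: poll a URL with curl until it responds, or give up
# after a fixed number of attempts (hypothetical helper, not part of the demo).
wait_for_url() {
  url=$1
  retries=${2:-30}
  i=0
  until curl --silent --fail --output /dev/null "$url"; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1
    sleep 1
  done
  return 0
}

# Possible usage in the demo:
#   kubectl port-forward deployment/frontend 9090:8080 &
#   wait_for_url "http://localhost:9090" && firefox --new-window "http://localhost:9090"
```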

With skupper initialised, let's take a look at the included web console:

#+NAME: Open skupper web interface
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Retrieve skupper credentials
export password=$(kubectl get secret skupper-console-users --output jsonpath="{.data.admin}" | base64 --decode)

# Retrieve console url
export console=$(kubectl get service skupper --output jsonpath="{.status.loadBalancer.ingress[0].ip}")

# Open skupper console
firefox --new-window "https://admin:${password}@${console}:8080"
#+end_src
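
Kubernetes stores secret values base64-encoded, which is why the ~admin~ field is piped through ~base64 --decode~. A minimal illustration of that round trip (the ~czNjcjN0~ payload is a made-up example, not the real console password):

```shell
# Secret data fields hold base64-encoded strings; decode to recover the value.
encoded="czNjcjN0"                       # example payload, not a real credential
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"                          # prints the plain-text value: s3cr3t
```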
** Initialise skupper in public cluster

So we've been tasked with migrating this application to the public cloud. Rather than doing a big bang migration, let's use skupper to perform a progressive migration. Our first step is to set up skupper in our public cloud cluster, which is a managed ROSA cluster running in ~ap-southeast-1~ (Singapore).

#+NAME: Initialise skupper in public cluster
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Ensure namespace exists
clear && kubectl --kubeconfig=$HOME/.kube/rosa create namespace demo-public --dry-run=client --output yaml | kubectl --kubeconfig=$HOME/.kube/rosa apply --filename -

# Initialise skupper
skupper --kubeconfig=$HOME/.kube/rosa --namespace demo-public init
#+end_src

Let's quickly review our public cluster deployment using the OpenShift console. Reviewing the ~demo-public~ project metrics, we can see how lightweight a skupper installation is.

#+NAME: Review skupper status in public cluster
#+begin_src tmate :socket /tmp/james.tmate.tmate
firefox --new-window "https://console-openshift-console.apps.rosa-mgmwm.c4s2.p1.openshiftapps.com/k8s/cluster/projects/demo-public"
#+end_src

** Link public and private clusters

Creating a link requires using two skupper commands in conjunction: ~skupper token create~ and ~skupper link create~.

First, use ~skupper token create~ in one namespace to generate the token. Then, use ~skupper link create~ in the other namespace to consume the token and establish the link.

#+NAME: Establish link between clusters
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Create the token on public
clear && skupper --kubeconfig=$HOME/.kube/rosa --namespace demo-public token create 1-progressive-migration/secret.token

# Initiate the link from private
skupper link create --name "van" 1-progressive-migration/secret.token
#+end_src

Now that we have linked our clusters, let's review the skupper interface to confirm the link is active.

#+NAME: Review skupper console
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Open skupper console
firefox --new-window "https://admin:${password}@${console}:8080"
#+end_src

With a virtual application network in place, let's use it to expose our backend services.

#+NAME: Expose payment-processor service
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Show list of services on public cluster
clear && kubectl get svc --kubeconfig $HOME/.kube/rosa --namespace demo-public

# Expose the services to the skupper network
skupper expose deployment/payment-processor --port 8080
skupper expose deployment/database --port 5432

# Show list of services after expose
kubectl get svc --kubeconfig $HOME/.kube/rosa --namespace demo-public

# Describe the new service
kubectl describe svc --kubeconfig $HOME/.kube/rosa --namespace demo-public payment-processor
#+end_src

Our backend service is now available in our public cluster thanks to our skupper virtual application network, so let's proceed with the cloud migration of our frontend.

We will scale up a fresh deployment on our public cluster, scale down on our on-premises cluster, then verify that our application frontend can still talk to our backend services and works as expected.

#+NAME: Migrate frontend to the public cluster
#+begin_src tmate :socket /tmp/james.tmate.tmate
# Deploy a fresh set of frontend replicas on public cluster
clear
kubectl --kubeconfig $HOME/.kube/rosa --namespace demo-public create --filename 1-progressive-migration/frontend.yaml
kubectl --kubeconfig $HOME/.kube/rosa --namespace demo-public rollout status deployment/frontend

# Tear down the old frontend on premises
kubectl delete --filename 1-progressive-migration/frontend.yaml --ignore-not-found=true
#+end_src
#+NAME: Verify application functionality
#+begin_src tmate :socket /tmp/james.tmate.tmate
firefox --new-window \
  --new-tab --url "https://admin:${password}@${console}:8080" \
  --new-tab --url "http://localhost:9090"
#+end_src

In theory our application continues to run as normal. We just performed a progressive migration! 🎉

** Teardown demo

Finished with the demo? Because skupper is so lightweight and only present in our application namespaces, it will automatically be torn down when the namespaces are deleted; otherwise you can run ~skupper delete~ to remove an installation from a namespace.

#+NAME: Teardown demo namespaces
#+begin_src tmate :socket /tmp/james.tmate.tmate
kubectl --kubeconfig $HOME/.kube/config delete namespace demo-onprem
kubectl --kubeconfig $HOME/.kube/rosa delete namespace demo-public
#+end_src
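
The port forwards started earlier with ~&~ keep running in the demo shell after the namespaces are gone. A sketch for stopping this shell's background jobs (the ~stop_background_jobs~ name is invented for illustration):

```shell
# Stop every background job started by this shell
# (e.g. the earlier 'kubectl port-forward ... &').
stop_background_jobs() {
  pids=$(jobs -p)                  # PIDs of this shell's background jobs
  [ -n "$pids" ] && kill $pids 2>/dev/null
  wait 2>/dev/null                 # reap the killed processes
  return 0
}

# Possible usage after the demo:
#   stop_background_jobs
```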