---
title: Scaling and autohealing applications
exercise: 3
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','deployments','autoscaling']
draft: false
authors: ['default']
summary: "Let's scale our application up 📈"
---

We have our application deployed; now let's scale it up to make sure it will be resilient to failures.

While **Services** provide discovery and load balancing for **Pods**, the higher level **Deployment** resource specifies how many replicas (pods) of our application will be created, and is a simple way to configure scaling for the application.

> Note: To learn more about **Deployments** refer to this [documentation](https://docs.openshift.com/container-platform/4.14/applications/deployments/what-deployments-are.html).

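For context, here is a minimal sketch of what a complete **Deployment** manifest looks like. The label and image values below are illustrative placeholders rather than the exact values used by this workshop; the part we care about in this exercise is the `spec.replicas` field.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: parksmap
spec:
  replicas: 1                  # desired number of pod replicas
  selector:
    matchLabels:
      app: parksmap            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: parksmap
    spec:
      containers:
        - name: parksmap
          image: quay.io/example/parksmap:latest   # illustrative image reference
```
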
## 3.1 - Reviewing the parksmap deployment

Let's start by confirming how many `replicas` we currently specify for our ParksMap application. We'll also use this exercise step to take a look at how all resources within OpenShift can be viewed and managed as [YAML](https://www.redhat.com/en/topics/automation/what-is-yaml) formatted text files, which is extremely useful for more advanced automation and GitOps concepts.

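If you prefer a terminal, the same YAML can be retrieved with the `oc` CLI (assuming you are logged in to the cluster and have selected the project containing the deployment):

```bash
# Print the full deployment manifest as YAML
oc get deployment parksmap -o yaml

# Or extract just the replica count
oc get deployment parksmap -o jsonpath='{.spec.replicas}'
```
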
Start in the **Topology** view of the **Developer** perspective.

Click on your "Parksmap" application icon and click on the **D parksmap** deployment name at the top of the right-hand panel.

From the **Deployment details** view we can click on the **YAML** tab and scroll down to confirm that we currently specify only `1` replica for the ParksMap application.

```yaml
spec:
  replicas: 1
```

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *ParksMap application deployment replicas* |
</Zoom>

## 3.2 - Intentionally crashing the application

With only one pod replica, our ParksMap application is currently not tolerant to failures. OpenShift will automatically restart the single pod if it encounters a failure; however, while the application pod starts back up, our users will not be able to access the application.

Let's see that in practice by intentionally causing an error in our application.

Start in the **Topology** view of the **Developer** perspective and click your Parksmap application icon.

From the **Resources** tab of the information pane, open a second browser tab showing the ParksMap application **Route** that we explored in the previous exercise. The application should be running as normal.

Click on the pod name under the **Pods** header of the **Resources** tab and then click on the **Terminal** tab. This will open a terminal within our running ParksMap application container.

Inside the terminal run the following to intentionally crash the application:

```bash
# PID 1 is the container's main application process
kill 1
```

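You can trigger the same crash from a terminal with `oc rsh`, which opens a remote shell inside a pod of the deployment (a sketch, assuming you are logged in to the cluster and in the correct project):

```bash
# Open a remote shell in a pod belonging to the parksmap deployment
oc rsh deployment/parksmap

# Then, inside the remote shell, kill the main process
kill 1
```
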
The pod will automatically be restarted by OpenShift; however, if you refresh your second browser tab with the application **Route**, you should see that the application is momentarily unavailable.

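To watch the restart happen, you can follow the pod status from a terminal; the `-w` flag streams status changes as they occur (the `app=parksmap` label selector is an assumption about how the workshop labels its pods):

```bash
# Watch pod status changes as OpenShift restarts the crashed pod
oc get pods -l app=parksmap -w
```
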
<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Intentionally crashing the ParksMap application* |
</Zoom>

## 3.3 - Scaling up the application

As a best practice, wherever possible we should run multiple replicas of our pods so that if one pod is unavailable our application will continue to be available to users.

Let's scale up our application and confirm it is now fault tolerant.

In the **Topology** view of the **Developer** perspective click your Parksmap application icon.

In the **Details** tab of the information pane click the **^ Increase the pod count** arrow to increase our replicas to `2`. You will see the second pod starting up and becoming ready.

> Note: You can also scale the replicas of a deployment in an automated, event-driven fashion in response to factors like incoming traffic or resource consumption, or by using the `oc` CLI, for example: `oc scale --replicas=2 deployment/parksmap`.

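The automated approach the note mentions is typically implemented with a **HorizontalPodAutoscaler**. As a minimal sketch (the CPU threshold and replica bounds below are illustrative, not part of this workshop), it might look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: parksmap
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: parksmap             # the deployment to scale
  minReplicas: 2               # never drop below two replicas
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods when average CPU exceeds 80%
```
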
Once the new pod is ready, repeat the steps from task `3.2` to crash one of the pods. You should see that the application continues to serve traffic thanks to our OpenShift **Service** load balancing traffic to the second **Pod**.

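You can confirm that the **Service** now has two pods behind it by listing its endpoints (assuming the service is named `parksmap`, matching the deployment):

```bash
# Each address listed is a pod the service will load balance across
oc get endpoints parksmap
```
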
<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Scaling up the ParksMap application* |
</Zoom>