Enter the [**Helm**](https://www.redhat.com/en/topics/devops/what-is-helm) project.

In simple terms, a **Helm chart** is a directory containing a collection of YAML template files, zipped into an archive. The `helm` command line utility adds a number of features on top of that: it is well suited to customising and overriding specific values in our application templates when we deploy them onto our cluster, as well as to easily deploying, upgrading or rolling back our application.

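Unpacked, a chart is just a conventional directory layout. A minimal sketch (the chart name `mychart` and the template file names are illustrative):

```
mychart/
├── Chart.yaml        # chart metadata: name, version, description
├── values.yaml       # default configuration values, overridable at deploy time
└── templates/        # YAML manifests templated with the values
    ├── deployment.yaml
    └── service.yaml
```
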
## 4.1 - Deploying a helm chart via the web console

It is common for organisations that produce and ship software to distribute their applications as a **Helm chart**.

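For reference, the equivalent deployment from the `helm` CLI looks roughly like the following (the repository URL and chart name are illustrative, not necessarily the ones used in this exercise):

```bash
# Add the chart repository, then install a release named "gitea"
helm repo add gitea-charts https://dl.gitea.com/charts/
helm install gitea gitea-charts/gitea
```
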
```yaml
tlsRoute: true
```

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Gitea application deployment via helm chart* |

</Zoom>

## 4.2 - Examine deployed application

Returning to the **Topology** view of the **Developer** perspective you will now see the Gitea application being deployed in your `userX` project (this can take a few minutes to complete). Notice how the application is made up of two separate pods, the `gitea-db` database and the `gitea` frontend web server.

Once your gitea pods are both running, open the **Route** for the `gitea` web frontend.

Next, click on the overall gitea **Helm release** (the shaded box surrounding our two Gitea pods) to see the full list of resources deployed by this helm chart. In addition to the two running pods this includes the following:

- 1 **ConfigMap**
- 1 **ImageStream**
- 2 **PersistentVolumeClaims**
- 1 **Route**
- 1 **Secret**
- 2 **Services**

> Note: Feel free to try out an `oc explain <resource>` command in your web terminal to learn more about each of the resource types mentioned above, for example `oc explain service`.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Gitea helm release resources created* |

</Zoom>

## 4.3 - Upgrade helm chart

If we want to make a change to the configuration of our Gitea application we can perform a `helm upgrade`. OpenShift has built-in support for performing helm upgrades through the web console.

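Under the hood the web console maps onto the `helm upgrade` command. A hedged sketch of the CLI equivalent (the chart reference and the overridden value are illustrative):

```bash
# Upgrade the "gitea" release in place, keeping existing values
# and overriding a single one
helm upgrade gitea gitea-charts/gitea --reuse-values --set tlsRoute=false
```
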
We will be returned to the **Helm releases** view. Notice how the release status indicates the upgrade failed.

From here it is trivial to perform a **Rollback** to remove our misconfigured update. We'll do that in the next step.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Attempting a gitea helm upgrade* |

</Zoom>

## 4.4 - Rollback to a previous helm release

Our previous helm upgrade for the Gitea application didn't succeed due to the misconfiguration we supplied. **Helm** supports rolling back to a previous release through the `helm rollback` command line interface, and OpenShift makes this even easier with native support for interactive rollbacks in the web console, so let's give that a go now.

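On the command line the same rollback is a two step affair: inspect the revision history, then roll back to a known good revision:

```bash
helm history gitea     # list all revisions of the release and their statuses
helm rollback gitea 1  # roll back to revision 1
```
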
Click the three dot menu on the right hand side of that helm release and click **Rollback**.

Select the radio button for revision `1`, which should be showing a status of `Deployed`, then click **Rollback**.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Rolling back to a previous gitea helm release* |

</Zoom>

## 4.5 - Deleting an application deployed via helm

Along with upgrades and rollbacks, **Helm** also makes deleting deployed applications (along with all of their associated resources) straightforward.

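The CLI equivalent is a single command, which removes the release and every resource it created:

```bash
helm uninstall gitea
```
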
Click the three dot menu on the right hand side of that helm release and click **Delete Helm Release**.

Enter the `gitea` confirmation at the prompt and click **Delete**. If you now return to the **Topology** view you will see the gitea application deleting.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Deleting the gitea application helm release* |

</Zoom>

## 4.6 - Bonus objective: Artifact Hub

If you have time, take a while to explore https://artifacthub.io/packages/search to see the kinds of applications available in Artifact Hub, the most popular public Helm chart repository.

Well done, you've finished exercise 4! 🎉

An alternative approach for deploying and managing the lifecycle of more complex applications is to use an **Operator**.

The goal of an **Operator** is to put operational knowledge into software. Previously this knowledge only resided in the minds of administrators, in various combinations of shell scripts, or in automation software like Ansible. It was outside of your Kubernetes cluster and hard to integrate. **Operators** change that.

**Operators** are the missing piece of the puzzle in Kubernetes to implement and automate common Day-1 (installation, configuration, etc.) and Day-2 (re-configuration, update, backup, failover, restore, etc.) activities in a piece of software running inside your Kubernetes cluster, by integrating natively with Kubernetes concepts and APIs.

With Operators you can stop treating an application as a collection of primitives like **Pods**, **Deployments**, **Services** or **ConfigMaps**, and instead treat it as a singular, simplified custom object that only exposes the configuration values that make sense for that application.

## 5.1 - Deploying an operator

Deploying an application via an **Operator** is generally a two step process. The first step is to deploy the **Operator** itself.

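Installing an operator through the Operator Lifecycle Manager generally means creating an `OperatorGroup` and a `Subscription` resource. A rough sketch of what that YAML looks like (the channel, catalog source and `userX` namespace are assumptions; use the values supplied in the exercise):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: grafana-operator-group
  namespace: userX            # your project name
spec:
  targetNamespaces:
    - userX
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: userX
spec:
  channel: v5                 # operator update channel (assumed)
  name: grafana-operator
  source: community-operators # catalog source (assumed)
  sourceNamespace: openshift-marketplace
```
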
Paste the above snippet of YAML into the editor and replace the instance of `userX` with your project name.

Click **Create**. In a minute or so you should see the Grafana operator installed and running in your project.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Deploying grafana operator via static yaml* |

</Zoom>

## 5.2 - Deploying an operator driven application

With our Grafana operator now running, it will be listening for the creation of a `Grafana` custom resource. When one is detected, the operator will deploy the Grafana application according to the specification we supplied.

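A minimal `Grafana` custom resource might look like the following sketch (the API version and field values are illustrative and depend on the operator version installed; use the spec supplied in the exercise):

```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
spec:
  config:
    security:
      admin_user: admin       # illustrative credentials only
      admin_password: secret
```
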
<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Deploying grafana application via the grafana operator* |

</Zoom>

## 5.3 - Logging into the application

While we are in the **Administrator** perspective of the web console let's take a look at a couple of sections to confirm our newly deployed Grafana application is running as expected.

For our first step click on the **Workloads** category on the left hand side menu and then click **Pods**.

We should see a `grafana-deployment-<id>` pod with a **Status** of `Running`.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Confirming the grafana pod is running* |

</Zoom>

Now that we know the Grafana application **Pod** is running, let's open the application and confirm we can log in.

Click the **Networking** category on the left hand side menu and then click **Routes**.

Once the new tab opens we should be able to log in to Grafana using the credentials supplied earlier.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Confirming the grafana route is working* |

</Zoom>

## 5.4 - Bonus objective: Grafana dashboards

If you have time, take a while to explore the dashboard catalogue at https://grafana.com/grafana/dashboards and learn how Grafana can be used to visualise just about anything.

Well done, you've finished exercise 5! 🎉

However, for an interesting scenario, let's explore what we could do with just our application's source code.

This is where the concept of **Source to Image** or "s2i" comes in. OpenShift has built-in support for building container images using source code from an existing repository. This is accomplished using the [source-to-image](https://github.com/openshift/source-to-image) project.

OpenShift runs the S2I process inside a special **Pod**, called a **Build Pod**, so builds are subject to quotas, limits, resource scheduling, and other aspects of OpenShift. A full discussion of S2I is beyond the scope of this class, but you can find more information about it in the [OpenShift S2I documentation](https://docs.openshift.com/container-platform/4.16/openshift_images/create-images.html).

## 6.1 - Starting a source to image build

Scroll down and under the **General** header click the **Application** drop down menu.

Scroll down reviewing the other options, then click **Create**.

<Zoom>

| |
|:-------------------------------------------------------------------:|
| *Creating a source to image build in OpenShift* |

</Zoom>

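For reference, the same kind of S2I build can be started from the CLI with `oc new-app`, pairing a builder image with a source repository (the repository URL below is purely illustrative):

```bash
# "python" selects the S2I builder image; the Git URL supplies the source
oc new-app python~https://github.com/example/nationalparks-py.git --name nationalparks
```
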
## 6.2 - Monitoring the build

To see the build logs, in **Topology** view of the **Developer** perspective, click the nationalparks python icon, then click on **View Logs** in the **Builds** section of the **Resources** tab.

To conclude, when issuing the `oc get pods` command you will see that the build **Pod** has finished (exited) and that an application **Pod** is in a ready and running state.

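The output will look something like this (pod names, counts and ages are illustrative and will differ in your project):

```
$ oc get pods
NAME                             READY   STATUS      RESTARTS   AGE
nationalparks-1-build            0/1     Completed   0          2m14s
nationalparks-7c9fd6b4c8-x2lqv   1/1     Running     0          48s
```
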
## 6.3 - Bonus objective: Podman

If you have time, take a while to understand how [Podman](https://developers.redhat.com/articles/2022/05/02/podman-basics-resources-beginners-and-experts) can be used to build container images on your device outside of an OpenShift cluster.

Well done, you've finished exercise 6! 🎉