Start updating for virt hackathon.

This commit is contained in:
2024-04-05 09:10:56 +13:00
parent 969d5c4e84
commit 40d4135e72
9 changed files with 93 additions and 14 deletions


@ -1,191 +0,0 @@
---
title: Getting familiar with OpenShift
exercise: 1
date: '2023-12-04'
tags: ['openshift','containers','kubernetes']
draft: false
authors: ['default']
summary: "In this first exercise we'll get familiar with OpenShift."
---
Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. In this first exercise we'll get logged into our cluster and familiarise ourselves with the OpenShift web console and web terminal.
The OpenShift Container Platform web console is a feature-rich user interface with both an **Administrator** perspective and a **Developer** perspective accessible through any modern web browser. You can use the web console to visualize, browse, and manage your OpenShift cluster and the applications running on it.
In addition to the web console, OpenShift includes command line tools to provide users with a nice interface to work with applications deployed to the platform. The `oc` command line tool is available for Linux, macOS or Windows.
**Let's get started!**
## 1.1 - Login to lab environment
An OpenShift `4.14` cluster has already been provisioned for you to complete these exercises. Open your web browser and navigate to the workshop login page https://demo.redhat.com/workshop/enwmgc.
Once the page loads you can login with the details provided by your workshop facilitator.
<Zoom>
|![workshop](/workshops/static/images/workshop.png) |
|:-----------------------------------------------------------------------------:|
| *Workshop login page* |
</Zoom>
## 1.2 - Login to the cluster web console
Once you're logged into the lab environment we can open up the OpenShift web console and log in with the credentials provided.
When first logging in you will be prompted to take a tour of the **Developer** console view, let's do that now.
<Zoom>
| ![tour](/workshops/static/images/tour.gif) |
|:-----------------------------------------------------------------------------:|
| *Developer perspective web console tour* |
</Zoom>
## 1.3 - Understanding projects
Projects are a logical boundary to help you organize your applications. An OpenShift project allows a community of users (or a single user) to organize and manage their work in isolation from other projects.
Each project has its own resources, role-based access control (who can or cannot perform actions), and constraints (quotas, limits on resources, etc).
Projects act as a "wrapper" around all the application services you (or your teams) are using for your work.
In this lab environment, you already have access to a single project: `userX` (where X is the number of the user allocated to you in the previous step).
Let's click into our `Project` from the left hand panel of the **Developer** web console perspective. We should be able to see that our project has no `Deployments` and that no compute (CPU or memory) resources are currently being consumed.
<Zoom>
|![project](/workshops/static/images/project.png) |
|:-----------------------------------------------------------------------------:|
| *Developer perspective project view* |
</Zoom>
## 1.4 - Switching between perspectives
Different roles have different needs when it comes to viewing details within the OpenShift web console. At the top of the left navigation menu, you can toggle between the Administrator perspective and the Developer perspective.
Select **Administrator** to switch to the Administrator perspective.
Once the Administrator perspective loads, you should be in the "Home" view and see a wider array of menu sections in the left hand navigation panel.
Switch back to the **Developer** perspective. Once the Developer perspective loads, select the **Topology** view.
Right now, there are no applications or components to view in your `userX` project, but once you begin working on the lab, you'll be able to visualize and interact with the components in your application here.
<Zoom>
|![perspectives](/workshops/static/images/perspectives.gif) |
|:-----------------------------------------------------------------------------:|
| *Switching web console perspectives* |
</Zoom>
## 1.5 - Launching a web terminal
While web interfaces are comfortable and easy to use, sometimes we want to quickly run commands to get things done. That is where the `oc` command line utility comes in.
One handy feature of the OpenShift web console is that we can launch a browser-based web terminal which already has the `oc` command logged in and ready to use.
Let's launch a web terminal now by clicking the terminal button in the top right hand corner and then clicking **Start** with our `userX` project selected.
<Zoom>
|![web-terminal](/workshops/static/images/web-terminal.gif) |
|:-----------------------------------------------------------------------------:|
| *Launching your web terminal* |
</Zoom>
## 1.6 - Running oc commands
The [`oc` command line utility](https://docs.openshift.com/container-platform/4.14/cli_reference/openshift_cli/getting-started-cli.html#creating-a-new-app) is a superset of the upstream Kubernetes `kubectl` command line utility. This means it can do everything that `kubectl` can do, plus some additional OpenShift-specific commands.
Let's try a few commands now:
### Checking our current project
Most actions we take in OpenShift will be in relation to a particular project. We can check which project we are currently using by running the `oc project` command.
We should see output similar to below showing we are currently using our `userX` project:
```bash
bash-4.4 ~ $ oc project
Using project "user1" from context named "user1-context" on server "https://172.31.0.1:443".
```
### Getting help and explaining concepts
As with any command line utility, there can be complexity that quickly surfaces. Thankfully the `oc` command line utility has excellent built in help.
Let's take a look at that now.
To get an understanding of all the options available, try running `oc help`. You should see options similar to the below sample:
```text
bash-4.4 ~ $ oc help
OpenShift Client
This client helps you develop, build, deploy, and run your applications on any
OpenShift or Kubernetes cluster. It also includes the administrative
commands for managing a cluster under the 'adm' subcommand.
Basic Commands:
  login            Log in to a server
  new-project      Request a new project
  new-app          Create a new application
  status           Show an overview of the current project
  project          Switch to another project
  projects         Display existing projects
  explain          Get documentation for a resource

Build and Deploy Commands:
  rollout          Manage a Kubernetes deployment or OpenShift deployment config
  rollback         Revert part of an application back to a previous deployment
  new-build        Create a new build configuration
  start-build      Start a new build
  cancel-build     Cancel running, pending, or new builds
  import-image     Import images from a container image registry
  tag              Tag existing images into image streams
```
To get a more detailed explanation about a specific concept we can use the `oc explain` command.
Let's run `oc explain project` now to learn more about the concept of a project we introduced earlier:
```text
bash-4.4 ~ $ oc explain project
KIND: Project
VERSION: project.openshift.io/v1
DESCRIPTION:
     Projects are the unit of isolation and collaboration in OpenShift. A
     project has one or more members, a quota on the resources that the project
     may consume, and the security controls on the resources in the project.
     Within a project, members may have different roles - project administrators
     can set membership, editors can create and manage the resources, and
     viewers can see but not access running containers. In a normal cluster
     project administrators are not able to alter their quotas - that is
     restricted to cluster administrators.

     Listing or watching projects will return only projects the user has the
     reader role on.

     An OpenShift project is an alternative representation of a Kubernetes
     namespace. Projects are exposed as editable to end users while namespaces
     are not. Direct creation of a project is typically restricted to
     administrators, while end users should use the requestproject resource.
```
That's a quick introduction to the `oc` command line utility. Let's close our web terminal now so we can move on to the next exercise.
<Zoom>
|![close-terminal](/workshops/static/images/close-terminal.gif) |
|:-----------------------------------------------------------------------------:|
| *Closing your web terminal* |
</Zoom>
Well done, you've finished exercise 1! 🎉


@ -1,131 +0,0 @@
---
title: Deploying your first application
exercise: 2
date: '2023-12-05'
tags: ['openshift','containers','kubernetes','deployments','images']
draft: false
authors: ['default']
summary: "Time to deploy your first app!"
---
Now that we have had a tour of the OpenShift web console to get familiar, let's use the web console to deploy our first application.
Let's start by doing the simplest thing possible: getting a plain old Docker-formatted container image to run on OpenShift. With OpenShift this can be done directly from the web console.
Before we begin, if you would like a bit more background on what a container is or why they are important click the following link to learn more: https://www.redhat.com/en/topics/containers#overview
## 2.1 - Deploying the container image
In this exercise, we're going to deploy the **web** component of the ParksMap application, which uses OpenShift's service discovery mechanism to discover any accompanying backend services deployed and show their data on the map. Below is a visual overview of the complete ParksMap application.
<Zoom>
|![parksmap-architecture](/workshops/static/images/parksmap-architecture.png) |
|:-------------------------------------------------------------------:|
| *ParksMap application architecture* |
</Zoom>
Within the **Developer** perspective, click the **+Add** entry on the left hand menu.
Once on the **+Add** page, click **Container images** to open a dialog that will allow you to quickly deploy an image.
In the **Image name** field enter the following:
```text
quay.io/openshiftroadshow/parksmap:latest
```
Leave all other fields at their defaults (but take your time to scroll down and review each one to familiarise yourself! 🎓)
Click **Create** to deploy the application.
OpenShift will pull this container image if it does not exist already on the cluster and then deploy a container based on this image. You will be taken back to the **Topology** view in the **Developer** perspective which will show the new "Parksmap" application.
<Zoom>
|![first-app](/workshops/static/images/first-app.gif) |
|:-------------------------------------------------------------------:|
| *Deploying the container image* |
</Zoom>
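If you prefer the command line, a roughly equivalent deployment can be sketched with `oc new-app` from the web terminal. Note this is only a hedged sketch rather than part of the lab flow; the web console path above also creates a **Route** automatically when the checkbox is ticked, whereas on the CLI you would expose the service yourself:
```bash
# Deploy the same container image from the CLI (sketch only; the console flow above is the lab path)
oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap

# new-app does not create a Route automatically, so expose the service to get one
oc expose service/parksmap
```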
## 2.2 - Reviewing our deployed application
If you click on the **parksmap** entry in the **Topology** view, you will see some information about that deployed application.
The **Resources** tab may be displayed by default. If so, click on the **Details** tab. On that tab, you will see that there is a single **Pod** that was created by your actions.
<Zoom>
|![app-details](/workshops/static/images/app-details.gif) |
|:-------------------------------------------------------------------:|
| *Reviewing the deployed application details* |
</Zoom>
> Note: A pod is the smallest deployable unit in Kubernetes and is effectively a grouping of one or more individual containers. Any containers deployed within a pod are guaranteed to run on the same machine. It is very common for pods in Kubernetes to hold only a single container, although auxiliary services can sometimes be included as additional containers in a pod when we want them to run alongside our application container.
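To see the pod for yourself from the web terminal, a couple of standard `oc` commands will do. The exact pod name will include a randomly generated suffix in your project:
```bash
# List the pods in your current project
oc get pods

# Show detailed information about a specific pod (substitute a name from the list above)
oc describe pod <podname>
```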
## 2.3 - Accessing the application
Now that we have the ParksMap application deployed, how do we access it?
This is where OpenShift **Routes** and **Services** come in.
While **Services** provide internal abstraction and load balancing within an OpenShift cluster, sometimes clients outside of the OpenShift cluster need to access an application. The way that external clients are able to access applications running in OpenShift is through an OpenShift **Route**.
You may remember that when we deployed the ParksMap application, there was a checkbox ticked to automatically create a **Route**. Thanks to this, all we need to do to access the application is go to the **Resources** tab of the application details pane and click the URL shown under the **Routes** header.
<Zoom>
|![app-route](/workshops/static/images/app-route.gif) |
|:-------------------------------------------------------------------:|
| *Opening ParksMap application Route* |
</Zoom>
After clicking the link you should now see the ParksMap application frontend 🎉
> Note: If this is the first time opening this page, the browser will ask permission to get your position. This is needed by the frontend app to center the world map on your location; if you don't allow it, it will just use a default location.
<Zoom>
|![app-frontend](/workshops/static/images/app-frontend.png) |
|:-------------------------------------------------------------------:|
| *ParksMap application frontend* |
</Zoom>
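As an aside, the same **Route** URL can be looked up from the web terminal. This sketch assumes the route was created with the name `parksmap`, matching the application name used above:
```bash
# List the routes in your project
oc get routes

# Print just the hostname of the parksmap route
oc get route parksmap -o jsonpath='{.spec.host}'
```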
## 2.4 - Checking application logs
If we deploy an application and something isn't working the way we expect, reviewing the application logs can often be helpful. OpenShift includes built in support for reviewing application logs.
Let's try it now for our ParksMap frontend.
In the **Developer** perspective, open the **Topology** view.
Click your "Parksmap" application icon then click on the **Resources** tab.
From the **Resources** tab click **View logs**
<Zoom>
|![app-logs](/workshops/static/images/app-logs.gif) |
|:-------------------------------------------------------------------:|
| *Accessing the ParksMap application logs* |
</Zoom>
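The same logs are also available from the web terminal if that is more convenient. A minimal sketch, assuming the deployment is named `parksmap`:
```bash
# Stream the logs from the pods in the parksmap deployment (Ctrl+C to stop)
oc logs -f deployment/parksmap
```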
## 2.5 - Checking application resource usage
Another essential element of supporting applications on OpenShift is understanding what resources the application is consuming, for example CPU, memory, network bandwidth, and storage I/O.
OpenShift includes built in support for reviewing application resource usage. Let's take a look at that now.
In the **Developer** perspective, open the **Observe** view.
You should see the **Dashboard** tab. Set the time range to `Last 1 hour`, then scroll through the dashboard.
How much CPU and memory is your ParksMap application currently using?
<Zoom>
|![app-resources](/workshops/static/images/app-resources.gif) |
|:-------------------------------------------------------------------:|
| *Checking the ParksMap application resource usage* |
</Zoom>
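If you prefer the command line, a rough equivalent is available via `oc adm top`, assuming cluster metrics are enabled (as they are in this lab environment):
```bash
# Show current CPU and memory usage for the pods in your project
oc adm top pods
```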
Well done, you've finished exercise 2! 🎉


@ -1,122 +0,0 @@
---
title: Scaling and self-healing applications
exercise: 3
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','deployments','autoscaling']
draft: false
authors: ['default']
summary: "Let's scale our application up 📈"
---
We have our application deployed; now let's scale it up to make sure it is resilient to failures.
While **Services** provide discovery and load balancing for **Pods**, the higher level **Deployment** resource specifies how many replicas (pods) of our application will be created and is a simplistic way to configure scaling for the application.
> Note: To learn more about **Deployments** refer to this [documentation](https://docs.openshift.com/container-platform/4.14/applications/deployments/what-deployments-are.html).
## 3.1 - Reviewing the parksmap deployment
Let's start by confirming how many `replicas` we currently specify for our ParksMap application. We'll also use this exercise step to take a look at how all resources within OpenShift can be viewed and managed as [YAML](https://www.redhat.com/en/topics/automation/what-is-yaml) formatted text files which is extremely useful for more advanced automation and GitOps concepts.
Start in the **Topology** view of the **Developer** perspective.
Click on your "Parksmap" application icon and click on the **D parksmap** deployment name at the top of the right hand panel.
From the **Deployment details** view we can click on the **YAML** tab and scroll down to confirm that we only specify `1` replica for the ParksMap application currently.
```yaml
spec:
  replicas: 1
```
<Zoom>
|![parksmap-replicas](/workshops/static/images/app-replicas.gif) |
|:-------------------------------------------------------------------:|
| *ParksMap application deployment replicas* |
</Zoom>
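The same value can also be read quickly from the web terminal. A small sketch, assuming the deployment is named `parksmap`:
```bash
# Print just the replica count from the parksmap deployment spec
oc get deployment parksmap -o jsonpath='{.spec.replicas}'
```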
## 3.2 - Intentionally crashing the application
With only one pod replica, our ParksMap application will not currently be tolerant to failures. OpenShift will automatically restart the single pod if it encounters a failure; however, during the time the application pod takes to start back up, our users will not be able to access the application.
Let's see that in practice by intentionally causing an error in our application.
Start in the **Topology** view of the **Developer** perspective and click your Parksmap application icon.
In the **Resources** tab of the information pane open a second browser tab showing the ParksMap application **Route** that we explored in the previous exercise. The application should be running as normal.
Click on the pod name under the **Pods** header of the **Resources** tab and then click on the **Terminal** tab. This will open a terminal within our running ParksMap application container.
Inside the terminal run the following to intentionally crash the application:
```bash
kill 1
```
The pod will automatically be restarted by OpenShift; however, if you refresh your second browser tab with the application **Route** you should see that the application is momentarily unavailable.
<Zoom>
|![parksmap-crash](/workshops/static/images/app-crash.gif) |
|:-------------------------------------------------------------------:|
| *Intentionally crashing the ParksMap application* |
</Zoom>
## 3.3 - Scaling up the application
As a best practice, wherever possible we should try to run multiple replicas of our pods so that if one pod is unavailable our application will continue to be available to users.
Let's scale up our application and confirm it is now fault tolerant.
In the **Topology** view of the **Developer** perspective click your Parksmap application icon.
In the **Details** tab of the information pane click the **^ Increase the pod count** arrow to increase our replicas to `2`. You will see the second pod starting up and becoming ready.
> Note: You can also scale the replicas of a deployment in automated and event-driven ways, in response to factors like incoming traffic or resource consumption, or by using the `oc` CLI, for example `oc scale --replicas=2 deployment/parksmap`.
Once the new pod is ready, repeat the steps from task `3.2` to crash one of the pods. You should see that the application continues to serve traffic thanks to our OpenShift **Service** load balancing traffic to the second **Pod**.
<Zoom>
|![parksmap-scale](/workshops/static/images/app-scale.gif) |
|:-------------------------------------------------------------------:|
| *Scaling up the ParksMap application* |
</Zoom>
## 3.4 - Self healing to desired state
In the previous example we saw what happened when we intentionally crashed our application. Let's see what happens if we just outright delete one of our ParksMap application's two **Pods**.
For this step we'll use the `oc` command line utility to build some more familiarity.
Let's start by launching back into our web terminal now by clicking the terminal button in the top right hand corner and then clicking **Start** with our `userX` project selected.
Once our terminal opens let's check our list of **Pods** with `oc get pods`. You should see something similar to the output below:
```bash
bash-4.4 ~ $ oc get pods
NAME READY STATUS RESTARTS AGE
parksmap-ff7477dc4-2nxd2 1/1 Running 0 79s
parksmap-ff7477dc4-n26jl 1/1 Running 0 31m
workspace45c88f4d4f2b4885-74b6d4898f-57dgh 2/2 Running 0 108s
```
Copy one of the parksmap pod names and delete it via `oc delete pod <podname>`, for example `oc delete pod parksmap-ff7477dc4-2nxd2`.
```bash
bash-4.4 ~ $ oc delete pod parksmap-ff7477dc4-2nxd2
pod "parksmap-ff7477dc4-2nxd2" deleted
```
If we now run `oc get pods` again we will see a new **Pod** has automatically been created by OpenShift to replace the one we fully deleted. This is because OpenShift is a container orchestration engine that will always try and enforce the desired state that we declare.
In our ParksMap **Deployment** we have declared that we want two replicas of our application running at all times. Even if we (possibly accidentally) delete one, OpenShift will always attempt to self heal back to our desired state.
## 3.5 - Bonus objective: Autoscaling
If you have time, take a while to explore the concepts of [HorizontalPodAutoscaling](https://docs.openshift.com/container-platform/4.14/nodes/pods/nodes-pods-autoscaling.html), [VerticalPodAutoscaling](https://docs.openshift.com/container-platform/4.14/nodes/pods/nodes-pods-vertical-autoscaler.html) and [Cluster autoscaling](https://docs.openshift.com/container-platform/4.14/machine_management/applying-autoscaling.html).
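As a hedged starting point for that exploration, a HorizontalPodAutoscaler for our deployment could be created directly from the CLI. The CPU target below is purely illustrative:
```bash
# Autoscale the parksmap deployment between 2 and 5 replicas based on average CPU utilisation
oc autoscale deployment/parksmap --min=2 --max=5 --cpu-percent=80

# Review the resulting HorizontalPodAutoscaler
oc get hpa
```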
Well done, you've finished exercise 3! 🎉


@ -1,140 +0,0 @@
---
title: Deploying an application via helm chart
exercise: 4
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','deployments','helm']
draft: false
authors: ['default']
summary: "Exploring alternative deployment approaches."
---
In **Exercise 2** we deployed our ParksMap application in the most simplistic way possible: throwing an individual container image at the cluster via the web console and letting OpenShift automate everything else for us.
With more complex applications comes the need to more finely customise the details of our application **Deployments** along with any other associated resources the application requires.
Enter the [**Helm**](https://www.redhat.com/en/topics/devops/what-is-helm) project, which can package up our application resources and distribute them as something called a **Helm chart**.
In simple terms, a **Helm chart** is basically a directory containing a collection of YAML template files, zipped into an archive. However, the `helm` command line utility adds a lot on top of that: it is good for customising and overriding specific values in our application templates when we deploy them onto our cluster, as well as for easily deploying, upgrading, or rolling back our application.
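To make that concrete, here is a minimal sketch of the `helm` CLI workflow. The repository URL and chart name below are illustrative placeholders only, not part of this lab:
```bash
# Add a chart repository and inspect a chart's configurable values (names are placeholders)
helm repo add example https://charts.example.com
helm show values example/myapp

# Install a release with an overridden value, then upgrade or roll it back later
helm install myapp example/myapp --set someValue=override
helm upgrade myapp example/myapp
helm rollback myapp 1
```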
## 4.1 - Deploying a helm chart via the web console
It is common for organisations that produce and ship applications to provide them as a **Helm chart**.
Let's get started by deploying a **Helm chart** for the [Gitea](https://about.gitea.com) application, a Git-based DevOps platform similar to GitHub or GitLab.
Start in the **+Add** view of the **Developer** perspective.
Scroll down and click the **Helm chart** tile. OpenShift includes a visual catalog of any helm chart repositories available to your cluster. For this exercise we will search for **Gitea**.
Click on the search result and click **Create**.
In the YAML configuration window enter the following, substituting `userX` with your assigned user and then click **Create** once more.
```yaml
db:
  password: userX
hostname: userX-gitea.apps.cluster-dsmsm.dynamic.opentlc.com
tlsRoute: true
```
<Zoom>
|![gitea-deployment](/workshops/static/images/gitea-deployment.gif) |
|:-------------------------------------------------------------------:|
| *Gitea application deployment via helm chart* |
</Zoom>
## 4.2 - Examine deployed application
Returning to the **Topology** view of the **Developer** perspective you will now see the Gitea application being deployed in your `userX` project (this can take a few minutes to complete). Notice how the application is made up of two separate pods, the `gitea-db` database and the `gitea` frontend web server.
Once your gitea pods are both running, open the **Route** for the `gitea` web frontend and confirm you can see the application web interface.
Next, click on the overall gitea **Helm release** (the shaded box surrounding our two Gitea pods) to see the full list of resources deployed by this helm chart, which in addition to the two running pods includes the following:
- 1 **ConfigMap**
- 1 **ImageStream**
- 2 **PersistentVolumeClaims**
- 1 **Route**
- 1 **Secret**
- 2 **Services**
> Note: Feel free to try out an `oc explain <resource>` command in your web terminal to learn more about each of the resource types mentioned above, for example `oc explain service`.
<Zoom>
|![helm-resources](/workshops/static/images/helm-resources.png) |
|:-------------------------------------------------------------------:|
| *Gitea helm release resources created* |
</Zoom>
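If you would like to see the same resources from the web terminal, a rough equivalent is to list the relevant resource types directly. Note that this lists everything in your project, not only what the helm chart created:
```bash
# List the kinds of resources created by the gitea helm release in your current project
oc get pods,services,routes,pvc,configmaps,secrets,imagestreams
```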
## 4.3 - Upgrade helm chart
If we want to make a change to the configuration of our Gitea application we can perform a `helm upgrade`. OpenShift has built in support to perform helm upgrades through the web console.
Start in the **Helm** view of the **Developer** perspective.
In the **Helm Releases** tab you should see one release called `gitea`.
Click the three dot menu on the right hand side of that helm release and click **Upgrade**.
Now let's intentionally modify the `hostname:` field in the YAML configuration to `hostname: bogushostname.example.com` and click **Upgrade**.
We will be returned to the **Helm Releases** view. Notice how the release status is now `Failed` (due to our bogus configuration); however, the previous release of the application is still running. OpenShift has validated the helm release, determined the update will not work, and prevented the release from proceeding.
From here it is trivial to perform a **Rollback** to remove our misconfigured update. We'll do that in the next step.
<Zoom>
|![helm-upgrade](/workshops/static/images/helm-upgrade.gif) |
|:-------------------------------------------------------------------:|
| *Attempting a gitea helm upgrade* |
</Zoom>
## 4.4 - Rollback to a previous helm release
Our previous helm upgrade for the Gitea application didn't succeed due to the misconfiguration we supplied. **Helm** has features for rolling back to a previous release through the `helm rollback` command line interface. OpenShift has made this even easier by adding native support for interactive rollbacks in the OpenShift web console so let's give that a go now.
Start in the **Helm** view of the **Developer** perspective.
In the **Helm Releases** tab you should see one release called `gitea`.
Click the three dot menu on the right hand side of that helm release and click **Rollback**.
Select the radio button for revision `1` which should be showing a status of `Deployed`, then click **Rollback**.
<Zoom>
|![helm-rollback](/workshops/static/images/helm-rollback.gif) |
|:-------------------------------------------------------------------:|
| *Rolling back to a previous gitea helm release* |
</Zoom>
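For reference, the `helm` CLI equivalent of what we just did would look roughly like the following, assuming the `helm` binary is available in your terminal:
```bash
# List the revisions recorded for the gitea release
helm history gitea

# Roll back to revision 1
helm rollback gitea 1
```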
## 4.5 - Deleting an application deployed via helm
Along with upgrades and rollbacks **Helm** also makes deleting deployed applications (along with all of their associated resources) straightforward.
Before we move on to exercise 5 let's delete the gitea application.
Start in the **Helm** view of the **Developer** perspective.
In the **Helm Releases** tab you should see one release called `gitea`.
Click the three dot menu on the right hand side of that helm release and click **Delete Helm Release**.
Enter `gitea` at the confirmation prompt and click **Delete**. If you now return to the **Topology** view you will see the gitea application being deleted.
<Zoom>
|![helm-delete](/workshops/static/images/helm-delete.gif) |
|:-------------------------------------------------------------------:|
| *Deleting the gitea application helm release* |
</Zoom>
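The hedged CLI equivalent, for completeness, is a single command:
```bash
# Delete the gitea release and the resources it created
helm uninstall gitea
```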
## 4.6 - Bonus objective: Artifact Hub
If you have time, take a while to explore https://artifacthub.io/packages/search to see the kinds of applications available on Artifact Hub, the most popular publicly available Helm chart repository.
Well done, you've finished exercise 4! 🎉


@ -1,144 +0,0 @@
---
title: Deploying an application via operator
exercise: 5
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','operator-framework']
draft: false
authors: ['default']
summary: "Exploring alternative deployment approaches."
---
Another alternative approach for deploying and managing the lifecycle of more complex applications is via the [Operator Framework](https://operatorframework.io).
The goal of an **Operator** is to put operational knowledge into software. Previously this knowledge resided only in the minds of administrators, in various combinations of shell scripts, or in automation software like Ansible. It was outside of your Kubernetes cluster and hard to integrate. **Operators** change that.
**Operators** are the missing piece of the puzzle in Kubernetes to implement and automate common Day-1 (installation, configuration, etc.) and Day-2 (re-configuration, update, backup, failover, restore, etc.) activities in a piece of software running inside your Kubernetes cluster, by integrating natively with Kubernetes concepts and APIs.
With Operators you can stop treating an application as a collection of primitives like **Pods**, **Deployments**, **Services** or **ConfigMaps**, and instead treat it as a single, simplified custom object that exposes only the configuration values that make sense for that specific application.
## 5.1 - Deploying an operator
Deploying an application via an **Operator** is generally a two step process. The first step is to deploy the **Operator** itself.
Once the **Operator** is installed we can deploy the application.
For this exercise we will install the **Operator** for the [Grafana](https://grafana.com) observability platform.
Let's start in the **Topology** view of the **Developer** perspective.
Copy the following YAML snippet to your clipboard:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: userX
spec:
  channel: v5
  installPlanApproval: Automatic
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```
Click the **+** button in the top right corner menu bar of the OpenShift web console. This is a quick way to import snippets of YAML for testing or exploration purposes.
Paste the above snippet of YAML into the editor and replace the instance of `userX` with your assigned user.
Click **Create**. In a minute or so you should see the Grafana operator installed and running in your project.
<Zoom>
|![operator-deployment](/workshops/static/images/operator-deployment.gif) |
|:-------------------------------------------------------------------:|
| *Deploying grafana operator via static yaml* |
</Zoom>
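You can also confirm the operator installation from the web terminal. A quick sketch, replacing `userX` with your assigned user (the exact ClusterServiceVersion name will include a version suffix):
```bash
# Check the subscription and the ClusterServiceVersion created by the operator install
oc get subscriptions,csv -n userX

# Confirm the operator pod itself is running
oc get pods -n userX
```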
## 5.2 - Deploying an operator driven application
With our Grafana operator now running, it will be listening for the creation of a `Grafana` custom resource. When one is detected, the operator will deploy the Grafana application according to the specification we supplied.
Let's switch over to the **Administrator** perspective for this next task to deploy our Grafana instance.
Under the **Operators** category in the left hand menu click on **Installed Operators**.
In the **Installed Operators** list you should see a **Grafana Operator** entry, click into that.
On the **Operator details** screen you will see a list of "Provided APIs". These are custom resource types that we can now deploy with the help of the operator.
Click on **Create instance** under the provided API titled `Grafana`.
On the next **Create Grafana** screen click the **YAML view** radio button and enter the following, replacing the two instances of `userX` with your assigned user, then click **Create**.
```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  labels:
    dashboards: grafana
    folders: grafana
  name: grafana
  namespace: userX
spec:
  config:
    auth:
      disable_login_form: 'false'
    log:
      mode: console
    security:
      admin_password: example
      admin_user: example
  route:
    spec:
      tls:
        termination: edge
      host: grafana-userX.apps.cluster-dsmsm.dynamic.opentlc.com
```
<Zoom>
|![grafana-deployment](/workshops/static/images/grafana-deployment.gif) |
|:-------------------------------------------------------------------:|
| *Deploying grafana application via the grafana operator* |
</Zoom>
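From the web terminal you can also watch the operator act on the custom resource. A hedged sketch, assuming the `Grafana` custom resource uses the plural name `grafanas`:
```bash
# List Grafana custom resources and the pods the operator creates for them
oc get grafanas -n userX
oc get pods -n userX
```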
## 5.3 - Logging into the application
While we are in the **Administrator** perspective of the web console let's take a look at a couple of sections to confirm our newly deployed Grafana application is running as expected.
For our first step click on the **Workloads** category on the left hand side menu and then click **Pods**.
We should see a `grafana-deployment-<id>` pod with a **Status** of `Running`.
<Zoom>
|![grafana-pod](/workshops/static/images/grafana-pod.png) |
|:-------------------------------------------------------------------:|
| *Confirming the grafana pod is running* |
</Zoom>
Now that we know the Grafana application **Pod** is running let's open the application and confirm we can log in.
Click the **Networking** category on the left hand side menu and then click **Routes**.
Click the **Route** named `grafana-route` and open the URL on the right hand side under the **Location** header.
Once the new tab opens we should be able to login to Grafana using the credentials we supplied in the previous step in the YAML configuration.
<Zoom>
|![grafana-route](/workshops/static/images/grafana-route.gif) |
|:-------------------------------------------------------------------:|
| *Confirming the grafana route is working* |
</Zoom>
## 5.4 - Bonus objective: Grafana dashboards
If you have time, take a while to learn about Grafana dashboards (https://grafana.com/grafana/dashboards) and how Grafana can be used to visualise just about anything.
Well done, you've finished exercise 5! 🎉


@ -1,98 +0,0 @@
---
title: Deploying an application from source
exercise: 6
date: '2023-12-07'
tags: ['openshift','containers','kubernetes','s2i','shipwright']
draft: false
authors: ['default']
summary: "Exploring alternative deployment approaches."
---
Often, as a team supporting applications on OpenShift, the decision of which deployment method to use will be out of your hands and instead be determined by the vendor, organisation, or team producing the application in question.
However, for an interesting scenario let's explore the possibility of what we could do if there is no existing deployment tooling in place and all we are given is a codebase in a git repository.
This is where the concept of **Source to Image** or "s2i" comes in. OpenShift has built in support for building container images using source code from an existing repository. This is accomplished using the [source-to-image](https://github.com/openshift/source-to-image) project.
OpenShift runs the S2I process inside a special **Pod**, called a **Build Pod**, and thus builds are subject to quotas, limits, resource scheduling, and other aspects of OpenShift. A full discussion of S2I is beyond the scope of this class, but you can find more information about it in the [OpenShift S2I documentation](https://docs.openshift.com/container-platform/4.14/openshift_images/create-images.html).
## 6.1 - Starting a source to image build
Deploying an application via **Source to Image** is straightforward. Let's try it out.
Start in the **+Add** view of the **Developer** perspective.
Click **Import from Git** under the **Git Repository** tile.
**Source to Image** supports a number of popular programming languages as the source. For this example we will use **Python**.
Enter `https://github.com/openshift-roadshow/nationalparks-py.git` for the **Git Repo URL**.
OpenShift will automatically guess the git server type and the programming language used by the source code. You will be now asked to select an **Import Strategy**. You have three options:
- Devfile: this will use Devfile v2 spec to create an application stack. The repo has to contain a file named `devfile.yaml` in the Devfile v2 format.
- Dockerfile: this will create a Container image from an existing Dockerfile.
- Builder Image: this will use a mechanism called Source-to-Image to automatically create a container image directly from the source code.
Select **Builder Image** strategy as we are going to create the container image directly from the source code.
Select **Python** as the **Builder Image** type and **Python 3.8-ubi8** as the **Builder Image Version**.
Scroll down and under the **General** header click the **Application** drop down and select **Create application**, entering **workshop** as the name.
Scroll down reviewing the other options then click **Create**.
<Zoom>
|![s2i-build](/workshops/static/images/s2i-build.gif) |
|:-------------------------------------------------------------------:|
| *Creating a source to image build in OpenShift* |
</Zoom>
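For reference, a rough CLI equivalent of the **Builder Image** strategy uses the `builder~source` form of `oc new-app`. The imagestream tag below is an assumption based on the version selected in the console and may differ on your cluster:
```bash
# Build and deploy directly from source using the Python builder image (sketch only)
oc new-app python:3.8-ubi8~https://github.com/openshift-roadshow/nationalparks-py.git \
  --name=nationalparks-py
```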
## 6.2 - Monitoring the build
To see the build logs, in the **Topology** view of the **Developer** perspective, click the nationalparks python icon, then click **View Logs** in the **Builds** section of the **Resources** tab.
Based on the application's language, the build process will differ. However, the initial build will take a few minutes while the dependencies are downloaded. You can see all of this happening in real time!
From the `oc` command line utility, you can also see **Builds**, let's open our **Web Terminal** back up and take a look:
```bash
oc get builds
```
You will see output similar to the example below:
```bash
NAME TYPE FROM STATUS STARTED DURATION
nationalparks-py-git-1 Source Git@f87895b Complete 7 minutes ago 48s
```
Let's also take a look at the logs from the `oc` command line with:
```bash
oc logs -f builds/nationalparks-py-git-1
```
After the build has completed successfully:
- The S2I process will push the resulting image to the internal OpenShift image registry.
- The Deployment (D) will detect that the image has changed, and this will cause a new deployment to happen.
- A ReplicaSet (RS) will be spawned for this new deployment.
- The ReplicaSet will detect no Pods are running and will cause one to be deployed, as our default replica count is just 1.
To conclude, when issuing the `oc get pods` command, you will see that the build **Pod** has finished (exited) and that an application **Pod** is in a ready and running state.
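The output at that point will look something like the illustrative sample below; your pod names, hash suffixes, and ages will differ:
```bash
bash-4.4 ~ $ oc get pods
NAME                                     READY   STATUS      RESTARTS   AGE
nationalparks-py-git-1-build             0/1     Completed   0          8m
nationalparks-py-git-5d76d4fd99-abcde    1/1     Running     0          6m
```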
## 6.3 - Bonus objective: Podman
If you have time, take a while to understand how [Podman](https://developers.redhat.com/articles/2022/05/02/podman-basics-resources-beginners-and-experts) can be used to build container images on your device outside of an OpenShift cluster.
Well done, you've finished exercise 6! 🎉


@ -0,0 +1,89 @@
---
title: Understanding our lab environment
exercise: 1
date: '2023-12-18'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Let's get familiar with our lab setup."
---
Welcome to the OpenShift 4 Disconnected Workshop! Here you'll learn about operating an OpenShift 4 cluster in a disconnected network. For our purposes today, that will be a network without access to the internet (even through a proxy or firewall).
To level set, Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. OpenShift supports running in disconnected networks, though this does change the way the cluster operates because key ingredients like container images, operator bundles, and helm charts must be brought into the environment from the outside world via mirroring.
There are of course many different options for installing OpenShift in a restricted network; this workshop will primarily cover one opinionated approach. We'll do our best to point out where there's the potential for variability along the way.
**Let's get started!**
## 1.1 - Obtaining your environment
To get underway, open your web browser and navigate to this etherpad link to reserve yourself a user: https://etherpad.wikimedia.org/p/OpenShiftDisco_2023_12_20. You can reserve a user by noting your name or initials next to a user that has not yet been claimed.
<Zoom>
|![workshop](/workshops/static/images/disconnected/etherpad.gif) |
|:-----------------------------------------------------------------------------:|
| *Etherpad collaborative editor* |
</Zoom>
## 1.2 - Opening your web terminal
Throughout the remainder of the workshop you will be using a number of command line interface tools, for example `aws` to interact with resources in Amazon Web Services, and `ssh` to log in to a remote server.
To save you from needing to install or configure these tools on your own device, a web terminal will be available to you for the remainder of this workshop.
Simply copy the link next to the user you reserved in etherpad and paste it into your browser. If you are prompted to log in, select `htpass` and enter the credentials listed in etherpad.
## 1.3 - Creating an air gap
According to the [Internet Security Glossary](https://www.rfc-editor.org/rfc/rfc4949), an Air Gap is:
> "an interface between two systems at which (a) they are not connected physically and (b) any logical connection is not automated (i.e., data is transferred through the interface only manually, under human control)."
In disconnected OpenShift installations, the air gap exists between the **Low Side** and the **High Side**, so it is between these systems where a manual data transfer, or **sneakernet**, is required.
For the purposes of this workshop we will be operating within Amazon Web Services. You have been allocated a set of credentials for an environment that already has some basic preparation completed. This will be a single VPC with 3 public subnets, which will serve as our **Low Side**, and 3 private subnets, which will serve as our **High Side**.
The diagram below shows a simplified overview of the networking topology:
<Zoom>
|![workshop](/workshops/static/images/disconnected/vpc-setup.svg) |
|:-----------------------------------------------------------------------------:|
| *Workshop network topology* |
</Zoom>
Let's check that the virtual private cloud network has been created using the `aws` command line interface by copying the command below into our web terminal:
```bash
aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r
```
You should see output similar to the example below:
```text
vpc-0e6d176c7d9c94412
```
We can also check that our three public **Low Side** and three private **High Side** subnets are ready to go by running the command below in our web terminal:
```bash
aws ec2 describe-subnets | jq '[.Subnets[].Tags[] | select(.Key=="Name").Value] | sort'
```
We should see output matching this example:
```text
[
  "Private Subnet - disco",
  "Private Subnet 2 - disco",
  "Private Subnet 3 - disco",
  "Public Subnet - disco",
  "Public Subnet 2 - disco",
  "Public Subnet 3 - disco"
]
```
If your environment access and topology is all working you've finished exercise 1! 🎉