Restore application delivery workshop.
@@ -1,191 +0,0 @@

---
title: Getting familiar with OpenShift
exercise: 1
date: '2023-12-04'
tags: ['openshift','containers','kubernetes']
draft: false
authors: ['default']
summary: "In this first exercise we'll get familiar with OpenShift."
---

Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. In this first exercise we'll get logged into our cluster and familiarise ourselves with the OpenShift web console and web terminal.

The OpenShift Container Platform web console is a feature-rich user interface with both an **Administrator** perspective and a **Developer** perspective accessible through any modern web browser. You can use the web console to visualize, browse, and manage your OpenShift cluster and the applications running on it.

In addition to the web console, OpenShift includes command line tools to provide users with a nice interface to work with applications deployed to the platform. The `oc` command line tool is available for Linux, macOS, or Windows.

**Let's get started!**


## 1.1 - Login to lab environment

An OpenShift `4.14` cluster has already been provisioned for you to complete these exercises. Open your web browser and navigate to the workshop login page https://demo.redhat.com/workshop/enwmgc.

Once the page loads you can log in with the details provided by your workshop facilitator.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Workshop login page* |
</Zoom>


## 1.2 - Login to the cluster web console

Once you're logged into the lab environment we can open up the OpenShift web console and log in with the credentials provided.

When first logging in you will be prompted to take a tour of the **Developer** console view. Let's do that now.

<Zoom>
|  |
|:-----------------------------------------------------------------------------:|
| *Developer perspective web console tour* |
</Zoom>


## 1.3 - Understanding projects

Projects are a logical boundary to help you organize your applications. An OpenShift project allows a community of users (or a single user) to organize and manage their work in isolation from other projects.

Each project has its own resources, role-based access control (who can or cannot perform actions), and constraints (quotas and limits on resources, etc).

Projects act as a "wrapper" around all the application services you (or your teams) are using for your work.

In this lab environment, you already have access to a single project: `userX` (where X is the user number allocated to you in the previous step).
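
Later in this exercise you'll open a web terminal; as a quick aside, you could confirm the same project access from the CLI with standard `oc` commands. A minimal sketch (substitute `userX` with your assigned user):

```bash
# List the projects your user can see (should show only your userX project)
oc projects

# Make userX the active project for subsequent commands
oc project userX
```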

Let's click into our `Project` from the left hand panel of the **Developer** web console perspective. We should be able to see that our project has no `Deployments` and there are no CPU or memory resources currently being consumed.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Developer perspective project view* |
</Zoom>


## 1.4 - Switching between perspectives

Different roles have different needs when it comes to viewing details within the OpenShift web console. At the top of the left navigation menu, you can toggle between the Administrator perspective and the Developer perspective.

Select **Administrator** to switch to the Administrator perspective.

Once the Administrator perspective loads, you should be in the "Home" view and see a wider array of menu sections in the left hand navigation panel.

Switch back to the **Developer** perspective. Once the Developer perspective loads, select the **Topology** view.

Right now, there are no applications or components to view in your `userX` project, but once you begin working on the lab, you'll be able to visualize and interact with the components in your application here.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Switching web console perspectives* |
</Zoom>


## 1.5 - Launching a web terminal

While web interfaces are comfortable and easy to use, sometimes we want to quickly run commands to get things done. That is where the `oc` command line utility comes in.

One handy feature of the OpenShift web console is that we can launch a browser-based web terminal that already has the `oc` command logged in and ready to use.

Let's launch a web terminal now by clicking the terminal button in the top right hand corner and then clicking **Start** with our `userX` project selected.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Launching your web terminal* |
</Zoom>


## 1.6 - Running oc commands

The [`oc` command line utility](https://docs.openshift.com/container-platform/4.14/cli_reference/openshift_cli/getting-started-cli.html#creating-a-new-app) is a superset of the upstream Kubernetes `kubectl` command line utility. This means it can do everything that `kubectl` can do, plus some additional OpenShift specific commands.
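
For instance (a quick illustrative sketch; the first operation works identically with either client, while the last two are OpenShift additions):

```bash
# Standard Kubernetes operation, available in both clients
kubectl get pods
oc get pods

# OpenShift-specific additions, only available in oc
oc whoami    # show the currently logged in user
oc projects  # list the OpenShift projects you can access
```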

Let's try a few commands now:


### Checking our current project

Most actions we take in OpenShift will be in relation to a particular project. We can check which project we are currently using by running the `oc project` command.

We should see output similar to the below, showing we are currently using our `userX` project:

```bash
bash-4.4 ~ $ oc project
Using project "user1" from context named "user1-context" on server "https://172.31.0.1:443".
```

### Getting help and explaining concepts

As with any command line utility, there can be complexity that quickly surfaces. Thankfully the `oc` command line utility has excellent built in help.

Let's take a look at that now.

To get an understanding of all the options available, try running `oc help`. You should see options similar to the below sample:

```text
bash-4.4 ~ $ oc help
OpenShift Client

This client helps you develop, build, deploy, and run your applications on any
OpenShift or Kubernetes cluster. It also includes the administrative
commands for managing a cluster under the 'adm' subcommand.

Basic Commands:
  login           Log in to a server
  new-project     Request a new project
  new-app         Create a new application
  status          Show an overview of the current project
  project         Switch to another project
  projects        Display existing projects
  explain         Get documentation for a resource

Build and Deploy Commands:
  rollout         Manage a Kubernetes deployment or OpenShift deployment config
  rollback        Revert part of an application back to a previous deployment
  new-build       Create a new build configuration
  start-build     Start a new build
  cancel-build    Cancel running, pending, or new builds
  import-image    Import images from a container image registry
  tag             Tag existing images into image streams
```

To get a more detailed explanation about a specific concept we can use the `oc explain` command.

Let's run `oc explain project` now to learn more about the concept of a project we introduced earlier:

```text
bash-4.4 ~ $ oc explain project
KIND:     Project
VERSION:  project.openshift.io/v1

DESCRIPTION:
     Projects are the unit of isolation and collaboration in OpenShift. A
     project has one or more members, a quota on the resources that the project
     may consume, and the security controls on the resources in the project.
     Within a project, members may have different roles - project administrators
     can set membership, editors can create and manage the resources, and
     viewers can see but not access running containers. In a normal cluster
     project administrators are not able to alter their quotas - that is
     restricted to cluster administrators.

     Listing or watching projects will return only projects the user has the
     reader role on.

     An OpenShift project is an alternative representation of a Kubernetes
     namespace. Projects are exposed as editable to end users while namespaces
     are not. Direct creation of a project is typically restricted to
     administrators, while end users should use the requestproject resource.
```

That's a quick introduction to the `oc` command line utility. Let's close our web terminal now so we can move on to the next exercise.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Closing your web terminal* |
</Zoom>

Well done, you've finished exercise 1! 🎉

@@ -1,131 +0,0 @@

---
title: Deploying your first application
exercise: 2
date: '2023-12-05'
tags: ['openshift','containers','kubernetes','deployments','images']
draft: false
authors: ['default']
summary: "Time to deploy your first app!"
---

Now that we have had a tour of the OpenShift web console to get familiar, let's use the web console to deploy our first application.

Let's start by doing the simplest thing possible - getting a plain old Docker-formatted container image to run on OpenShift. This is incredibly simple to do, and with OpenShift it can be done directly from the web console.

Before we begin, if you would like a bit more background on what a container is or why containers are important, click the following link to learn more: https://www.redhat.com/en/topics/containers#overview


## 2.1 - Deploying the container image

In this exercise, we're going to deploy the **web** component of the ParksMap application, which uses OpenShift's service discovery mechanism to discover any accompanying backend services deployed and shows their data on the map. Below is a visual overview of the complete ParksMap application.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *ParksMap application architecture* |
</Zoom>

Within the **Developer** perspective, click the **+Add** entry on the left hand menu.

Once on the **+Add** page, click **Container images** to open a dialog that will allow you to quickly deploy an image.

In the **Image name** field enter the following:

```text
quay.io/openshiftroadshow/parksmap:latest
```

Leave all other fields at their defaults (but take your time to scroll down and review each one to familiarise yourself! 🎓)

Click **Create** to deploy the application.

OpenShift will pull this container image if it does not already exist on the cluster and then deploy a container based on this image. You will be taken back to the **Topology** view in the **Developer** perspective, which will show the new "Parksmap" application.
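
As an aside, a broadly equivalent deployment could be driven from the web terminal instead of the console; a hedged sketch using standard `oc` commands (the console flow above is what this exercise assumes):

```bash
# Deploy the same container image as a new application named parksmap
oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap

# Expose the service externally with a Route
# (the console's route checkbox does this for you)
oc expose service/parksmap

# Review what was created
oc status
```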

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Deploying the container image* |
</Zoom>


## 2.2 - Reviewing our deployed application

If you click on the **parksmap** entry in the **Topology** view, you will see some information about that deployed application.

The **Resources** tab may be displayed by default. If so, click on the **Details** tab. On that tab, you will see that there is a single **Pod** that was created by your actions.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Deploying the container image* |
</Zoom>

> Note: A pod is the smallest deployable unit in Kubernetes and is effectively a grouping of one or more individual containers. Any containers deployed within a pod are guaranteed to run on the same machine. It is very common for pods in Kubernetes to only hold a single container, although sometimes auxiliary services can be included as additional containers in a pod when we want them to run alongside our application container.
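
If you'd like to poke at this from the web terminal, the following sketch lists the pod and shows its details, including the containers it holds (`<pod-name>` is a placeholder; the exact name will differ in your project):

```bash
# List the pods in your current project
oc get pods

# Show detailed information about one pod, including its containers and events
oc describe pod <pod-name>
```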

## 2.3 - Accessing the application

Now that we have the ParksMap application deployed, how do we access it?

This is where OpenShift **Routes** and **Services** come in.

While **Services** provide internal abstraction and load balancing within an OpenShift cluster, sometimes clients outside of the OpenShift cluster need to access an application. The way that external clients are able to access applications running in OpenShift is through an OpenShift **Route**.
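
You can see both resources for the ParksMap application from the web terminal; a quick sketch:

```bash
# The internal Service fronting the parksmap pods
oc get services

# The externally reachable Route (the HOST/PORT column is the public URL)
oc get routes
```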

You may remember that when we deployed the ParksMap application, there was a checkbox ticked to automatically create a **Route**. Thanks to this, all we need to do to access the application is go to the **Resources** tab of the application details pane and click the URL shown under the **Routes** header.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Opening ParksMap application Route* |
</Zoom>

Clicking the link, you should now see the ParksMap application frontend 🎉

> Note: If this is the first time opening this page, the browser will ask permission to get your position. This is needed by the frontend app to center the world map on your location; if you don't allow it, it will just use a default location.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *ParksMap application frontend* |
</Zoom>


## 2.4 - Checking application logs

If we deploy an application and something isn't working the way we expect, reviewing the application logs can often be helpful. OpenShift includes built in support for reviewing application logs.
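
The console steps below are the focus here, but for reference the CLI equivalent is roughly:

```bash
# Stream the logs from the parksmap deployment's pod
oc logs deployment/parksmap --follow
```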

Let's try it now for our ParksMap frontend.

In the **Developer** perspective, open the **Topology** view.

Click your "Parksmap" application icon then click on the **Resources** tab.

From the **Resources** tab click **View logs**.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Accessing the ParksMap application logs* |
</Zoom>


## 2.5 - Checking application resource usage

Another essential element of supporting applications on OpenShift is understanding what resources the application is consuming, for example CPU, memory, network bandwidth and storage I/O.

OpenShift includes built in support for reviewing application resource usage. Let's take a look at that now.
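
If you prefer the terminal, and assuming cluster metrics are available in your lab environment, something like the following gives a point-in-time view:

```bash
# Show current CPU and memory consumption for the pods in your project
oc adm top pods
```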

In the **Developer** perspective, open the **Observe** view.

You should see the **Dashboard** tab. Set the time range to the `Last 1 hour` then scroll through the dashboard.

How much CPU and memory is your ParksMap application currently using?

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Checking the ParksMap application resource usage* |
</Zoom>

Well done, you've finished exercise 2! 🎉

@@ -1,122 +0,0 @@

---
title: Scaling and self-healing applications
exercise: 3
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','deployments','autoscaling']
draft: false
authors: ['default']
summary: "Let's scale our application up 📈"
---

We have our application deployed; let's scale it up to make sure it will be resilient to failures.

While **Services** provide discovery and load balancing for **Pods**, the higher level **Deployment** resource specifies how many replicas (pods) of our application will be created and is a simplistic way to configure scaling for the application.

> Note: To learn more about **Deployments** refer to this [documentation](https://docs.openshift.com/container-platform/4.14/applications/deployments/what-deployments-are.html).


## 3.1 - Reviewing the parksmap deployment

Let's start by confirming how many `replicas` we currently specify for our ParksMap application. We'll also use this exercise step to take a look at how all resources within OpenShift can be viewed and managed as [YAML](https://www.redhat.com/en/topics/automation/what-is-yaml) formatted text files, which is extremely useful for more advanced automation and GitOps concepts.

Start in the **Topology** view of the **Developer** perspective.

Click on your "Parksmap" application icon and click on the **D parksmap** deployment name at the top of the right hand panel.

From the **Deployment details** view we can click on the **YAML** tab and scroll down to confirm that we currently specify only `1` replica for the ParksMap application.

```yaml
spec:
  replicas: 1
```

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *ParksMap application deployment replicas* |
</Zoom>


## 3.2 - Intentionally crashing the application

With our ParksMap application currently having only one pod replica, it will not be tolerant to failures. OpenShift will automatically restart the single pod if it encounters a failure; however, during the time the application pod takes to start back up our users will not be able to access the application.

Let's see that in practice by intentionally causing an error in our application.

Start in the **Topology** view of the **Developer** perspective and click your Parksmap application icon.

In the **Resources** tab of the information pane, open a second browser tab showing the ParksMap application **Route** that we explored in the previous exercise. The application should be running as normal.

Click on the pod name under the **Pods** header of the **Resources** tab and then click on the **Terminal** tab. This will open a terminal within our running ParksMap application container.

Inside the terminal run the following to intentionally crash the application:

```bash
kill 1
```

The pod will automatically be restarted by OpenShift; however, if you refresh your second browser tab with the application **Route** you should see that the application is momentarily unavailable.
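
If you want to watch the restart happen from the web terminal, a quick sketch (press `Ctrl+C` to stop watching):

```bash
# Watch pod status changes as the crashed container is restarted;
# the RESTARTS column for the parksmap pod will increment
oc get pods --watch
```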

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Intentionally crashing the ParksMap application* |
</Zoom>


## 3.3 - Scaling up the application

As a best practice, wherever possible we should try to run multiple replicas of our pods so that if one pod is unavailable our application will continue to be available to users.

Let's scale up our application and confirm it is now fault tolerant.

In the **Topology** view of the **Developer** perspective click your Parksmap application icon.

In the **Details** tab of the information pane click the **^ Increase the pod count** arrow to increase our replicas to `2`. You will see the second pod starting up and becoming ready.

> Note: You can also scale the replicas of a deployment in an automated, event-driven fashion in response to factors like incoming traffic or resource consumption, or by using the `oc` CLI, for example `oc scale --replicas=2 deployment/parksmap`.

Once the new pod is ready, repeat the steps from task `3.2` to crash one of the pods. You should see that the application continues to serve traffic thanks to our OpenShift **Service** load balancing traffic to the second **Pod**.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Scaling up the ParksMap application* |
</Zoom>


## 3.4 - Self healing to desired state

In the previous example we saw what happened when we intentionally crashed our application. Let's see what happens if we just outright delete one of our ParksMap application's two **Pods**.

For this step we'll use the `oc` command line utility to build some more familiarity.

Let's start by launching back into our web terminal by clicking the terminal button in the top right hand corner and then clicking **Start** with our `userX` project selected.

Once our terminal opens let's check our list of **Pods** with `oc get pods`. You should see something similar to the output below:

```bash
bash-4.4 ~ $ oc get pods
NAME                                         READY   STATUS    RESTARTS   AGE
parksmap-ff7477dc4-2nxd2                     1/1     Running   0          79s
parksmap-ff7477dc4-n26jl                     1/1     Running   0          31m
workspace45c88f4d4f2b4885-74b6d4898f-57dgh   2/2     Running   0          108s
```

Copy one of the pod names and delete it via `oc delete pod <podname>`, e.g. `oc delete pod parksmap-ff7477dc4-2nxd2`.

```bash
bash-4.4 ~ $ oc delete pod parksmap-ff7477dc4-2nxd2
pod "parksmap-ff7477dc4-2nxd2" deleted
```

If we now run `oc get pods` again we will see a new **Pod** has automatically been created by OpenShift to replace the one we fully deleted. This is because OpenShift is a container orchestration engine that will always try to enforce the desired state that we declare.

In our ParksMap **Deployment** we have declared that we want two replicas of our application running at all times. Even if we (possibly accidentally) delete one, OpenShift will always attempt to self heal and return to our desired state.
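
One way to see the declared versus actual state side by side from the web terminal (a small sketch using standard `oc` output):

```bash
# Summary of the deployment, including how many replicas are ready
oc get deployment parksmap

# Compare the declared replica count with the number of ready replicas
oc get deployment parksmap -o jsonpath='{.spec.replicas} {.status.readyReplicas}{"\n"}'
```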

## 3.5 - Bonus objective: Autoscaling

If you have time, take a while to explore the concepts of [HorizontalPodAutoscaling](https://docs.openshift.com/container-platform/4.14/nodes/pods/nodes-pods-autoscaling.html), [VerticalPodAutoscaling](https://docs.openshift.com/container-platform/4.14/nodes/pods/nodes-pods-vertical-autoscaler.html) and [Cluster autoscaling](https://docs.openshift.com/container-platform/4.14/machine_management/applying-autoscaling.html).
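
As a starting point, a horizontal pod autoscaler for the ParksMap deployment could be created with something like the sketch below (the thresholds are illustrative only, and the deployment would need CPU requests set for the autoscaler to act on utilisation):

```bash
# Scale between 2 and 5 replicas, targeting 80% average CPU utilisation
oc autoscale deployment/parksmap --min=2 --max=5 --cpu-percent=80

# Review the resulting HorizontalPodAutoscaler
oc get hpa
```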

Well done, you've finished exercise 3! 🎉

@@ -1,140 +0,0 @@

---
title: Deploying an application via helm chart
exercise: 4
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','deployments','helm']
draft: false
authors: ['default']
summary: "Exploring alternative deployment approaches."
---

In **Exercise 2** we deployed our ParksMap application in the most simplistic way: just throwing an individual container image at the cluster via the web console and letting OpenShift automate everything else for us.

With more complex applications comes the need to more finely customise the details of our application **Deployments**, along with any other associated resources the application requires.

Enter the [**Helm**](https://www.redhat.com/en/topics/devops/what-is-helm) project, which can package up our application resources and distribute them as something called a **Helm chart**.

In simple terms, a **Helm chart** is basically a directory containing a collection of YAML template files, which is zipped into an archive. However, the `helm` command line utility has a lot of additional features and is good for customising and overriding specific values in our application templates when we deploy them onto our cluster, as well as easily deploying, upgrading or rolling back our application.
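
For context, the core `helm` CLI workflow looks roughly like the sketch below (the repository, chart, release, and value names are purely illustrative; in the rest of this exercise we drive the same lifecycle through the OpenShift web console instead):

```bash
# Install a chart as a named release, overriding one templated value
helm install my-release example-repo/example-chart --set someKey=someValue

# Upgrade the release with new values, then roll back if needed
helm upgrade my-release example-repo/example-chart --set someKey=newValue
helm rollback my-release 1

# Inspect and clean up
helm list
helm uninstall my-release
```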

## 4.1 - Deploying a helm chart via the web console

It is common for organisations that produce and ship applications to distribute them to their customers as a **Helm chart**.

Let's get started by deploying a **Helm chart** for the [Gitea](https://about.gitea.com) application, which is a Git-oriented DevOps platform similar to GitHub or GitLab.

Start in the **+Add** view of the **Developer** perspective.

Scroll down and click the **Helm chart** tile. OpenShift includes a visual catalog for any helm chart repositories your cluster has available; for this exercise we will search for **Gitea**.

Click on the search result and click **Create**.

In the YAML configuration window enter the following, substituting `userX` with your assigned user, and then click **Create** once more.

```yaml
db:
  password: userX
hostname: userX-gitea.apps.cluster-dsmsm.dynamic.opentlc.com
tlsRoute: true
```

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Gitea application deployment via helm chart* |
</Zoom>


## 4.2 - Examine deployed application

Returning to the **Topology** view of the **Developer** perspective, you will now see the Gitea application being deployed in your `userX` project (this can take a few minutes to complete). Notice how the application is made up of two separate pods: the `gitea-db` database and the `gitea` frontend web server.

Once your Gitea pods are both running, open the **Route** for the `gitea` web frontend and confirm you can see the application web interface.

Next, click on the overall Gitea **Helm release** (the shaded box surrounding our two Gitea pods) to see the full list of resources deployed by this helm chart, which in addition to the two running pods includes the following:

- 1 **ConfigMap**
- 1 **ImageStream**
- 2 **PersistentVolumeClaims**
- 1 **Route**
- 1 **Secret**
- 2 **Services**

> Note: Feel free to try out an `oc explain <resource>` command in your web terminal to learn more about each of the resource types mentioned above, for example `oc explain service`.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Gitea helm release resources created* |
</Zoom>


## 4.3 - Upgrade helm chart

If we want to make a change to the configuration of our Gitea application we can perform a `helm upgrade`. OpenShift has built in support for performing helm upgrades through the web console.

Start in the **Helm** view of the **Developer** perspective.

In the **Helm Releases** tab you should see one release called `gitea`.

Click the three dot menu on the right hand side of that helm release and click **Upgrade**.

Now let's intentionally modify the `hostname:` field in the YAML configuration to `hostname: bogushostname.example.com` and click **Upgrade**.

We will be returned to the **Helm Releases** view. Notice how the release status is now `Failed` (due to our bogus configuration), however the previous release of the application is still running. OpenShift has validated the helm release, determined the updates will not work, and prevented the release from proceeding.

From here it is trivial to perform a **Rollback** to remove our misconfigured update. We'll do that in the next step.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Attempting a gitea helm upgrade* |
</Zoom>


## 4.4 - Rollback to a previous helm release

Our previous helm upgrade for the Gitea application didn't succeed due to the misconfiguration we supplied. **Helm** has features for rolling back to a previous release through the `helm rollback` command line interface. OpenShift has made this even easier by adding native support for interactive rollbacks in the OpenShift web console, so let's give that a go now.
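
For reference, the CLI version of what the console does for us here is roughly the following (the `gitea` release name comes from this exercise; revision numbers can be confirmed with `helm history`):

```bash
# Show the revision history for the gitea release
helm history gitea

# Roll back to revision 1, the last successfully deployed revision
helm rollback gitea 1
```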

Start in the **Helm** view of the **Developer** perspective.

In the **Helm Releases** tab you should see one release called `gitea`.

Click the three dot menu on the right hand side of that helm release and click **Rollback**.

Select the radio button for revision `1`, which should be showing a status of `Deployed`, then click **Rollback**.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Rolling back to a previous gitea helm release* |
</Zoom>


## 4.5 - Deleting an application deployed via helm

Along with upgrades and rollbacks, **Helm** also makes deleting deployed applications (along with all of their associated resources) straightforward.
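
The CLI equivalent, for reference, is a single command (we'll use the console below instead):

```bash
# Remove the gitea release and everything it created
helm uninstall gitea
```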

Before we move on to exercise 5, let's delete the Gitea application.

Start in the **Helm** view of the **Developer** perspective.

In the **Helm Releases** tab you should see one release called `gitea`.

Click the three dot menu on the right hand side of that helm release and click **Delete Helm Release**.

Enter `gitea` at the confirmation prompt and click **Delete**. If you now return to the **Topology** view you will see the Gitea application deleting.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Deleting the gitea application helm release* |
</Zoom>


## 4.6 - Bonus objective: Artifact Hub

If you have time, take a while to explore https://artifacthub.io/packages/search to see the kinds of applications available in Artifact Hub, the most popular publicly available Helm chart repository.

Well done, you've finished exercise 4! 🎉

@@ -1,144 +0,0 @@

---
title: Deploying an application via operator
exercise: 5
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','operator-framework']
draft: false
authors: ['default']
summary: "Exploring alternative deployment approaches."
---

Another alternative approach for deploying and managing the lifecycle of more complex applications is via the [Operator Framework](https://operatorframework.io).

The goal of an **Operator** is to put operational knowledge into software. Previously this knowledge only resided in the minds of administrators, various combinations of shell scripts, or automation software like Ansible. It was outside of your Kubernetes cluster and hard to integrate. **Operators** change that.

**Operators** are the missing piece of the puzzle in Kubernetes to implement and automate common Day-1 (installation, configuration, etc.) and Day-2 (re-configuration, update, backup, failover, restore, etc.) activities in a piece of software running inside your Kubernetes cluster, by integrating natively with Kubernetes concepts and APIs.

With Operators you can stop treating an application as a collection of primitives like **Pods**, **Deployments**, **Services** or **ConfigMaps**, and instead treat it as a singular, simplified custom object that only exposes the specific configuration values that make sense for that application.


## 5.1 - Deploying an operator

Deploying an application via an **Operator** is generally a two step process. The first step is to deploy the **Operator** itself.

Once the **Operator** is installed we can deploy the application.

For this exercise we will install the **Operator** for the [Grafana](https://grafana.com) observability platform.

Let's start in the **Topology** view of the **Developer** perspective.

Copy the following YAML snippet to your clipboard:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: userX
spec:
  channel: v5
  installPlanApproval: Automatic
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```

Click the **+** button in the top right corner menu bar of the OpenShift web console. This is a fast way to quickly import snippets of YAML for testing or exploration purposes.

Paste the above snippet of YAML into the editor and replace the instance of `userX` with your assigned user.

Click **Create**. In a minute or so you should see the Grafana operator installed and running in your project.
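
You can also keep an eye on the install from the web terminal; a small sketch (exact resource names will vary slightly by operator version):

```bash
# The Subscription we just created
oc get subscriptions

# The ClusterServiceVersion should eventually report a PHASE of Succeeded
oc get csv

# The operator's own pod will appear alongside any existing workloads
oc get pods
```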

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Deploying grafana operator via static yaml* |
</Zoom>


## 5.2 - Deploying an operator driven application

With our Grafana operator now running it will be listening for the creation of a `Grafana` custom resource. When one is detected, the operator will deploy the Grafana application according to the specification we supplied.
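
If you're curious which new resource types the operator has made available, one way to list them from the web terminal is the sketch below (the API group matches the custom resource we'll create next):

```bash
# Custom resource types provided by the Grafana operator
oc api-resources --api-group=grafana.integreatly.org
```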

Let's switch over to the **Administrator** perspective for this next task to deploy our Grafana instance.

Under the **Operators** category in the left hand menu click on **Installed Operators**.

In the **Installed Operators** list you should see a **Grafana Operator** entry; click into that.

On the **Operator details** screen you will see a list of "Provided APIs". These are custom resource types that we can now deploy with the help of the operator.

Click on **Create instance** under the provided API titled `Grafana`.

On the next **Create Grafana** screen click on the **YAML view** radio button and enter the following, replacing the two instances of `userX` with your assigned user, then click **Create**.

```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  labels:
    dashboards: grafana
    folders: grafana
  name: grafana
  namespace: userX
spec:
  config:
    auth:
      disable_login_form: 'false'
    log:
      mode: console
    security:
      admin_password: example
      admin_user: example
  route:
    spec:
      tls:
        termination: edge
      host: grafana-userX.apps.cluster-dsmsm.dynamic.opentlc.com
```

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Deploying grafana application via the grafana operator* |
</Zoom>


## 5.3 - Logging into the application

While we are in the **Administrator** perspective of the web console let's take a look at a couple of sections to confirm our newly deployed Grafana application is running as expected.

For our first step click on the **Workloads** category on the left hand side menu and then click **Pods**.

We should see a `grafana-deployment-<id>` pod with a **Status** of `Running`.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Confirming the grafana pod is running* |
</Zoom>

Now that we know the Grafana application **Pod** is running let's open the application and confirm we can log in.

Click the **Networking** category on the left hand side menu and then click **Routes**.

Click the **Route** named `grafana-route` and open the URL on the right hand side under the **Location** header.

Once the new tab opens we should be able to log in to Grafana using the credentials we supplied in the YAML configuration in the previous step.

<Zoom>
| |
|:-------------------------------------------------------------------:|
| *Confirming the grafana route is working* |
</Zoom>


## 5.4 - Bonus objective: Grafana dashboards

If you have time, take a while to explore https://grafana.com/grafana/dashboards and learn how Grafana can be used to visualise just about anything.

Well done, you've finished exercise 5! 🎉

89
data/disconnected/exercise1.mdx
Normal file
@@ -0,0 +1,89 @@

---
title: Understanding our lab environment
exercise: 1
date: '2023-12-18'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Let's get familiar with our lab setup."
---

Welcome to the OpenShift 4 Disconnected Workshop! Here you'll learn about operating an OpenShift 4 cluster in a disconnected network; for our purposes today, that will be a network without access to the internet (even through a proxy or firewall).

To level set, Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. OpenShift supports running in disconnected networks, though this does change the way the cluster operates because key ingredients like container images, operator bundles, and helm charts must be brought into the environment from the outside world via mirroring.

There are of course many different options for installing OpenShift in a restricted network; this workshop will primarily cover one opinionated approach. We'll do our best to point out where there's the potential for variability along the way.

**Let's get started!**


## 1.1 - Obtaining your environment

To get underway, open your web browser and navigate to this etherpad link to reserve yourself a user: https://etherpad.wikimedia.org/p/OpenShiftDisco_2023_12_20. You can reserve a user by noting your name or initials next to a user that has not yet been claimed.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Etherpad collaborative editor* |
</Zoom>


## 1.2 - Opening your web terminal

Throughout the remainder of the workshop you will be using a number of command line interface tools, for example `aws` to quickly interact with resources in Amazon Web Services, and `ssh` to log in to a remote server.

To save you from needing to install or configure these tools on your own device, a web terminal will be available to you for the remainder of this workshop.

Simply copy the link next to the user you reserved in etherpad and paste it into your browser. If you are prompted to log in, select `htpass` and enter the credentials listed in etherpad.


## 1.3 - Creating an air gap

According to the [Internet Security Glossary](https://www.rfc-editor.org/rfc/rfc4949), an Air Gap is:

> "an interface between two systems at which (a) they are not connected physically and (b) any logical connection is not automated (i.e., data is transferred through the interface only manually, under human control)."

In disconnected OpenShift installations, the air gap exists between the **Low side** and the **High side**, so it is between these systems where a manual data transfer, or **sneakernet**, is required.

For the purposes of this workshop we will be operating within Amazon Web Services. You have been allocated a set of credentials for an environment that already has some basic preparation completed. This will be a single VPC with 3 public subnets, which will serve as our **Low side**, and 3 private subnets, which will serve as our **High side**.

The diagram below shows a simplified overview of the networking topology:

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Workshop network topology* |
</Zoom>

Let's check that the virtual private cloud network has been created using the `aws` command line interface, by copying the command below into our web terminal:

```bash
aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r
```

You should see output similar to the example below:

```text
vpc-0e6d176c7d9c94412
```

We can also check our three public **Low side** and three private **High side** subnets are ready to go by running the command below in our web terminal:

```bash
aws ec2 describe-subnets | jq '[.Subnets[].Tags[] | select(.Key=="Name").Value] | sort'
```

We should see output matching this example:

```bash
[
  "Private Subnet - disco",
  "Private Subnet 2 - disco",
  "Private Subnet 3 - disco",
  "Public Subnet - disco",
  "Public Subnet 2 - disco",
  "Public Subnet 3 - disco"
]
```

If your environment access and topology are all working, you've finished exercise 1! 🎉

214
data/disconnected/exercise2.mdx
Normal file
@@ -0,0 +1,214 @@

---
title: Preparing our low side
exercise: 2
date: '2023-12-18'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Downloading content and tooling for sneaker ops 💾"
---

A disconnected OpenShift installation begins with downloading content and tooling to a prep system that has outbound access to the Internet. This server resides in an environment commonly referred to as the **Low side** due to its low security profile.

In this exercise we will be creating a new [AWS EC2 instance](https://aws.amazon.com/ec2) in our **Low side** that we will carry out all our preparation activities on.


## 2.1 - Creating a security group

We'll start by creating an [AWS security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) and collecting its ID.

We're going to use this shortly for the **Low side** prep system, and later on in the workshop for the **High side** bastion server.

Copy the commands below into your web terminal:

```bash
# Obtain vpc id
VPC_ID=$(aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r)
echo "Virtual private cloud id is: ${VPC_ID}"

# Obtain first public subnet id
PUBLIC_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Public Subnet - disco").SubnetId' -r)

# Create security group
aws ec2 create-security-group --group-name disco-sg --description disco-sg --vpc-id ${VPC_ID} --tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=disco-sg}]"

# Store security group id
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
echo "Security group id is: ${SG_ID}"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Creating aws ec2 security group* |
</Zoom>


## 2.2 - Opening ssh port ingress

We will want to log in to our soon-to-be-created **Low side** AWS EC2 instance remotely via `ssh`, so let's enable ingress on port `22` for this security group now:

> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments.

```bash
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Opening ssh port ingress* |
</Zoom>


## 2.3 - Create prep system instance

Ready to launch! 🚀 We'll use the `t3.micro` instance type, which offers `1GiB` of RAM and `2` vCPUs, along with a `50GiB` storage volume to ensure we have enough space for mirrored content:

> Note: As mentioned in the [OpenShift documentation](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html/installing/disconnected-installation-mirroring), about 12 GB of storage space is required for OpenShift Container Platform 4.14 release images, or about 358 GB for OpenShift Container Platform 4.14 release images plus all OpenShift Container Platform 4.14 Red Hat Operator images.

Run the command below in your web terminal to launch the instance. We will specify an Amazon Machine Image (AMI) to use for our prep system, which for this lab will be the [Marketplace AMI for RHEL 8](https://access.redhat.com/solutions/15356#us_east_2) in `us-east-2`.

```bash
aws ec2 run-instances --image-id "ami-092b43193629811af" \
  --count 1 --instance-type t3.micro \
  --key-name disco-key \
  --security-group-ids $SG_ID \
  --subnet-id $PUBLIC_SUBNET \
  --associate-public-ip-address \
  --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-prep-system}]" \
  --block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Launching a prep rhel8 ec2 instance* |
</Zoom>


## 2.4 - Connecting to the low side

Now that our prep system is up, let's `ssh` into it and download the content we'll need to support our install on the **High side**.

Copy the commands below into your web terminal. Let's start by retrieving the IP for the new ec2 instance and then connecting via `ssh`:

> Note: If your `ssh` command times out here, your prep system is likely still booting up. Give it a minute and try again.

```bash
PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
echo $PREP_SYSTEM_IP

ssh -i disco_key ec2-user@$PREP_SYSTEM_IP
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Connecting to the prep rhel8 ec2 instance* |
</Zoom>


## 2.5 - Downloading required tools

For the purposes of this workshop, rather than downloading mirror content to a USB drive as we would likely do in a real SneakerOps situation, we will instead be saving content to an EBS volume which will be mounted to our prep system on the **Low side** and then subsequently synced to our bastion system on the **High side**.

Once your prep system has booted, let's mount the EBS volume we attached so we can start downloading content. Copy the commands below into your web terminal:

```bash
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /mnt/high-side
sudo mount /dev/nvme1n1 /mnt/high-side
sudo chown ec2-user:ec2-user /mnt/high-side
cd /mnt/high-side
```

With our mount in place let's grab the tools we'll need for the bastion server - we'll use some of them on the prep system too. Life's good on the low side; we can download these from the internet and tuck them into our **High side** gift basket at `/mnt/high-side`.

There are four tools we need; copy the commands into your web terminal to download each one:

1. `oc` - the OpenShift CLI

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz -L -o oc.tar.gz
tar -xzf oc.tar.gz oc && rm -f oc.tar.gz
sudo cp oc /usr/local/bin/
```

2. `oc-mirror` - the `oc` plugin for mirroring release, operator, and helm content

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/oc-mirror.tar.gz -L -o oc-mirror.tar.gz
tar -xzf oc-mirror.tar.gz && rm -f oc-mirror.tar.gz
chmod +x oc-mirror
sudo cp oc-mirror /usr/local/bin/
```

3. `mirror-registry` - a small-scale Quay registry designed for mirroring

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/mirror-registry/latest/mirror-registry.tar.gz -L -o mirror-registry.tar.gz
tar -xzf mirror-registry.tar.gz
rm -f mirror-registry.tar.gz
```

4. `openshift-install` - the OpenShift installer CLI

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz -L -o openshift-installer.tar.gz
tar -xzf openshift-installer.tar.gz openshift-install
rm -f openshift-installer.tar.gz
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Downloading required tools with curl* |
</Zoom>


## 2.6 - Mirroring content to disk

The `oc-mirror` plugin supports mirroring content directly from upstream sources to a mirror registry, but since there is an air gap between our **Low side** and **High side**, that's not an option for this lab. Instead, we'll mirror content to a tarball on disk that we can then sneakernet into the bastion server on the **High side**. We'll then mirror from the tarball into the mirror registry from there.

> Note: A pre-requisite for this process is an OpenShift pull secret to authenticate to the Red Hat registries. This has already been created for you to avoid the delay of registering for individual Red Hat accounts during this workshop. You can copy this into your newly created prep system by running `scp -pr -i disco_key .docker ec2-user@$PREP_SYSTEM_IP:` in your web terminal. In a real world scenario this pull secret can be downloaded from https://console.redhat.com/openshift/install/pull-secret.

Let's get started by generating an `ImageSetConfiguration` that describes the parameters of our mirror. Run the command below to generate a boilerplate configuration file; it may take a minute:

```bash
oc mirror init > imageset-config.yaml
```

> Note: You can take a look at the default file by running `cat imageset-config.yaml` in your web terminal. Feel free to pause the workshop tasks for a few minutes and read through the [OpenShift documentation](https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.html#oc-mirror-creating-image-set-config_mirroring-ocp-image-repository) for the different options available within the image set configuration.

To save time and storage, we're going to remove the operator catalogs and mirror only the release images for this workshop. We'll still get a fully functional cluster, but OperatorHub will be empty.

To complete this, overwrite your `imageset-config.yaml` with a version that has the operators object removed by running the command below in your web terminal:

```bash
cat << EOF > imageset-config.yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: ./
mirror:
  platform:
    channels:
    - name: stable-4.14
      type: ocp
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest
  helm: {}
EOF
```

Now we're ready to kick off the mirror! This can take 5-15 minutes so this is a good time to go grab a coffee or take a short break:

> Note: If you're keen to see more verbose output to track the progress of the mirror to disk process you can add the `-v 5` flag to the command below.

```bash
oc mirror --config imageset-config.yaml file:///mnt/high-side
```

Once your content has finished mirroring to disk you've finished exercise 2! 🎉

119
data/disconnected/exercise3.mdx
Normal file
@@ -0,0 +1,119 @@

---
title: Preparing our high side
exercise: 3
date: '2023-12-19'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Setting up a bastion server and transferring content"
---

In this exercise, we'll prepare the **High side**. This involves creating a bastion server on the **High side** that will host our mirror registry.

> Note: We have an interesting dilemma for this exercise: the Amazon Machine Image we used for the prep system earlier does not have `podman` installed. We need `podman`, since it is a key dependency for `mirror-registry`.
>
> We could rectify this by running `sudo dnf install -y podman` on the bastion system, but the bastion server won't have Internet access, so we need another option for this lab. To solve this problem, we need to build our own RHEL image with podman pre-installed. Real customer environments will likely already have a solution for this, but one approach is to use the [Image Builder](https://console.redhat.com/insights/image-builder) in the Hybrid Cloud Console, and that's exactly what has been done for this lab.
>
> ![workshop](/workshops/static/images/disconnected/image-builder.png)
>
> In the home directory of your web terminal you will find an `ami.txt` file containing our custom image AMI, which will be used by the command that creates our bastion ec2 instance.


## 3.1 - Creating a bastion server

First up for this exercise we'll grab the ID of one of our **High side** private subnets, as well as our ec2 security group.

Copy the commands below into your web terminal:

```bash
PRIVATE_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Private Subnet - disco").SubnetId' -r)
echo $PRIVATE_SUBNET

SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
echo $SG_ID
```

Once we know our subnet and security group IDs we can spin up our **High side** bastion server. Copy the commands below into your web terminal to complete this:

```bash
aws ec2 run-instances --image-id $(cat ami.txt) \
  --count 1 \
  --instance-type t3.large \
  --key-name disco-key \
  --security-group-ids $SG_ID \
  --subnet-id $PRIVATE_SUBNET \
  --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-bastion-server}]" \
  --block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Launching bastion ec2 instance* |
</Zoom>


## 3.2 - Accessing the high side

Now we need to access our bastion server on the high side. In real customer environments, this might entail use of a VPN, or physical access to a workstation in a secure facility such as a SCIF.

To make things a bit simpler for our lab, we're going to restrict access to our bastion to its private IP address, so we'll use the prep system as a sort of bastion-to-the-bastion.
|
||||
|
||||
Let's get access by grabbing the bastion's private IP.
|
||||
|
||||
```bash
|
||||
HIGHSIDE_BASTION_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-bastion-server" | jq -r '.Reservations[0].Instances[0].PrivateIpAddress')
|
||||
echo $HIGHSIDE_BASTION_IP
|
||||
```
|
||||
|
||||
Our next step will be to `exit` back to our web terminal and copy our private key to the prep system so that we can `ssh` to the bastion from there. You may have to wait a minute for the VM to finish initializing:
|
||||
|
||||
```bash
|
||||
PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
|
||||
|
||||
scp -i disco_key disco_key ec2-user@$PREP_SYSTEM_IP:/home/ec2-user/disco_key
|
||||
```
|
||||
|
||||
To make life a bit easier down the track let's set an environment variable on the prep system so that we can preserve the bastion's IP:
|
||||
|
||||
```bash
|
||||
ssh -i disco_key ec2-user@$PREP_SYSTEM_IP "echo HIGHSIDE_BASTION_IP=$(echo $HIGHSIDE_BASTION_IP) > highside.env"
|
||||
```
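If you need the bastion IP again later, the saved file can simply be sourced from the prep system home directory. A minimal example, assuming you are logged into the prep system:

```bash
# Reload the bastion IP that we just saved
source ~/highside.env
echo $HIGHSIDE_BASTION_IP
```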
|
||||
|
||||
Finally, let's connect all the way through to our **High side** bastion 🚀
|
||||
|
||||
```bash
|
||||
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
|
||||
```
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Connecting to our bastion ec2 instance* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 3.3 - Sneakernetting content to the high side
|
||||
|
||||
We'll now deliver the **High side** gift basket to the bastion server. Start by mounting our EBS volume on the bastion server to ensure that we don't run out of space:
|
||||
|
||||
```bash
|
||||
sudo mkfs -t xfs /dev/nvme1n1
|
||||
sudo mkdir /mnt/high-side
|
||||
sudo mount /dev/nvme1n1 /mnt/high-side
|
||||
sudo chown ec2-user:ec2-user /mnt/high-side
|
||||
```
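Before moving on it's worth confirming the volume mounted cleanly and has the expected ~50GiB of capacity:

```bash
# Verify the new filesystem is mounted with the expected free space
df -h /mnt/high-side
```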
|
||||
|
||||
With the mount in place we can exit back to our base web terminal and send over our gift basket at `/mnt/high-side` using `rsync`. This can take 10-15 minutes depending on the size of the mirror tarball.
|
||||
|
||||
```bash
|
||||
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "rsync -avP -e 'ssh -i disco_key' /mnt/high-side ec2-user@$HIGHSIDE_BASTION_IP:/mnt"
|
||||
```
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Initiating the sneakernet transfer via rsync* |
|
||||
</Zoom>
|
||||
|
||||
Once your transfer has completed you are finished with exercise 3, well done! 🎉
|
||||
102
data/disconnected/exercise4.mdx
Normal file
@ -0,0 +1,102 @@
|
||||
---
|
||||
title: Deploying a mirror registry
|
||||
exercise: 4
|
||||
date: '2023-12-20'
|
||||
tags: ['openshift','containers','kubernetes','disconnected']
|
||||
draft: false
|
||||
authors: ['default']
|
||||
summary: "Let's start mirroring some content on our high side!"
|
||||
---
|
||||
|
||||
Images used by operators and platform components must be mirrored from upstream sources into a container registry that is accessible by the **High side**. You can use any registry you like for this as long as it supports Docker `v2-2`, such as:
|
||||
- Red Hat Quay
|
||||
- JFrog Artifactory
|
||||
- Sonatype Nexus Repository
|
||||
- Harbor
|
||||
|
||||
An OpenShift subscription includes access to the [mirror registry](https://docs.openshift.com/container-platform/4.14/installing/disconnected_install/installing-mirroring-creating-registry.html#installing-mirroring-creating-registry) for Red Hat OpenShift, which is a small-scale container registry designed specifically for mirroring images in disconnected installations. We'll make use of this option in this lab.
|
||||
|
||||
Mirroring all release and operator images can take a while depending on the available network bandwidth. For this lab, recall that we're going to mirror just the release images to save time and resources.
|
||||
|
||||
We should have the `mirror-registry` binary along with the required container images available on the bastion in `/mnt/high-side`. The `50GB` volume we created should be enough to hold our mirror (without operators) and binaries.
|
||||
|
||||
|
||||
## 4.1 - Opening mirror registry port ingress
|
||||
|
||||
We are getting close to deploying a disconnected OpenShift cluster, which will be spread across multiple machines distributed across our three private subnets.
|
||||
|
||||
Each of the machines in those private subnets will need to talk back to our mirror registry on port `8443`, so let's quickly update our AWS security group to ensure this will work.
|
||||
|
||||
> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments:
|
||||
|
||||
```bash
|
||||
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
|
||||
|
||||
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 8443 --cidr 0.0.0.0/0
|
||||
```
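If you'd like to double check the rule was added, you can inspect the security group's ingress permissions:

```bash
# Confirm an ingress rule for TCP 8443 now exists on the security group
aws ec2 describe-security-groups --group-ids $SG_ID \
  | jq '.SecurityGroups[0].IpPermissions[] | select(.FromPort==8443)'
```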
|
||||
|
||||
|
||||
## 4.2 - Running the registry install
|
||||
|
||||
First, let's `ssh` back into the bastion:
|
||||
|
||||
```bash
|
||||
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
|
||||
```
|
||||
|
||||
And then we can kick off our install:
|
||||
|
||||
```bash
|
||||
cd /mnt/high-side
|
||||
./mirror-registry install --quayHostname $(hostname) --quayRoot /mnt/high-side/quay/quay-install --quayStorage /mnt/high-side/quay/quay-storage --pgStorage /mnt/high-side/quay/pg-data --initPassword discopass
|
||||
```
|
||||
|
||||
If all goes well, you should see something like:
|
||||
|
||||
```text
|
||||
INFO[2023-07-06 15:43:41] Quay installed successfully, config data is stored in /mnt/quay/quay-install
|
||||
INFO[2023-07-06 15:43:41] Quay is available at https://ip-10-0-51-47.ec2.internal:8443 with credentials (init, discopass)
|
||||
```
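If you'd like to confirm the registry is reachable before continuing, a quick request from the bastion should succeed. We pass `-k` because the registry uses a self-signed certificate; the health endpoint shown here is an assumption based on Quay's standard instance health API:

```bash
# Check the mirror registry is responding (skip TLS verification for the self-signed cert)
curl -k https://$(hostname):8443/health/instance
```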
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Running the mirror-registry installer* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 4.3 - Logging into the mirror registry
|
||||
|
||||
Now that our registry is running, let's log in with `podman`, which will generate an auth file at `/run/user/1000/containers/auth.json`.
|
||||
|
||||
```bash
|
||||
podman login -u init -p discopass --tls-verify=false $(hostname):8443
|
||||
```
|
||||
|
||||
We should be greeted with `Login Succeeded!`.
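If you're curious, you can take a peek at the generated auth file to see the stored credential for our registry:

```bash
# Inspect the podman auth file created by the login
cat /run/user/1000/containers/auth.json
```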
|
||||
|
||||
> Note: We pass `--tls-verify=false` here for simplicity during this workshop, but you can optionally add `/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem` to the system trust store by following the guide in the Quay documentation [here](https://access.redhat.com/documentation/en-us/red_hat_quay/3/html/manage_red_hat_quay/using-ssl-to-protect-quay?extIdCarryOver=true&sc_cid=701f2000001OH74AAG#configuring_the_system_to_trust_the_certificate_authority).
|
||||
|
||||
|
||||
## 4.4 - Pushing content into the mirror registry
|
||||
|
||||
Now we're ready to mirror images from disk into the registry. Let's add `oc` and `oc-mirror` to the path:
|
||||
|
||||
```bash
|
||||
sudo cp /mnt/high-side/oc /usr/local/bin/
|
||||
sudo cp /mnt/high-side/oc-mirror /usr/local/bin/
|
||||
```
|
||||
|
||||
Now we can fire up the mirror process to push our content from disk into the registry, ready to be pulled by the OpenShift installation. This can take a similar amount of time to the sneakernet transfer we completed in exercise 3.
|
||||
|
||||
```bash
|
||||
oc mirror --from=/mnt/high-side/mirror_seq1_000000.tar --dest-skip-tls docker://$(hostname):8443
|
||||
```
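Once the push completes you can optionally query the registry catalog to confirm the mirrored repositories are present. This uses the standard Docker v2 API that the mirror registry supports:

```bash
# List the repositories now hosted by the mirror registry
curl -u init:discopass -k https://$(hostname):8443/v2/_catalog
```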
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Running the oc mirror process to push content to our registry* |
|
||||
</Zoom>
|
||||
|
||||
Once your content has finished pushing you have completed exercise 4, well done! 🎉
|
||||
219
data/disconnected/exercise5.mdx
Normal file
@ -0,0 +1,219 @@
|
||||
---
|
||||
title: Installing a disconnected OpenShift cluster
|
||||
exercise: 5
|
||||
date: '2023-12-20'
|
||||
tags: ['openshift','containers','kubernetes','disconnected']
|
||||
draft: false
|
||||
authors: ['default']
|
||||
summary: "Time to install a cluster 🚀"
|
||||
---
|
||||
|
||||
We're on the home straight now. In this exercise we'll configure and then execute our `openshift-installer`.
|
||||
|
||||
The OpenShift installation process is initiated from the bastion server on our **High side**. There are a handful of different ways to install OpenShift, but for this lab we're going to be using installer-provisioned infrastructure (IPI).
|
||||
|
||||
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters.
|
||||
|
||||
We'll then customize the `install-config.yaml` file that is produced to specify advanced configuration for our disconnected installation. The installation program then provisions the underlying infrastructure for the cluster. Here's a diagram describing the inputs and outputs of the installation configuration process:
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Installation overview* |
|
||||
</Zoom>
|
||||
|
||||
> Note: You may notice that nodes are provisioned through a process called Ignition. This concept is out of scope for this workshop, but if you're interested to learn more about it, you can read up on it in the documentation [here](https://docs.openshift.com/container-platform/4.14/installing/index.html#about-rhcos).
|
||||
|
||||
IPI is the recommended installation method in most cases because it leverages full automation in installation and cluster management, but there are some key considerations to keep in mind when planning a production installation in a real world scenario.
|
||||
|
||||
You may not have access to the infrastructure APIs. Our lab is going to live in AWS, which requires connectivity to the `.amazonaws.com` domain. We accomplish this by using an allowed list on a Squid proxy running on the **High side**, but a similar approach may not be achievable or permissible for everyone.
|
||||
|
||||
You may not have sufficient permissions with your infrastructure provider. Our lab has full admin in our AWS enclave, so that's not a constraint we'll need to deal with. In real world environments, you'll need to ensure your account has the appropriate permissions which sometimes involves negotiating with security teams.
|
||||
|
||||
Once configuration has been completed, we can kick off the OpenShift Installer and it will do all the work for us to provision the infrastructure and install OpenShift.
|
||||
|
||||
|
||||
## 5.1 - Building install-config.yaml
|
||||
|
||||
Before we run the installer we need to create a configuration file. Let's set up a workspace for it first.
|
||||
|
||||
```bash
|
||||
mkdir /mnt/high-side/install
|
||||
cd /mnt/high-side/install
|
||||
```
|
||||
|
||||
Next we will generate the ssh key pair for access to cluster nodes:
|
||||
|
||||
```bash
|
||||
ssh-keygen -f ~/.ssh/disco-openshift-key -q -N ""
|
||||
```
|
||||
|
||||
Use the following Python code to minify your mirror container registry pull secret to a single line. Copy this output to your clipboard, since you'll need it in a moment:
|
||||
|
||||
```bash
|
||||
python3 -c $'import json\nimport sys\nwith open(sys.argv[1], "r") as f: print(json.dumps(json.load(f)))' /run/user/1000/containers/auth.json
|
||||
```
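If you prefer, `jq` can produce the same single-line output. This is an equivalent alternative, assuming `jq` is available on the bastion:

```bash
# Compact the pull secret to a single line with jq
jq -c . /run/user/1000/containers/auth.json
```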
|
||||
|
||||
> Note: For connected installations, you'd use the secret from the Hybrid Cloud Console, but for our use case, the mirror registry is the only one OpenShift will need to authenticate to.
|
||||
|
||||
Then we can go ahead and generate our `install-config.yaml`:
|
||||
|
||||
> Note: We are setting `--log-level` to get more verbose output.
|
||||
|
||||
```bash
|
||||
/mnt/high-side/openshift-install create install-config --dir /mnt/high-side/install --log-level=DEBUG
|
||||
```
|
||||
|
||||
The OpenShift installer will prompt you for a number of fields; enter the values below:
|
||||
|
||||
- SSH Public Key: `/home/ec2-user/.ssh/disco-openshift-key.pub`
|
||||
> The SSH public key used to access all nodes within the cluster.
|
||||
|
||||
- Platform: aws
|
||||
> The platform on which the cluster will run.
|
||||
|
||||
- AWS Access Key ID and Secret Access Key: From `cat ~/.aws/credentials`
|
||||
|
||||
- Region: `us-east-2`
|
||||
|
||||
- Base Domain: `sandboxXXXX.opentlc.com` (this should populate automatically).
|
||||
> The base domain of the cluster. All DNS records will be sub-domains of this base and will also include the cluster name.
|
||||
|
||||
- Cluster Name: `disco`
|
||||
> The name of the cluster. This will be used when generating sub-domains.
|
||||
|
||||
- Pull Secret: Paste the single-line pull secret output you copied to your clipboard earlier.
|
||||
|
||||
That's it! The installer will generate `install-config.yaml` and drop it in `/mnt/high-side/install` for you.
|
||||
|
||||
Once the config file is generated, take a look through it. We will be making the following changes:
|
||||
|
||||
- Change `publish` from `External` to `Internal`. We're using private subnets to house the cluster, so it won't be publicly accessible.
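The relevant field in your `install-config.yaml` should end up looking like this:

```yaml
publish: Internal
```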
|
||||
|
||||
- Add the subnet IDs for your private subnets to `platform.aws.subnets`. Otherwise, the installer will create its own VPC and subnets. You can retrieve them by running this command from your workstation:
|
||||
|
||||
```bash
|
||||
aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).SubnetId] | unique' -r | yq read - -P
|
||||
```
|
||||
|
||||
Then add them to `platform.aws.subnets` in your `install-config.yaml` so that they look something like this:
|
||||
|
||||
```yaml
|
||||
platform:
|
||||
aws:
|
||||
region: us-east-2
|
||||
subnets:
|
||||
- subnet-00f28bbc11d25d523
|
||||
- subnet-07b4de5ea3a39c0fd
|
||||
- subnet-0a1c0e8f5d9e3b7a2
|
||||
```
|
||||
|
||||
- Next we need to modify the `machineNetwork` to match the IPv4 CIDR blocks from the private subnets. Otherwise your control plane and compute nodes will be assigned IP addresses that are out of range and break the install. You can retrieve them by running this command from your workstation:
|
||||
|
||||
```bash
|
||||
aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).CidrBlock] | unique | map("cidr: " + .)' | yq read -P - | sed "s/'//g"
|
||||
```
|
||||
|
||||
Then use them to **replace the existing** `networking.machineNetwork` entry in your `install-config.yaml` so that they look something like this:
|
||||
|
||||
```yaml
|
||||
networking:
|
||||
clusterNetwork:
|
||||
- cidr: 10.128.0.0/14
|
||||
hostPrefix: 23
|
||||
machineNetwork:
|
||||
- cidr: 10.0.48.0/20
|
||||
- cidr: 10.0.64.0/20
|
||||
- cidr: 10.0.80.0/20
|
||||
```
|
||||
|
||||
- Next we will add the `imageContentSources` to ensure image mappings happen correctly. You can append them to your `install-config.yaml` by running this command:
|
||||
|
||||
```bash
|
||||
cat << EOF >> install-config.yaml
|
||||
imageContentSources:
|
||||
- mirrors:
|
||||
- $(hostname):8443/ubi8/ubi
|
||||
source: registry.redhat.io/ubi8/ubi
|
||||
- mirrors:
|
||||
- $(hostname):8443/openshift/release-images
|
||||
source: quay.io/openshift-release-dev/ocp-release
|
||||
- mirrors:
|
||||
- $(hostname):8443/openshift/release
|
||||
source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
|
||||
EOF
|
||||
```
|
||||
|
||||
- Add the root CA of our mirror registry (`/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem`) to the trust bundle using the `additionalTrustBundle` field by running this command:
|
||||
|
||||
```bash
|
||||
cat <<EOF >> install-config.yaml
|
||||
additionalTrustBundle: |
|
||||
$(cat /mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem | sed 's/^/ /')
|
||||
EOF
|
||||
```
|
||||
|
||||
It should look something like this:
|
||||
|
||||
```yaml
|
||||
additionalTrustBundle: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIID2DCCAsCgAwIBAgIUbL/naWCJ48BEL28wJTvMhJEz/C8wDQYJKoZIhvcNAQEL
|
||||
BQAwdTELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAlZBMREwDwYDVQQHDAhOZXcgWW9y
|
||||
azENMAsGA1UECgwEUXVheTERMA8GA1UECwwIRGl2aXNpb24xJDAiBgNVBAMMG2lw
|
||||
LTEwLTAtNTEtMjA2LmVjMi5pbnRlcm5hbDAeFw0yMzA3MTExODIyMjNaFw0yNjA0
|
||||
MzAxODIyMjNaMHUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJWQTERMA8GA1UEBwwI
|
||||
TmV3IFlvcmsxDTALBgNVBAoMBFF1YXkxETAPBgNVBAsMCERpdmlzaW9uMSQwIgYD
|
||||
VQQDDBtpcC0xMC0wLTUxLTIwNi5lYzIuaW50ZXJuYWwwggEiMA0GCSqGSIb3DQEB
|
||||
AQUAA4IBDwAwggEKAoIBAQDEz/8Pi4UYf/zanB4GHMlo4nbJYIJsyDWx+dPITTMd
|
||||
J3pdOo5BMkkUQL8rSFkc3RjY/grdk2jejVPQ8sVnSabsTl+ku7hT0t1w7E0uPY8d
|
||||
RTeGoa5QvdFOxWz6JsLo+C+JwVOWI088tYX1XZ86TD5FflOEeOwWvs5cmQX6L5O9
|
||||
QGO4PHBc9FWpmaHvFBiRJN3AQkMK4C9XB82G6mCp3c1cmVwFOo3vX7h5738PKXWg
|
||||
KYUTGXHxd/41DBhhY7BpgiwRF1idfLv4OE4bzsb42qaU4rKi1TY+xXIYZ/9DPzTN
|
||||
nQ2AHPWbVxI+m8DZa1DAfPvlZVxAm00E1qPPM30WrU4nAgMBAAGjYDBeMAsGA1Ud
|
||||
DwQEAwIC5DATBgNVHSUEDDAKBggrBgEFBQcDATAmBgNVHREEHzAdghtpcC0xMC0w
|
||||
LTUxLTIwNi5lYzIuaW50ZXJuYWwwEgYDVR0TAQH/BAgwBgEB/wIBATANBgkqhkiG
|
||||
9w0BAQsFAAOCAQEAkkV7/+YhWf1vq//N0Ms0td0WDJnqAlbZUgGkUu/6XiUToFtn
|
||||
OE58KCudP0cAQtvl0ISfw0c7X/Ve11H5YSsVE9afoa0whEO1yntdYQagR0RLJnyo
|
||||
Dj9xhQTEKAk5zXlHS4meIgALi734N2KRu+GJDyb6J0XeYS2V1yQ2Ip7AfCFLdwoY
|
||||
cLtooQugLZ8t+Kkqeopy4pt8l0/FqHDidww1FDoZ+v7PteoYQfx4+R5e8ko/vKAI
|
||||
OCALo9gecCXc9U63l5QL+8z0Y/CU9XYNDfZGNLSKyFTsbQFAqDxnCcIngdnYFbFp
|
||||
mRa1akgfPl+BvAo17AtOiWbhAjipf5kSBpmyJA==
|
||||
-----END CERTIFICATE-----
|
||||
```
|
||||
|
||||
Lastly, now is a good time to make a backup of your `install-config.yaml` since the installer will consume (and delete) it:
|
||||
|
||||
```bash
|
||||
cp install-config.yaml install-config.yaml.bak
|
||||
```
|
||||
|
||||
|
||||
## 5.2 - Running the installation
|
||||
|
||||
We're ready to run the install! Let's kick off the cluster installation by copying the command below into our web terminal:
|
||||
|
||||
> Note: Once more we can use the `--log-level=DEBUG` flag to get more insight on how the install is progressing.
|
||||
|
||||
```bash
|
||||
/mnt/high-side/openshift-install create cluster --log-level=DEBUG
|
||||
```
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Installation overview* |
|
||||
</Zoom>
|
||||
|
||||
The installation process should take about 30 minutes. If you've done everything correctly, you should see something like the example below at the conclusion:
|
||||
|
||||
```text
|
||||
...
|
||||
INFO Install complete!
|
||||
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
|
||||
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
|
||||
INFO Login to the console with user: "kubeadmin", and password: "password"
|
||||
INFO Time elapsed: 30m49s
|
||||
```
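As a final sanity check you can point `oc` at the kubeconfig generated by the installer (the exact path is printed in the installer output above) and confirm the cluster is healthy. A minimal sketch, assuming the install artifacts live in `/mnt/high-side/install`:

```bash
# Use the freshly generated kubeconfig to query the new cluster
export KUBECONFIG=/mnt/high-side/install/auth/kubeconfig
oc get nodes
oc get clusteroperators
```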
|
||||
|
||||
If you made it this far you have completed all the workshop exercises, well done! 🎉
|
||||
@ -1,8 +1,8 @@
|
||||
const siteMetadata = {
|
||||
title: 'Red Hat OpenShift Windows Container Workshop',
|
||||
title: 'Red Hat OpenShift Application Delivery Workshop',
|
||||
author: 'Red Hat',
|
||||
headerTitle: 'Red Hat',
|
||||
description: 'Red Hat OpenShift Windows Container Workshop',
|
||||
description: 'Red Hat OpenShift Application Delivery Workshop',
|
||||
language: 'en-us',
|
||||
siteUrl: 'https://jmhbnz.github.io/workshops',
|
||||
siteRepo: 'https://github.com/jmhbnz/workshops',
|
||||
|
||||
@ -1,89 +1,191 @@
|
||||
---
|
||||
title: Understanding our lab environment
|
||||
title: Getting familiar with OpenShift
|
||||
exercise: 1
|
||||
date: '2023-12-18'
|
||||
tags: ['openshift','containers','kubernetes','disconnected']
|
||||
date: '2023-12-04'
|
||||
tags: ['openshift','containers','kubernetes']
|
||||
draft: false
|
||||
authors: ['default']
|
||||
summary: "Let's get familiar with our lab setup."
|
||||
summary: "In this first exercise we'll get familiar with OpenShift."
|
||||
---
|
||||
|
||||
Welcome to the OpenShift 4 Disconnected Workshop! Here you'll learn about operating an OpenShift 4 cluster in a disconnected network. For our purposes today, that will be a network without access to the internet (even through a proxy or firewall).
|
||||
Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. In this first excercise we'll get logged into our cluster and familarise ourselves with the OpenShift web console and web terminal.
|
||||
|
||||
To level set, Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. OpenShift supports running in disconnected networks, though this does change the way the cluster operates because key ingredients like container images, operator bundles, and helm charts must be brought into the environment from the outside world via mirroring.
|
||||
The OpenShift Container Platform web console is a feature-rich user interface with both an **Administrator** perspective and a **Developer** perspective accessible through any modern web browser. You can use the web console to visualize, browse, and manage your OpenShift cluster and the applications running on it.
|
||||
|
||||
There are of course many different options for installing OpenShift in a restricted network; this workshop will primarily cover one opinionated approach. We'll do our best to point out where there's the potential for variability along the way.
|
||||
In addition to the web console, OpenShift includes command line tools to provide users with a nice interface to work with applications deployed to the platform. The `oc` command line tool is available for Linux, macOS or Windows.
|
||||
|
||||
**Let's get started!**
|
||||
|
||||
## 1.1 - Login to lab environment
|
||||
|
||||
## 1.1 - Obtaining your environment
|
||||
An OpenShift `4.14` cluster has already been provisioned for you to complete these excercises. Open your web browser and navigate to the workshop login page https://demo.redhat.com/workshop/enwmgc.
|
||||
|
||||
To get underway open your web browser and navigate to this etherpad link to reserve yourself a user https://etherpad.wikimedia.org/p/OpenShiftDisco_2023_12_20. You can reserve a user by noting your name or initials next to a user that has not yet been claimed.
|
||||
Once the page loads you can login with the details provided by your workshop facilitator.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Etherpad collaborative editor* |
|
||||
| *Workshop login page* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 1.2 - Opening your web terminal
|
||||
## 1.2 - Login to the cluster web console
|
||||
|
||||
Throughout the remainder of the workshop you will be using a number of command line interface tools, for example `aws` to quickly interact with resources in Amazon Web Services, and `ssh` to log in to a remote server.
|
||||
Once you're logged into the lab environnment we can open up the OpenShift web console and login with the credentials provided.
|
||||
|
||||
To save you from needing to install or configure these tools on your own device, a web terminal will be available to you for the remainder of this workshop.
|
||||
|
||||
Simply copy the link next to the user you reserved in etherpad and paste it into your browser. If you are prompted to log in, select `htpass` and enter the credentials listed in etherpad.
|
||||
|
||||
|
||||
## 1.3 - Creating an air gap
|
||||
|
||||
According to the [Internet Security Glossary](https://www.rfc-editor.org/rfc/rfc4949), an Air Gap is:
|
||||
|
||||
> "an interface between two systems at which (a) they are not connected physically and (b) any logical connection is not automated (i.e., data is transferred through the interface only manually, under human control)."
|
||||
|
||||
In disconnected OpenShift installations, the air gap exists between the **Low Side** and the **High Side**, so it is between these systems where a manual data transfer, or **sneakernet** is required.
|
||||
|
||||
For the purposes of this workshop we will be operating within Amazon Web Services. You have been allocated a set of credentials for an environment that already has some basic preparation completed. This will be a single VPC with 3 public subnets, which will serve as our **Low Side**, and 3 private subnets, which will serve as our **High Side**.
|
||||
|
||||
The diagram below shows a simplified overview of the networking topology:
|
||||
When first logging in you will be prompted to take a tour of the **Developer** console view, let's do that now.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|  |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Workshop network topology* |
|
||||
| *Developer perspective web console tour* |
|
||||
</Zoom>
|
||||
|
||||
Let's check the virtual private cloud network is created using the `aws` command line interface by copying the command below into our web terminal:
|
||||
|
||||
## 1.3 - Understanding projects
|
||||
|
||||
Projects are a logical boundary to help you organize your applications. An OpenShift project allows a community of users (or a single user) to organize and manage their work in isolation from other projects.
|
||||
|
||||
Each project has its own resources, role based access control (who can or cannot perform actions), and constraints (quotas and limits on resources, etc).
|
||||
|
||||
Projects act as a "wrapper" around all the application services you (or your teams) are using for your work.
|
||||
|
||||
In this lab environment, you already have access to single project: `userX` (Where X is the number of your user allocted for the workshop from the previous step.)
|
||||
|
||||
Let's click into our `Project` from the left hand panel of the **Developer** web console perspective. We should be able to see that our project has no `Deployments` and there are no compute cpu or memory resources currently being consumed.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Developer perspective project view* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 1.4 - Switching between perspectives
|
||||
|
||||
Different roles have different needs when it comes to viewing details within the OpenShift web console. At the top of the left navigation menu, you can toggle between the Administrator perspective and the Developer perspective.
|
||||
|
||||
Select **Administrator** to switch to the Administrator perspective.
|
||||
|
||||
Once the Administrator perspective loads, you should be in the "Home" view and see a wider array of menu sections in the left hand navigation panel.
|
||||
|
||||
Switch back to the **Developer** perspective. Once the Developer perspective loads, select the **Topology** view.
|
||||
|
||||
Right now, there are no applications or components to view in your `userX` project, but once you begin working on the lab, you’ll be able to visualize and interact with the components in your application here.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Switching web console perspectives* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
|
||||
## 1.5 - Launching a web terminal
|
||||
|
||||
While web interfaces are comfortable and easy to use, sometimes we want to quickly run commands to get things done. That is where the `oc` command line utility comes in.
|
||||
|
||||
One handy feature of the OpenShift web console is that we can launch a web terminal: a browser-based terminal that already has the `oc` command logged in and ready to use.
|
||||
|
||||
Let's launch a web terminal now by clicking the terminal button in the top right hand corner and then clicking **Start** with our `userX` project selected.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Launching your web terminal* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 1.6 - Running oc commands
|
||||
|
||||
The [`oc` command line utility](https://docs.openshift.com/container-platform/4.14/cli_reference/openshift_cli/getting-started-cli.html#creating-a-new-app) is a superset of the upstream kubernetes `kubectl` command line utility. This means it can do everything that `kubectl` can do, plus some additional OpenShift specific commands.
|
||||
|
||||
Let's try a few commands now:
|
||||
|
||||
|
||||
### Checking our current project
|
||||
|
||||
Most actions we take in OpenShift will be in relation to a particular project. We can check which project we are currently actively using by running the `oc project` command.
|
||||
|
||||
We should see output similar to below showing we are currently using our `userX` project:
|
||||
|
||||
```bash
|
||||
aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r
|
||||
bash-4.4 ~ $ oc project
|
||||
Using project "user1" from context named "user1-context" on server "https://172.31.0.1:443".
|
||||
```
|
||||
|
||||
You should see output similar to the example below:
|
||||
### Getting help and explaining concepts
|
||||
|
||||
As with any command line utility, there can be complexity that quickly surfaces. Thankfully the `oc` command line utility has excellent built in help.
|
||||
|
||||
Let's take a look at that now.
|
||||
|
||||
To get an understanding of all the options available, try running `oc help`. You should see options similar to the below sample:
|
||||
|
||||
```text
|
||||
vpc-0e6d176c7d9c94412
|
||||
bash-4.4 ~ $ oc help
|
||||
OpenShift Client
|
||||
|
||||
This client helps you develop, build, deploy, and run your applications on any
|
||||
OpenShift or Kubernetes cluster. It also includes the administrative
|
||||
commands for managing a cluster under the 'adm' subcommand.
|
||||
|
||||
Basic Commands:
|
||||
login Log in to a server
|
||||
new-project Request a new project
|
||||
new-app Create a new application
|
||||
status Show an overview of the current project
|
||||
project Switch to another project
|
||||
projects Display existing projects
|
||||
explain Get documentation for a resource
|
||||
|
||||
Build and Deploy Commands:
|
||||
rollout Manage a Kubernetes deployment or OpenShift deployment config
|
||||
rollback Revert part of an application back to a previous deployment
|
||||
new-build Create a new build configuration
|
||||
start-build Start a new build
|
||||
cancel-build Cancel running, pending, or new builds
|
||||
import-image Import images from a container image registry
|
||||
tag Tag existing images into image streams
|
||||
|
||||
```
|
||||
|
||||
We can also check our three public **Low side** and three private **High side** subnets are ready to go by running the command below in our web terminal:
|
||||
|
||||
```bash
|
||||
aws ec2 describe-subnets | jq '[.Subnets[].Tags[] | select(.Key=="Name").Value] | sort'
|
||||
To get a more detailed explanation of a specific concept we can use the `oc explain` command.
|
||||
|
||||
Let's run `oc explain project` now to learn more about the concept of a project we introduced earlier:
|
||||
|
||||
```text
|
||||
bash-4.4 ~ $ oc explain project
|
||||
KIND: Project
|
||||
VERSION: project.openshift.io/v1
|
||||
|
||||
DESCRIPTION:
|
||||
Projects are the unit of isolation and collaboration in OpenShift. A
|
||||
project has one or more members, a quota on the resources that the project
|
||||
may consume, and the security controls on the resources in the project.
|
||||
Within a project, members may have different roles - project administrators
|
||||
can set membership, editors can create and manage the resources, and
|
||||
viewers can see but not access running containers. In a normal cluster
|
||||
project administrators are not able to alter their quotas - that is
|
||||
restricted to cluster administrators.
|
||||
|
||||
Listing or watching projects will return only projects the user has the
|
||||
reader role on.
|
||||
|
||||
An OpenShift project is an alternative representation of a Kubernetes
|
||||
namespace. Projects are exposed as editable to end users while namespaces
|
||||
are not. Direct creation of a project is typically restricted to
|
||||
administrators, while end users should use the requestproject resource.
|
||||
```
|
||||
|
||||
We should see output matching this example:
|
||||
|
||||
```bash
|
||||
[
|
||||
"Private Subnet - disco",
|
||||
"Private Subnet 2 - disco",
|
||||
"Private Subnet 3 - disco",
|
||||
"Public Subnet - disco",
|
||||
"Public Subnet 2 - disco",
|
||||
"Public Subnet 3 - disco"
|
||||
]
|
||||
```
|
||||
That's a quick introduction to the `oc` command line utility. Let's close our web terminal now so we can move on to the next exercise.
|
||||
|
||||
If your environment access and topology is all working you've finished exercise 1! 🎉
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Closing your web terminal* |
|
||||
</Zoom>
|
||||
|
||||
Well done, you've finished exercise 1! 🎉
|
||||
|
||||
@ -1,214 +1,131 @@
|
||||
---
|
||||
title: Preparing our low side
|
||||
title: Deploying your first application
|
||||
exercise: 2
|
||||
date: '2023-12-18'
|
||||
tags: ['openshift','containers','kubernetes','disconnected']
|
||||
date: '2023-12-05'
|
||||
tags: ['openshift','containers','kubernetes','deployments','images']
|
||||
draft: false
|
||||
authors: ['default']
|
||||
summary: "Downloading content and tooling for sneaker ops 💾"
|
||||
summary: "Time to deploy your first app!"
|
||||
---
|
||||
|
||||
A disconnected OpenShift installation begins with downloading content and tooling to a prep system that has outbound access to the Internet. This server resides in an environment commonly referred to as the **Low side** due to its low security profile.
|
||||
|
||||
In this exercise we will be creating a new [AWS ec2 instance](https://aws.amazon.com/ec2) in our **Low side** that we will carry out all our preparation activities on.
|
||||
Now that we have had a tour of the OpenShift web console, let's use it to deploy our first application.
|
||||
|
||||
Let's start by doing the simplest thing possible: getting a plain old Docker-formatted container image to run on OpenShift. With OpenShift this can be done directly from the web console.
|
||||
|
||||
Before we begin, if you would like a bit more background on what a container is or why they are important click the following link to learn more: https://www.redhat.com/en/topics/containers#overview
|
||||
|
||||
|
||||
## 2.1 - Creating a security group
|
||||
## 2.1 - Deploying the container image
|
||||
|
||||
We'll start by creating an [AWS security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) and collecting its ID.
|
||||
|
||||
We're going to use this shortly for the **Low side** prep system, and later on in the workshop for the **High side** bastion server.
|
||||
|
||||
Copy the commands below into your web terminal:
|
||||
|
||||
```bash
|
||||
# Obtain vpc id
|
||||
VPC_ID=$(aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r)
|
||||
echo "Virtual private cloud id is: ${VPC_ID}"
|
||||
|
||||
# Obtain first public subnet id
|
||||
PUBLIC_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Public Subnet - disco").SubnetId' -r)
|
||||
|
||||
# Create security group
|
||||
aws ec2 create-security-group --group-name disco-sg --description disco-sg --vpc-id ${VPC_ID} --tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=disco-sg}]"
|
||||
|
||||
# Store security group id
|
||||
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
|
||||
echo "Security group id is: ${SG_ID}"
|
||||
```
|
||||
In this exercise, we're going to deploy the **web** component of the ParksMap application, which uses OpenShift's service discovery mechanism to find any accompanying backend services that are deployed and show their data on the map. Below is a visual overview of the complete ParksMap application.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Creating aws ec2 security group* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *ParksMap application architecture* |
|
||||
</Zoom>
|
||||
|
||||
Within the **Developer** perspective, click the **+Add** entry on the left hand menu.
|
||||
|
||||
Once on the **+Add** page, click **Container images** to open a dialog that will allow you to quickly deploy an image.
|
||||
|
||||
In the **Image name** field enter the following:
|
||||
|
||||
```text
|
||||
quay.io/openshiftroadshow/parksmap:latest
|
||||
```
|
||||
|
||||
Leave all other fields at their defaults (but take your time to scroll down and review each one to familiarise yourself! 🎓)
|
||||
|
||||
Click **Create** to deploy the application.
|
||||
|
||||
OpenShift will pull this container image if it does not exist already on the cluster and then deploy a container based on this image. You will be taken back to the **Topology** view in the **Developer** perspective which will show the new "Parksmap" application.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Deploying the container image* |
|
||||
</Zoom>
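As an aside, the same deployment can also be performed from the command line rather than the web console. A rough equivalent is sketched below; note that deploying this way does not tick the "create a Route" option for you, so the service is exposed as a separate step:

```bash
# Deploy the ParksMap image from the CLI
oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap

# Expose the resulting service so it is reachable from outside the cluster
oc expose service/parksmap
```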
|
||||
|
||||
|
||||
## 2.2 - Opening ssh port ingress
|
||||
## 2.2 - Reviewing our deployed application
|
||||
|
||||
We will want to log in to our soon-to-be-created **Low side** AWS ec2 instance remotely via `ssh`, so let's enable ingress on port `22` for this security group now:
|
||||
If you click on the **parksmap** entry in the **Topology** view, you will see some information about that deployed application.
|
||||
|
||||
> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments:
|
||||
|
||||
```bash
|
||||
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0
|
||||
```
|
||||
The **Resources** tab may be displayed by default. If so, click on the **Details** tab. On that tab, you will see that there is a single **Pod** that was created by your actions.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Opening ssh port ingress* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Deploying the container image* |
|
||||
</Zoom>
|
||||
|
||||
> Note: A pod is the smallest deployable unit in Kubernetes and is effectively a grouping of one or more individual containers. Any containers deployed within a pod are guaranteed to run on the same machine. It is very common for pods in kubernetes to only hold a single container, although sometimes auxiliary services can be included as additional containers in a pod when we want them to run alongside our application container.
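If you'd like to see the pod from the command line as well, the web terminal from exercise 1 can list it:

```bash
# List the pods running in your current project
oc get pods
```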
|
||||
|
||||
|
||||
## 2.3 - Accessing the application
|
||||
|
||||
Now that we have the ParksMap application deployed, how do we access it?
|
||||
|
||||
This is where OpenShift **Routes** and **Services** come in.
|
||||
|
||||
While **Services** provide internal abstraction and load balancing within an OpenShift cluster, sometimes clients outside of the OpenShift cluster need to access an application. The way that external clients are able to access applications running in OpenShift is through an OpenShift **Route**.
|
||||
|
||||
You may remember that when we deployed the ParksMap application, there was a checkbox ticked to automatically create a **Route**. Thanks to this, all we need to do to access the application is go to the **Resources** tab of the application details pane and click the URL shown under the **Routes** header.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Opening ParksMap application Route* |
|
||||
</Zoom>
|
||||
|
||||
Clicking the link you should now see the ParksMap application frontend 🎉
|
||||
|
||||
> Note: If this is the first time opening this page, the browser will ask permission to get your position. This is needed by the frontend app to center the world map on your location; if you don't allow it, it will just use a default location.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *ParksMap application frontend* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 2.3 - Create prep system instance
|
||||
## 2.4 - Checking application logs
|
||||
|
||||
Ready to launch! 🚀 We'll use the `t3.micro` instance type, which offers `1GiB` of RAM and `2` vCPUs, along with a `50GiB` storage volume to ensure we have enough storage for mirrored content:
|
||||
If we deploy an application and something isn't working the way we expect, reviewing the application logs can often be helpful. OpenShift includes built in support for reviewing application logs.
|
||||
|
||||
> Note: As mentioned in [OpenShift documentation](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html/installing/disconnected-installation-mirroring) about 12 GB of storage space is required for OpenShift Container Platform 4.14 release images, or additionally about 358 GB for OpenShift Container Platform 4.14 release images and all OpenShift Container Platform 4.14 Red Hat Operator images.
|
||||
Let's try it now for our ParksMap frontend.
|
||||
|
||||
Run the command below in your web terminal to launch the instance. We will specify an Amazon Machine Image (AMI) to use for our prep system which for this lab will be the [Marketplace AMI for RHEL 8](https://access.redhat.com/solutions/15356#us_east_2) in `us-east-2`.
|
||||
In the **Developer** perspective, open the **Topology** view.
|
||||
|
||||
```bash
|
||||
aws ec2 run-instances --image-id "ami-092b43193629811af" \
|
||||
--count 1 --instance-type t3.micro \
|
||||
--key-name disco-key \
|
||||
--security-group-ids $SG_ID \
|
||||
--subnet-id $PUBLIC_SUBNET \
|
||||
--associate-public-ip-address \
|
||||
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-prep-system}]" \
|
||||
--block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
|
||||
```
|
||||
Click your "Parksmap" application icon then click on the **Resources** tab.
|
||||
|
||||
From the **Resources** tab click **View logs**
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Launching a prep rhel8 ec2 instance* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Accessing the ParksMap application logs* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 2.4 - Connecting to the low side
|
||||
## 2.5 - Checking application resource usage
|
||||
|
||||
Now that our prep system is up, let's `ssh` into it and download the content we'll need to support our install on the **High side**.
|
||||
Another essential element of supporting applications on OpenShift is understanding what resources the application is consuming, for example CPU, memory, network bandwidth and storage I/O.
|
||||
|
||||
Copy the commands below into your web terminal. Let's start by retrieving the IP for the new ec2 instance and then connecting via `ssh`:
|
||||
OpenShift includes built in support for reviewing application resource usage. Let's take a look at that now.
|
||||
|
||||
> Note: If your `ssh` command times out here, your prep system is likely still booting up. Give it a minute and try again.
|
||||
In the **Developer** perspective, open the **Observe** view.
|
||||
|
||||
```bash
|
||||
PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
|
||||
echo $PREP_SYSTEM_IP
|
||||
You should see the **Dashboard** tab. Set the time range to the `Last 1 hour` then scroll through the dashboard.
|
||||
|
||||
ssh -i disco_key ec2-user@$PREP_SYSTEM_IP
|
||||
```
|
||||
How much cpu and memory is your ParksMap application currently using?
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Connecting to the prep rhel8 ec2 instance* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Checking the ParksMap application resource usage* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 2.5 - Downloading required tools
|
||||
|
||||
For the purposes of this workshop, rather than downloading mirror content to a USB drive as we would likely do in a real SneakerOps situation, we will instead be saving content to an EBS volume which will be mounted to our prep system on the **Low side** and then subsequently synced to our bastion system on the **High side**.
|
||||
|
||||
Once your prep system has booted let's mount the EBS volume we attached so we can start downloading content. Copy the commands below into your web terminal:
|
||||
|
||||
```bash
|
||||
sudo mkfs -t xfs /dev/nvme1n1
|
||||
sudo mkdir /mnt/high-side
|
||||
sudo mount /dev/nvme1n1 /mnt/high-side
|
||||
sudo chown ec2-user:ec2-user /mnt/high-side
|
||||
cd /mnt/high-side
|
||||
```
|
||||
|
||||
With our mount in place let's grab the tools we'll need for the bastion server - we'll use some of them on the prep system too. Life's good on the low side; we can download these from the internet and tuck them into our **High side** gift basket at `/mnt/high-side`.
|
||||
|
||||
There are four tools we need, copy the commands into your web terminal to download each one:
|
||||
|
||||
1. `oc` OpenShift cli
|
||||
|
||||
```bash
|
||||
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz -L -o oc.tar.gz
|
||||
tar -xzf oc.tar.gz oc && rm -f oc.tar.gz
|
||||
sudo cp oc /usr/local/bin/
|
||||
```
|
||||
|
||||
2. `oc-mirror` oc plugin for mirroring release, operator, and helm content
|
||||
|
||||
```bash
|
||||
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/oc-mirror.tar.gz -L -o oc-mirror.tar.gz
|
||||
tar -xzf oc-mirror.tar.gz && rm -f oc-mirror.tar.gz
|
||||
chmod +x oc-mirror
|
||||
sudo cp oc-mirror /usr/local/bin/
|
||||
```
|
||||
|
||||
3. `mirror-registry` small-scale Quay registry designed for mirroring
|
||||
|
||||
```bash
|
||||
curl https://mirror.openshift.com/pub/openshift-v4/clients/mirror-registry/latest/mirror-registry.tar.gz -L -o mirror-registry.tar.gz
|
||||
tar -xzf mirror-registry.tar.gz
|
||||
rm -f mirror-registry.tar.gz
|
||||
```
|
||||
|
||||
4. `openshift-installer` The OpenShift installer cli
|
||||
|
||||
```bash
|
||||
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz -L -o openshift-installer.tar.gz
|
||||
tar -xzf openshift-installer.tar.gz openshift-install
|
||||
rm -f openshift-installer.tar.gz
|
||||
```
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Downloading required tools with curl* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 2.6 - Mirroring content to disk
|
||||
|
||||
The `oc-mirror` plugin supports mirroring content directly from upstream sources to a mirror registry, but since there is an air gap between our **Low side** and **High side**, that's not an option for this lab. Instead, we'll mirror content to a tarball on disk that we can then sneakernet into the bastion server on the **High side**. We'll then mirror from the tarball into the mirror registry from there.
|
||||
|
||||
> Note: A prerequisite for this process is an OpenShift pull secret to authenticate to the Red Hat registries. This has already been created for you to avoid the delay of registering for individual Red Hat accounts during this workshop. You can copy this into your newly created prep system by running `scp -pr -i disco_key .docker ec2-user@$PREP_SYSTEM_IP:` in your web terminal. In a real world scenario this pull secret can be downloaded from https://console.redhat.com/openshift/install/pull-secret.
|
||||
|
||||
Let's get started by generating an `ImageSetConfiguration` that describes the parameters of our mirror. Run the command below to generate a boilerplate configuration file, it may take a minute:
|
||||
|
||||
```bash
|
||||
oc mirror init > imageset-config.yaml
|
||||
```
|
||||
|
||||
> Note: You can take a look at the default file by running `cat imageset-config.yaml` in your web terminal. Feel free to pause the workshop tasks for a few minutes and read through the [OpenShift documentation](https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.html#oc-mirror-creating-image-set-config_mirroring-ocp-image-repository) for the different options available within the image set configuration.
|
||||
|
||||
To save time and storage, we're going to remove the operator catalogs and mirror only the release images for this workshop. We'll still get a fully functional cluster, but OperatorHub will be empty.
|
||||
|
||||
To complete this, remove the operators object from your `imageset-config.yaml` by running the command below in your web terminal:
|
||||
|
||||
```
|
||||
cat << EOF > imageset-config.yaml
|
||||
kind: ImageSetConfiguration
|
||||
apiVersion: mirror.openshift.io/v1alpha2
|
||||
storageConfig:
|
||||
local:
|
||||
path: ./
|
||||
mirror:
|
||||
platform:
|
||||
channels:
|
||||
- name: stable-4.14
|
||||
type: ocp
|
||||
additionalImages:
|
||||
- name: registry.redhat.io/ubi8/ubi:latest
|
||||
helm: {}
|
||||
EOF
|
||||
```
|
||||
|
||||
Now we're ready to kick off the mirror! This can take 5-15 minutes so this is a good time to go grab a coffee or take a short break:
|
||||
|
||||
> Note: If you're keen to see a bit more verbose output to track the progress of the mirror to disk process you can add the `-v 5` flag to the command below.
|
||||
|
||||
```bash
|
||||
oc mirror --config imageset-config.yaml file:///mnt/high-side
|
||||
```
|
||||
|
||||
Once your content has finished mirroring to disk you've finished exercise 2! 🎉
|
||||
Well done, you've finished exercise 2! 🎉
|
||||
|
||||
@ -1,119 +1,122 @@
|
||||
---
|
||||
title: Preparing our high side
|
||||
title: Scaling and self-healing applications
|
||||
exercise: 3
|
||||
date: '2023-12-19'
|
||||
tags: ['openshift','containers','kubernetes','disconnected']
|
||||
date: '2023-12-06'
|
||||
tags: ['openshift','containers','kubernetes','deployments','autoscaling']
|
||||
draft: false
|
||||
authors: ['default']
|
||||
summary: "Setting up a bastion server and transferring content"
|
||||
summary: "Let's scale our application up 📈"
|
||||
---
|
||||
|
||||
In this exercise, we'll prepare the **High side**. This involves creating a bastion server on the **High side** that will host our mirror registry.
|
||||
We have our application deployed; now let's scale it up to make sure it will be resilient to failures.
|
||||
|
||||
> Note: We have an interesting dilemma for this exercise: the Amazon Machine Image we used for the prep system earlier does not have `podman` installed. We need `podman`, since it is a key dependency for `mirror-registry`.
|
||||
>
|
||||
> We could rectify this by running `sudo dnf install -y podman` on the bastion system, but the bastion server won't have Internet access, so we need another option for this lab. To solve this problem, we need to build our own RHEL image with podman pre-installed. Real customer environments will likely already have a solution for this, but one approach is to use the [Image Builder](https://console.redhat.com/insights/image-builder) in the Hybrid Cloud Console, and that's exactly what has been done for this lab.
|
||||
>
|
||||
> ![workshop](/workshops/static/images/disconnected/image-builder.png)
|
||||
>
|
||||
> In the home directory of your web terminal you will find an `ami.txt` file containing our custom image AMI, which will be used by the command that creates our bastion ec2 instance.
|
||||
While **Services** provide discovery and load balancing for **Pods**, the higher level **Deployment** resource specifies how many replicas (pods) of our application will be created and is a simple way to configure scaling for the application.
|
||||
|
||||
> Note: To learn more about **Deployments** refer to this [documentation](https://docs.openshift.com/container-platform/4.14/applications/deployments/what-deployments-are.html).
|
||||
|
||||
|
||||
## 3.1 - Creating a bastion server
|
||||
## 3.1 - Reviewing the parksmap deployment
|
||||
|
||||
First up for this exercise we'll grab the ID of one of our **High side** private subnets as well as our ec2 security group.
|
||||
Let's start by confirming how many `replicas` we currently specify for our ParksMap application. We'll also use this exercise step to take a look at how all resources within OpenShift can be viewed and managed as [YAML](https://www.redhat.com/en/topics/automation/what-is-yaml) formatted text files which is extremely useful for more advanced automation and GitOps concepts.
|
||||
|
||||
Copy the commands below into your web terminal:
|
||||
Start in the **Topology** view of the **Developer** perspective.
|
||||
|
||||
```bash
|
||||
PRIVATE_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Private Subnet - disco").SubnetId' -r)
|
||||
echo $PRIVATE_SUBNET
|
||||
Click on your "Parksmap" application icon and click on the **D parksmap** deployment name at the top of the right hand panel.
|
||||
|
||||
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
|
||||
echo $SG_ID
|
||||
```
|
||||
From the **Deployment details** view we can click on the **YAML** tab and scroll down to confirm that we only specify `1` replica for the ParksMap application currently.
|
||||
|
||||
Once we know our subnet and security group IDs, we can spin up our **High side** bastion server. Copy the commands below into your web terminal to complete this:
|
||||
|
||||
```bash
|
||||
aws ec2 run-instances --image-id $(cat ami.txt) \
|
||||
--count 1 \
|
||||
--instance-type t3.large \
|
||||
--key-name disco-key \
|
||||
--security-group-ids $SG_ID \
|
||||
--subnet-id $PRIVATE_SUBNET \
|
||||
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-bastion-server}]" \
|
||||
--block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
|
||||
```yaml
|
||||
spec:
|
||||
replicas: 1
|
||||
```
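The same value can be read from the command line if you prefer:

```bash
# Print the current replica count for the parksmap deployment
oc get deployment parksmap -o jsonpath='{.spec.replicas}{"\n"}'
```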
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Launching bastion ec2 instance* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *ParksMap application deployment replicas* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 3.2 - Accessing the high side
|
||||
## 3.2 - Intentionally crashing the application
|
||||
|
||||
Now we need to access our bastion server on the high side. In real customer environments, this might entail use of a VPN, or physical access to a workstation in a secure facility such as a SCIF.
|
||||
With only one pod replica, our ParksMap application is currently not tolerant to failures. OpenShift will automatically restart the single pod if it encounters a failure, however while the pod is starting back up our users will not be able to access the application.
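One way to observe this behaviour is to watch the pods while the crash happens. A minimal sketch, assuming the pods carry the `app=parksmap` label the web console applied when we deployed the image:

```bash
# Watch parksmap pods in real time; the RESTARTS counter should increment
# shortly after the container process is killed
oc get pods -l app=parksmap --watch
```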
|
||||
|
||||
To make things a bit simpler for our lab, we're going to restrict access to our bastion to its private IP address. So we'll use the prep system as a sort of bastion-to-the-bastion.
|
||||
Let's see that in practice by intentionally causing an error in our application.
|
||||
|
||||
Let's get access by grabbing the bastion's private IP.
|
||||
Start in the **Topology** view of the **Developer** perspective and click your Parksmap application icon.
|
||||
|
||||
In the **Resources** tab of the information pane open a second browser tab showing the ParksMap application **Route** that we explored in the previous exercise. The application should be running as normal.
|
||||
|
||||
Click on the pod name under the **Pods** header of the **Resources** tab and then click on the **Terminal** tab. This will open a terminal within our running ParksMap application container.
|
||||
|
||||
Inside the terminal run the following to intentionally crash the application:
|
||||
|
||||
```bash
|
||||
HIGHSIDE_BASTION_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-bastion-server" | jq -r '.Reservations[0].Instances[0].PrivateIpAddress')
|
||||
echo $HIGHSIDE_BASTION_IP
|
||||
kill 1
|
||||
```
|
||||
|
||||
Our next step will be to `exit` back to our web terminal and copy our private key to the prep system so that we can `ssh` to the bastion from there. You may have to wait a minute for the VM to finish initializing:
|
||||
|
||||
```bash
|
||||
PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
|
||||
|
||||
scp -i disco_key disco_key ec2-user@$PREP_SYSTEM_IP:/home/ec2-user/disco_key
|
||||
```
|
||||
|
||||
To make life a bit easier down the track let's set an environment variable on the prep system so that we can preserve the bastion's IP:
|
||||
|
||||
```bash
|
||||
ssh -i disco_key ec2-user@$PREP_SYSTEM_IP "echo HIGHSIDE_BASTION_IP=$(echo $HIGHSIDE_BASTION_IP) > highside.env"
|
||||
```
|
||||
|
||||
Finally, let's connect all the way through to our **High side** bastion 🚀
|
||||
|
||||
```bash
|
||||
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
|
||||
```
|
||||
The pod will automatically be restarted by OpenShift, however if you refresh your second browser tab with the application **Route** you should see that the application is momentarily unavailable.
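If you want to see the brief outage more objectively than by refreshing the browser, you could poll the route from your web terminal. This is only a sketch, assuming the route is named `parksmap`; adjust the scheme to `https` if your route is TLS terminated:

```bash
# Grab the route hostname, then print one HTTP status code per second.
# Expect a short run of non-200 responses while the single pod restarts.
ROUTE_HOST=$(oc get route parksmap -o jsonpath='{.spec.host}')
while true; do
  curl -sk -o /dev/null -w "%{http_code}\n" "http://${ROUTE_HOST}"
  sleep 1
done
```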
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Connecting to our bastion ec2 instance* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Intentionally crashing the ParksMap application* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 3.3 - Sneakernetting content to the high side
|
||||
## 3.3 - Scaling up the application
|
||||
|
||||
We'll now deliver the **High side** gift basket to the bastion server. Start by mounting our EBS volume on the bastion server to ensure that we don't run out of space:
|
||||
As a best practice, wherever possible we should try to run multiple replicas of our pods so that if one pod is unavailable our application will continue to be available to users.
|
||||
|
||||
```bash
|
||||
sudo mkfs -t xfs /dev/nvme1n1
|
||||
sudo mkdir /mnt/high-side
|
||||
sudo mount /dev/nvme1n1 /mnt/high-side
|
||||
sudo chown ec2-user:ec2-user /mnt/high-side
|
||||
```
|
||||
Let's scale up our application and confirm it is now fault tolerant.
|
||||
|
||||
With the mount in place we can exit back to our base web terminal and send over our gift basket at `/mnt/high-side` using `rsync`. This can take 10-15 minutes depending on the size of the mirror tarball.
|
||||
In the **Topology** view of the **Developer** perspective click your Parksmap application icon.
|
||||
|
||||
```bash
|
||||
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "rsync -avP -e 'ssh -i disco_key' /mnt/high-side ec2-user@$HIGHSIDE_BASTION_IP:/mnt"
|
||||
```
|
||||
In the **Details** tab of the information pane click the **^ Increase the pod count** arrow to increase our replicas to `2`. You will see the second pod starting up and becoming ready.
|
||||
|
||||
> Note: You can also scale the replicas of a deployment in automated, event-driven fashion in response to factors like incoming traffic or resource consumption, or by using the `oc` CLI, for example `oc scale --replicas=2 deployment/parksmap`.
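As a rough sketch of that CLI approach, scaling out and then waiting for the new pod to become ready might look like this (assuming the deployment is named `parksmap`):

```bash
# Scale the deployment to two replicas
oc scale deployment/parksmap --replicas=2

# Block until both replicas report ready
oc rollout status deployment/parksmap
```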
|
||||
|
||||
Once the new pod is ready, repeat the steps from task `3.2` to crash one of the pods. You should see that the application continues to serve traffic thanks to our OpenShift **Service** load balancing traffic to the second **Pod**.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Initiating the sneakernet transfer via rsync* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Scaling up the ParksMap application* |
|
||||
</Zoom>
|
||||
|
||||
Once your transfer has finished pushing, you have completed exercise 3. Well done! 🎉
|
||||
|
||||
## 3.4 - Self healing to desired state
|
||||
|
||||
In the previous example we saw what happened when we intentionally crashed our application. Let's see what happens if we just outright delete one of our ParksMap application's two **Pods**.
|
||||
|
||||
For this step we'll use the `oc` command line utility to build some more familiarity.
|
||||
|
||||
Let's start by launching back into our web terminal by clicking the terminal button in the top right hand corner and then clicking **Start** with our `userX` project selected.
|
||||
|
||||
Once our terminal opens let's check our list of **Pods** with `oc get pods`. You should see something similar to the output below:
|
||||
|
||||
```bash
|
||||
bash-4.4 ~ $ oc get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
parksmap-ff7477dc4-2nxd2 1/1 Running 0 79s
|
||||
parksmap-ff7477dc4-n26jl 1/1 Running 0 31m
|
||||
workspace45c88f4d4f2b4885-74b6d4898f-57dgh 2/2 Running 0 108s
|
||||
```
|
||||
|
||||
Copy one of the parksmap pod names and delete it via `oc delete pod <podname>`, e.g. `oc delete pod parksmap-ff7477dc4-2nxd2`.
|
||||
|
||||
```bash
|
||||
bash-4.4 ~ $ oc delete pod parksmap-ff7477dc4-2nxd2
|
||||
pod "parksmap-ff7477dc4-2nxd2" deleted
|
||||
```
|
||||
|
||||
If we now run `oc get pods` again we will see a new **Pod** has automatically been created by OpenShift to replace the one we deleted. This is because OpenShift is a container orchestration engine that will always try to enforce the desired state that we declare.
|
||||
|
||||
In our ParksMap **Deployment** we have declared that we want two replicas of our application running at all times. Even if we (possibly accidentally) delete one, OpenShift will attempt to self heal and return to our desired state.
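A quick way to watch that reconciliation happen is to compare the declared and ready replica counts right after deleting a pod. A small sketch, again assuming the `parksmap` deployment name:

```bash
# Desired vs ready replicas; ready will briefly drop to 1
# and then return to 2 as OpenShift self heals
oc get deployment parksmap \
  -o jsonpath='desired={.spec.replicas} ready={.status.readyReplicas}{"\n"}'
```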
|
||||
|
||||
## 3.5 - Bonus objective: Autoscaling
|
||||
|
||||
If you have time, take a while to explore the concepts of [HorizontalPodAutoscaling](https://docs.openshift.com/container-platform/4.14/nodes/pods/nodes-pods-autoscaling.html), [VerticalPodAutoscaling](https://docs.openshift.com/container-platform/4.14/nodes/pods/nodes-pods-vertical-autoscaler.html) and [Cluster autoscaling](https://docs.openshift.com/container-platform/4.14/machine_management/applying-autoscaling.html).
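As a taster, a HorizontalPodAutoscaler can be created for a deployment with a single `oc` command. This is only a sketch and assumes the `parksmap` deployment has CPU resource requests set, which the autoscaler needs in order to calculate utilisation:

```bash
# Keep between 2 and 5 replicas, targeting 75% average CPU utilisation
oc autoscale deployment/parksmap --min=2 --max=5 --cpu-percent=75

# Inspect the resulting HorizontalPodAutoscaler
oc get hpa
```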
|
||||
|
||||
|
||||
Well done, you've finished exercise 3! 🎉
|
||||
|
||||
@ -1,102 +1,140 @@
|
||||
---
|
||||
title: Deploying a mirror registry
|
||||
title: Deploying an application via helm chart
|
||||
exercise: 4
|
||||
date: '2023-12-20'
|
||||
tags: ['openshift','containers','kubernetes','disconnected']
|
||||
date: '2023-12-06'
|
||||
tags: ['openshift','containers','kubernetes','deployments','helm']
|
||||
draft: false
|
||||
authors: ['default']
|
||||
summary: "Let's start mirroring some content on our high side!"
|
||||
summary: "Exploring alternative deployment approaches."
|
||||
---
|
||||
|
||||
Images used by operators and platform components must be mirrored from upstream sources into a container registry that is accessible by the **High side**. You can use any registry you like for this as long as it supports Docker `v2-2`, such as:
|
||||
- Red Hat Quay
|
||||
- JFrog Artifactory
|
||||
- Sonatype Nexus Repository
|
||||
- Harbor
|
||||
In **Exercise 2** we deployed our ParksMap application in the most simplistic way: just throwing an individual container image at the cluster via the web console and letting OpenShift automate everything else for us.
|
||||
|
||||
An OpenShift subscription includes access to the [mirror registry](https://docs.openshift.com/container-platform/4.14/installing/disconnected_install/installing-mirroring-creating-registry.html#installing-mirroring-creating-registry) for Red Hat OpenShift, which is a small-scale container registry designed specifically for mirroring images in disconnected installations. We'll make use of this option in this lab.
|
||||
With more complex applications comes the need to more finely customise the details of our application **Deployments** along with any other associated resources the application requires.
|
||||
|
||||
Mirroring all release and operator images can take a while depending on the network bandwidth. For this lab, recall that we're going to mirror just the release images to save time and resources.
|
||||
Enter the [**Helm**](https://www.redhat.com/en/topics/devops/what-is-helm) project, which can package up our application resources and distribute them as something called a **Helm chart**.
|
||||
|
||||
We should have the `mirror-registry` binary along with the required container images available on the bastion in `/mnt/high-side`. The `50GB` volume we created should be enough to hold our mirror (without operators) and binaries.
|
||||
In simple terms, a **Helm chart** is a directory containing a collection of YAML template files, zipped into an archive. The `helm` command line utility adds a lot on top of that: it lets us customise and override specific values in our application templates when we deploy them onto our cluster, and makes it easy to deploy, upgrade or roll back our application.
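For context, the equivalent workflow with the `helm` CLI looks roughly like the sketch below. The repository name and URL are placeholders only; the value keys mirror the ones we will supply through the web console later in this exercise:

```bash
# Add a chart repository (hypothetical URL) and refresh the local index
helm repo add examplerepo https://charts.example.com
helm repo update

# Find the chart, then install it into our project with a couple of value overrides
helm search repo gitea
helm install gitea examplerepo/gitea -n userX \
  --set db.password=userX \
  --set hostname=userX-gitea.apps.cluster-dsmsm.dynamic.opentlc.com
```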
|
||||
|
||||
|
||||
## 4.1 - Opening mirror registry port ingress
|
||||
## 4.1 - Deploying a helm chart via the web console
|
||||
|
||||
We are getting close to deploying a disconnected OpenShift cluster that will be spread across multiple machines which are in turn spread across our three private subnets.
|
||||
It is common for organisations that produce and ship software to provide their applications to customers as a **Helm chart**.
|
||||
|
||||
Each of the machines in those private subnets will need to talk back to our mirror registry on port `8443`, so let's quickly update our AWS security group to ensure this will work.
|
||||
Let's get started by deploying a **Helm chart** for the [Gitea](https://about.gitea.com) application, which is a git-oriented DevOps platform similar to GitHub or GitLab.
|
||||
|
||||
> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments:
|
||||
Start in the **+Add** view of the **Developer** perspective.
|
||||
|
||||
```bash
|
||||
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
|
||||
Scroll down and click the **Helm chart** tile. OpenShift includes a visual catalog for any helm chart repositories your cluster has available; for this exercise we will search for **Gitea**.
|
||||
|
||||
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 8443 --cidr 0.0.0.0/0
|
||||
```
|
||||
Click on the search result and click **Create**.
|
||||
|
||||
In the YAML configuration window enter the following, substituting `userX` with your assigned user and then click **Create** once more.
|
||||
|
||||
## 4.2 - Running the registry install
|
||||
|
||||
First, let's `ssh` back into the bastion:
|
||||
|
||||
```bash
|
||||
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
|
||||
```
|
||||
|
||||
And then we can kick off our install:
|
||||
|
||||
```bash
|
||||
cd /mnt/high-side
|
||||
./mirror-registry install --quayHostname $(hostname) --quayRoot /mnt/high-side/quay/quay-install --quayStorage /mnt/high-side/quay/quay-storage --pgStorage /mnt/high-side/quay/pg-data --initPassword discopass
|
||||
```
|
||||
|
||||
If all goes well, you should see something like:
|
||||
|
||||
```text
|
||||
INFO[2023-07-06 15:43:41] Quay installed successfully, config data is stored in /mnt/quay/quay-install
|
||||
INFO[2023-07-06 15:43:41] Quay is available at https://ip-10-0-51-47.ec2.internal:8443 with credentials (init, discopass)
```

```yaml
|
||||
db:
|
||||
password: userX
|
||||
hostname: userX-gitea.apps.cluster-dsmsm.dynamic.opentlc.com
|
||||
tlsRoute: true
|
||||
```
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Running the mirror-registry installer* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Gitea application deployment via helm chart* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 4.3 Logging into the mirror registry
|
||||
## 4.2 - Examine deployed application
|
||||
|
||||
Now that our registry is running, let's log in with `podman`, which will generate an auth file at `/run/user/1000/containers/auth.json`.
|
||||
Returning to the **Topology** view of the **Developer** perspective you will now see the Gitea application being deployed in your `userX` project (this can take a few minutes to complete). Notice how the application is made up of two separate pods, the `gitea-db` database and the `gitea` frontend web server.
|
||||
|
||||
```bash
|
||||
podman login -u init -p discopass --tls-verify=false $(hostname):8443
|
||||
```
|
||||
Once your gitea pods are both running open the **Route** for the `gitea` web frontend and confirm you can see the application web interface.
|
||||
|
||||
We should be greeted with `Login Succeeded!`.
|
||||
Next, if we click on the overall gitea **Helm release** by clicking on the shaded box surrounding our two Gitea pods, we can see the full list of resources deployed by this helm chart which, in addition to the two running pods, includes the following:
|
||||
|
||||
> Note: We pass `--tls-verify=false` here for simplicity during this workshop, but you can optionally add `/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem` to the system trust store by following the guide in the Quay documentation [here](https://access.redhat.com/documentation/en-us/red_hat_quay/3/html/manage_red_hat_quay/using-ssl-to-protect-quay?extIdCarryOver=true&sc_cid=701f2000001OH74AAG#configuring_the_system_to_trust_the_certificate_authority).
|
||||
- 1 **ConfigMap**
|
||||
- 1 **ImageStream**
|
||||
- 2 **PersistentVolumeClaims**
|
||||
- 1 **Route**
|
||||
- 1 **Secret**
|
||||
- 2 **Services**
|
||||
|
||||
|
||||
## 4.4 Pushing content into mirror registry
|
||||
|
||||
Now we're ready to mirror images from disk into the registry. Let's add `oc` and `oc-mirror` to the path:
|
||||
|
||||
```bash
|
||||
sudo cp /mnt/high-side/oc /usr/local/bin/
|
||||
sudo cp /mnt/high-side/oc-mirror /usr/local/bin/
|
||||
```
|
||||
|
||||
And now we fire up the mirror process to push our content from disk into the registry ready to be pulled by the OpenShift installation. This can take a similar amount of time to the sneakernet procedure we completed in exercise 3.
|
||||
|
||||
```bash
|
||||
oc mirror --from=/mnt/high-side/mirror_seq1_000000.tar --dest-skip-tls docker://$(hostname):8443
|
||||
```
|
||||
> Note: Feel free to try out an `oc explain <resource>` command in your web terminal to learn more about each of the resource types mentioned above, for example `oc explain service`.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Running the oc mirror process to push content to our registry* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Gitea helm release resources created* |
|
||||
</Zoom>
|
||||
|
||||
Once your content has finished pushing, you have completed exercise 4. Well done! 🎉
|
||||
|
||||
## 4.3 - Upgrade helm chart
|
||||
|
||||
If we want to make a change to the configuration of our Gitea application we can perform a `helm upgrade`. OpenShift has built in support to perform helm upgrades through the web console.
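For reference, outside the web console the same operation is a single `helm upgrade` call. A sketch only, reusing the placeholder repository name from earlier and the release name `gitea`:

```bash
# Upgrade the release with a changed value, then review its revision history
helm upgrade gitea examplerepo/gitea -n userX \
  --set hostname=bogushostname.example.com
helm history gitea -n userX
```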
|
||||
|
||||
Start in the **Helm** view of the **Developer** perspective.
|
||||
|
||||
In the **Helm Releases** tab you should see one release called `gitea`.
|
||||
|
||||
Click the three dot menu to the right hand side of that helm release and click **Upgrade**.
|
||||
|
||||
Now let's intentionally modify the `hostname:` field in the yaml configuration to `hostname: bogushostname.example.com` and click **Upgrade**.
|
||||
|
||||
We will be returned to the **Helm releases** view. Notice how the release status is now Failed (due to our bogus configuration), however the previous release of the application is still running. OpenShift has validated the helm release, determined the updates will not work, and prevented the release from proceeding.
|
||||
|
||||
From here it is trivial to perform a **Rollback** to remove our misconfigured update. We'll do that in the next step.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Attempting a gitea helm upgrade* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 4.4 - Rollback to a previous helm release
|
||||
|
||||
Our previous helm upgrade for the Gitea application didn't succeed due to the misconfiguration we supplied. **Helm** has features for rolling back to a previous release through the `helm rollback` command line interface. OpenShift has made this even easier by adding native support for interactive rollbacks in the OpenShift web console so let's give that a go now.
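The CLI equivalent is shown below as a sketch, assuming the release is named `gitea` and revision `1` was the last good deployment:

```bash
# List revisions, then roll back to revision 1
helm history gitea -n userX
helm rollback gitea 1 -n userX
```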
|
||||
|
||||
Start in the **Helm** view of the **Developer** perspective.
|
||||
|
||||
In the **Helm Releases** tab you should see one release called `gitea`.
|
||||
|
||||
Click the three dot menu to the right hand side of that helm release and click **Rollback**.
|
||||
|
||||
Select the radio button for revision `1` which should be showing a status of `Deployed`, then click **Rollback**.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Rolling back to a previous gitea helm release* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 4.5 - Deleting an application deployed via helm
|
||||
|
||||
Along with upgrades and rollbacks **Helm** also makes deleting deployed applications (along with all of their associated resources) straightforward.
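For reference, the CLI equivalent is a single command (a sketch, assuming the release is named `gitea` in your `userX` project):

```bash
# Remove the release and every resource it created
helm uninstall gitea -n userX
```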
|
||||
|
||||
Before we move on to exercise 5 let's delete the gitea application.
|
||||
|
||||
Start in the **Helm** view of the **Developer** perspective.
|
||||
|
||||
In the **Helm Releases** tab you should see one release called `gitea`.
|
||||
|
||||
Click the three dot menu to the right hand side of that helm release and click **Delete Helm Release**.
|
||||
|
||||
Enter `gitea` at the confirmation prompt and click **Delete**. If you now return to the **Topology** view you will see the gitea application being deleted.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Deleting the gitea application helm release* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 4.6 - Bonus objective: Artifact Hub
|
||||
|
||||
If you have time, take a while to explore https://artifacthub.io/packages/search to see the kinds of applications available in Artifact Hub, the most popular publicly available Helm chart repository.
|
||||
|
||||
|
||||
Well done, you've finished exercise 4! 🎉
|
||||
|
||||
@ -1,219 +1,144 @@
|
||||
---
|
||||
title: Installing a disconnected OpenShift cluster
|
||||
title: Deploying an application via operator
|
||||
exercise: 5
|
||||
date: '2023-12-20'
|
||||
tags: ['openshift','containers','kubernetes','disconnected']
|
||||
date: '2023-12-06'
|
||||
tags: ['openshift','containers','kubernetes','operator-framework']
|
||||
draft: false
|
||||
authors: ['default']
|
||||
summary: "Time to install a cluster 🚀"
|
||||
summary: "Exploring alternative deployment approaches."
|
||||
---
|
||||
|
||||
We're on the home straight now. In this exercise we'll configure and then execute our `openshift-installer`.
|
||||
Another alternative approach for deploying and managing the lifecycle of more complex applications is via the [Operator Framework](https://operatorframework.io).
|
||||
|
||||
The OpenShift installation process is initiated from the bastion server on our **High side**. There are a handful of different ways to install OpenShift, but for this lab we're going to be using installer-provisioned infrastructure (IPI).
|
||||
The goal of an **Operator** is to put operational knowledge into software. Previously this knowledge resided only in the minds of administrators, or in various combinations of shell scripts and automation software like Ansible. It was outside of your Kubernetes cluster and hard to integrate. **Operators** change that.
|
||||
|
||||
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters.
|
||||
**Operators** are the missing piece of the puzzle in Kubernetes to implement and automate common Day-1 (installation, configuration, etc.) and Day-2 (re-configuration, update, backup, failover, restore, etc.) activities in a piece of software running inside your Kubernetes cluster, by integrating natively with Kubernetes concepts and APIs.
|
||||
|
||||
We'll then customize the `install-config.yaml` file that is produced to specify advanced configuration for our disconnected installation. The installation program then provisions the underlying infrastructure for the cluster. Here's a diagram describing the inputs and outputs of the installation configuration process:
|
||||
With Operators you can stop treating an application as a collection of primitives like **Pods**, **Deployments**, **Services** or **ConfigMaps**, and instead treat it as a singular, simplified custom object that only exposes the configuration values that make sense for that specific application.
|
||||
|
||||
|
||||
|
||||
|
||||
## 5.1 - Deploying an operator
|
||||
|
||||
Deploying an application via an **Operator** is generally a two step process. The first step is to deploy the **Operator** itself.
|
||||
|
||||
Once the **Operator** is installed we can deploy the application.
|
||||
|
||||
For this exercise we will install the **Operator** for the [Grafana](https://grafana.com) observability platform.
|
||||
|
||||
Let's start in the **Topology** view of the **Developer** perspective.
|
||||
|
||||
Copy the following YAML snippet to your clipboard:
|
||||
|
||||
```yaml
|
||||
apiVersion: operators.coreos.com/v1alpha1
|
||||
kind: Subscription
|
||||
metadata:
|
||||
name: grafana-operator
|
||||
namespace: userX
|
||||
spec:
|
||||
channel: v5
|
||||
installPlanApproval: Automatic
|
||||
name: grafana-operator
|
||||
source: community-operators
|
||||
sourceNamespace: openshift-marketplace
|
||||
```
|
||||
|
||||
Click the **+** button in the top right corner menu bar of the OpenShift web console. This is a fast way to import snippets of YAML for testing or exploration purposes.
|
||||
|
||||
Paste the above snippet of YAML into the editor and replace the instance of `userX` with your assigned user.
|
||||
|
||||
Click **Create**. In a minute or so you should see the Grafana operator installed and running in your project.
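If you'd like to verify the installation from the command line as well, a quick sketch (replace `userX` with your assigned user):

```bash
# The Subscription we created and the resulting ClusterServiceVersion (csv)
oc get subscription,csv -n userX

# The operator's own pod should be Running
oc get pods -n userX | grep grafana-operator
```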
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Installation overview* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Deploying grafana operator via static yaml* |
|
||||
</Zoom>
|
||||
|
||||
> Note: You may notice that nodes are provisioned through a process called Ignition. This concept is out of scope for this workshop, but if you're interested to learn more about it, you can read up on it in the documentation [here](https://docs.openshift.com/container-platform/4.14/installing/index.html#about-rhcos).
|
||||
|
||||
IPI is the recommended installation method in most cases because it leverages full automation in installation and cluster management, but there are some key considerations to keep in mind when planning a production installation in a real world scenario.
|
||||
## 5.2 - Deploying an operator driven application
|
||||
|
||||
You may not have access to the infrastructure APIs. Our lab is going to live in AWS, which requires connectivity to the `.amazonaws.com` domain. We accomplish this by using an allowed list on a Squid proxy running on the **High side**, but a similar approach may not be achievable or permissible for everyone.
|
||||
With our Grafana operator now running it will be listening for the creation of a `grafana` custom resource. When one is detected, the operator will deploy the Grafana application according to the specification we supplied.
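You can see which custom resource types the operator has made available by asking the cluster API directly. A small sketch:

```bash
# CustomResourceDefinitions registered by the Grafana operator
oc get crd | grep grafana

# The same information, grouped by API group
oc api-resources --api-group=grafana.integreatly.org
```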
|
||||
|
||||
You may not have sufficient permissions with your infrastructure provider. Our lab has full admin in our AWS enclave, so that's not a constraint we'll need to deal with. In real world environments, you'll need to ensure your account has the appropriate permissions which sometimes involves negotiating with security teams.
|
||||
Let's switch over to the **Administrator** perspective for this next task to deploy our Grafana instance.
|
||||
|
||||
Once configuration has been completed, we can kick off the OpenShift Installer and it will do all the work for us to provision the infrastructure and install OpenShift.
|
||||
Under the **Operators** category in the left hand menu click on **Installed Operators**.
|
||||
|
||||
In the **Installed Operators** list you should see a **Grafana Operator** entry, click into that.
|
||||
|
||||
## 5.1 - Building install-config.yaml
|
||||
On the **Operator details** screen you will see a list of "Provided APIs". These are custom resource types that we can now deploy with the help of the operator.
|
||||
|
||||
Before we run the installer we need to create a configuration file. Let's set up a workspace for it first.
|
||||
Click on **Create instance** under the provided API titled `Grafana`.
|
||||
|
||||
```bash
|
||||
mkdir /mnt/high-side/install
|
||||
cd /mnt/high-side/install
|
||||
```
|
||||
|
||||
Next we will generate the ssh key pair for access to cluster nodes:
|
||||
|
||||
```bash
|
||||
ssh-keygen -f ~/.ssh/disco-openshift-key -q -N ""
|
||||
```
|
||||
|
||||
Use the following Python code to minify your mirror container registry pull secret to a single line. Copy this output to your clipboard, since you'll need it in a moment:
|
||||
|
||||
```bash
|
||||
python3 -c $'import json\nimport sys\nwith open(sys.argv[1], "r") as f: print(json.dumps(json.load(f)))' /run/user/1000/containers/auth.json
|
||||
```
|
||||
|
||||
> Note: For connected installations, you'd use the secret from the Hybrid Cloud Console, but for our use case, the mirror registry is the only one OpenShift will need to authenticate to.
|
||||
|
||||
Then we can go ahead and generate our `install-config.yaml`:
|
||||
|
||||
> Note: We are setting --log-level to get more verbose output.
|
||||
|
||||
```bash
|
||||
/mnt/high-side/openshift-install create install-config --dir /mnt/high-side/install --log-level=DEBUG
|
||||
```
|
||||
|
||||
The OpenShift installer will prompt you for a number of fields; enter the values below:
|
||||
|
||||
- SSH Public Key: `/home/ec2-user/.ssh/disco-openshift-key.pub`
|
||||
> The SSH public key used to access all nodes within the cluster.
|
||||
|
||||
- Platform: aws
|
||||
> The platform on which the cluster will run.
|
||||
|
||||
- AWS Access Key ID and Secret Access Key: From `cat ~/.aws/credentials`
|
||||
|
||||
- Region: `us-east-2`
|
||||
|
||||
- Base Domain: `sandboxXXXX.opentlc.com` This should automatically populate.
|
||||
> The base domain of the cluster. All DNS records will be sub-domains of this base and will also include the cluster name.
|
||||
|
||||
- Cluster Name: `disco`
|
||||
> The name of the cluster. This will be used when generating sub-domains.
|
||||
|
||||
- Pull Secret: Paste the output from minifying this to a single line in Step 3.
|
||||
|
||||
That's it! The installer will generate `install-config.yaml` and drop it in `/mnt/high-side/install` for you.
|
||||
|
||||
Once the config file is generated, take a look through it. We will be making the following changes:
|
||||
|
||||
- Change `publish` from `External` to `Internal`. We're using private subnets to house the cluster, so it won't be publicly accessible.
|
||||
|
||||
- Add the subnet IDs for your private subnets to `platform.aws.subnets`. Otherwise, the installer will create its own VPC and subnets. You can retrieve them by running this command from your workstation:
|
||||
|
||||
```bash
|
||||
aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).SubnetId] | unique' -r | yq read - -P
|
||||
```
|
||||
|
||||
Then add them to `platform.aws.subnets` in your `install-config.yaml` so that they look something like this:
|
||||
On the next **Create Grafana** screen click on the **YAML View** radio button and enter the following, replacing the two instances of `userX` with your assigned user, then click **Create**.
|
||||
|
||||
```yaml
|
||||
platform:
|
||||
aws:
|
||||
region: us-east-1
|
||||
subnets:
|
||||
- subnet-00f28bbc11d25d523
|
||||
- subnet-07b4de5ea3a39c0fd
|
||||
- subnet-07b4de5ea3a39c0fd
|
||||
```
|
||||
|
||||
- Next we need to modify the `machineNetwork` to match the IPv4 CIDR blocks from the private subnets. Otherwise your control plane and compute nodes will be assigned IP addresses that are out of range and break the install. You can retrieve them by running this command from your workstation:
|
||||
|
||||
```bash
|
||||
aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).CidrBlock] | unique | map("cidr: " + .)' | yq read -P - | sed "s/'//g"
|
||||
```
|
||||
|
||||
Then use them to **replace the existing** `networking.machineNetwork` entry in your `install-config.yaml` so that they look something like this:
|
||||
|
||||
```yaml
|
||||
networking:
|
||||
clusterNetwork:
|
||||
- cidr: 10.128.0.0/14
|
||||
hostPrefix: 23
|
||||
machineNetwork:
|
||||
- cidr: 10.0.48.0/20
|
||||
- cidr: 10.0.64.0/20
|
||||
- cidr: 10.0.80.0/20
|
||||
```
|
||||
|
||||
- Next we will add the `imageContentSources` to ensure image mappings happen correctly. You can append them to your `install-config.yaml` by running this command:
|
||||
|
||||
```bash
|
||||
cat << EOF >> install-config.yaml
|
||||
imageContentSources:
|
||||
- mirrors:
|
||||
- $(hostname):8443/ubi8/ubi
|
||||
source: registry.redhat.io/ubi8/ubi
|
||||
- mirrors:
|
||||
- $(hostname):8443/openshift/release-images
|
||||
source: quay.io/openshift-release-dev/ocp-release
|
||||
- mirrors:
|
||||
- $(hostname):8443/openshift/release
|
||||
source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
|
||||
EOF
|
||||
```
|
||||
|
||||
- Add the root CA of our mirror registry (`/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem`) to the trust bundle using the `additionalTrustBundle` field by running this command:
|
||||
|
||||
```bash
|
||||
cat <<EOF >> install-config.yaml
|
||||
additionalTrustBundle: |
|
||||
$(cat /mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem | sed 's/^/ /')
|
||||
EOF
|
||||
```
|
||||
|
||||
It should look something like this:
|
||||
|
||||
```yaml
|
||||
additionalTrustBundle: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIID2DCCAsCgAwIBAgIUbL/naWCJ48BEL28wJTvMhJEz/C8wDQYJKoZIhvcNAQEL
|
||||
BQAwdTELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAlZBMREwDwYDVQQHDAhOZXcgWW9y
|
||||
azENMAsGA1UECgwEUXVheTERMA8GA1UECwwIRGl2aXNpb24xJDAiBgNVBAMMG2lw
|
||||
LTEwLTAtNTEtMjA2LmVjMi5pbnRlcm5hbDAeFw0yMzA3MTExODIyMjNaFw0yNjA0
|
||||
MzAxODIyMjNaMHUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJWQTERMA8GA1UEBwwI
|
||||
TmV3IFlvcmsxDTALBgNVBAoMBFF1YXkxETAPBgNVBAsMCERpdmlzaW9uMSQwIgYD
|
||||
VQQDDBtpcC0xMC0wLTUxLTIwNi5lYzIuaW50ZXJuYWwwggEiMA0GCSqGSIb3DQEB
|
||||
AQUAA4IBDwAwggEKAoIBAQDEz/8Pi4UYf/zanB4GHMlo4nbJYIJsyDWx+dPITTMd
|
||||
J3pdOo5BMkkUQL8rSFkc3RjY/grdk2jejVPQ8sVnSabsTl+ku7hT0t1w7E0uPY8d
|
||||
RTeGoa5QvdFOxWz6JsLo+C+JwVOWI088tYX1XZ86TD5FflOEeOwWvs5cmQX6L5O9
|
||||
QGO4PHBc9FWpmaHvFBiRJN3AQkMK4C9XB82G6mCp3c1cmVwFOo3vX7h5738PKXWg
|
||||
KYUTGXHxd/41DBhhY7BpgiwRF1idfLv4OE4bzsb42qaU4rKi1TY+xXIYZ/9DPzTN
|
||||
nQ2AHPWbVxI+m8DZa1DAfPvlZVxAm00E1qPPM30WrU4nAgMBAAGjYDBeMAsGA1Ud
|
||||
DwQEAwIC5DATBgNVHSUEDDAKBggrBgEFBQcDATAmBgNVHREEHzAdghtpcC0xMC0w
|
||||
LTUxLTIwNi5lYzIuaW50ZXJuYWwwEgYDVR0TAQH/BAgwBgEB/wIBATANBgkqhkiG
|
||||
9w0BAQsFAAOCAQEAkkV7/+YhWf1vq//N0Ms0td0WDJnqAlbZUgGkUu/6XiUToFtn
|
||||
OE58KCudP0cAQtvl0ISfw0c7X/Ve11H5YSsVE9afoa0whEO1yntdYQagR0RLJnyo
|
||||
Dj9xhQTEKAk5zXlHS4meIgALi734N2KRu+GJDyb6J0XeYS2V1yQ2Ip7AfCFLdwoY
|
||||
cLtooQugLZ8t+Kkqeopy4pt8l0/FqHDidww1FDoZ+v7PteoYQfx4+R5e8ko/vKAI
|
||||
OCALo9gecCXc9U63l5QL+8z0Y/CU9XYNDfZGNLSKyFTsbQFAqDxnCcIngdnYFbFp
|
||||
mRa1akgfPl+BvAo17AtOiWbhAjipf5kSBpmyJA==
|
||||
-----END CERTIFICATE-----
|
||||
```
|
||||
|
||||
Lastly, now is a good time to make a backup of your `install-config.yaml` since the installer will consume (and delete) it:
|
||||
|
||||
```bash
|
||||
cp install-config.yaml install-config.yaml.bak
|
||||
```
|
||||
|
||||
|
||||
## 5.2 Running the installation
|
||||
|
||||
We're ready to run the install! Let's kick off the cluster installation by copying the command below into our web terminal:
|
||||
|
||||
> Note: Once more we can use the `--log-level=DEBUG` flag to get more insight on how the install is progressing.
|
||||
|
||||
```bash
|
||||
/mnt/high-side/openshift-install create cluster --log-level=DEBUG
```

```yaml
apiVersion: grafana.integreatly.org/v1beta1
|
||||
kind: Grafana
|
||||
metadata:
|
||||
labels:
|
||||
dashboards: grafana
|
||||
folders: grafana
|
||||
name: grafana
|
||||
namespace: userX
|
||||
spec:
|
||||
config:
|
||||
auth:
|
||||
disable_login_form: 'false'
|
||||
log:
|
||||
mode: console
|
||||
security:
|
||||
admin_password: example
|
||||
admin_user: example
|
||||
route:
|
||||
spec:
|
||||
tls:
|
||||
termination: edge
|
||||
host: grafana-userX.apps.cluster-dsmsm.dynamic.opentlc.com
|
||||
```
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-----------------------------------------------------------------------------:|
|
||||
| *Installation overview* |
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Deploying grafana application via the grafana operator* |
|
||||
</Zoom>
|
||||
|
||||
The installation process should take about 30 minutes. If you've done everything correctly, you should see something like the example below at the conclusion:
|
||||
|
||||
```text
|
||||
...
|
||||
INFO Install complete!
|
||||
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
|
||||
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
|
||||
INFO Login to the console with user: "kubeadmin", and password: "password"
|
||||
INFO Time elapsed: 30m49s
|
||||
```
|
||||
## 5.3 Logging into the application
|
||||
|
||||
If you made it this far you have completed all the workshop exercises, well done! 🎉
|
||||
While we are in the **Administrator** perspective of the web console let's take a look at a couple of sections to confirm our newly deployed Grafana application is running as expected.
|
||||
|
||||
For our first step click on the **Workloads** category on the left hand side menu and then click **Pods**.
|
||||
|
||||
We should see a `grafana-deployment-<id>` pod with a **Status** of `Running`.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Confirming the grafana pod is running* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
Now that we know the Grafana application **Pod** is running let's open the application and confirm we can log in.
|
||||
|
||||
Click the **Networking** category on the left hand side menu and then click **Routes**.
|
||||
|
||||
Click the **Route** named `grafana-route` and open the url on the right hand side under the **Location** header.
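If you prefer the CLI, the same URL can be read straight from the **Route** (a sketch, assuming the route is named `grafana-route` in your `userX` project):

```bash
# Print the Grafana hostname exposed by the route
oc get route grafana-route -n userX -o jsonpath='{.spec.host}{"\n"}'
```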
|
||||
|
||||
Once the new tab opens we should be able to log in to Grafana using the credentials we supplied in the previous step in the YAML configuration.
|
||||
|
||||
<Zoom>
|
||||
| |
|
||||
|:-------------------------------------------------------------------:|
|
||||
| *Confirming the grafana route is working* |
|
||||
</Zoom>
|
||||
|
||||
|
||||
## 5.4 - Bonus objective: Grafana dashboards
|
||||
|
||||
If you have time, take a while to browse https://grafana.com/grafana/dashboards and learn how Grafana can be used to visualise just about anything.
|
||||
|
||||
|
||||
Well done, you've finished exercise 5! 🎉
|
||||
|
||||
@ -7,57 +7,67 @@
|
||||
<language>en-us</language>
|
||||
<managingEditor>jablair@redhat.com (Red Hat)</managingEditor>
|
||||
<webMaster>jablair@redhat.com (Red Hat)</webMaster>
|
||||
<lastBuildDate>Mon, 18 Dec 2023 00:00:00 GMT</lastBuildDate>
|
||||
<lastBuildDate>Mon, 04 Dec 2023 00:00:00 GMT</lastBuildDate>
|
||||
<atom:link href="https://jmhbnz.github.io/workshops/feed.xml" rel="self" type="application/rss+xml"/>
|
||||
|
||||
<item>
|
||||
<guid>https://jmhbnz.github.io/workshops/workshop/exercise1</guid>
|
||||
<title>Understanding our lab environment</title>
|
||||
<title>Getting familiar with OpenShift</title>
|
||||
<link>https://jmhbnz.github.io/workshops/workshop/exercise1</link>
|
||||
<description>Let's get familiar with our lab setup.</description>
|
||||
<pubDate>Mon, 18 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<description>In this first exercise we'll get familiar with OpenShift.</description>
|
||||
<pubDate>Mon, 04 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<author>jablair@redhat.com (Red Hat)</author>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>disconnected</category>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category>
|
||||
</item>
|
||||
|
||||
<item>
|
||||
<guid>https://jmhbnz.github.io/workshops/workshop/exercise2</guid>
|
||||
<title>Preparing our low side</title>
|
||||
<title>Deploying your first application</title>
|
||||
<link>https://jmhbnz.github.io/workshops/workshop/exercise2</link>
|
||||
<description>Downloading content and tooling for sneaker ops 💾</description>
|
||||
<pubDate>Mon, 18 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<description>Time to deploy your first app!</description>
|
||||
<pubDate>Tue, 05 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<author>jablair@redhat.com (Red Hat)</author>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>disconnected</category>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>deployments</category><category>images</category>
|
||||
</item>
|
||||
|
||||
<item>
|
||||
<guid>https://jmhbnz.github.io/workshops/workshop/exercise3</guid>
|
||||
<title>Preparing our high side</title>
|
||||
<title>Scaling and self-healing applications</title>
|
||||
<link>https://jmhbnz.github.io/workshops/workshop/exercise3</link>
|
||||
<description>Setting up a bastion server and transferring content</description>
|
||||
<pubDate>Tue, 19 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<description>Let's scale our application up 📈</description>
|
||||
<pubDate>Wed, 06 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<author>jablair@redhat.com (Red Hat)</author>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>disconnected</category>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>deployments</category><category>autoscaling</category>
|
||||
</item>
|
||||
|
||||
<item>
|
||||
<guid>https://jmhbnz.github.io/workshops/workshop/exercise4</guid>
|
||||
<title>Deploying a mirror registry</title>
|
||||
<title>Deploying an application via helm chart</title>
|
||||
<link>https://jmhbnz.github.io/workshops/workshop/exercise4</link>
|
||||
<description>Let's start mirroring some content on our high side!</description>
|
||||
<pubDate>Wed, 20 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<description>Exploring alternative deployment approaches.</description>
|
||||
<pubDate>Wed, 06 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<author>jablair@redhat.com (Red Hat)</author>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>disconnected</category>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>deployments</category><category>helm</category>
|
||||
</item>
|
||||
|
||||
<item>
|
||||
<guid>https://jmhbnz.github.io/workshops/workshop/exercise5</guid>
|
||||
<title>Installing a disconnected OpenShift cluster</title>
|
||||
<title>Deploying an application via operator</title>
|
||||
<link>https://jmhbnz.github.io/workshops/workshop/exercise5</link>
|
||||
<description>Time to install a cluster 🚀</description>
|
||||
<pubDate>Wed, 20 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<description>Exploring alternative deployment approaches.</description>
|
||||
<pubDate>Wed, 06 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<author>jablair@redhat.com (Red Hat)</author>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>disconnected</category>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>operator-framework</category>
|
||||
</item>
|
||||
|
||||
<item>
|
||||
<guid>https://jmhbnz.github.io/workshops/workshop/exercise6</guid>
|
||||
<title>Deploying an application from source</title>
|
||||
<link>https://jmhbnz.github.io/workshops/workshop/exercise6</link>
|
||||
<description>Exploring alternative deployment approaches.</description>
|
||||
<pubDate>Thu, 07 Dec 2023 00:00:00 GMT</pubDate>
|
||||
<author>jablair@redhat.com (Red Hat)</author>
|
||||
<category>openshift</category><category>containers</category><category>kubernetes</category><category>s2i</category><category>shipwright</category>
|
||||
</item>
|
||||
|
||||
</channel>
|
||||
|
||||