Restore application delivery workshop.

2024-07-24 15:18:13 +12:00
parent ca9a65adf8
commit a407ffcc8e
18 changed files with 1322 additions and 1312 deletions


@@ -1,214 +1,131 @@
---
title: Preparing our low side
title: Deploying your first application
exercise: 2
date: '2023-12-18'
tags: ['openshift','containers','kubernetes','disconnected']
date: '2023-12-05'
tags: ['openshift','containers','kubernetes','deployments','images']
draft: false
authors: ['default']
summary: "Downloading content and tooling for sneaker ops 💾"
summary: "Time to deploy your first app!"
---
A disconnected OpenShift installation begins with downloading content and tooling to a prep system that has outbound access to the Internet. This server resides in an environment commonly referred to as the **Low side** due to its low security profile.
In this exercise we will be creating a new [AWS ec2 instance](https://aws.amazon.com/ec2) in our **Low side** that we will carry out all our preparation activities on.
Now that we've had a tour of the OpenShift web console, let's use it to deploy our first application.
Let's start by doing the simplest thing possible: getting a plain old Docker-formatted container image to run on OpenShift. With OpenShift this can be done directly from the web console.
Before we begin, if you would like a bit more background on what containers are and why they are important, take a look at this overview: https://www.redhat.com/en/topics/containers#overview
## 2.1 - Creating a security group
## 2.1 - Deploying the container image
We'll start by creating an [AWS security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) and collecting its ID.
We're going to use this shortly for the **Low side** prep system, and later on in the workshop for the **High side** bastion server.
Copy the commands below into your web terminal:
```bash
# Obtain vpc id
VPC_ID=$(aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r)
echo "Virtual private cloud id is: ${VPC_ID}"
# Obtain first public subnet id
PUBLIC_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Public Subnet - disco").SubnetId' -r)
# Create security group
aws ec2 create-security-group --group-name disco-sg --description disco-sg --vpc-id ${VPC_ID} --tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=disco-sg}]"
# Store security group id
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
echo "Security group id is: ${SG_ID}"
```
In this exercise, we're going to deploy the **web** component of the ParksMap application, which uses OpenShift's service discovery mechanism to discover any accompanying backend services and display their data on the map. Below is a visual overview of the complete ParksMap application.
<Zoom>
|![workshop](/workshops/static/images/disconnected/security-group.gif) |
|:-----------------------------------------------------------------------------:|
| *Creating aws ec2 security group* |
|![parksmap-architecture](/workshops/static/images/parksmap-architecture.png) |
|:-------------------------------------------------------------------:|
| *ParksMap application architecture* |
</Zoom>
Within the **Developer** perspective, click the **+Add** entry on the left hand menu.
Once on the **+Add** page, click **Container images** to open a dialog that will allow you to quickly deploy an image.
In the **Image name** field enter the following:
```text
quay.io/openshiftroadshow/parksmap:latest
```
Leave all other fields at their defaults (but take your time to scroll down and review each one to familiarise yourself! 🎓)
Click **Create** to deploy the application.
OpenShift will pull this container image if it does not exist already on the cluster and then deploy a container based on this image. You will be taken back to the **Topology** view in the **Developer** perspective which will show the new "Parksmap" application.
<Zoom>
|![first-app](/workshops/static/images/first-app.gif) |
|:-------------------------------------------------------------------:|
| *Deploying the container image* |
</Zoom>
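If you prefer the terminal, the same deployment can also be done with the `oc` CLI. Here's a minimal sketch, assuming you are logged in to the cluster and working in your own project:
```bash
# Deploy the same image from the command line (equivalent to the console dialog)
oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap

# Watch the pods come up
oc get pods -w
```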
## 2.2 - Opening ssh port ingress
## 2.2 - Reviewing our deployed application
We will want to log in to our soon-to-be-created **Low side** AWS EC2 instance remotely via `ssh`, so let's enable ingress on port `22` for this security group now:
If you click on the **parksmap** entry in the **Topology** view, you will see some information about that deployed application.
> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments:
```bash
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0
```
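For comparison, a more restrictive real-world rule might scope ingress to a known network range. An illustrative sketch only (the CIDR below is a documentation example, not a real network):
```bash
# Example only: restrict ssh ingress to a specific known CIDR instead of 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 203.0.113.0/24
```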
The **Resources** tab may be displayed by default. If so, click on the **Details** tab. On that tab, you will see that there is a single **Pod** that was created by your actions.
<Zoom>
|![workshop](/workshops/static/images/disconnected/ssh-port-ingress.gif) |
|:-----------------------------------------------------------------------------:|
| *Opening ssh port ingress* |
|![app-details](/workshops/static/images/app-details.gif) |
|:-------------------------------------------------------------------:|
| *Reviewing the deployed application details* |
</Zoom>
> Note: A pod is the smallest deployable unit in Kubernetes and is effectively a grouping of one or more individual containers. Any containers deployed within a pod are guaranteed to run on the same machine. It is very common for pods in Kubernetes to hold only a single container, although auxiliary services can sometimes be included as additional containers in a pod when we want them to run alongside our application container.
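If you want to poke at the pod from the terminal, a couple of handy commands (the pod name in the second command is illustrative; copy the real one from the first command's output):
```bash
# List the pods created for the application
oc get pods

# Show events, containers and status for a specific pod
# (replace the name below with one from your own output)
oc describe pod parksmap-65c4f8b676-k5gkt
```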
## 2.3 - Accessing the application
Now that we have the ParksMap application deployed, how do we access it?
This is where OpenShift **Routes** and **Services** come in.
While **Services** provide internal abstraction and load balancing within an OpenShift cluster, sometimes clients outside of the OpenShift cluster need to access an application. The way that external clients are able to access applications running in OpenShift is through an OpenShift **Route**.
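Both objects are visible from the terminal too. A quick look, assuming the application was deployed into your current project:
```bash
# The service load balances traffic to the application pods inside the cluster
oc get service parksmap

# The route exposes the service at an externally reachable hostname
oc get route parksmap
```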
You may remember that when we deployed the ParksMap application, there was a checkbox ticked to automatically create a **Route**. Thanks to this, all we need to do to access the application is go to the **Resources** tab of the application details pane and click the URL shown under the **Routes** header.
<Zoom>
|![app-route](/workshops/static/images/app-route.gif) |
|:-------------------------------------------------------------------:|
| *Opening ParksMap application Route* |
</Zoom>
Clicking the link, you should now see the ParksMap application frontend 🎉
> Note: If this is the first time opening this page, the browser will ask permission to get your position. This is needed by the frontend app to center the world map on your location; if you don't allow it, a default location is used.
<Zoom>
|![app-frontend](/workshops/static/images/app-frontend.png) |
|:-------------------------------------------------------------------:|
| *ParksMap application frontend* |
</Zoom>
## 2.3 - Create prep system instance
## 2.4 - Checking application logs
Ready to launch! 🚀 We'll use the `t3.micro` instance type, which offers `1GiB` of RAM and `2` vCPUs, along with a `50GiB` storage volume to ensure we have enough storage for mirrored content:
If we deploy an application and something isn't working the way we expect, reviewing the application logs can often be helpful. OpenShift includes built-in support for reviewing application logs.
> Note: As mentioned in the [OpenShift documentation](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html/installing/disconnected-installation-mirroring), about 12 GB of storage space is required for OpenShift Container Platform 4.14 release images, or about 358 GB for the release images plus all OpenShift Container Platform 4.14 Red Hat Operator images.
Let's try it now for our ParksMap frontend.
Run the command below in your web terminal to launch the instance. We will specify an Amazon Machine Image (AMI) to use for our prep system, which for this lab will be the [Marketplace AMI for RHEL 8](https://access.redhat.com/solutions/15356#us_east_2) in `us-east-2`.
In the **Developer** perspective, open the **Topology** view.
```bash
aws ec2 run-instances --image-id "ami-092b43193629811af" \
  --count 1 --instance-type t3.micro \
  --key-name disco-key \
  --security-group-ids $SG_ID \
  --subnet-id $PUBLIC_SUBNET \
  --associate-public-ip-address \
  --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-prep-system}]" \
  --block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
```
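If you'd like to block until the instance is actually running before moving on, the AWS CLI has a wait helper; a small sketch:
```bash
# Wait for the tagged instance to reach the 'running' state
aws ec2 wait instance-running --filters "Name=tag:Name,Values=disco-prep-system"
```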
Click your "Parksmap" application icon, then click on the **Resources** tab.
From the **Resources** tab, click **View logs**.
<Zoom>
|![workshop](/workshops/static/images/disconnected/launch-prep-ec2.gif) |
|:-----------------------------------------------------------------------------:|
| *Launching a prep rhel8 ec2 instance* |
|![app-logs](/workshops/static/images/app-logs.gif) |
|:-------------------------------------------------------------------:|
| *Accessing the ParksMap application logs* |
</Zoom>
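The same logs are also available from the terminal. A minimal sketch, assuming the deployment is named `parksmap`:
```bash
# Stream logs from the pods managed by the parksmap deployment
oc logs -f deployment/parksmap
```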
## 2.4 - Connecting to the low side
## 2.5 - Checking application resource usage
Now that our prep system is up, let's `ssh` into it and download the content we'll need to support our install on the **High side**.
Another essential element of supporting applications on OpenShift is understanding what resources the application is consuming, for example CPU, memory, network bandwidth, and storage I/O.
Copy the commands below into your web terminal. Let's start by retrieving the IP for the new ec2 instance and then connecting via `ssh`:
OpenShift includes built-in support for reviewing application resource usage. Let's take a look at that now.
> Note: If your `ssh` command times out here, your prep system is likely still booting up. Give it a minute and try again.
In the **Developer** perspective, open the **Observe** view.
```bash
PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
echo $PREP_SYSTEM_IP
ssh -i disco_key ec2-user@$PREP_SYSTEM_IP
```
You should see the **Dashboard** tab. Set the time range to `Last 1 hour`, then scroll through the dashboard.
How much CPU and memory is your ParksMap application currently using?
<Zoom>
|![workshop](/workshops/static/images/disconnected/connect-prep-ec2.gif) |
|:-----------------------------------------------------------------------------:|
| *Connecting to the prep rhel8 ec2 instance* |
|![app-resources](/workshops/static/images/app-resources.gif) |
|:-------------------------------------------------------------------:|
| *Checking the ParksMap application resource usage* |
</Zoom>
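For a quick point-in-time reading from the terminal, `oc adm top` queries the same cluster metrics; a small sketch:
```bash
# Current CPU and memory usage for pods in the current project
oc adm top pods
```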
## 2.5 - Downloading required tools
For the purposes of this workshop, rather than downloading mirror content to a USB drive as we would likely do in a real SneakerOps situation, we will instead save content to an EBS volume that is mounted to our prep system on the **Low side** and then synced to our bastion system on the **High side**.
Once your prep system has booted, let's mount the EBS volume we attached so we can start downloading content. Copy the commands below into your web terminal:
```bash
# Create an XFS filesystem on the attached EBS volume
sudo mkfs -t xfs /dev/nvme1n1
# Mount the volume where we'll stage content bound for the high side
sudo mkdir /mnt/high-side
sudo mount /dev/nvme1n1 /mnt/high-side
# Allow the unprivileged ec2-user to write to it
sudo chown ec2-user:ec2-user /mnt/high-side
cd /mnt/high-side
```
With our mount in place let's grab the tools we'll need for the bastion server - we'll use some of them on the prep system too. Life's good on the low side; we can download these from the internet and tuck them into our **High side** gift basket at `/mnt/high-side`.
There are four tools we need. Copy the commands into your web terminal to download each one (a quick sanity check of the binaries follows the list):
1. `oc` OpenShift CLI
```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz -L -o oc.tar.gz
tar -xzf oc.tar.gz oc && rm -f oc.tar.gz
sudo cp oc /usr/local/bin/
```
2. `oc-mirror` oc plugin for mirroring release, operator, and helm content
```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/oc-mirror.tar.gz -L -o oc-mirror.tar.gz
tar -xzf oc-mirror.tar.gz && rm -f oc-mirror.tar.gz
chmod +x oc-mirror
sudo cp oc-mirror /usr/local/bin/
```
3. `mirror-registry` small-scale Quay registry designed for mirroring
```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/mirror-registry/latest/mirror-registry.tar.gz -L -o mirror-registry.tar.gz
tar -xzf mirror-registry.tar.gz
rm -f mirror-registry.tar.gz
```
4. `openshift-installer` The OpenShift installer CLI
```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz -L -o openshift-installer.tar.gz
tar -xzf openshift-installer.tar.gz openshift-install
rm -f openshift-installer.tar.gz
```
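Before moving on, a quick sanity check that the binaries extracted correctly can save debugging later. A minimal sketch:
```bash
# Confirm the oc client runs and reports its version
oc version --client

# Confirm the installer binary extracted correctly
./openshift-install version

# Confirm oc picks up the oc-mirror plugin from the PATH
oc mirror --help | head -n 5
```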
<Zoom>
|![workshop](/workshops/static/images/disconnected/download-tools.gif) |
|:-----------------------------------------------------------------------------:|
| *Downloading required tools with curl* |
</Zoom>
## 2.6 - Mirroring content to disk
The `oc-mirror` plugin supports mirroring content directly from upstream sources to a mirror registry, but since there is an air gap between our **Low side** and **High side**, that's not an option for this lab. Instead, we'll mirror content to a tarball on disk that we can then sneakernet into the bastion server on the **High side**. We'll then mirror from the tarball into the mirror registry from there.
> Note: A pre-requisite for this process is an OpenShift pull secret to authenticate to the Red Hat registries. This has already been created for you to avoid the delay of registering for individual Red Hat accounts during this workshop. You can copy it into your newly created prep system by running `scp -pr -i disco_key .docker ec2-user@$PREP_SYSTEM_IP:` in your web terminal. In a real world scenario this pull secret can be downloaded from https://console.redhat.com/openshift/install/pull-secret.
Let's get started by generating an `ImageSetConfiguration` that describes the parameters of our mirror. Run the command below to generate a boilerplate configuration file; it may take a minute:
```bash
oc mirror init > imageset-config.yaml
```
> Note: You can take a look at the default file by running `cat imageset-config.yaml` in your web terminal. Feel free to pause the workshop tasks for a few minutes and read through the [OpenShift documentation](https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.html#oc-mirror-creating-image-set-config_mirroring-ocp-image-repository) for the different options available within the image set configuration.
To save time and storage, we're going to remove the operator catalogs and mirror only the release images for this workshop. We'll still get a fully functional cluster, but OperatorHub will be empty.
To complete this, replace your `imageset-config.yaml` with a version that omits the operators object by running the command below in your web terminal:
```bash
cat << EOF > imageset-config.yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: ./
mirror:
  platform:
    channels:
    - name: stable-4.14
      type: ocp
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest
  helm: {}
EOF
```
Now we're ready to kick off the mirror! This can take 5-15 minutes so this is a good time to go grab a coffee or take a short break:
> Note: If you're keen to see a bit more verbose output to track the progress of the mirror to disk process you can add the `-v 5` flag to the command below.
```bash
oc mirror --config imageset-config.yaml file:///mnt/high-side
```
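For orientation, the second half of this flow, run later from the **High side** bastion, will look roughly like the sketch below; the tarball name and registry URL are illustrative placeholders:
```bash
# Later, on the high side: publish the mirrored tarball into the mirror registry
# (archive name and registry URL below are illustrative)
oc mirror --from=/mnt/high-side/mirror_seq1_000000.tar docker://registry.example.com:8443
```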
Once your content has finished mirroring to disk, you've finished exercise 2! 🎉
Well done, you've finished exercise 2! 🎉