Restore application delivery workshop.

This commit is contained in:
2024-07-24 15:18:13 +12:00
parent ca9a65adf8
commit a407ffcc8e
18 changed files with 1322 additions and 1312 deletions


@@ -1,219 +1,144 @@
---
title: Deploying an application via operator
exercise: 5
date: '2023-12-06'
tags: ['openshift','containers','kubernetes','operator-framework']
draft: false
authors: ['default']
summary: "Time to install a cluster 🚀"
summary: "Exploring alternative deployment approaches."
---
Another approach for deploying and managing the lifecycle of more complex applications is the [Operator Framework](https://operatorframework.io).
The goal of an **Operator** is to put operational knowledge into software. Previously this knowledge lived only in the minds of administrators, in assorted shell scripts, or in automation tooling like Ansible. It sat outside your Kubernetes cluster and was hard to integrate. **Operators** change that.
**Operators** are the missing piece of the puzzle in Kubernetes: they implement and automate common Day-1 (installation, configuration, etc.) and Day-2 (reconfiguration, updates, backup, failover, restore, etc.) activities for software running inside your Kubernetes cluster, integrating natively with Kubernetes concepts and APIs.
With Operators you can stop treating an application as a collection of primitives like **Pods**, **Deployments**, **Services** or **ConfigMaps**, and instead manage it as a single, simplified custom object that exposes only the configuration values that make sense for that specific application.
## 5.1 - Deploying an operator
Deploying an application via an **Operator** is generally a two-step process. The first step is to deploy the **Operator** itself.
Once the **Operator** is installed we can deploy the application.
For this exercise we will install the **Operator** for the [Grafana](https://grafana.com) observability platform.
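Before touching the console, it can be useful to confirm that the Grafana operator package is actually available in your cluster's catalogs. The check below is optional and assumes you are logged in with the `oc` CLI; the channels and catalog names shown will depend on what is available in your cluster:
```bash
# Optional: confirm the grafana-operator package is published in a catalog
oc get packagemanifests -n openshift-marketplace | grep grafana

# Describe the package to see its catalog source and available channels (e.g. v5)
oc describe packagemanifest grafana-operator -n openshift-marketplace
```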
Let's start in the **Topology** view of the **Developer** perspective.
Copy the following YAML snippet to your clipboard:
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: userX
spec:
  channel: v5
  installPlanApproval: Automatic
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```
Click the **+** button in the top right corner menu bar of the OpenShift web console. This is a quick way to import snippets of YAML for testing or exploration purposes.
Paste the above snippet of YAML into the editor and replace the instance of `userX` with your assigned user.
Click **Create**. In a minute or so you should see the Grafana operator installed and running in your project.
<Zoom>
|![operator-deployment](/workshops/static/images/operator-deployment.gif) |
|:-------------------------------------------------------------------:|
| *Deploying grafana operator via static yaml* |
</Zoom>
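If you prefer the command line over the console's YAML import, a rough equivalent is to save the `Subscription` snippet above to a file and apply it with `oc`. This is just a sketch: it assumes you are logged in with `oc` and that your `userX` project is already set up for operator installs (the console flow above implies it is), and the file name is only an example:
```bash
# Apply the Subscription saved earlier (with userX replaced); file name is illustrative
oc apply -f grafana-subscription.yaml

# Watch the install: the ClusterServiceVersion (CSV) should eventually report Succeeded
oc get subscription grafana-operator -n userX
oc get csv -n userX
```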
## 5.2 - Deploying an operator driven application
With our Grafana operator now running, it will be listening for the creation of a `Grafana` custom resource. When one is detected, the operator will deploy the Grafana application according to the specification we supply.
Let's switch over to the **Administrator** perspective for this next task to deploy our Grafana instance.
Under the **Operators** category in the left hand menu click on **Installed Operators**.
In the **Installed Operators** list you should see a **Grafana Operator** entry. Click into it.
On the **Operator details** screen you will see a list of "Provided APIs". These are custom resource types that we can now deploy with the help of the operator.
Click on **Create instance** under the provided API titled `Grafana`.
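Behind the scenes, each "Provided API" is a CustomResourceDefinition that the operator added to the cluster. If you're curious, you can list them from the CLI as well (optional, assuming `oc` access):
```bash
# List the custom resource types contributed by the Grafana operator
oc api-resources --api-group=grafana.integreatly.org
```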
On the next **Create Grafana** screen click the **YAML view** radio button and enter the following, replacing the two instances of `userX` with your assigned user, then click **Create**.
```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  labels:
    dashboards: grafana
    folders: grafana
  name: grafana
  namespace: userX
spec:
  config:
    auth:
      disable_login_form: 'false'
    log:
      mode: console
    security:
      admin_password: example
      admin_user: example
  route:
    spec:
      tls:
        termination: edge
      host: grafana-userX.apps.cluster-dsmsm.dynamic.opentlc.com
```
<Zoom>
|![grafana-deployment](/workshops/static/images/grafana-deployment.gif) |
|:-------------------------------------------------------------------:|
| *Deploying grafana application via the grafana operator* |
</Zoom>
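As with the operator itself, the console is not the only way to do this. A rough CLI equivalent is to save the `Grafana` manifest above (with `userX` replaced) and apply it, then check what the operator creates in response. The file name below is only an example:
```bash
# Apply the Grafana custom resource; file name is illustrative
oc apply -f grafana-instance.yaml

# The custom resource should exist, and the operator should respond by
# creating a Deployment, Service and Route for Grafana in your namespace
oc get grafana -n userX
oc get deployment,service,route -n userX
```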
## 5.3 - Logging into the application
While we are in the **Administrator** perspective of the web console let's take a look at a couple of sections to confirm our newly deployed Grafana application is running as expected.
For our first step click on the **Workloads** category on the left hand side menu and then click **Pods**.
We should see a `grafana-deployment-<id>` pod with a **Status** of `Running`.
<Zoom>
|![grafana-pod](/workshops/static/images/grafana-pod.png) |
|:-------------------------------------------------------------------:|
| *Confirming the grafana pod is running* |
</Zoom>
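The same check can be done from a terminal if you have `oc` handy:
```bash
# Look for the grafana-deployment-<id> pod in your project
oc get pods -n userX
```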
Now that we know the Grafana application **Pod** is running let's open the application and confirm we can log in.
Click the **Networking** category on the left hand side menu and then click **Routes**.
Click the **Route** named `grafana-route` and open the URL shown under the **Location** header on the right hand side.
Once the new tab opens we should be able to login to Grafana using the credentials we supplied in the previous step in the YAML configuration.
<Zoom>
|![grafana-route](/workshops/static/images/grafana-route.gif) |
|:-------------------------------------------------------------------:|
| *Confirming the grafana route is working* |
</Zoom>
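If you'd rather grab the URL from the CLI, the route's host can be read directly (assuming the route is named `grafana-route` as above):
```bash
# Print the hostname Grafana is exposed on
oc get route grafana-route -n userX -o jsonpath='{.spec.host}{"\n"}'
```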
## 5.4 - Bonus objective: Grafana dashboards
If you have time, take a while to explore the [Grafana dashboards library](https://grafana.com/grafana/dashboards) and learn how Grafana can be used to visualise just about anything.
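As a starting point, the Grafana operator can also manage dashboards declaratively. The sketch below is a minimal, hypothetical example based on the operator's v5 `GrafanaDashboard` API; the exact fields may differ in other versions, and the `instanceSelector` is assumed to match the `dashboards: grafana` label we put on our Grafana instance earlier:
```bash
# Hypothetical minimal dashboard resource; replace userX with your assigned user.
# Field names follow the grafana-operator v5 API and may vary between versions.
cat << EOF | oc apply -f -
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: example-dashboard
  namespace: userX
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  json: |
    {
      "title": "Example dashboard",
      "panels": []
    }
EOF
```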
Well done, you've finished exercise 5! 🎉