Restore disconnected workshop content.
Signed-off-by: James Blair <mail@jamesblair.net>

data/windows/exercise1.mdx (new file, 55 lines)
@@ -0,0 +1,55 @@
---
title: Understanding the workshop environment
exercise: 1
date: '2024-05-26'
tags: ['openshift','windows','kubernetes','containers']
draft: false
authors: ['default']
summary: "Let's get underway with the workshop."
---

Welcome to the OpenShift Windows Containers Workshop! Here you'll have a chance to build your Windows container prowess.

With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform.

For this workshop you'll be given a fresh OpenShift 4 cluster which currently only runs Linux containers. You will complete a series of exercises to transform the cluster so it is capable of running Windows containers.

**Let's get started!**


## 1.1 - Obtaining your environment

To get underway, open your web browser and navigate to the following link to reserve yourself a user: https://demo.redhat.com/workshop/98b7pu. You can reserve an environment by entering any email address along with the password provided by your workshop facilitator.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Obtaining a workshop environment* |
</Zoom>


## 1.2 - Logging into your cluster console

After entering an email and the provided password you'll be presented with a console URL and login credentials for your OpenShift cluster.

Open the console URL and log in.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Logging into the cluster console* |
</Zoom>


## 1.3 - Logging into your bastion host

Along with the cluster web console we will also use the command line during this workshop. You've been allocated a bastion host that you can ssh to as part of step 1.1.

Follow the steps below to connect to your environment bastion host; an example session is shown after the list:

1. Open your preferred terminal application.
2. Enter `ssh lab-user@<bastion-hostname>`, replacing `<bastion-hostname>` with the hostname listed in your **Bastion Access** environment details page.
3. Enter `yes` if you receive a host key verification prompt. This only appears because it is the first time you have connected to this host.
4. When prompted, enter the password mentioned under **Bastion Access** in your environment details page.
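
For example, with a hypothetical hostname (yours will differ, so substitute the value from your **Bastion Access** page):

```bash
# Hostname below is illustrative only
ssh lab-user@bastion.guid-example.sandbox.opentlc.com
```
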
Congratulations, you're now ready to proceed with the next exercise 🎉.

data/windows/exercise2.mdx (new file, 102 lines)
@@ -0,0 +1,102 @@
---
title: Installing the windows machine config operator
exercise: 2
date: '2024-05-26'
tags: ['openshift','windows','kubernetes','containers']
draft: false
authors: ['default']
summary: "Preparing our cluster for windows machines."
---

In this first hands-on exercise we will prepare our cluster for running Windows nodes by installing an operator and configuring it.

[Operators](https://docs.openshift.com/container-platform/4.15/operators/index.html) are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing additional cluster services or applications.

To install Operators on OpenShift we use Operator Hub. A simplistic way of thinking about Operator Hub is as the "App Store" for your OpenShift cluster.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *OpenShift Operator Hub* |
</Zoom>


## 2.1 - Enable hybrid networking

Before installing the windows machine config operator, our first step as a cluster administrator is to configure our OpenShift cluster network to allow Linux and Windows nodes to host Linux and Windows workloads, respectively.

This requires enabling a feature called **[hybrid overlay networking](https://docs.openshift.com/container-platform/4.15/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.html#configuring-hybrid-ovnkubernetes)**.

To configure hybrid overlay networking, run the following command in your bastion host terminal:

```bash
oc patch networks.operator.openshift.io cluster --type=merge \
  -p '{
    "spec":{
      "defaultNetwork":{
        "ovnKubernetesConfig":{
          "hybridOverlayConfig":{
            "hybridClusterNetwork":[
              {
                "cidr": "10.132.0.0/14",
                "hostPrefix": 23
              }
            ]
          }
        }
      }
    }
  }'
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Patching an OpenShift cluster network to enable hybrid networking* |
</Zoom>
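
To confirm the patch applied, you can read the setting back with a read-only check:

```bash
# Should print the hybridOverlayConfig object we just patched in
oc get networks.operator.openshift.io cluster \
  -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig}'
```
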
## 2.2 - Install the windows machine config operator

If you have a running OpenShift cluster and have enabled hybrid overlay networking, you can then install the optional **Windows Machine Config Operator**. This operator will configure any Windows machines we add to the cluster, enabling Windows container workloads to run within your OpenShift cluster.

Windows instances can be added either by creating a `MachineSet`, or by specifying existing instances through a `ConfigMap`. The operator will perform all the necessary steps to configure the instance so that it can join the cluster as a worker node.

Follow the steps below to install the operator (an equivalent command line approach is sketched after this list):

1. Navigate to **Operators** > **OperatorHub** in the left menu.
2. Search for `Windows`.
3. Click on **Windows Machine Config Operator** provided by Red Hat and click **Install**.
4. Leave all settings as the default and click **Install** once more.
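
If you prefer the terminal, the same install can be expressed as OLM manifests. This is a minimal sketch; the channel and catalog source names follow common Red Hat operator defaults and should be verified against the OperatorHub entry on your cluster:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-windows-machine-config-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: windows-machine-config-operator
  namespace: openshift-windows-machine-config-operator
spec:
  targetNamespaces:
  - openshift-windows-machine-config-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: windows-machine-config-operator
  namespace: openshift-windows-machine-config-operator
spec:
  channel: stable            # assumed channel, confirm in OperatorHub
  installPlanApproval: Automatic
  name: windows-machine-config-operator
  source: redhat-operators   # default Red Hat catalog source
  sourceNamespace: openshift-marketplace
```
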

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Installing the windows machine config operator* |
</Zoom>

> Note: The operator installation may take several minutes to complete. Wait for the status of `✅ succeeded` before continuing with the following step.


## 2.3 - Create configuration secrets

The windows machine config operator expects a secret named `cloud-private-key` to be present in its namespace, containing a private key. This private key will be used to log into the soon-to-be-provisioned Windows machine and set it up as an OpenShift node.

Run the commands below from your bastion host to create the required secret.

1. Generate a new ssh key with `ssh-keygen -t rsa -f ${HOME}/.ssh/winkey -q -N ''`
2. Run the command below to create the required secret from the private key you just generated.

```bash
oc create secret generic cloud-private-key \
  --from-file=private-key.pem=${HOME}/.ssh/winkey \
  --namespace openshift-windows-machine-config-operator
```
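
To double-check the secret landed in the right namespace you can list it back:

```bash
oc get secret cloud-private-key -n openshift-windows-machine-config-operator
```
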

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Create a private key secret* |
</Zoom>

Once your network configuration, operator installation and secret creation are complete you're ready to move on to the next exercise 🎉

data/windows/exercise3.mdx (new file, 135 lines)
@@ -0,0 +1,135 @@
---
title: Provisioning a windows worker node
exercise: 3
date: '2024-05-26'
tags: ['openshift','windows','kubernetes','containers']
draft: false
authors: ['default']
summary: "Auto scaling nodes with machine sets!"
---

Now that our cluster is ready to support Windows nodes, let's provision one through the Machine API.

The Machine API is a combination of primary resources that are based on the upstream [Cluster API](https://github.com/kubernetes-sigs/cluster-api) project and custom OpenShift Container Platform resources.

The two primary resources are:

**1. Machines**

> A fundamental unit that describes the host for a Node. A machine has a providerSpec, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.

**2. MachineSets**

> Groups of machines. MachineSets are to machines as ReplicaSets are to Pods. If you need more machines or must scale them down, you change the **replicas** field on the MachineSet to meet your compute need.


## 3.1 - Create a single replica machineset

In this exercise we will create a `MachineSet`. Once created, this will automatically begin provisioning a Windows machine and adding it to our cluster as a worker node.

Below is a YAML snippet we will use as a base to create our `MachineSet`:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: cluster-<id>-windows-ap-southeast-<zone>
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: cluster-<id>
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: cluster-<id>
      machine.openshift.io/cluster-api-machineset: cluster-<id>-worker-ap-southeast-<zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: cluster-<id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: cluster-<id>-worker-ap-southeast-<zone>
        machine.openshift.io/os-id: Windows
    spec:
      lifecycleHooks: {}
      metadata:
        labels:
          node-role.kubernetes.io/worker: ''
      providerSpec:
        value:
          userDataSecret:
            name: windows-user-data
          placement:
            availabilityZone: ap-southeast-<zone>
            region: ap-southeast-1
          credentialsSecret:
            name: aws-cloud-credentials
          instanceType: m5a.4xlarge
          metadata:
            creationTimestamp: null
          blockDevices:
            - ebs:
                iops: 0
                kmsKey: {}
                volumeSize: 120
                volumeType: gp2
          securityGroups:
            - filters:
                - name: 'tag:Name'
                  values:
                    - cluster-<id>-worker-sg
          kind: AWSMachineProviderConfig
          metadataServiceOptions: {}
          tags:
            - name: kubernetes.io/cluster/cluster-<id>
              value: owned
          deviceIndex: 0
          ami:
            id: ami-0e76083a67107f741
          subnet:
            filters:
              - name: 'tag:Name'
                values:
                  - cluster-<id>-private-ap-southeast-<zone>
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          iamInstanceProfile:
            id: cluster-<id>-worker-profile
```

There are ten references to `<id>` in the sample that we need to find and replace with the actual cluster id for the cluster we have been allocated for the workshop, and five references to the availability `<zone>` for our cluster nodes that we also need to update with our actual zone in use.

Run the following command in your bastion host terminal session to find your cluster id and zone:

```bash
name=$(oc get machineset -A -o jsonpath={.items[0].metadata.name})
echo "Cluster id is: ${name:8:11}"
echo "Cluster availability zone is: ${name:40:2}"
```
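
If you would rather script the substitution than edit by hand, something like the following works in the same terminal session, assuming you saved the sample above to a file named `machineset.yaml` (the file name is an assumption, and the substring offsets are specific to this lab's machineset naming):

```bash
id=${name:8:11}
zone=${name:40:2}
# Replace every <id> and <zone> placeholder in place
sed -i "s/<id>/${id}/g; s/<zone>/${zone}/g" machineset.yaml
```
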

After retrieving your cluster id and zone, update the sample `MachineSet` using your preferred text editor, then select and copy all of the text to clipboard.

Within OpenShift you can then click the ➕ button in the top right hand corner, paste in your YAML and click **Create**.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Create a windows machineset* |
</Zoom>


## 3.2 - Verify windows machine status

After creating the `MachineSet` a new Windows machine will be automatically provisioned and added to our OpenShift cluster, as we set our desired replicas in the YAML to `1`.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Check the status of the new windows machine* |
</Zoom>

Creating, provisioning and configuring a new Windows host can be a lengthy process taking 15-30 minutes, so now is a good time to take a break ☕.

You can keep an eye on the status of your Machine in the OpenShift web console. Once it reaches the **✅ Provisioned as node** status you are ready to proceed to the next exercise.
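
You can also watch progress from the bastion host with standard `oc` commands, for example:

```bash
# -w watches for status changes until interrupted
oc get machines -n openshift-machine-api -w
```
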

data/windows/exercise4.mdx (new file, 90 lines)
@@ -0,0 +1,90 @@
---
title: Deploying a windows workload
exercise: 4
date: '2024-05-26'
tags: ['openshift','windows','kubernetes','containers']
draft: false
authors: ['default']
summary: "Putting our new cluster windows node to work 🚀"
---

With our cluster now having both Windows and Linux worker nodes, let's deploy a hybrid workload that will make use of both.

**The NetCandy Store**

You will be deploying a sample application stack that delivers an eCommerce site, The NetCandy Store. This application is built using Windows Containers working together with Linux Containers.

This application consists of:

1. A Windows Container running a .NET v4 frontend, which is consuming a backend service.
2. A Linux Container running a .NET Core backend service, which is using a database.
3. A Linux Container running a MSSql database 🤯.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Mixed workload architecture diagram* |
</Zoom>


## 4.1 - Add helm repository

In this exercise we will deploy the NetCandy Store application using `helm`. You can deliver your Windows workloads in the same way you deliver your Linux workloads; since everything is just YAML, the workflow is the same whether that be via Helm, an Operator, or Ansible.

We'll get started by creating a project and adding a helm repository that our application helm chart will be sourced from.

Follow the steps below to add the repository (a command line equivalent is sketched after this list):

1. Switch from **Administrator** to **Developer** view in the top left web console dropdown menu.
2. Click on **+Add** in the left menu.
3. Click on the **Project** dropdown at the top and click **Create Project**.
4. Enter the name `netcandystore` and click **Create**.
5. Click on **Helm Chart repositories**.
6. Enter the name `redhat-demos` and url `https://redhat-developer-demos.github.io/helm-repo` then click **Create**.
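
For reference, a rough command line equivalent run from the bastion host would look like this (the `HelmChartRepository` resource name is an assumption; any unique name works):

```bash
oc new-project netcandystore

# Cluster-scoped resource that backs the console's Helm Chart repositories screen
cat << EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: redhat-demos
spec:
  connectionConfig:
    url: https://redhat-developer-demos.github.io/helm-repo
EOF
```
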

This will allow us to deploy any helm charts available in this repository.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Creating a project and adding a helm repository* |
</Zoom>


## 4.2 - Deploy candystore helm chart

With our helm chart repository added, let's deploy our application! This is as simple as following the three steps below to create a helm release (a plain `helm` alternative is sketched after this list):

1. Search for `candy` on the **Helm charts** screen.
2. Click on **Netcandystore** and then click **Create**.
3. Review the chart settings and click **Create** once more.
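
If you prefer the `helm` CLI, a sketch of the same release follows. The chart name `netcandystore` is an assumption based on the repository's catalog entry, so confirm it with the search step first:

```bash
helm repo add redhat-demos https://redhat-developer-demos.github.io/helm-repo
helm repo update
helm search repo candy   # confirm the exact chart name before installing
helm install netcandystore redhat-demos/netcandystore --namespace netcandystore
```
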

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Create mixed architecture application via helm* |
</Zoom>

> Note: The application can take a few minutes to finish deploying, time for another coffee ☕.


## 4.3 - Review deployed windows application

After creating our helm release we can see the status of the application from the **Topology** screen in the **Developer** view.

We can verify our Windows Container is running by completing the following steps, with a command line spot check shown after the list:

1. Clicking on the **netcandystore** frontend Windows Container.
2. Selecting the **Resources** tab on the right hand panel and clicking on the pod name.
3. Clicking the **Terminal** tab and verifying that a Windows command prompt displays.
4. Returning to the **Topology** screen and opening the URL for the **netcandystore** application to confirm the application is running.
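
The same check from the terminal, assuming the release landed in the `netcandystore` project:

```bash
# -o wide shows which node each pod is scheduled on,
# so the frontend should list the Windows worker
oc get pods -n netcandystore -o wide
```
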

> Note: You may need to change from `https://` to `http://` in your browser address bar when opening the application URL, as some browsers now automatically attempt to redirect to HTTPS; however, this application route is currently only served as HTTP.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Confirm Windows container status* |
</Zoom>

Congratulations! You've taken an existing OpenShift 4 cluster, set it up for running Windows workloads, then deployed a Windows app 🎉.

@@ -1,55 +1,89 @@
---
title: Understanding our lab environment
exercise: 1
date: '2023-12-18'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Let's get familiar with our lab setup."
---

Welcome to the OpenShift 4 Disconnected Workshop! Here you'll learn about operating an OpenShift 4 cluster in a disconnected network; for our purposes today that will be a network without access to the internet (even through a proxy or firewall).

To level set, Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. OpenShift supports running in disconnected networks, though this does change the way the cluster operates because key ingredients like container images, operator bundles, and helm charts must be brought into the environment from the outside world via mirroring.

There are of course many different options for installing OpenShift in a restricted network; this workshop will primarily cover one opinionated approach. We'll do our best to point out where there's the potential for variability along the way.

**Let's get started!**


## 1.1 - Obtaining your environment

To get underway open your web browser and navigate to this etherpad link to reserve yourself a user: https://etherpad.wikimedia.org/p/OpenShiftDisco_2023_12_20. You can reserve a user by noting your name or initials next to a user that has not yet been claimed.

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Etherpad collaborative editor* |
</Zoom>


## 1.2 - Opening your web terminal

Throughout the remainder of the workshop you will be using a number of command line interface tools, for example `aws` to quickly interact with resources in Amazon Web Services, and `ssh` to log in to a remote server.

To save you from needing to install or configure these tools on your own device, a web terminal will be available for you for the remainder of this workshop.

Simply copy the link next to the user you reserved in etherpad and paste it into your browser. If you are prompted to log in, select `htpass` and enter the credentials listed in etherpad.


## 1.3 - Creating an air gap

According to the [Internet Security Glossary](https://www.rfc-editor.org/rfc/rfc4949), an Air Gap is:

> "an interface between two systems at which (a) they are not connected physically and (b) any logical connection is not automated (i.e., data is transferred through the interface only manually, under human control)."

In disconnected OpenShift installations, the air gap exists between the **Low side** and the **High side**, so it is between these systems where a manual data transfer, or **sneakernet**, is required.

For the purposes of this workshop we will be operating within Amazon Web Services. You have been allocated a set of credentials for an environment that already has some basic preparation completed. This will be a single VPC with 3 public subnets, which will serve as our **Low side**, and 3 private subnets, which will serve as our **High side**.

The diagram below shows a simplified overview of the networking topology:

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Workshop network topology* |
</Zoom>

Let's check the virtual private cloud network is created using the `aws` command line interface by copying the command below into our web terminal:

```bash
aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r
```

You should see output similar to the example below:

```text
vpc-0e6d176c7d9c94412
```

We can also check our three public **Low side** and three private **High side** subnets are ready to go by running the command below in our web terminal:

```bash
aws ec2 describe-subnets | jq '[.Subnets[].Tags[] | select(.Key=="Name").Value] | sort'
```

We should see output matching this example:

```text
[
  "Private Subnet - disco",
  "Private Subnet 2 - disco",
  "Private Subnet 3 - disco",
  "Public Subnet - disco",
  "Public Subnet 2 - disco",
  "Public Subnet 3 - disco"
]
```

If your environment access and topology is all working you've finished exercise 1! 🎉

@@ -1,102 +1,214 @@
---
title: Preparing our low side
exercise: 2
date: '2023-12-18'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Downloading content and tooling for sneaker ops 💾"
---

A disconnected OpenShift installation begins with downloading content and tooling to a prep system that has outbound access to the Internet. This server resides in an environment commonly referred to as the **Low side** due to its low security profile.

In this exercise we will be creating a new [AWS ec2 instance](https://aws.amazon.com/ec2) in our **Low side** that we will carry out all our preparation activities on.


## 2.1 - Creating a security group

We'll start by creating an [AWS security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) and collecting its ID.

We're going to use this shortly for the **Low side** prep system, and later on in the workshop for the **High side** bastion server.

Copy the commands below into your web terminal:

```bash
# Obtain vpc id
VPC_ID=$(aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r)
echo "Virtual private cloud id is: ${VPC_ID}"

# Obtain first public subnet id
PUBLIC_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Public Subnet - disco").SubnetId' -r)

# Create security group
aws ec2 create-security-group --group-name disco-sg --description disco-sg --vpc-id ${VPC_ID} --tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=disco-sg}]"

# Store security group id
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
echo "Security group id is: ${SG_ID}"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Creating aws ec2 security group* |
</Zoom>


## 2.2 - Opening ssh port ingress

We will want to log in to our soon to be created **Low side** aws ec2 instance remotely via `ssh`, so let's enable ingress on port `22` for this security group now.

> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments.

```bash
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Opening ssh port ingress* |
</Zoom>


## 2.3 - Create prep system instance

Ready to launch! 🚀 We'll use the `t3.micro` instance type, which offers `1GiB` of RAM and `2` vCPUs, along with a `50GiB` storage volume to ensure we have enough storage for mirrored content.

> Note: As mentioned in the [OpenShift documentation](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html/installing/disconnected-installation-mirroring), about 12 GB of storage space is required for OpenShift Container Platform 4.14 release images, or about 358 GB for OpenShift Container Platform 4.14 release images and all OpenShift Container Platform 4.14 Red Hat Operator images.

Run the command below in your web terminal to launch the instance. We will specify an Amazon Machine Image (AMI) to use for our prep system, which for this lab will be the [Marketplace AMI for RHEL 8](https://access.redhat.com/solutions/15356#us_east_2) in `us-east-2`.

```bash
aws ec2 run-instances --image-id "ami-092b43193629811af" \
  --count 1 --instance-type t3.micro \
  --key-name disco-key \
  --security-group-ids $SG_ID \
  --subnet-id $PUBLIC_SUBNET \
  --associate-public-ip-address \
  --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-prep-system}]" \
  --block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Launching a prep rhel8 ec2 instance* |
</Zoom>


## 2.4 - Connecting to the low side

Now that our prep system is up, let's `ssh` into it and download the content we'll need to support our install on the **High side**.

Copy the commands below into your web terminal. Let's start by retrieving the IP for the new ec2 instance and then connecting via `ssh`:

> Note: If your `ssh` command times out here, your prep system is likely still booting up. Give it a minute and try again.

```bash
PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
echo $PREP_SYSTEM_IP

ssh -i disco_key ec2-user@$PREP_SYSTEM_IP
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Connecting to the prep rhel8 ec2 instance* |
</Zoom>


## 2.5 - Downloading required tools

For the purposes of this workshop, rather than downloading mirror content to a USB drive as we would likely do in a real SneakerOps situation, we will instead be saving content to an EBS volume which will be mounted to our prep system on the **Low side** and then subsequently synced to our bastion system on the **High side**.

Once your prep system has booted, let's mount the EBS volume we attached so we can start downloading content. Copy the commands below into your web terminal:

```bash
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /mnt/high-side
sudo mount /dev/nvme1n1 /mnt/high-side
sudo chown ec2-user:ec2-user /mnt/high-side
cd /mnt/high-side
```

With our mount in place let's grab the tools we'll need for the bastion server - we'll use some of them on the prep system too. Life's good on the low side; we can download these from the internet and tuck them into our **High side** gift basket at `/mnt/high-side`.

There are four tools we need; copy the commands into your web terminal to download each one (a quick sanity check of the downloaded binaries follows the list):

1. `oc` the OpenShift cli

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz -L -o oc.tar.gz
tar -xzf oc.tar.gz oc && rm -f oc.tar.gz
sudo cp oc /usr/local/bin/
```

2. `oc-mirror` the oc plugin for mirroring release, operator, and helm content

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/oc-mirror.tar.gz -L -o oc-mirror.tar.gz
tar -xzf oc-mirror.tar.gz && rm -f oc-mirror.tar.gz
chmod +x oc-mirror
sudo cp oc-mirror /usr/local/bin/
```

3. `mirror-registry` a small-scale Quay registry designed for mirroring

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/mirror-registry/latest/mirror-registry.tar.gz -L -o mirror-registry.tar.gz
tar -xzf mirror-registry.tar.gz
rm -f mirror-registry.tar.gz
```

4. `openshift-install` the OpenShift installer cli

```bash
curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz -L -o openshift-installer.tar.gz
tar -xzf openshift-installer.tar.gz openshift-install
rm -f openshift-installer.tar.gz
```
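
A quick way to confirm the binaries were extracted correctly is to print their versions from `/mnt/high-side` (exact version strings will vary):

```bash
./oc version --client
./oc-mirror version
./openshift-install version
```
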

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Downloading required tools with curl* |
</Zoom>


## 2.6 - Mirroring content to disk

The `oc-mirror` plugin supports mirroring content directly from upstream sources to a mirror registry, but since there is an air gap between our **Low side** and **High side**, that's not an option for this lab. Instead, we'll mirror content to a tarball on disk that we can then sneakernet into the bastion server on the **High side**. We'll then mirror from the tarball into the mirror registry from there.

> Note: A prerequisite for this process is an OpenShift pull secret to authenticate to the Red Hat registries. This has already been created for you to avoid the delay of registering for individual Red Hat accounts during this workshop. You can copy this into your newly created prep system by running `scp -pr -i disco_key .docker ec2-user@$PREP_SYSTEM_IP:` in your web terminal. In a real world scenario this pull secret can be downloaded from https://console.redhat.com/openshift/install/pull-secret.

Let's get started by generating an `ImageSetConfiguration` that describes the parameters of our mirror. Run the command below to generate a boilerplate configuration file; it may take a minute:

```bash
oc mirror init > imageset-config.yaml
```

> Note: You can take a look at the default file by running `cat imageset-config.yaml` in your web terminal. Feel free to pause the workshop tasks for a few minutes and read through the [OpenShift documentation](https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.html#oc-mirror-creating-image-set-config_mirroring-ocp-image-repository) for the different options available within the image set configuration.

To save time and storage, we're going to remove the operator catalogs and mirror only the release images for this workshop. We'll still get a fully functional cluster, but OperatorHub will be empty.

To complete this, remove the operators object by replacing the contents of your `imageset-config.yaml` with the command below in your web terminal:

```bash
cat << EOF > imageset-config.yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: ./
mirror:
  platform:
    channels:
    - name: stable-4.14
      type: ocp
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest
  helm: {}
EOF
```

Now we're ready to kick off the mirror! This can take 5-15 minutes, so this is a good time to go grab a coffee or take a short break.

> Note: If you're keen to see more verbose output to track the progress of the mirror to disk process you can add the `-v 5` flag to the command below.

```bash
oc mirror --config imageset-config.yaml file:///mnt/high-side
```
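
When the mirror completes you should find an image set archive under `/mnt/high-side`; the `mirror_seq1_000000.tar` pattern below is the usual first-sequence naming used by `oc-mirror`, but check the actual file name in your environment:

```bash
ls -lh /mnt/high-side/mirror_seq*.tar
```
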

Once your content has finished mirroring to disk you've finished exercise 2! 🎉

@@ -1,135 +1,119 @@
---
title: Preparing our high side
exercise: 3
date: '2023-12-19'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Setting up a bastion server and transferring content"
---

In this exercise, we'll prepare the **High side**. This involves creating a bastion server on the **High side** that will host our mirror registry.

> Note: We have an interesting dilemma for this exercise: the Amazon Machine Image we used for the prep system earlier does not have `podman` installed. We need `podman`, since it is a key dependency for `mirror-registry`.
>
> We could rectify this by running `sudo dnf install -y podman` on the bastion system, but the bastion server won't have Internet access, so we need another option for this lab. To solve this problem, we need to build our own RHEL image with podman pre-installed. Real customer environments will likely already have a solution for this, but one approach is to use the [Image Builder](https://console.redhat.com/insights/image-builder) in the Hybrid Cloud Console, and that's exactly what has been done for this lab.
>
> ![workshop](/workshops/static/images/disconnected/image-builder.png)
>
> In the home directory of your web terminal you will find an `ami.txt` file containing our custom image AMI which will be used by the command that creates our bastion ec2 instance.


## 3.1 - Creating a bastion server

First up for this exercise we'll grab the ID of one of our **High side** private subnets as well as our ec2 security group.

Copy the commands below into your web terminal:

```bash
PRIVATE_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Private Subnet - disco").SubnetId' -r)
echo $PRIVATE_SUBNET

SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
echo $SG_ID
```

Once we know our subnet and security group IDs we can spin up our **High side** bastion server. Copy the commands below into your web terminal to complete this:

```bash
aws ec2 run-instances --image-id $(cat ami.txt) \
  --count 1 \
  --instance-type t3.large \
  --key-name disco-key \
  --security-group-ids $SG_ID \
  --subnet-id $PRIVATE_SUBNET \
  --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-bastion-server}]" \
  --block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Launching bastion ec2 instance* |
</Zoom>
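
While the instance boots you can poll its state with a read-only query like the one below; wait for `running` before trying to connect:

```bash
aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-bastion-server" \
  --query 'Reservations[0].Instances[0].State.Name' --output text
```
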

## 3.2 - Accessing the high side

Now we need to access our bastion server on the high side. In real customer environments, this might entail use of a VPN, or physical access to a workstation in a secure facility such as a SCIF.

To make things a bit simpler for our lab, we're going to restrict access to our bastion to its private IP address. So we'll use the prep system as a sort of bastion-to-the-bastion.

Let's get access by grabbing the bastion's private IP:

```bash
HIGHSIDE_BASTION_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-bastion-server" | jq -r '.Reservations[0].Instances[0].PrivateIpAddress')
echo $HIGHSIDE_BASTION_IP
```

Our next step will be to `exit` back to our web terminal and copy our private key to the prep system so that we can `ssh` to the bastion from there. You may have to wait a minute for the VM to finish initializing:

```bash
PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')

scp -i disco_key disco_key ec2-user@$PREP_SYSTEM_IP:/home/ec2-user/disco_key
```

To make life a bit easier down the track let's set an environment variable on the prep system so that we can preserve the bastion's IP:

```bash
ssh -i disco_key ec2-user@$PREP_SYSTEM_IP "echo HIGHSIDE_BASTION_IP=$(echo $HIGHSIDE_BASTION_IP) > highside.env"
```

Finally, let's now connect all the way through to our **High side** bastion 🚀

```bash
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Connecting to our bastion ec2 instance* |
</Zoom>


## 3.3 - Sneakernetting content to the high side

We'll now deliver the **High side** gift basket to the bastion server. Start by mounting our EBS volume on the bastion server to ensure that we don't run out of space:

```bash
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /mnt/high-side
sudo mount /dev/nvme1n1 /mnt/high-side
sudo chown ec2-user:ec2-user /mnt/high-side
```

With the mount in place we can exit back to our base web terminal and send over our gift basket at `/mnt/high-side` using `rsync`. This can take 10-15 minutes depending on the size of the mirror tarball:

```bash
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "rsync -avP -e 'ssh -i disco_key' /mnt/high-side ec2-user@$HIGHSIDE_BASTION_IP:/mnt"
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Initiating the sneakernet transfer via rsync* |
</Zoom>
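
To confirm the transfer landed you can check the directory size on the bastion (a rough check; exact sizes depend on what was mirrored):

```bash
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP 'du -sh /mnt/high-side'"
```
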
Once your transfer has finished pushing you are finished with exercise 3, well done! 🎉

@@ -1,90 +1,102 @@
---
|
---
|
||||||
title: Deploying a windows workload
|
title: Deploying a mirror registry
|
||||||
exercise: 4
|
exercise: 4
|
||||||
date: '2024-05-26'
|
date: '2023-12-20'
|
||||||
tags: ['openshift','windows','kubernetes','containers']
|
tags: ['openshift','containers','kubernetes','disconnected']
|
||||||
draft: false
|
draft: false
|
||||||
authors: ['default']
|
authors: ['default']
|
||||||
summary: "Putting our new cluster windows node to work 🚀"
|
summary: "Let's start mirroring some content on our high side!"
|
||||||
---
|
---
|
||||||
|
|
||||||
|
Images used by operators and platform components must be mirrored from upstream sources into a container registry that is accessible by the **High side**. You can use any registry you like for this as long as it supports Docker `v2-2`, such as:
|
||||||
|
- Red Hat Quay
|
||||||
|
- JFrog Artifactory
|
||||||
|
- Sonatype Nexus Repository
|
||||||
|
- Harbor
|
||||||
|
|
||||||
With our cluster now having both Windows and Linux worker nodes, let's deploy a hybrid workload that will make use of both.
|
An OpenShift subscription includes access to the [mirror registry](https://docs.openshift.com/container-platform/4.14/installing/disconnected_install/installing-mirroring-creating-registry.html#installing-mirroring-creating-registry) for Red Hat OpenShift, which is a small-scale container registry designed specifically for mirroring images in disconnected installations. We'll make use of this option in this lab.

Mirroring all release and operator images can take a while depending on the network bandwidth. For this lab, recall that we're going to mirror just the release images to save time and resources.

We should have the `mirror-registry` binary along with the required container images available on the bastion in `/mnt/high-side`. The `50GB` volume we created should be enough to hold our mirror (without operators) and binaries.
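
Before going further it's worth double checking everything arrived intact. A quick sanity check, assuming the layout we rsynced over in exercise 3:

```bash
# Confirm the mirror tarball and tooling made it across, and check free space
ls -lh /mnt/high-side
df -h /mnt/high-side
```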


## 4.1 - Opening mirror registry port ingress

We are getting close to deploying a disconnected OpenShift cluster, which will be spread across multiple machines that are in turn spread across our three private subnets.

Each of the machines in those private subnets will need to talk back to our mirror registry on port `8443`, so let's quickly update our AWS security group to ensure this will work.

> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments:

```bash
SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')

aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 8443 --cidr 0.0.0.0/0
```


## 4.2 - Running the registry install

First, let's `ssh` back into the bastion:

```bash
ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
```

And then we can kick off our install:

```bash
cd /mnt/high-side

./mirror-registry install --quayHostname $(hostname) --quayRoot /mnt/high-side/quay/quay-install --quayStorage /mnt/high-side/quay/quay-storage --pgStorage /mnt/high-side/quay/pg-data --initPassword discopass
```

If all goes well, you should see something like:

```text
INFO[2023-07-06 15:43:41] Quay installed successfully, config data is stored in /mnt/quay/quay-install
INFO[2023-07-06 15:43:41] Quay is available at https://ip-10-0-51-47.ec2.internal:8443 with credentials (init, discopass)
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Running the mirror-registry installer* |
</Zoom>


## 4.3 - Logging into the mirror registry

Now that our registry is running, let's login with `podman`, which will generate an auth file at `/run/user/1000/containers/auth.json`.

```bash
podman login -u init -p discopass --tls-verify=false $(hostname):8443
```

We should be greeted with `Login Succeeded!`.

> Note: We pass `--tls-verify=false` here for simplicity during this workshop, but you can optionally add `/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem` to the system trust store by following the guide in the Quay documentation [here](https://access.redhat.com/documentation/en-us/red_hat_quay/3/html/manage_red_hat_quay/using-ssl-to-protect-quay?extIdCarryOver=true&sc_cid=701f2000001OH74AAG#configuring_the_system_to_trust_the_certificate_authority).


## 4.4 - Pushing content into mirror registry

Now we're ready to mirror images from disk into the registry. Let's add `oc` and `oc-mirror` to the path:

```bash
sudo cp /mnt/high-side/oc /usr/local/bin/

sudo cp /mnt/high-side/oc-mirror /usr/local/bin/
```

And now we can fire up the mirror process to push our content from disk into the registry, ready to be pulled by the OpenShift installation. This can take a similar amount of time to the sneakernet procedure we completed in exercise 3.

```bash
oc mirror --from=/mnt/high-side/mirror_seq1_000000.tar --dest-skip-tls docker://$(hostname):8443
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Running the oc mirror process to push content to our registry* |
</Zoom>

Once your content has finished pushing you are done with exercise 4, well done! 🎉
219
data/workshop/exercise5.mdx
Normal file
@ -0,0 +1,219 @@
---
title: Installing a disconnected OpenShift cluster
exercise: 5
date: '2023-12-20'
tags: ['openshift','containers','kubernetes','disconnected']
draft: false
authors: ['default']
summary: "Time to install a cluster 🚀"
---

We're on the home straight now. In this exercise we'll configure and then run our `openshift-install` binary.

The OpenShift installation process is initiated from the bastion server on our **High side**. There are a handful of different ways to install OpenShift, but for this lab we're going to be using installer-provisioned infrastructure (IPI).

By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters.

We'll then customize the `install-config.yaml` file that is produced to specify advanced configuration for our disconnected installation. The installation program then provisions the underlying infrastructure for the cluster. Here's a diagram describing the inputs and outputs of the installation configuration process:

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Installation overview* |
</Zoom>

> Note: You may notice that nodes are provisioned through a process called Ignition. This concept is out of scope for this workshop, but if you're interested to learn more about it, you can read up on it in the documentation [here](https://docs.openshift.com/container-platform/4.14/installing/index.html#about-rhcos).

IPI is the recommended installation method in most cases because it leverages full automation in installation and cluster management, but there are some key considerations to keep in mind when planning a production installation in a real world scenario.

You may not have access to the infrastructure APIs. Our lab is going to live in AWS, which requires connectivity to the `.amazonaws.com` domain. We accomplish this by using an allowed list on a Squid proxy running on the **High side**, but a similar approach may not be achievable or permissible for everyone.

You may not have sufficient permissions with your infrastructure provider. Our lab has full admin in our AWS enclave, so that's not a constraint we'll need to deal with. In real world environments, you'll need to ensure your account has the appropriate permissions, which sometimes involves negotiating with security teams.

Once configuration has been completed, we can kick off the OpenShift Installer and it will do all the work for us to provision the infrastructure and install OpenShift.


## 5.1 - Building install-config.yaml

Before we run the installer we need to create a configuration file. Let's set up a workspace for it first.

```bash
mkdir /mnt/high-side/install
cd /mnt/high-side/install
```

Next we will generate the ssh key pair for access to cluster nodes:

```bash
ssh-keygen -f ~/.ssh/disco-openshift-key -q -N ""
```

Use the following Python code to minify your mirror container registry pull secret to a single line. Copy this output to your clipboard, since you'll need it in a moment:

```bash
python3 -c $'import json\nimport sys\nwith open(sys.argv[1], "r") as f: print(json.dumps(json.load(f)))' /run/user/1000/containers/auth.json
```

> Note: For connected installations, you'd use the secret from the Hybrid Cloud Console, but for our use case, the mirror registry is the only one OpenShift will need to authenticate to.

Then we can go ahead and generate our `install-config.yaml`:

> Note: We are setting `--log-level` to get more verbose output.

```bash
/mnt/high-side/openshift-install create install-config --dir /mnt/high-side/install --log-level=DEBUG
```

The OpenShift installer will prompt you for a number of fields; enter the values below:

- SSH Public Key: `/home/ec2-user/.ssh/disco-openshift-key.pub`
  > The SSH public key used to access all nodes within the cluster.

- Platform: `aws`
  > The platform on which the cluster will run.

- AWS Access Key ID and Secret Access Key: From `cat ~/.aws/credentials`

- Region: `us-east-2`

- Base Domain: `sandboxXXXX.opentlc.com` This should automatically populate.
  > The base domain of the cluster. All DNS records will be sub-domains of this base and will also include the cluster name.

- Cluster Name: `disco`
  > The name of the cluster. This will be used when generating sub-domains.

- Pull Secret: Paste the minified pull secret output you copied earlier.

That's it! The installer will generate `install-config.yaml` and drop it in `/mnt/high-side/install` for you.

Once the config file is generated, take a look through it. We will be making some changes, as follows:

- Change `publish` from `External` to `Internal`. We're using private subnets to house the cluster, so it won't be publicly accessible. A one-liner for this is sketched below.

- Add the subnet IDs for your private subnets to `platform.aws.subnets`. Otherwise, the installer will create its own VPC and subnets. You can retrieve them by running this command from your workstation:

```bash
aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).SubnetId] | unique' -r | yq read - -P
```

Then add them to `platform.aws.subnets` in your `install-config.yaml` so that they look something like this:

```yaml
platform:
  aws:
    region: us-east-2
    subnets:
    - subnet-00f28bbc11d25d523
    - subnet-07b4de5ea3a39c0fd
    - subnet-07b4de5ea3a39c0fe
```

- Next we need to modify the `machineNetwork` to match the IPv4 CIDR blocks from the private subnets. Otherwise your control plane and compute nodes will be assigned IP addresses that are out of range and break the install. You can retrieve them by running this command from your workstation:

```bash
aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).CidrBlock] | unique | map("cidr: " + .)' | yq read -P - | sed "s/'//g"
```

Then use them to **replace the existing** `networking.machineNetwork` entry in your `install-config.yaml` so that they look something like this:

```yaml
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.48.0/20
  - cidr: 10.0.64.0/20
  - cidr: 10.0.80.0/20
```

- Next we will add the `imageContentSources` to ensure image mappings happen correctly. You can append them to your `install-config.yaml` by running this command:

```bash
cat << EOF >> install-config.yaml
imageContentSources:
- mirrors:
  - $(hostname):8443/ubi8/ubi
  source: registry.redhat.io/ubi8/ubi
- mirrors:
  - $(hostname):8443/openshift/release-images
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - $(hostname):8443/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
EOF
```

- Add the root CA of our mirror registry (`/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem`) to the trust bundle using the `additionalTrustBundle` field by running this command:

```bash
cat <<EOF >> install-config.yaml
additionalTrustBundle: |
$(cat /mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem | sed 's/^/  /')
EOF
```

It should look something like this:

```yaml
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  MIID2DCCAsCgAwIBAgIUbL/naWCJ48BEL28wJTvMhJEz/C8wDQYJKoZIhvcNAQEL
  BQAwdTELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAlZBMREwDwYDVQQHDAhOZXcgWW9y
  azENMAsGA1UECgwEUXVheTERMA8GA1UECwwIRGl2aXNpb24xJDAiBgNVBAMMG2lw
  LTEwLTAtNTEtMjA2LmVjMi5pbnRlcm5hbDAeFw0yMzA3MTExODIyMjNaFw0yNjA0
  MzAxODIyMjNaMHUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJWQTERMA8GA1UEBwwI
  TmV3IFlvcmsxDTALBgNVBAoMBFF1YXkxETAPBgNVBAsMCERpdmlzaW9uMSQwIgYD
  VQQDDBtpcC0xMC0wLTUxLTIwNi5lYzIuaW50ZXJuYWwwggEiMA0GCSqGSIb3DQEB
  AQUAA4IBDwAwggEKAoIBAQDEz/8Pi4UYf/zanB4GHMlo4nbJYIJsyDWx+dPITTMd
  J3pdOo5BMkkUQL8rSFkc3RjY/grdk2jejVPQ8sVnSabsTl+ku7hT0t1w7E0uPY8d
  RTeGoa5QvdFOxWz6JsLo+C+JwVOWI088tYX1XZ86TD5FflOEeOwWvs5cmQX6L5O9
  QGO4PHBc9FWpmaHvFBiRJN3AQkMK4C9XB82G6mCp3c1cmVwFOo3vX7h5738PKXWg
  KYUTGXHxd/41DBhhY7BpgiwRF1idfLv4OE4bzsb42qaU4rKi1TY+xXIYZ/9DPzTN
  nQ2AHPWbVxI+m8DZa1DAfPvlZVxAm00E1qPPM30WrU4nAgMBAAGjYDBeMAsGA1Ud
  DwQEAwIC5DATBgNVHSUEDDAKBggrBgEFBQcDATAmBgNVHREEHzAdghtpcC0xMC0w
  LTUxLTIwNi5lYzIuaW50ZXJuYWwwEgYDVR0TAQH/BAgwBgEB/wIBATANBgkqhkiG
  9w0BAQsFAAOCAQEAkkV7/+YhWf1vq//N0Ms0td0WDJnqAlbZUgGkUu/6XiUToFtn
  OE58KCudP0cAQtvl0ISfw0c7X/Ve11H5YSsVE9afoa0whEO1yntdYQagR0RLJnyo
  Dj9xhQTEKAk5zXlHS4meIgALi734N2KRu+GJDyb6J0XeYS2V1yQ2Ip7AfCFLdwoY
  cLtooQugLZ8t+Kkqeopy4pt8l0/FqHDidww1FDoZ+v7PteoYQfx4+R5e8ko/vKAI
  OCALo9gecCXc9U63l5QL+8z0Y/CU9XYNDfZGNLSKyFTsbQFAqDxnCcIngdnYFbFp
  mRa1akgfPl+BvAo17AtOiWbhAjipf5kSBpmyJA==
  -----END CERTIFICATE-----
```

Lastly, now is a good time to make a backup of your `install-config.yaml` since the installer will consume (and delete) it:

```bash
cp install-config.yaml install-config.yaml.bak
```


## 5.2 - Running the installation

We're ready to run the install! Let's kick off the cluster installation by copying the command below into our web terminal:

> Note: Once more we can use the `--log-level=DEBUG` flag to get more insight on how the install is progressing.

```bash
/mnt/high-side/openshift-install create cluster --log-level=DEBUG
```

<Zoom>
| |
|:-----------------------------------------------------------------------------:|
| *Installation overview* |
</Zoom>

The installation process should take about 30 minutes. If you've done everything correctly, you should see something like the example below at the conclusion:

```text
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 30m49s
```

If you made it this far you have completed all the workshop exercises, well done! 🎉