Updated asset paths and next config for cname.

@@ -22,7 +22,7 @@ There are of course many different options for installing OpenShift in a restric

To get underway open your web browser and navigate to this etherpad link to reserve yourself a user https://etherpad.wikimedia.org/p/OpenShiftDisco_2023_12_20. You can reserve a user by noting your name or initials next to a user that has not yet been claimed.

<Zoom>
|   |
|:-:|
| *Etherpad collaborative editor* |
</Zoom>

@@ -50,7 +50,7 @@ For the purposes of this workshop we will be operating within Amazon Web Service

The diagram below shows a simplified overview of the networking topology:

<Zoom>
|   |
|:-:|
| *Workshop network topology* |
</Zoom>

@@ -38,7 +38,7 @@ echo "Security group id is: ${SG_ID}"

```

<Zoom>
|   |
|:-:|
| *Creating aws ec2 security group* |
</Zoom>

@@ -55,7 +55,7 @@ aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port

```

<Zoom>
|   |
|:-:|
| *Opening ssh port ingress* |
</Zoom>
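
For context, the commands truncated in the two hunks above create a security group and open SSH ingress on it. A minimal sketch of the typical AWS CLI calls, assuming `VPC_ID` has already been captured and using placeholder names and CIDR rather than the workshop's exact values:

```bash
# Create the security group in the workshop VPC and capture its id
SG_ID=$(aws ec2 create-security-group \
  --group-name disco-sg \
  --description "Workshop security group" \
  --vpc-id ${VPC_ID} \
  --query 'GroupId' --output text)
echo "Security group id is: ${SG_ID}"

# Allow inbound SSH (port 22) so the prep system is reachable
aws ec2 authorize-security-group-ingress \
  --group-id ${SG_ID} --protocol tcp --port 22 --cidr 0.0.0.0/0
```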

@@ -81,7 +81,7 @@ aws ec2 run-instances --image-id "ami-092b43193629811af" \

```

<Zoom>
|   |
|:-:|
| *Launching a prep rhel8 ec2 instance* |
</Zoom>

@@ -103,7 +103,7 @@ ssh -i disco_key ec2-user@$PREP_SYSTEM_IP

```

<Zoom>
|   |
|:-:|
| *Connecting to the prep rhel8 ec2 instance* |
</Zoom>

@@ -161,7 +161,7 @@ rm -f openshift-installer.tar.gz

```

<Zoom>
|   |
|:-:|
| *Downloading required tools with curl* |
</Zoom>

@@ -14,7 +14,7 @@ In this exercise, we'll prepare the **High side**. This involves creating a bast

> 
> We could rectify this by running `sudo dnf install -y podman` on the bastion system, but the bastion server won't have Internet access, so we need another option for this lab. To solve this problem, we need to build our own RHEL image with podman pre-installed. Real customer environments will likely already have a solution for this, but one approach is to use the [Image Builder](https://console.redhat.com/insights/image-builder) in the Hybrid Cloud Console, and that's exactly what has been done for this lab.
>
-> [workshop](/workshops/static/images/disconnected/image-builder.png)
+> [workshop](/static/images/disconnected/image-builder.png)
> 
> In the home directory of your web terminal you will find an `ami.txt` file containing our custom image AMI which will be used by the command that creates our bastion ec2 instance.

@@ -47,7 +47,7 @@ aws ec2 run-instances --image-id $(cat ami.txt) \

```

<Zoom>
|   |
|:-:|
| *Launching bastion ec2 instance* |
</Zoom>

@@ -87,7 +87,7 @@ ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGH

```

<Zoom>
|   |
|:-:|
| *Connecting to our bastion ec2 instance* |
</Zoom>

@@ -111,7 +111,7 @@ ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "rsync -avP -e 'ssh -i disco_key' /

```

<Zoom>
|   |
|:-:|
| *Initiating the sneakernet transfer via rsync* |
</Zoom>

@@ -59,7 +59,7 @@ INFO[2023-07-06 15:43:41] Quay is available at https://ip-10-0-51-47.ec2.interna

```

<Zoom>
|   |
|:-:|
| *Running the mirror-registry installer* |
</Zoom>

@@ -94,7 +94,7 @@ oc mirror --from=/mnt/high-side/mirror_seq1_000000.tar --dest-skip-tls docker://

```

<Zoom>
|   |
|:-:|
| *Running the oc mirror process to push content to our registry* |
</Zoom>

@@ -17,7 +17,7 @@ By default, the installation program acts as an installation wizard, prompting y

We'll then customize the `install-config.yaml` file that is produced to specify advanced configuration for our disconnected installation. The installation program then provisions the underlying infrastructure for the cluster. Here's a diagram describing the inputs and outputs of the installation configuration process:

<Zoom>
|   |
|:-:|
| *Installation overview* |
</Zoom>
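
The "advanced configuration" referenced above typically means pointing the cluster at the internal mirror registry. A minimal sketch of the kind of fields appended to `install-config.yaml` for a disconnected install, where the registry hostname, port and certificate are placeholders rather than this workshop's exact values:

```bash
cat << EOF >> install-config.yaml
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <mirror registry CA certificate>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror-registry-host>:8443/openshift/release-images
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror-registry-host>:8443/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
EOF
```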

@@ -200,7 +200,7 @@ We're ready to run the install! Let's kick off the cluster installation by copyi

```

<Zoom>
|   |
|:-:|
| *Installation overview* |
</Zoom>

@@ -18,7 +18,7 @@ You're in a race to reach the highest score before the session concludes! If mul

## 1.1 - The hackathon scenario

<Zoom>
|   |
|:-:|
| *Acme Financial Services* |
</Zoom>

@@ -39,7 +39,7 @@ All challenge tasks must be performed on this cluster so your solutions can be g

You can and are encouraged to use any supporting documentation or other resources in order to tackle each of the challenge tasks.

<Zoom>
|   |
|:-:|
| *OpenShift bare metal cluster console* |
</Zoom>

@@ -54,7 +54,7 @@ To get underway open your web browser and navigate to this link to allocate an e

Register for an environment using `[team name]@redhat.com` and the password provided by your hackathon organisers. Registering with a team email will mean all your team members will be able to see the same cluster details for your shared team cluster.

<Zoom>
|   |
|:-:|
| *Hackathon team registration page* |
</Zoom>

@@ -30,7 +30,7 @@ Documentation you may find helpful is:

For this challenge you will know you are successful and will be awarded points when your virtual machine boots the given iso and shows the following logo in the vnc console:

<Zoom>
|   |
|:-:|
| *Crusty corp financial appliance boot screen.* |
</Zoom>

@@ -15,7 +15,7 @@ You know KVM & KubeVirt has supported a similar feature called "Live Migration"

The Acme Financial Services team have put you on the spot: can you pull off a virtual machine live migration? 😅

<Zoom>
|   |
|:-:|
| *He's dead Jim...* |
</Zoom>

@@ -19,7 +19,7 @@ The Acme team are stuck on how they might implement this goal within their curre

Your local pre-sales team has offered to set up an example environment for Acme and step through how to enable the feature. No worries, right? After all, how hard can it be?

<Zoom>
|   |
|:-:|
| *"We've all said it 😂"* |
</Zoom>

@@ -15,7 +15,7 @@ The Acme team have talked about modernisation throughout the proof of concept so

This is it. No pressure but we need to nail this!

<Zoom>
|   |
|:-:|
| *"The best of both worlds!"* |
</Zoom>

@@ -41,7 +41,7 @@ Once the workloads are deployed your challenge is to create one service named `a

You'll know if this is working correctly when you can see two pods appearing in your service pod listing:

<Zoom>
|   |
|:-:|
| *"One service balancing traffic across a vm and standard pod!"* |
</Zoom>

@@ -4,7 +4,7 @@ const siteMetadata = {
  headerTitle: 'Red Hat',
  description: 'Red Hat OpenShift Application Delivery Workshop',
  language: 'en-us',
-  siteUrl: 'https://jmhbnz.github.io/workshops',
+  siteUrl: 'https://rhdemo.win',
  siteRepo: 'https://github.com/jmhbnz/workshops',
  siteLogo: '/static/images/redhat.png',
  image: '/static/images/avatar.png',

@@ -22,7 +22,7 @@ For this workshop you'll be given a fresh OpenShift 4 cluster which currently on

To get underway open your web browser and navigate to the following link to reserve yourself a user https://demo.redhat.com/workshop/98b7pu. You can reserve an environment by entering any email address along with the password provided by your workshop facilitator.

<Zoom>
|   |
|:-:|
| *Obtaining a workshop environment* |
</Zoom>

@@ -35,7 +35,7 @@ After entering an email and the provided password you'll be presented with a con

Open the console url and login.

<Zoom>
|   |
|:-:|
| *Obtaining a workshop environment* |
</Zoom>

@@ -16,7 +16,7 @@ In this first hands on excercise we will prepare our cluster for running Windows

To install Operators on OpenShift we use Operator Hub. A simplistic way of thinking about Operator Hub is as the "App Store" for your OpenShift cluster.

<Zoom>
|   |
|:-:|
| *OpenShift Operator Hub* |
</Zoom>

@@ -51,7 +51,7 @@ oc patch networks.operator.openshift.io cluster --type=merge \

```

<Zoom>
|   |
|:-:|
| *Patching an OpenShift cluster network to enable hybrid networking* |
</Zoom>
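
The `oc patch` command truncated in the hunk header above enables hybrid OVN-Kubernetes networking, which Windows nodes require. Its usual full form looks like the following sketch, where the CIDR and host prefix are illustrative values rather than ones confirmed by this workshop:

```bash
# Add a hybrid overlay network for Windows nodes; the CIDR must not overlap
# the existing cluster or service networks.
oc patch networks.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"hybridOverlayConfig":{"hybridClusterNetwork":[{"cidr":"10.132.0.0/14","hostPrefix":23}]}}}}}'
```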

@@ -70,7 +70,7 @@ Follow the steps below to install the operator:

4. Leave all settings as the default and click **Install** once more.

<Zoom>
|   |
|:-:|
| *Installing the windows machine config operator* |
</Zoom>

@@ -94,7 +94,7 @@ oc create secret generic cloud-private-key \

```

<Zoom>
|   |
|:-:|
| *Create a private key secret* |
</Zoom>
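
The `oc create secret` command truncated above typically completes along the lines documented for the Windows Machine Config Operator, storing the key as `private-key.pem` in the operator's namespace. A sketch, with the key path as a placeholder:

```bash
# Provide the SSH private key the operator uses to configure Windows nodes
oc create secret generic cloud-private-key \
  --from-file=private-key.pem=${HOME}/.ssh/<your-key>.pem \
  -n openshift-windows-machine-config-operator
```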

@@ -113,7 +113,7 @@ After retrieving your cluster id and zone update the sample `MachineSet` using y

Within OpenShift you can then click the ➕ button in the top right hand corner, paste in your yaml and click **Create**.

<Zoom>
|   |
|:-:|
| *Create a windows machineset* |
</Zoom>

@@ -124,7 +124,7 @@ Within OpenShift you can then click the ➕ button in the top right hand corner,

After creating the `MachineSet` a new Windows machine will be automatically provisioned and added to our OpenShift cluster, as we set our desired replicas in the YAML to `1`.

<Zoom>
|   |
|:-:|
| *Check the status of the new windows machine* |
</Zoom>

@@ -22,7 +22,7 @@ This application consists of:

3. Linux Container running a MSSql database 🤯.

<Zoom>
|   |
|:-:|
| *Mixed workload architecture diagram* |
</Zoom>

@@ -46,7 +46,7 @@ Follow the steps below to add the repository:

This will allow us to deploy any helm charts available in this repository.

<Zoom>
|   |
|:-:|
| *Creating a project and adding a helm repository* |
</Zoom>

@@ -61,7 +61,7 @@ With our helm chart repository added, let's deploy our application! This is as s

3. Review the chart settings and click **Create** once more.

<Zoom>
|   |
|:-:|
| *Create mixed architecture application via helm* |
</Zoom>

@@ -82,7 +82,7 @@ We can verify our Windows Container is running by:

> Note: You may need to change from `https://` to `http://` in your browser address bar when opening the application URL as some browsers now automatically attempt to redirect to HTTPS, however this application route is currently only served as HTTP.

<Zoom>
|   |
|:-:|
| *Confirm Windows container status* |
</Zoom>

@@ -23,7 +23,7 @@ An OpenShift `4.14` cluster has already been provisioned for you to complete the

Once the page loads you can login with the details provided by your workshop facilitator.

<Zoom>
|   |
|:-:|
| *Workshop login page* |
</Zoom>

@@ -36,7 +36,7 @@ Once you're logged into the lab environnment we can open up the OpenShift web co

When first logging in you will be prompted to take a tour of the **Developer** console view, let's do that now.

<Zoom>
|   |
|:-:|
| *Developer perspective web console tour* |
</Zoom>

@@ -55,7 +55,7 @@ In this lab environment, you already have access to single project: `userX` (Whe

Let's click into our `Project` from the left hand panel of the **Developer** web console perspective. We should be able to see that our project has no `Deployments` and there are no compute cpu or memory resources currently being consumed.

<Zoom>
|   |
|:-:|
| *Developer perspective project view* |
</Zoom>

@@ -74,7 +74,7 @@ Switch back to the **Developer** perspective. Once the Developer perspective loa

Right now, there are no applications or components to view in your `userX` project, but once you begin working on the lab, you’ll be able to visualize and interact with the components in your application here.

<Zoom>
|   |
|:-:|
| *Switching web console perspectives* |
</Zoom>

@@ -90,7 +90,7 @@ One handy feature of the OpenShift web console is we can launch a web terminal t

Let's launch a web terminal now by clicking the terminal button in the top right hand corner and then clicking **Start** with our `userX` project selected.

<Zoom>
|   |
|:-:|
| *Launching your web terminal* |
</Zoom>

@@ -183,7 +183,7 @@ DESCRIPTION:

That's a quick introduction to the `oc` command line utility. Let's close our web terminal now so we can move on to the next exercise.

<Zoom>
|   |
|:-:|
| *Closing your web terminal* |
</Zoom>

@@ -21,7 +21,7 @@ Before we begin, if you would like a bit more background on what a container is

In this exercise, we’re going to deploy the **web** component of the ParksMap application which uses OpenShift's service discovery mechanism to discover any accompanying backend services deployed and shows their data on the map. Below is a visual overview of the complete ParksMap application.

<Zoom>
|   |
|:-:|
| *ParksMap application architecture* |
</Zoom>

@@ -43,7 +43,7 @@ Click **Create** to deploy the application.

OpenShift will pull this container image if it does not exist already on the cluster and then deploy a container based on this image. You will be taken back to the **Topology** view in the **Developer** perspective which will show the new "Parksmap" application.

<Zoom>
|   |
|:-:|
| *Deploying the container image* |
</Zoom>

@@ -56,7 +56,7 @@ If you click on the **parksmap** entry in the **Topology** view, you will see so

The **Resources** tab may be displayed by default. If so, click on the **Details** tab. On that tab, you will see that there is a single **Pod** that was created by your actions.

<Zoom>
|   |
|:-:|
| *Deploying the container image* |
</Zoom>

@@ -75,7 +75,7 @@ While **Services** provide internal abstraction and load balancing within an Ope

You may remember that when we deployed the ParksMap application, there was a checkbox ticked to automatically create a **Route**. Thanks to this, all we need to do to access the application is go to the **Resources** tab of the application details pane and click the url shown under the **Routes** header.

<Zoom>
|   |
|:-:|
| *Opening ParksMap application Route* |
</Zoom>

@@ -85,7 +85,7 @@ Clicking the link you should now see the ParksMap application frontend 🎉

> Note: If this is the first time opening this page, the browser will ask permission to get your position. This is needed by the Frontend app to center the world map to your location, if you don’t allow it, it will just use a default location.

<Zoom>
|   |
|:-:|
| *ParksMap application frontend* |
</Zoom>

@@ -104,7 +104,7 @@ Click your "Parksmap" application icon then click on the **Resources** tab.

From the **Resources** tab click **View logs**

<Zoom>
|   |
|:-:|
| *Accessing the ParksMap application logs* |
</Zoom>

@@ -123,7 +123,7 @@ You should see the **Dashboard** tab. Set the time range to the `Last 1 hour` th

How much cpu and memory is your ParksMap application currently using?

<Zoom>
|   |
|:-:|
| *Checking the ParksMap application resource usage* |
</Zoom>

@@ -31,7 +31,7 @@ spec:

```

<Zoom>
|   |
|:-:|
| *ParksMap application deployment replicas* |
</Zoom>

@@ -58,7 +58,7 @@ kill 1

The pod will automatically be restarted by OpenShift however if you refresh your second browser tab with the application **Route** you should be able to see the application is momentarily unavailable.

<Zoom>
|   |
|:-:|
| *Intentionally crashing the ParksMap application* |
</Zoom>

@@ -79,7 +79,7 @@ In the **Details** tab of the information pane click the **^ Increase the pod co

Once the new pod is ready, repeat the steps from task `3.2` to crash one of the pods. You should see that the application continues to serve traffic thanks to our OpenShift **Service** load balancing traffic to the second **Pod**.

<Zoom>
|   |
|:-:|
| *Scaling up the ParksMap application* |
</Zoom>
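
The scaling step above is done through the web console; for reference, the same change can be made from the web terminal with `oc scale`, assuming the deployment is named `parksmap` and your project is `userX`:

```bash
# Scale the ParksMap deployment to two replicas from the CLI
oc scale deployment/parksmap --replicas=2 -n userX
```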

@@ -39,7 +39,7 @@ tlsRoute: true

```

<Zoom>
|   |
|:-:|
| *Gitea application deployment via helm chart* |
</Zoom>

@@ -63,7 +63,7 @@ Next, if we click on the overall gitea **Helm release** by clicking on the shade

> Note: Feel free to try out an `oc explain <resource>` command in your web terminal to learn more about each of the resource types mentioned above, for example `oc explain service`.

<Zoom>
|   |
|:-:|
| *Gitea helm release resources created* |
</Zoom>

@@ -86,7 +86,7 @@ We will be returned to the **Helm releases** view. Notice how the release status

From here it is trivial to perform a **Rollback** to remove our misconfigured update. We'll do that in the next step.

<Zoom>
|   |
|:-:|
| *Attempting a gitea helm upgrade* |
</Zoom>

@@ -105,7 +105,7 @@ Click the three dot menu to the right hand side of the that helm release and cli

Select the radio button for revision `1` which should be showing a status of `Deployed`, then click **Rollback**.

<Zoom>
|   |
|:-:|
| *Rolling back to a previous gitea helm release* |
</Zoom>

@@ -126,7 +126,7 @@ Click the three dot menu to the right hand side of the that helm release and cli

Enter the `gitea` confirmation at the prompt and click **Delete**. If you now return to the **Topology** view you will see the gitea application deleting.

<Zoom>
|   |
|:-:|
| *Deleting the gitea application helm release* |
</Zoom>

@@ -52,7 +52,7 @@ Paste the above snippet of YAML into the editor and replace the instance of `use

Click **Create**. In a minute or so you should see the Grafana operator installed and running in your project.

<Zoom>
|   |
|:-:|
| *Deploying grafana operator via static yaml* |
</Zoom>

@@ -100,7 +100,7 @@ spec:

```

<Zoom>
|   |
|:-:|
| *Deploying grafana application via the grafana operator* |
</Zoom>

@@ -115,7 +115,7 @@ For our first step click on the **Workloads** category on the left hand side men

We should see a `grafana-deployment-<id>` pod with a **Status** of `Running`.

<Zoom>
|   |
|:-:|
| *Confirming the grafana pod is running* |
</Zoom>

@@ -130,7 +130,7 @@ Click the **Route** named `grafana-route` and open the url on the right hand sid

Once the new tab opens we should be able to login to Grafana using the credentials we supplied in the previous step in the YAML configuration.

<Zoom>
|   |
|:-:|
| *Confirming the grafana route is working* |
</Zoom>

@@ -46,7 +46,7 @@ Scroll down and under the **General** header click the **Application** drop down

Scroll down reviewing the other options then click **Create**.

<Zoom>
|   |
|:-:|
| *Creating a source to image build in OpenShift* |
</Zoom>

@@ -11,8 +11,8 @@ module.exports = withBundleAnalyzer({
  images: {
    unoptimized: true
  },
-  basePath: '/workshops',
-  assetPrefix: '/workshops/',
+  basePath: '',
+  assetPrefix: '',
  experimental: { esmExternals: true },
  webpack: (config, { dev, isServer }) => {
    config.module.rules.push({
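
With the site now served from a custom domain rather than the `/workshops` project path on GitHub Pages, the Next.js base path and asset prefix can be empty. If the static export is published via GitHub Pages, the custom domain also needs a `CNAME` file in the published output; a sketch, assuming the export lands in the default `out/` directory:

```bash
# Place the custom domain in the published output for GitHub Pages
echo "rhdemo.win" > out/CNAME
```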

@@ -2,18 +2,18 @@
  <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      <title>Red Hat OpenShift Application Delivery Workshop</title>
-      <link>https://jmhbnz.github.io/workshops/workshop</link>
+      <link>https://rhdemo.win/workshop</link>
      <description>Red Hat OpenShift Application Delivery Workshop</description>
      <language>en-us</language>
      <managingEditor>jablair@redhat.com (Red Hat)</managingEditor>
      <webMaster>jablair@redhat.com (Red Hat)</webMaster>
      <lastBuildDate>Mon, 04 Dec 2023 00:00:00 GMT</lastBuildDate>
-      <atom:link href="https://jmhbnz.github.io/workshops/feed.xml" rel="self" type="application/rss+xml"/>
+      <atom:link href="https://rhdemo.win/feed.xml" rel="self" type="application/rss+xml"/>

  <item>
-    <guid>https://jmhbnz.github.io/workshops/workshop/exercise1</guid>
+    <guid>https://rhdemo.win/workshop/exercise1</guid>
    <title>Getting familiar with OpenShift</title>
-    <link>https://jmhbnz.github.io/workshops/workshop/exercise1</link>
+    <link>https://rhdemo.win/workshop/exercise1</link>
    <description>In this first exercise we'll get familiar with OpenShift.</description>
    <pubDate>Mon, 04 Dec 2023 00:00:00 GMT</pubDate>
    <author>jablair@redhat.com (Red Hat)</author>

@@ -21,9 +21,9 @@
  </item>

  <item>
-    <guid>https://jmhbnz.github.io/workshops/workshop/exercise2</guid>
+    <guid>https://rhdemo.win/workshop/exercise2</guid>
    <title>Deploying your first application</title>
-    <link>https://jmhbnz.github.io/workshops/workshop/exercise2</link>
+    <link>https://rhdemo.win/workshop/exercise2</link>
    <description>Time to deploy your first app!</description>
    <pubDate>Tue, 05 Dec 2023 00:00:00 GMT</pubDate>
    <author>jablair@redhat.com (Red Hat)</author>

@@ -31,9 +31,9 @@
  </item>

  <item>
-    <guid>https://jmhbnz.github.io/workshops/workshop/exercise3</guid>
+    <guid>https://rhdemo.win/workshop/exercise3</guid>
    <title>Scaling and self-healing applications</title>
-    <link>https://jmhbnz.github.io/workshops/workshop/exercise3</link>
+    <link>https://rhdemo.win/workshop/exercise3</link>
    <description>Let's scale our application up 📈</description>
    <pubDate>Wed, 06 Dec 2023 00:00:00 GMT</pubDate>
    <author>jablair@redhat.com (Red Hat)</author>

@@ -41,9 +41,9 @@
  </item>

  <item>
-    <guid>https://jmhbnz.github.io/workshops/workshop/exercise4</guid>
+    <guid>https://rhdemo.win/workshop/exercise4</guid>
    <title>Deploying an application via helm chart</title>
-    <link>https://jmhbnz.github.io/workshops/workshop/exercise4</link>
+    <link>https://rhdemo.win/workshop/exercise4</link>
    <description>Exploring alternative deployment approaches.</description>
    <pubDate>Wed, 06 Dec 2023 00:00:00 GMT</pubDate>
    <author>jablair@redhat.com (Red Hat)</author>

@@ -51,9 +51,9 @@
  </item>

  <item>
-    <guid>https://jmhbnz.github.io/workshops/workshop/exercise5</guid>
+    <guid>https://rhdemo.win/workshop/exercise5</guid>
    <title>Deploying an application via operator</title>
-    <link>https://jmhbnz.github.io/workshops/workshop/exercise5</link>
+    <link>https://rhdemo.win/workshop/exercise5</link>
    <description>Exploring alternative deployment approaches.</description>
    <pubDate>Wed, 06 Dec 2023 00:00:00 GMT</pubDate>
    <author>jablair@redhat.com (Red Hat)</author>

@@ -61,9 +61,9 @@
  </item>

  <item>
-    <guid>https://jmhbnz.github.io/workshops/workshop/exercise6</guid>
+    <guid>https://rhdemo.win/workshop/exercise6</guid>
    <title>Deploying an application from source</title>
-    <link>https://jmhbnz.github.io/workshops/workshop/exercise6</link>
+    <link>https://rhdemo.win/workshop/exercise6</link>
    <description>Exploring alternative deployment approaches.</description>
    <pubDate>Thu, 07 Dec 2023 00:00:00 GMT</pubDate>
    <author>jablair@redhat.com (Red Hat)</author>