diff --git a/data/workshop/exercise1.mdx b/data/workshop/exercise1.mdx
deleted file mode 100644
index 7607534..0000000
--- a/data/workshop/exercise1.mdx
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: Understanding our lab environment
-exercise: 1
-date: '2023-12-18'
-tags: ['openshift','containers','kubernetes','disconnected']
-draft: false
-authors: ['default']
-summary: "Let's get familiar with our lab setup."
----
-
-Welcome to the OpenShift 4 Disconnected Workshop! Here you'll learn about operating an OpenShift 4 cluster in a disconnected network; for our purposes today, that means a network without access to the internet (even through a proxy or firewall).
-
-To level set, Red Hat [OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a unified platform to build, modernize, and deploy applications at scale. OpenShift supports running in disconnected networks, though this does change the way the cluster operates because key ingredients like container images, operator bundles, and helm charts must be brought into the environment from the outside world via mirroring.
-
-There are of course many different options for installing OpenShift in a restricted network; this workshop will primarily cover one opinionated approach. We'll do our best to point out where there's the potential for variability along the way.
-
-**Let's get started!**
-
-
-## 1.1 - Obtaining your environment
-
-To get underway, open your web browser and navigate to this etherpad link to reserve a user: https://etherpad.wikimedia.org/p/OpenShiftDisco_2023_12_20. You can reserve a user by noting your name or initials next to a user that has not yet been claimed.
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Etherpad collaborative editor* |
-
-
-
-## 1.2 - Opening your web terminal
-
-Throughout the remainder of the workshop you will be using a number of command line tools, for example `aws` to interact with resources in Amazon Web Services, and `ssh` to log in to a remote server.
-
-To save you from needing to install or configure these tools on your own device, a web terminal will be available to you for the remainder of this workshop.
-
-Simply copy the link next to the user you reserved in etherpad and paste it into your browser. If you are prompted to log in, select `htpass` and enter the credentials listed in etherpad.
-
-
-## 1.3 - Creating an air gap
-
-According to the [Internet Security Glossary](https://www.rfc-editor.org/rfc/rfc4949), an Air Gap is:
-
-> "an interface between two systems at which (a) they are not connected physically and (b) any logical connection is not automated (i.e., data is transferred through the interface only manually, under human control)."
-
-In disconnected OpenShift installations, the air gap exists between the **Low side** and the **High side**, so it is between these systems that a manual data transfer, or **sneakernet**, is required.
-
-For the purposes of this workshop we will be operating within Amazon Web Services. You have been allocated a set of credentials for an environment that already has some basic preparation completed. This will be a single VPC with 3 public subnets, which will serve as our **Low Side**, and 3 private subnets, which will serve as our **High Side**.
-
-The diagram below shows a simplified overview of the networking topology:
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Workshop network topology* |
-
-
-Let's check that the virtual private cloud network has been created by copying the `aws` command line interface command below into our web terminal:
-
-```bash
-aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r
-```
-
-You should see output similar to the example below:
-
-```text
-vpc-0e6d176c7d9c94412
-```
-
-We can also check that our three public **Low side** and three private **High side** subnets are ready to go by running the command below in our web terminal:
-
-```bash
-aws ec2 describe-subnets | jq '[.Subnets[].Tags[] | select(.Key=="Name").Value] | sort'
-```
-
-We should see output matching this example:
-
-```json
-[
- "Private Subnet - disco",
- "Private Subnet 2 - disco",
- "Private Subnet 3 - disco",
- "Public Subnet - disco",
- "Public Subnet 2 - disco",
- "Public Subnet 3 - disco"
-]
-```
-
-If your environment access and topology are all working, you've finished exercise 1! 🎉
diff --git a/data/workshop/exercise2.mdx b/data/workshop/exercise2.mdx
deleted file mode 100644
index 380311f..0000000
--- a/data/workshop/exercise2.mdx
+++ /dev/null
@@ -1,214 +0,0 @@
----
-title: Preparing our low side
-exercise: 2
-date: '2023-12-18'
-tags: ['openshift','containers','kubernetes','disconnected']
-draft: false
-authors: ['default']
-summary: "Downloading content and tooling for sneaker ops 💾"
----
-
-A disconnected OpenShift installation begins with downloading content and tooling to a prep system that has outbound access to the Internet. This server resides in an environment commonly referred to as the **Low side** due to its low security profile.
-
-In this exercise we will be creating a new [AWS ec2 instance](https://aws.amazon.com/ec2) in our **Low side** that we will carry out all our preparation activities on.
-
-
-## 2.1 - Creating a security group
-
-We'll start by creating an [AWS security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) and collecting its ID.
-
-We're going to use this shortly for the **Low side** prep system, and later on in the workshop for the **High side** bastion server.
-
-Copy the commands below into your web terminal:
-
-```bash
-# Obtain vpc id
-VPC_ID=$(aws ec2 describe-vpcs | jq '.Vpcs[] | select(.Tags[].Value=="disco").VpcId' -r)
-echo "Virtual private cloud id is: ${VPC_ID}"
-
-# Obtain first public subnet id
-PUBLIC_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Public Subnet - disco").SubnetId' -r)
-
-# Create security group
-aws ec2 create-security-group --group-name disco-sg --description disco-sg --vpc-id ${VPC_ID} --tag-specifications "ResourceType=security-group,Tags=[{Key=Name,Value=disco-sg}]"
-
-# Store security group id
-SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
-echo "Security group id is: ${SG_ID}"
-```
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Creating aws ec2 security group* |
-
-
-
-## 2.2 - Opening ssh port ingress
-
-We will want to log in to our soon-to-be-created **Low side** AWS EC2 instance remotely via `ssh`, so let's enable ingress on port `22` for this security group now:
-
-> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments:
-
-```bash
-aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0
-```
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Opening ssh port ingress* |
-
-
-
-## 2.3 - Create prep system instance
-
-Ready to launch! 🚀 We'll use the `t3.micro` instance type, which offers `1GiB` of RAM and `2` vCPUs, along with a `50GiB` storage volume to ensure we have enough storage for mirrored content:
-
-> Note: As mentioned in the [OpenShift documentation](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html/installing/disconnected-installation-mirroring), about 12 GB of storage space is required for OpenShift Container Platform 4.14 release images, or about 358 GB for the release images plus all OpenShift Container Platform 4.14 Red Hat Operator images.
-
-Run the command below in your web terminal to launch the instance. We will specify an Amazon Machine Image (AMI) to use for our prep system, which for this lab will be the [Marketplace AMI for RHEL 8](https://access.redhat.com/solutions/15356#us_east_2) in `us-east-2`.
-
-```bash
-aws ec2 run-instances --image-id "ami-092b43193629811af" \
- --count 1 --instance-type t3.micro \
- --key-name disco-key \
- --security-group-ids $SG_ID \
- --subnet-id $PUBLIC_SUBNET \
- --associate-public-ip-address \
- --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-prep-system}]" \
- --block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
-```
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Launching a prep rhel8 ec2 instance* |
-
-
-
-## 2.4 - Connecting to the low side
-
-Now that our prep system is up, let's `ssh` into it and download the content we'll need to support our install on the **High side**.
-
-Copy the commands below into your web terminal. Let's start by retrieving the IP for the new ec2 instance and then connecting via `ssh`:
-
-> Note: If your `ssh` command times out here, your prep system is likely still booting up. Give it a minute and try again.
-
-```bash
-PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
-echo $PREP_SYSTEM_IP
-
-ssh -i disco_key ec2-user@$PREP_SYSTEM_IP
-```
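-
-> Note: If you'd rather wait than retry the `ssh` by hand, the AWS CLI ships a waiter we can lean on. A minimal sketch, reusing the `disco-prep-system` name tag from above:
-
-```bash
-# Look up the instance id by its Name tag, then block until status checks pass
-INSTANCE_ID=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].InstanceId')
-aws ec2 wait instance-status-ok --instance-ids $INSTANCE_ID
-```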
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Connecting to the prep rhel8 ec2 instance* |
-
-
-
-## 2.5 - Downloading required tools
-
-For the purposes of this workshop, rather than downloading mirror content to a USB drive as we would likely do in a real SneakerOps situation, we will instead be saving content to an EBS volume which will be mounted to our prep system on the **Low side** and then subsequently synced to our bastion system on the **High side**.
-
-Once your prep system has booted let's mount the EBS volume we attached so we can start downloading content. Copy the commands below into your web terminal:
-
-```bash
-# Format the attached EBS volume (it shows up as /dev/nvme1n1 on this instance type)
-sudo mkfs -t xfs /dev/nvme1n1
-
-# Mount it and hand ownership to ec2-user
-sudo mkdir /mnt/high-side
-sudo mount /dev/nvme1n1 /mnt/high-side
-sudo chown ec2-user:ec2-user /mnt/high-side
-
-# Work from the mount point for the rest of the downloads
-cd /mnt/high-side
-```
-
-With our mount in place let's grab the tools we'll need for the bastion server - we'll use some of them on the prep system too. Life's good on the low side; we can download these from the internet and tuck them into our **High side** gift basket at `/mnt/high-side`.
-
-There are four tools we need; copy the commands into your web terminal to download each one:
-
-1. `oc` OpenShift cli
-
-```bash
-curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz -L -o oc.tar.gz
-tar -xzf oc.tar.gz oc && rm -f oc.tar.gz
-sudo cp oc /usr/local/bin/
-```
-
-2. `oc-mirror` oc plugin for mirroring release, operator, and helm content
-
-```bash
-curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/oc-mirror.tar.gz -L -o oc-mirror.tar.gz
-tar -xzf oc-mirror.tar.gz && rm -f oc-mirror.tar.gz
-chmod +x oc-mirror
-sudo cp oc-mirror /usr/local/bin/
-```
-
-3. `mirror-registry` small-scale Quay registry designed for mirroring
-
-```bash
-curl https://mirror.openshift.com/pub/openshift-v4/clients/mirror-registry/latest/mirror-registry.tar.gz -L -o mirror-registry.tar.gz
-tar -xzf mirror-registry.tar.gz
-rm -f mirror-registry.tar.gz
-```
-
-4. `openshift-install` the OpenShift installer CLI
-
-```bash
-curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz -L -o openshift-installer.tar.gz
-tar -xzf openshift-installer.tar.gz openshift-install
-rm -f openshift-installer.tar.gz
-```
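-
-With all four tools downloaded it's worth a quick sanity check that everything landed in `/mnt/high-side` before we move on:
-
-```bash
-# The extracted binaries and tarball contents should all be sitting in our gift basket
-ls -lh /mnt/high-side
-
-# Confirm the oc binary we copied onto the path works
-oc version --client
-```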
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Downloading required tools with curl* |
-
-
-
-## 2.6 - Mirroring content to disk
-
-The `oc-mirror` plugin supports mirroring content directly from upstream sources to a mirror registry, but since there is an air gap between our **Low side** and **High side**, that's not an option for this lab. Instead, we'll mirror content to a tarball on disk that we can then sneakernet into the bastion server on the **High side**. We'll then mirror from the tarball into the mirror registry from there.
-
-> Note: A pre-requisite for this process is an OpenShift pull secret to authenticate to the Red Hat registries. This has already been created for you to avoid the delay of registering for individual Red Hat accounts during this workshop. You can copy it into your newly created prep system by running `scp -pr -i disco_key .docker ec2-user@$PREP_SYSTEM_IP:` in your web terminal. In a real world scenario this pull secret can be downloaded from https://console.redhat.com/openshift/install/pull-secret.
-
-Let's get started by generating an `ImageSetConfiguration` that describes the parameters of our mirror. Run the command below to generate a boilerplate configuration file; it may take a minute:
-
-```bash
-oc mirror init > imageset-config.yaml
-```
-
-> Note: You can take a look at the default file by running `cat imageset-config.yaml` in your web terminal. Feel free to pause the workshop tasks for a few minutes and read through the [OpenShift documentation](https://docs.openshift.com/container-platform/4.14/updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.html#oc-mirror-creating-image-set-config_mirroring-ocp-image-repository) for the different options available within the image set configuration.
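-
-The boilerplate generated by `oc mirror init` looks roughly like the example below (the exact channel version and the sample operator entry may vary between `oc-mirror` releases). Note the `operators` section, which would pull in content from a full operator catalog:
-
-```yaml
-kind: ImageSetConfiguration
-apiVersion: mirror.openshift.io/v1alpha2
-storageConfig:
-  local:
-    path: ./
-mirror:
-  platform:
-    channels:
-    - name: stable-4.14
-      type: ocp
-  operators:
-  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
-    packages:
-    - name: serverless-operator
-      channels:
-      - name: stable
-  additionalImages:
-  - name: registry.redhat.io/ubi8/ubi:latest
-  helm: {}
-```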
-
-To save time and storage, we're going to remove the operator catalogs and mirror only the release images for this workshop. We'll still get a fully functional cluster, but OperatorHub will be empty.
-
-To complete this, replace the contents of your `imageset-config.yaml` with a version that omits the `operators` object by running the command below in your web terminal:
-
-```bash
-cat << EOF > imageset-config.yaml
-kind: ImageSetConfiguration
-apiVersion: mirror.openshift.io/v1alpha2
-storageConfig:
- local:
- path: ./
-mirror:
- platform:
- channels:
- - name: stable-4.14
- type: ocp
- additionalImages:
- - name: registry.redhat.io/ubi8/ubi:latest
- helm: {}
-EOF
-```
-
-Now we're ready to kick off the mirror! This can take 5-15 minutes so this is a good time to go grab a coffee or take a short break:
-
-> Note: If you're keen to see a bit more verbose output to track the progress of the mirror to disk process you can add the `-v 5` flag to the command below.
-
-```bash
-oc mirror --config imageset-config.yaml file:///mnt/high-side
-```
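-
-When the mirror completes, the content is written as a sequence tarball in `/mnt/high-side`. A quick way to confirm it's there (the sequence number may differ):
-
-```bash
-ls -lh /mnt/high-side/mirror_seq*.tar
-```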
-
-Once your content has finished mirroring to disk, you've finished exercise 2! 🎉
diff --git a/data/workshop/exercise3.mdx b/data/workshop/exercise3.mdx
deleted file mode 100644
index d47fad0..0000000
--- a/data/workshop/exercise3.mdx
+++ /dev/null
@@ -1,119 +0,0 @@
----
-title: Preparing our high side
-exercise: 3
-date: '2023-12-19'
-tags: ['openshift','containers','kubernetes','disconnected']
-draft: false
-authors: ['default']
-summary: "Setting up a bastion server and transferring content"
----
-
-In this exercise, we'll prepare the **High side**. This involves creating a bastion server on the **High side** that will host our mirror registry.
-
-> Note: We have an interesting dilemma for this exercise: the Amazon Machine Image we used for the prep system earlier does not have `podman` installed. We need `podman`, since it is a key dependency for `mirror-registry`.
->
-> We could rectify this by running `sudo dnf install -y podman` on the bastion system, but the bastion server won't have Internet access, so we need another option for this lab. To solve this problem, we need to build our own RHEL image with podman pre-installed. Real customer environments will likely already have a solution for this, but one approach is to use the [Image Builder](https://console.redhat.com/insights/image-builder) in the Hybrid Cloud Console, and that's exactly what has been done for this lab.
->
-> ![workshop](/workshops/static/images/disconnected/image-builder.png)
->
-> In the home directory of your web terminal you will find an `ami.txt` file containing our custom image AMI, which will be used by the command that creates our bastion ec2 instance.
-
-
-## 3.1 - Creating a bastion server
-
-First up for this exercise we'll grab the ID of one of our **High side** private subnets as well as our ec2 security group.
-
-Copy the commands below into your web terminal:
-
-```bash
-PRIVATE_SUBNET=$(aws ec2 describe-subnets | jq '.Subnets[] | select(.Tags[].Value=="Private Subnet - disco").SubnetId' -r)
-echo $PRIVATE_SUBNET
-
-SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
-echo $SG_ID
-```
-
-Once we know our subnet and security group IDs we can spin up our **High side** bastion server. Copy the commands below into your web terminal to complete this:
-
-```bash
-aws ec2 run-instances --image-id $(cat ami.txt) \
- --count 1 \
- --instance-type t3.large \
- --key-name disco-key \
- --security-group-ids $SG_ID \
- --subnet-id $PRIVATE_SUBNET \
- --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=disco-bastion-server}]" \
- --block-device-mappings "DeviceName=/dev/sdh,Ebs={VolumeSize=50}"
-```
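-
-While the bastion boots you can keep an eye on its state with a quick query, following the same pattern we used for the prep system:
-
-```bash
-aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-bastion-server" | jq -r '.Reservations[0].Instances[0].State.Name'
-```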
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Launching bastion ec2 instance* |
-
-
-
-## 3.2 - Accessing the high side
-
-Now we need to access our bastion server on the high side. In real customer environments, this might entail use of a VPN, or physical access to a workstation in a secure facility such as a SCIF.
-
-To make things a bit simpler for our lab, we're going to restrict access to our bastion to its private IP address. So we'll use the prep system as a sort of bastion-to-the-bastion.
-
-Let's get access by grabbing the bastion's private IP.
-
-```bash
-HIGHSIDE_BASTION_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-bastion-server" | jq -r '.Reservations[0].Instances[0].PrivateIpAddress')
-echo $HIGHSIDE_BASTION_IP
-```
-
-Our next step will be to `exit` back to our web terminal and copy our private key to the prep system so that we can `ssh` to the bastion from there. You may have to wait a minute for the VM to finish initializing:
-
-```bash
-PREP_SYSTEM_IP=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=disco-prep-system" | jq -r '.Reservations[0].Instances[0].PublicIpAddress')
-
-scp -i disco_key disco_key ec2-user@$PREP_SYSTEM_IP:/home/ec2-user/disco_key
-```
-
-To make life a bit easier down the track let's set an environment variable on the prep system so that we can preserve the bastion's IP:
-
-```bash
-ssh -i disco_key ec2-user@$PREP_SYSTEM_IP "echo HIGHSIDE_BASTION_IP=$(echo $HIGHSIDE_BASTION_IP) > highside.env"
-```
-
-Finally, let's connect all the way through to our **High side** bastion 🚀
-
-```bash
-ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
-```
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Connecting to our bastion ec2 instance* |
-
-
-
-## 3.3 - Sneakernetting content to the high side
-
-We'll now deliver the **High side** gift basket to the bastion server. Start by mounting our EBS volume on the bastion server to ensure that we don't run out of space:
-
-```bash
-sudo mkfs -t xfs /dev/nvme1n1
-sudo mkdir /mnt/high-side
-sudo mount /dev/nvme1n1 /mnt/high-side
-sudo chown ec2-user:ec2-user /mnt/high-side
-```
-
-With the mount in place we can exit back to our base web terminal and send over our gift basket at `/mnt/high-side` using `rsync`. This can take 10-15 minutes depending on the size of the mirror tarball.
-
-```bash
-ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "rsync -avP -e 'ssh -i disco_key' /mnt/high-side ec2-user@$HIGHSIDE_BASTION_IP:/mnt"
-```
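-
-Once the `rsync` completes you can optionally confirm the content arrived by listing the mount on the bastion through the same nested `ssh` hop:
-
-```bash
-ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP 'ls -lh /mnt/high-side'"
-```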
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Initiating the sneakernet transfer via rsync* |
-
-
-Once your transfer has completed you are finished with exercise 3, well done! 🎉
diff --git a/data/workshop/exercise4.mdx b/data/workshop/exercise4.mdx
deleted file mode 100644
index 4013313..0000000
--- a/data/workshop/exercise4.mdx
+++ /dev/null
@@ -1,102 +0,0 @@
----
-title: Deploying a mirror registry
-exercise: 4
-date: '2023-12-20'
-tags: ['openshift','containers','kubernetes','disconnected']
-draft: false
-authors: ['default']
-summary: "Let's start mirroring some content on our high side!"
----
-
-Images used by operators and platform components must be mirrored from upstream sources into a container registry that is accessible by the **High side**. You can use any registry you like for this as long as it supports Docker `v2-2`, such as:
-- Red Hat Quay
-- JFrog Artifactory
-- Sonatype Nexus Repository
-- Harbor
-
-An OpenShift subscription includes access to the [mirror registry](https://docs.openshift.com/container-platform/4.14/installing/disconnected_install/installing-mirroring-creating-registry.html#installing-mirroring-creating-registry) for Red Hat OpenShift, which is a small-scale container registry designed specifically for mirroring images in disconnected installations. We'll make use of this option in this lab.
-
-Mirroring all release and operator images can take a while depending on the network bandwidth. For this lab, recall that we're going to mirror just the release images to save time and resources.
-
-We should have the `mirror-registry` binary along with the required container images available on the bastion in `/mnt/high-side`. The `50GiB` volume we created should be enough to hold our mirror (without operators) and binaries.
-
-
-## 4.1 - Opening mirror registry port ingress
-
-We are getting close to deploying a disconnected OpenShift cluster that will be spread across multiple machines which are in turn spread across our three private subnets.
-
-Each of the machines in those private subnets will need to talk back to our mirror registry on port `8443`, so let's quickly update our AWS security group to ensure this will work.
-
-> Note: We're going to allow traffic from all sources for simplicity (`0.0.0.0/0`), but this is likely to be more restrictive in real world environments:
-
-```bash
-SG_ID=$(aws ec2 describe-security-groups --filters "Name=tag:Name,Values=disco-sg" | jq -r '.SecurityGroups[0].GroupId')
-
-aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 8443 --cidr 0.0.0.0/0
-```
-
-
-## 4.2 - Running the registry install
-
-First, let's `ssh` back into the bastion:
-
-```bash
-ssh -t -i disco_key ec2-user@$PREP_SYSTEM_IP "ssh -t -i disco_key ec2-user@$HIGHSIDE_BASTION_IP"
-```
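-
-Before we kick off the installer, a quick sanity check that the sneakernetted content is in place and the volume has room to spare:
-
-```bash
-ls -lh /mnt/high-side
-df -h /mnt/high-side
-```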
-
-And then we can kick off our install:
-
-```bash
-cd /mnt/high-side
-./mirror-registry install --quayHostname $(hostname) --quayRoot /mnt/high-side/quay/quay-install --quayStorage /mnt/high-side/quay/quay-storage --pgStorage /mnt/high-side/quay/pg-data --initPassword discopass
-```
-
-If all goes well, you should see something like:
-
-```text
-INFO[2023-07-06 15:43:41] Quay installed successfully, config data is stored in /mnt/quay/quay-install
-INFO[2023-07-06 15:43:41] Quay is available at https://ip-10-0-51-47.ec2.internal:8443 with credentials (init, discopass)
-```
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Running the mirror-registry installer* |
-
-
-
-## 4.3 - Logging into the mirror registry
-
-Now that our registry is running, let's log in with `podman`, which will generate an auth file at `/run/user/1000/containers/auth.json`.
-
-```bash
-podman login -u init -p discopass --tls-verify=false $(hostname):8443
-```
-
-We should be greeted with `Login Succeeded!`.
-
-> Note: We pass `--tls-verify=false` here for simplicity during this workshop, but you can optionally add `/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem` to the system trust store by following the guide in the Quay documentation [here](https://access.redhat.com/documentation/en-us/red_hat_quay/3/html/manage_red_hat_quay/using-ssl-to-protect-quay?extIdCarryOver=true&sc_cid=701f2000001OH74AAG#configuring_the_system_to_trust_the_certificate_authority).
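-
-If you do want to trust the certificate authority instead, a minimal sketch of the RHEL approach (assuming the default system anchors directory) looks like this:
-
-```bash
-sudo cp /mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem /etc/pki/ca-trust/source/anchors/quay-rootCA.pem
-sudo update-ca-trust extract
-```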
-
-
-## 4.4 - Pushing content into the mirror registry
-
-Now we're ready to mirror images from disk into the registry. Let's add `oc` and `oc-mirror` to the path:
-
-```bash
-sudo cp /mnt/high-side/oc /usr/local/bin/
-sudo cp /mnt/high-side/oc-mirror /usr/local/bin/
-```
-
-And now we fire up the mirror process to push our content from disk into the registry, ready to be pulled by the OpenShift installation. This can take a similar amount of time to the sneakernet transfer we completed in exercise 3.
-
-```bash
-oc mirror --from=/mnt/high-side/mirror_seq1_000000.tar --dest-skip-tls docker://$(hostname):8443
-```
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Running the oc mirror process to push content to our registry* |
-
-
-Once your content has finished pushing, you are done with exercise 4, well done! 🎉
diff --git a/data/workshop/exercise5.mdx b/data/workshop/exercise5.mdx
deleted file mode 100644
index 224b645..0000000
--- a/data/workshop/exercise5.mdx
+++ /dev/null
@@ -1,219 +0,0 @@
----
-title: Installing a disconnected OpenShift cluster
-exercise: 5
-date: '2023-12-20'
-tags: ['openshift','containers','kubernetes','disconnected']
-draft: false
-authors: ['default']
-summary: "Time to install a cluster 🚀"
----
-
-We're on the home straight now. In this exercise we'll configure and then execute our `openshift-installer`.
-
-The OpenShift installation process is initiated from the bastion server on our **High side**. There are a handful of different ways to install OpenShift, but for this lab we're going to be using installer-provisioned infrastructure (IPI).
-
-By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters.
-
-We'll then customize the `install-config.yaml` file that is produced to specify advanced configuration for our disconnected installation. The installation program then provisions the underlying infrastructure for the cluster. Here's a diagram describing the inputs and outputs of the installation configuration process:
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Installation overview* |
-
-
-> Note: You may notice that nodes are provisioned through a process called Ignition. This concept is out of scope for this workshop, but if you're interested to learn more about it, you can read up on it in the documentation [here](https://docs.openshift.com/container-platform/4.14/installing/index.html#about-rhcos).
-
-IPI is the recommended installation method in most cases because it leverages full automation in installation and cluster management, but there are some key considerations to keep in mind when planning a production installation in a real world scenario.
-
-You may not have access to the infrastructure APIs. Our lab is going to live in AWS, which requires connectivity to the `.amazonaws.com` domain. We accomplish this by using an allowed list on a Squid proxy running on the **High side**, but a similar approach may not be achievable or permissible for everyone.
-
-You may not have sufficient permissions with your infrastructure provider. Our lab has full admin in our AWS enclave, so that's not a constraint we'll need to deal with. In real world environments, you'll need to ensure your account has the appropriate permissions which sometimes involves negotiating with security teams.
-
-Once configuration has been completed, we can kick off the OpenShift Installer and it will do all the work for us to provision the infrastructure and install OpenShift.
-
-
-## 5.1 - Building install-config.yaml
-
-Before we run the installer we need to create a configuration file. Let's set up a workspace for it first.
-
-```bash
-mkdir /mnt/high-side/install
-cd /mnt/high-side/install
-```
-
-Next we will generate the ssh key pair for access to cluster nodes:
-
-```bash
-ssh-keygen -f ~/.ssh/disco-openshift-key -q -N ""
-```
-
-Use the following Python code to minify your mirror container registry pull secret to a single line. Copy this output to your clipboard, since you'll need it in a moment:
-
-```bash
-python3 -c $'import json\nimport sys\nwith open(sys.argv[1], "r") as f: print(json.dumps(json.load(f)))' /run/user/1000/containers/auth.json
-```
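-
-Alternatively, `jq` can produce the same single-line result (assuming `jq` is available on the bastion):
-
-```bash
-jq -c . /run/user/1000/containers/auth.json
-```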
-
-> Note: For connected installations, you'd use the secret from the Hybrid Cloud Console, but for our use case, the mirror registry is the only one OpenShift will need to authenticate to.
-
-Then we can go ahead and generate our `install-config.yaml`:
-
-> Note: We are setting `--log-level` to get more verbose output.
-
-```bash
-/mnt/high-side/openshift-install create install-config --dir /mnt/high-side/install --log-level=DEBUG
-```
-
-The OpenShift installer will prompt you for a number of fields; enter the values below:
-
-- SSH Public Key: `/home/ec2-user/.ssh/disco-openshift-key.pub`
-> The SSH public key used to access all nodes within the cluster.
-
-- Platform: aws
-> The platform on which the cluster will run.
-
-- AWS Access Key ID and Secret Access Key: From `cat ~/.aws/credentials`
-
-- Region: `us-east-2`
-
-- Base Domain: `sandboxXXXX.opentlc.com` This should automatically populate.
-> The base domain of the cluster. All DNS records will be sub-domains of this base and will also include the cluster name.
-
-- Cluster Name: `disco`
->The name of the cluster. This will be used when generating sub-domains.
-
-- Pull Secret: Paste the single-line output you copied from the minify step above.
-
-That's it! The installer will generate `install-config.yaml` and drop it in `/mnt/high-side/install` for you.
-
-Once the config file is generated, take a look through it; we will be making the following changes:
-
-- Change `publish` from `External` to `Internal`. We're using private subnets to house the cluster, so it won't be publicly accessible.
-
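-After that change, the top-level field should simply read:
-
-```yaml
-publish: Internal
-```
-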
-- Add the subnet IDs for your private subnets to `platform.aws.subnets`. Otherwise, the installer will create its own VPC and subnets. You can retrieve them by running this command from your workstation:
-
-```bash
-aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).SubnetId] | unique' -r | yq read - -P
-```
-
-Then add them to `platform.aws.subnets` in your `install-config.yaml` so that they look something like this:
-
-```yaml
-platform:
- aws:
-    region: us-east-2
- subnets:
- - subnet-00f28bbc11d25d523
- - subnet-07b4de5ea3a39c0fd
- - subnet-07b4de5ea3a39c0fd
-```
-
-- Next we need to modify the `machineNetwork` to match the IPv4 CIDR blocks from the private subnets. Otherwise your control plane and compute nodes will be assigned IP addresses that are out of range and break the install. You can retrieve them by running this command from your workstation:
-
-```bash
-aws ec2 describe-subnets | jq '[.Subnets[] | select(.Tags[].Value | contains ("Private")).CidrBlock] | unique | map("cidr: " + .)' | yq read -P - | sed "s/'//g"
-```
-
-Then use them to **replace the existing** `networking.machineNetwork` entry in your `install-config.yaml` so that they look something like this:
-
-```yaml
-networking:
- clusterNetwork:
- - cidr: 10.128.0.0/14
- hostPrefix: 23
- machineNetwork:
- - cidr: 10.0.48.0/20
- - cidr: 10.0.64.0/20
- - cidr: 10.0.80.0/20
-```
-
-- Next we will add the `imageContentSources` to ensure image mappings happen correctly. You can append them to your `install-config.yaml` by running this command:
-
-```bash
-cat << EOF >> install-config.yaml
-imageContentSources:
- - mirrors:
- - $(hostname):8443/ubi8/ubi
- source: registry.redhat.io/ubi8/ubi
- - mirrors:
- - $(hostname):8443/openshift/release-images
- source: quay.io/openshift-release-dev/ocp-release
- - mirrors:
- - $(hostname):8443/openshift/release
- source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
-EOF
-```
-
-- Add the root CA of our mirror registry (`/mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem`) to the trust bundle using the `additionalTrustBundle` field by running this command:
-
-```bash
-cat << EOF >> install-config.yaml
-additionalTrustBundle: |
-$(cat /mnt/high-side/quay/quay-install/quay-rootCA/rootCA.pem | sed 's/^/ /')
-EOF
-```
-
-It should look something like this:
-
-```yaml
-additionalTrustBundle: |
- -----BEGIN CERTIFICATE-----
- MIID2DCCAsCgAwIBAgIUbL/naWCJ48BEL28wJTvMhJEz/C8wDQYJKoZIhvcNAQEL
- BQAwdTELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAlZBMREwDwYDVQQHDAhOZXcgWW9y
- azENMAsGA1UECgwEUXVheTERMA8GA1UECwwIRGl2aXNpb24xJDAiBgNVBAMMG2lw
- LTEwLTAtNTEtMjA2LmVjMi5pbnRlcm5hbDAeFw0yMzA3MTExODIyMjNaFw0yNjA0
- MzAxODIyMjNaMHUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJWQTERMA8GA1UEBwwI
- TmV3IFlvcmsxDTALBgNVBAoMBFF1YXkxETAPBgNVBAsMCERpdmlzaW9uMSQwIgYD
- VQQDDBtpcC0xMC0wLTUxLTIwNi5lYzIuaW50ZXJuYWwwggEiMA0GCSqGSIb3DQEB
- AQUAA4IBDwAwggEKAoIBAQDEz/8Pi4UYf/zanB4GHMlo4nbJYIJsyDWx+dPITTMd
- J3pdOo5BMkkUQL8rSFkc3RjY/grdk2jejVPQ8sVnSabsTl+ku7hT0t1w7E0uPY8d
- RTeGoa5QvdFOxWz6JsLo+C+JwVOWI088tYX1XZ86TD5FflOEeOwWvs5cmQX6L5O9
- QGO4PHBc9FWpmaHvFBiRJN3AQkMK4C9XB82G6mCp3c1cmVwFOo3vX7h5738PKXWg
- KYUTGXHxd/41DBhhY7BpgiwRF1idfLv4OE4bzsb42qaU4rKi1TY+xXIYZ/9DPzTN
- nQ2AHPWbVxI+m8DZa1DAfPvlZVxAm00E1qPPM30WrU4nAgMBAAGjYDBeMAsGA1Ud
- DwQEAwIC5DATBgNVHSUEDDAKBggrBgEFBQcDATAmBgNVHREEHzAdghtpcC0xMC0w
- LTUxLTIwNi5lYzIuaW50ZXJuYWwwEgYDVR0TAQH/BAgwBgEB/wIBATANBgkqhkiG
- 9w0BAQsFAAOCAQEAkkV7/+YhWf1vq//N0Ms0td0WDJnqAlbZUgGkUu/6XiUToFtn
- OE58KCudP0cAQtvl0ISfw0c7X/Ve11H5YSsVE9afoa0whEO1yntdYQagR0RLJnyo
- Dj9xhQTEKAk5zXlHS4meIgALi734N2KRu+GJDyb6J0XeYS2V1yQ2Ip7AfCFLdwoY
- cLtooQugLZ8t+Kkqeopy4pt8l0/FqHDidww1FDoZ+v7PteoYQfx4+R5e8ko/vKAI
- OCALo9gecCXc9U63l5QL+8z0Y/CU9XYNDfZGNLSKyFTsbQFAqDxnCcIngdnYFbFp
- mRa1akgfPl+BvAo17AtOiWbhAjipf5kSBpmyJA==
- -----END CERTIFICATE-----
-```
-
-Lastly, now is a good time to make a backup of your `install-config.yaml` since the installer will consume (and delete) it:
-
-```bash
-cp install-config.yaml install-config.yaml.bak
-```
-
-
-## 5.2 - Running the installation
-
-We're ready to run the install! Let's kick off the cluster installation by copying the command below into our web terminal:
-
-> Note: Once more we can use the `--log-level=DEBUG` flag to get more insight into how the install is progressing.
-
-```bash
-/mnt/high-side/openshift-install create cluster --log-level=DEBUG
-```
-
-
-| |
-|:-----------------------------------------------------------------------------:|
-| *Installation overview* |
-
-
-The installation process should take about 30 minutes. If you've done everything correctly, you should see something like the example below at the conclusion:
-
-```text
-...
-INFO Install complete!
-INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
-INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
-INFO Login to the console with user: "kubeadmin", and password: "password"
-INFO Time elapsed: 30m49s
-```
-
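-To confirm you can reach your new disconnected cluster, point `oc` at the kubeconfig the installer generated (a quick check, assuming the install directory we created earlier):
-
-```bash
-export KUBECONFIG=/mnt/high-side/install/auth/kubeconfig
-oc get nodes
-oc get clusteroperators
-```
-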
-If you made it this far you have completed all the workshop exercises, well done! 🎉
diff --git a/data/hackathon/scenario1.mdx b/data/workshop/scenario1.mdx
similarity index 100%
rename from data/hackathon/scenario1.mdx
rename to data/workshop/scenario1.mdx
diff --git a/data/hackathon/scenario2.mdx b/data/workshop/scenario2.mdx
similarity index 100%
rename from data/hackathon/scenario2.mdx
rename to data/workshop/scenario2.mdx
diff --git a/data/hackathon/scenario3.mdx b/data/workshop/scenario3.mdx
similarity index 100%
rename from data/hackathon/scenario3.mdx
rename to data/workshop/scenario3.mdx
diff --git a/data/hackathon/scenario4.mdx b/data/workshop/scenario4.mdx
similarity index 100%
rename from data/hackathon/scenario4.mdx
rename to data/workshop/scenario4.mdx
diff --git a/data/hackathon/scenario5.mdx b/data/workshop/scenario5.mdx
similarity index 100%
rename from data/hackathon/scenario5.mdx
rename to data/workshop/scenario5.mdx
diff --git a/data/hackathon/scenario6.mdx b/data/workshop/scenario6.mdx
similarity index 100%
rename from data/hackathon/scenario6.mdx
rename to data/workshop/scenario6.mdx