
OpenShift Advanced Cluster Management Observability

Introduction

This document captures the environment setup steps for a ~30 minute live demo of the Red Hat Advanced Cluster Management observability feature for OpenShift.

Prerequisites

This guide assumes you:

  • Have access to an Amazon Web Services account with permissions to create resources including S3 buckets and EC2 instances. In my case I have an AWS Blank Open Environment provisioned through the Red Hat demo system.
  • Already have the aws and oc CLI utilities installed.
  • Have registered for a Red Hat account (required for obtaining an OpenShift install image pull secret).

1 - Logging into aws locally

Our first step is to log in to our AWS account locally via the aws CLI, which will prompt for four values: an access key ID, a secret access key, a default region, and a default output format:

aws configure

2 - Creating s3 bucket

After logging in, let's confirm our permissions are working by creating the S3 bucket we will need later on. Note that S3 bucket names are globally unique, so you may need to adjust the name if it is already taken.

aws s3 mb "s3://open-cluster-management-observability" --region "$(aws configure get region)"

3 - Install openshift clusters

With our AWS credentials working, let's move on to deploying the hub and single node OpenShift clusters required for the live demo.

3.1 Download installer tools

Our first step will be to ensure we have the openshift-install CLI tool. We can download it as follows:

# Download the installer
wget "https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-install-linux.tar.gz"

# Extract the archive
tar xf openshift-install-linux.tar.gz
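Before proceeding it's worth quickly confirming the binary extracted correctly and runs:

./openshift-install version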

3.2 Obtain install pull secret

Next we have a manual step: log in to the Red Hat Hybrid Cloud Console and obtain our pull secret, which will be required for our installation configuration.

Open the Console and click Download pull secret. This will download a file called pull-secret.txt which will be used later on.
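As a small sanity check (the file must be valid JSON for the installer to accept it) we can run it through jq, which this guide already relies on later:

jq . pull-secret.txt > /dev/null && echo "pull secret looks valid"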

3.3 Create ssh keys

For access to our soon-to-be-created clusters we need SSH keys; let's generate one for each cluster now via ssh-keygen.

ssh-keygen -t rsa -b 4096 -f ~/.ssh/hubkey -q -N ""
ssh-keygen -t rsa -b 4096 -f ~/.ssh/snokey -q -N ""

3.4 Initiate the hub cluster install

With our install tooling available, let's kick off the installation of our hub cluster by creating a configuration file and then running openshift-install. The hub directory needs to exist before we write the configuration file into it.

mkdir -p hub

cat << EOF > hub/install-config.yaml
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: $(aws route53 list-hosted-zones | jq '.HostedZones[].Name' -r | sed 's/.$//')
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: hub
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: $(aws configure get region)
publish: External
pullSecret: |
  $(cat pull-secret.txt)
sshKey: |
  $(cat ~/.ssh/hubkey.pub)
EOF

Once the configuration file is created we can kick off the install with openshift-install as follows:

./openshift-install create cluster --dir hub --log-level info
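The install takes a while to complete. Once it finishes, the installer writes credentials under hub/auth; as a quick check we can point oc at the new cluster and list its nodes:

export KUBECONFIG="$(pwd)/hub/auth/kubeconfig"
oc get nodes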

3.5 Initiate the sno cluster install

We can run our single node OpenShift cluster install at the same time in a separate terminal to speed things up. The process is the same: first create an install-config.yaml file, then run openshift-install. Again, the target directory needs to exist first.

mkdir -p sno

cat << EOF > sno/install-config.yaml
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: $(aws route53 list-hosted-zones | jq '.HostedZones[].Name' -r | sed 's/.$//')
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 1
metadata:
  creationTimestamp: null
  name: sno
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: $(aws configure get region)
publish: External
pullSecret: |
  $(cat pull-secret.txt)
sshKey: |
  $(cat ~/.ssh/snokey.pub)
EOF

Once the configuration file is created we can kick off the install with openshift-install as follows:

./openshift-install create cluster --dir sno --log-level info
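As with the hub, once this install completes we can verify the cluster came up; being single node openshift it should report exactly one node carrying both control plane and worker roles:

oc --kubeconfig sno/auth/kubeconfig get nodes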

4 - Install advanced cluster management

To make use of the Red Hat Advanced Cluster Management observability feature we first need to install Advanced Cluster Management on our hub cluster via the ACM operator. The oc commands from here on assume your kubeconfig is still pointing at the hub cluster.

Let's get started by creating an OperatorGroup and Subscription which will install the operator.

oc create namespace open-cluster-management

cat << EOF | oc apply --filename -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: acm-operator-group
  namespace: open-cluster-management
spec:
  targetNamespaces:
    - open-cluster-management

---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
  namespace: open-cluster-management
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: release-2.9
  installPlanApproval: Automatic
  name: advanced-cluster-management
EOF
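We can watch the operator installation progress via its ClusterServiceVersion, which should eventually report a Succeeded phase:

oc get csv -n open-cluster-management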

Once the operator is installed we can create the MultiClusterHub resource to install Advanced Cluster Management.

Note: It can take up to ten minutes for this to complete.

cat << EOF | oc apply --filename -
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
EOF
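To know when the installation has finished we can poll the MultiClusterHub status; the phase should move to Running once everything is ready:

oc get multiclusterhub multiclusterhub -n open-cluster-management -o jsonpath='{.status.phase}'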

5 - Enable acm observability

Now, with our clusters deployed and ACM installed, we can enable the observability service by creating a MultiClusterObservability custom resource instance on the hub cluster.

Our first step towards this is to create two secrets: a copy of our image pull secret in the new observability namespace, and the object storage configuration for Thanos.

oc create namespace open-cluster-management-observability

DOCKER_CONFIG_JSON=$(oc extract secret/pull-secret -n openshift-config --to=-)

oc create secret generic multiclusterhub-operator-pull-secret \
    -n open-cluster-management-observability \
    --from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" \
    --type=kubernetes.io/dockerconfigjson


cat << EOF | oc apply --filename -
apiVersion: v1
kind: Secret
metadata:
  name: thanos-object-storage
  namespace: open-cluster-management-observability
type: Opaque
stringData:
  thanos.yaml: |
    type: s3
    config:
      bucket: open-cluster-management-observability
      endpoint: s3.$(aws configure get region).amazonaws.com
      insecure: true
      access_key: $(aws configure get aws_access_key_id)
      secret_key: $(aws configure get aws_secret_access_key)
EOF

Once the two required secrets exist we can create the MultiClusterObservability resource as follows:

cat << EOF | oc apply --filename -
apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml
EOF
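The observability components take a few minutes to roll out. We can keep an eye on the pods in the namespace until everything reports Running:

oc get pods -n open-cluster-management-observability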

Once the pods are up we can access the Grafana console via its Route to confirm everything is running:

echo "https://$(oc get route -n open-cluster-management-observability grafana -o jsonpath={.spec.host})"

6 - Import the single node openshift cluster into acm

To import the single node cluster into ACM we first create its namespace on the hub and then apply the ManagedCluster and KlusterletAddonConfig resources.

oc new-project sno
oc label namespace sno cluster.open-cluster-management.io/managedCluster=sno
cat << EOF | oc apply --filename -
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: sno
spec:
  hubAcceptsClient: true

---
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: sno
  namespace: sno
spec:
  clusterName: sno
  clusterNamespace: sno
  applicationManager:
    enabled: true
  certPolicyController:
    enabled: true
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  iamPolicyController:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
  version: 2.0.0
EOF

The managedcluster-import-controller will generate a secret named sno-import. The sno-import secret contains the crds.yaml and import.yaml that we apply on the managed cluster to install the klusterlet agent.

oc get secret sno-import -n sno -o jsonpath={.data.crds\\.yaml} | base64 --decode > klusterlet-crd.yaml
oc get secret sno-import -n sno -o jsonpath={.data.import\\.yaml} | base64 --decode > import.yaml

oc --kubeconfig sno/auth/kubeconfig apply --filename klusterlet-crd.yaml
oc --kubeconfig sno/auth/kubeconfig apply --filename import.yaml
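Back on the hub we can confirm the import succeeded; once the klusterlet agent connects, the sno cluster should report JOINED and AVAILABLE as True:

oc get managedcluster sno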