#+TITLE: RHACS Workflows & Integration
#+AUTHOR: James Blair
#+DATE: <2023-07-29 Sat 23:15>
This is a short demo I gave on [[https://www.redhat.com/en/technologies/cloud-computing/openshift/advanced-cluster-security-kubernetes][Red Hat Advanced Cluster Security]].
* Pre-requisites
This demo setup process assumes you already have an OpenShift 4.12+ cluster running, and are logged into the ~oc~ cli locally with cluster administration privileges.
For this demo I have an OpenShift ~4.12.12~ cluster running on AWS provisioned through the [[https://demo.redhat.com/catalog?item=babylon-catalog-prod/sandboxes-gpte.elt-ocp4-hands-on-acs.prod&utm_source=webapp&utm_medium=share-link][Red Hat Demo system]].
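The source blocks throughout this document read credentials from a local ~.env~ file via ~export $(cat .env)~. A sketch of the variables it needs to define, with placeholder values only:
#+begin_src bash :results silent
# Expected keys in .env, one KEY=value per line with no spaces so export $(cat .env) works.
# The values below are placeholders - substitute credentials for your own environment.
cat << 'EOF'
openshift_token=<openshift api token>
openshift_apiserver=<openshift api server url>
rox_central_endpoint=<rhacs central route hostname>
rox_admin_password=<rhacs central admin password>
rox_api_token=<rhacs api token>
jira_username=<jira account email>
jira_api_token=<jira api token>
EOF
#+end_src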
#+NAME: Check oc status
#+begin_src bash :results silent
export $(cat .env)
oc login --token="${openshift_token}" --server="${openshift_apiserver}" --insecure-skip-tls-verify=true
oc version | grep Server
oc status
#+end_src
* Developer workflow integration
A key element of any cloud native security platform is how it can be incorporated into software development workflows, so that security teams gain visibility of emerging security issues and developers understand the security posture of what they are building.
For this demonstration we will be using [[https://developers.redhat.com/products/openshift-dev-spaces/overview][OpenShift Dev Spaces]] as a cloud-based development environment, and [[https://marketplace.visualstudio.com/items?itemName=redhat.vscode-tekton-pipelines][OpenShift Pipelines]] for a continuous integration environment.
** Install dev spaces operator
The first step in preparing the demo is to install the dev spaces operator so our cluster is able to create cloud-based development environments. We can install the operator programmatically by creating a ~subscription~ resource:
#+begin_src bash :results silent
cat << EOF | oc apply --filename -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: devspaces
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: devspaces
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
#+end_src
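Before creating the controller it's worth confirming the operator install has completed. A minimal check, assuming standard OLM subscription and cluster service version behaviour:
#+begin_src bash :results silent
# Show which cluster service version the subscription resolved to (empty until install begins)
oc get subscription devspaces --namespace openshift-operators \
    --output jsonpath='{.status.installedCSV}{"\n"}'

# The resolved csv should report a Succeeded phase once the operator is ready
oc get clusterserviceversion --namespace openshift-operators | grep devspaces
#+end_src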
** Create devspaces controller
Once the operator is installed we can create a devspaces controller instance, which is what is actually responsible for instantiating new individual developer workspaces.
Once again we can do this programmatically by creating a ~checluster~ resource:
#+begin_src bash :results silent
cat << EOF | oc apply --filename -
apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-operators
spec:
  components:
    cheServer:
      debug: false
      logLevel: INFO
    dashboard: {}
    database:
      externalDb: false
    devWorkspace: {}
    devfileRegistry: {}
    imagePuller:
      enable: false
      spec: {}
    metrics:
      enable: true
    pluginRegistry: {}
  containerRegistry: {}
  devEnvironments:
    containerBuildConfiguration:
      openShiftSecurityContextConstraint: container-build
    defaultNamespace:
      autoProvision: true
      template: <username>-devspaces
    maxNumberOfWorkspacesPerUser: -1
    secondsOfInactivityBeforeIdling: 36000
    secondsOfRunBeforeIdling: -1
    startTimeoutSeconds: 300
    storage:
      pvcStrategy: per-user
  gitServices: {}
  networking:
    auth:
      gateway:
        configLabels:
          app: che
          component: che-gateway-config
EOF
#+end_src
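The controller rollout can take a few minutes. A quick way to check progress, assuming the ~checluster~ status reports a ~chePhase~ field as in recent operator versions:
#+begin_src bash :results silent
# List the dev spaces workload pods created alongside the checluster
oc get pods --namespace openshift-operators

# Report the overall phase of the checluster, which should eventually read Active
oc get checluster devspaces --namespace openshift-operators \
    --output jsonpath='{.status.chePhase}{"\n"}'
#+end_src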
** Create individual dev space
Once the dev workspace operator and controller are ready we can create our individual developer workspace.
#+begin_src bash :results silent
oc new-project opentlc-mgr-devspaces
cat << EOF | oc apply --filename -
kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha2
metadata:
  name: vscode
  namespace: opentlc-mgr-devspaces
spec:
  started: true
  template:
    projects:
      - name: talks
        git:
          remotes:
            origin: "https://github.com/jmhbnz/talks.git"
    components:
      - name: dev
        container:
          image: quay.io/devfile/universal-developer-image:latest
    commands:
      - id: build
        exec:
          component: dev
          commandLine: make build
          workingDir: ${PROJECT_SOURCE}/2023-07-31-acs-workflows/guestbook/
      - id: test
        exec:
          component: dev
          commandLine: make test
          workingDir: ${PROJECT_SOURCE}/2023-07-31-acs-workflows/guestbook/
  contributions:
    - name: che-code
      uri: https://eclipse-che.github.io/che-plugin-registry/main/v3/plugins/che-incubator/che-code/latest/devfile.yaml
      components:
        - name: che-code-runtime-description
          container:
            env:
              - name: CODE_HOST
                value: 0.0.0.0
EOF
#+end_src
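After applying the resource we can confirm the workspace has started and retrieve its URL. This relies on the printer columns and ~status.mainUrl~ field the DevWorkspace CRD exposes, so treat the exact field name as an assumption:
#+begin_src bash :results silent
# The PHASE column should move to Running once the workspace pod is up
oc get devworkspace vscode --namespace opentlc-mgr-devspaces

# Pull the editor url out of the workspace status to open it in a browser
oc get devworkspace vscode --namespace opentlc-mgr-devspaces \
    --output jsonpath='{.status.mainUrl}{"\n"}'
#+end_src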
** Configure rhacs ocp registry
The pipeline we will run shortly to deploy our sample application includes steps for scanning the built image with the ~roxctl~ command line utility for Red Hat Advanced Cluster Security. In order for these scans to work we need to configure Red Hat Advanced Cluster Security with an integration for the [[https://docs.openshift.com/acs/4.1/integration/integrate-with-image-registries.html#manual-configuration-image-registry-ocp_integrate-with-image-registries][openshift internal image registry]], which is used by the pipeline.
We can configure that automatically using the ~imageintegrations~ api:
#+begin_src bash :results silent
export $(cat .env)
curl -v "https://${rox_central_endpoint}/v1/imageintegrations" \
--user "admin:${rox_admin_password}" \
--header 'content-type: application/json' \
--data-raw '{"id":"","name":"ocp-internal","categories":["REGISTRY"],"docker":{"endpoint":"image-registry.openshift-image-registry.svc:5000","username":"opentlc-mgr","password":"'"$(oc whoami --show-token)"'","insecure":true},"autogenerated":false,"clusterId":"","clusters":[],"skipTestIntegration":false,"type":"docker"}' \
--insecure
#+end_src
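To double check the integration was accepted we can read it back from the same api. The ~.integrations~ field in the ~jq~ filter is an assumption about the response shape:
#+begin_src bash :results silent
export $(cat .env)

# List image integrations and pick out the one we just created
curl --silent "https://${rox_central_endpoint}/v1/imageintegrations" \
    --user "admin:${rox_admin_password}" \
    --insecure | jq '.integrations[] | select(.name == "ocp-internal")'
#+end_src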
** Deploy sample application
In order to showcase incorporating ~roxctl~ into developer workflows we need a sample application to tinker with. For our purposes a small version of the classic kubernetes guestbook app is included in the ~guestbook/~ subdirectory.
We can deploy the application to our OpenShift cluster using the collection of yaml manifests in the ~guestbook/manifests/~ subdirectory. These create a new ~imagestream~ along with build and deploy ~pipeline~ definitions that together build our image and create the application ~deployment~. We then kick everything off with the included ~pipelinerun~ resource.
The pipeline relies on a secret containing our ~roxctl~ credentials, so let's create that now as well.
#+begin_src bash :results silent
export $(cat .env)
oc new-project guestbook
oc create secret generic roxsecrets \
--from-literal=rox_api_token="${rox_api_token}" \
--from-literal=rox_central_endpoint="${rox_central_endpoint}" \
--dry-run=client --output=yaml \
| oc apply --filename -
oc apply --filename guestbook/manifests/imagestream.yaml
oc apply --filename guestbook/manifests/build-pipeline.yaml
oc apply --filename guestbook/manifests/deploy-pipeline.yaml
oc apply --filename guestbook/manifests/build-pipelinerun.yaml
#+end_src
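With the manifests applied the build pipeline should now be running. A couple of optional ways to follow progress from the terminal, the second assuming the ~tkn~ tekton cli is installed:
#+begin_src bash :results silent
# List pipeline runs in the guestbook project and check their status
oc get pipelinerun --namespace guestbook

# Optionally tail the logs of the most recent pipeline run with the tekton cli
tkn pipelinerun logs --last --follow --namespace guestbook
#+end_src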
** Add jira integration
To help reduce the manual burden on security teams we can automate the process of creating jira issues for teams by adding a jira integration.
For jira we can use the ~notifiers~ api to add the new integration. Note that the payload includes the jira url, issue type and severity to priority mappings:
#+begin_src bash :results silent
export $(cat .env)
curl "https://${rox_central_endpoint}/v1/notifiers" \
--user "admin:${rox_admin_password}" \
-H 'content-type: application/json' \
--data-raw '{"id":"","name":"jira-cloud","jira":{"username":"'"${jira_username}"'","password":"'"${jira_api_token}:"'","issueType":"Task","url":"https://jablairdemo.atlassian.net","priorityMappings":[{"severity":"CRITICAL_SEVERITY","priorityName":"Highest"},{"severity":"HIGH_SEVERITY","priorityName":"High"},{"severity":"MEDIUM_SEVERITY","priorityName":"Medium"},{"severity":"LOW_SEVERITY","priorityName":"Low"}],"defaultFieldsJson":""},"labelDefault":"DEV","labelKey":"","uiEndpoint":"https://central-stackrox.apps.cluster-7228t.7228t.sandbox2400.opentlc.com","type":"jira"}' \
--insecure
#+end_src
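As with the registry integration we can read the notifier back to confirm it was created. The ~.notifiers~ field in the ~jq~ filter is an assumption about the response shape:
#+begin_src bash :results silent
export $(cat .env)

# List notifiers and pick out the jira integration we just created
curl --silent "https://${rox_central_endpoint}/v1/notifiers" \
    --user "admin:${rox_admin_password}" \
    --insecure | jq '.notifiers[] | select(.name == "jira-cloud")'
#+end_src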
Once a jira integration has been created it can then be attached to specific policies, so that new violations of those policies automatically raise jira issues.
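A sketch of doing that attachment through the api rather than the console is below. It assumes each policy object carries a ~notifiers~ array of notifier ids and uses the default ~Fixable Severity at least Important~ policy purely as an example:
#+begin_src bash :results silent
export $(cat .env)

# Look up the id of the jira-cloud notifier created above
notifier_id=$(curl --silent "https://${rox_central_endpoint}/v1/notifiers" \
    --user "admin:${rox_admin_password}" --insecure \
    | jq --raw-output '.notifiers[] | select(.name == "jira-cloud") | .id')

# Look up the id of the example policy we want to notify on
policy_id=$(curl --silent "https://${rox_central_endpoint}/v1/policies" \
    --user "admin:${rox_admin_password}" --insecure \
    | jq --raw-output '.policies[] | select(.name == "Fixable Severity at least Important") | .id')

# Append the notifier id to the policy's notifiers array and update the policy in place
curl --silent "https://${rox_central_endpoint}/v1/policies/${policy_id}" \
    --user "admin:${rox_admin_password}" --insecure \
    | jq --arg id "${notifier_id}" '.notifiers += [$id]' \
    | curl --silent --request PUT "https://${rox_central_endpoint}/v1/policies/${policy_id}" \
        --user "admin:${rox_admin_password}" --insecure \
        --header 'content-type: application/json' --data @-
#+end_src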