Complete jira and aap initial deployments for ansible meetup talk.
@@ -1,36 +0,0 @@
#+TITLE: Deploying demo infrastructure
#+AUTHOR: James Blair <jablair@redhat.com>
#+DATE: <2023-03-10 Fri 10:15>

This guide will outline the steps to follow to deploy the infrastructure required to run the demo for this talk. Infrastructure provisioning is performed via [[https://www.ansible.com/][ansible]] using the [[https://www.terraform.io/][terraform]] collection.

To run the demo we need one rhel virtual machine. This machine will run our ~microshift~ kubernetes cluster, which will have our ansible automation platform and jira pods deployed.

To get started we need to define some credentials in an ~.env~ file. Note that this file is excluded from version control via an entry in the repo ~.gitignore~ file for security reasons.

#+NAME: Create secret env file
#+begin_src tmate
cat << EOF > .env
export TF_VAR_subscription_pw=placeholder

export TF_VAR_aws_region=ap-southeast-2
export TF_VAR_aws_access_key=placeholder
export TF_VAR_aws_secret_key=placeholder
EOF
#+end_src

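If we want to double check that the file cannot be accidentally committed, ~git check-ignore~ will report which ~.gitignore~ rule matches it. This is just an optional sanity check.

#+NAME: Verify env file is ignored
#+begin_src tmate
# Prints the matching .gitignore rule and the path if .env is excluded
git check-ignore --verbose .env
#+end_src
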
Once secrets have been defined, run the code block below to install our dependencies and run the ansible playbook that will deploy our infrastructure.

#+NAME: Install dependencies and run
#+begin_src tmate
# Source secrets
source ../.env

# Install the certified terraform collection and the awx collection
ansible-galaxy collection install cloud.terraform
ansible-galaxy collection install awx.awx

# Run the deploy playbook
ansible-playbook -i localhost, demo-infra-deploy.yaml
#+end_src

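If the playbook run fails because a collection cannot be found, we can confirm that both collections actually landed in the local collections path. This is an optional troubleshooting step.

#+NAME: Verify collections installed
#+begin_src tmate
# List installed collections and filter for the two we need
ansible-galaxy collection list | grep --extended-regexp "cloud.terraform|awx.awx"
#+end_src
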
@@ -0,0 +1,70 @@
#+TITLE: Deploying demo jira instance
#+AUTHOR: James Blair <jablair@redhat.com>
#+DATE: <2023-03-10 Fri 10:15>

This guide will outline the steps to follow to deploy a demo jira instance to an existing kubernetes cluster. For our purposes that cluster will be an existing [[https://aws.amazon.com/rosa/][ROSA]] cluster running in AWS ~ap-southeast-1~.


* Login to cluster

As mentioned above, we have an existing OpenShift cluster to use for this demo. We will need to log in to the cli to automate the remainder of the jira setup.

#+NAME: Login to openshift
#+begin_src tmate
oc login --kubeconfig ~/.kube/rosa --token=<token> --server=<server>
#+end_src

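Before moving on we can optionally confirm the login worked by checking which user and api server the ~rosa~ kubeconfig is now pointing at.

#+NAME: Verify cluster login
#+begin_src tmate
oc --kubeconfig ~/.kube/rosa whoami
oc --kubeconfig ~/.kube/rosa whoami --show-server
#+end_src
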
* Create kubernetes namespace

Our first step is to create a kubernetes [[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/][namespace]] for our jira deployment.

#+NAME: Create jira namespace
#+begin_src tmate
kubectl --kubeconfig ~/.kube/rosa create namespace jira
#+end_src


* Build and deploy jira

Once we have a namespace we can use a one-line ~oc~ command to create a build process in OpenShift based on our github repository containing a Dockerfile.

This will build a container image within OpenShift and then create a Deployment of that image, which will give us a single running jira pod.

Note: This deployment will not be backed by persistent storage, but for demo purposes this is fine. Do not use this in production...

#+NAME: Build and deploy jira
#+begin_src tmate
# Initiate the build from github
oc --kubeconfig ~/.kube/rosa --namespace jira new-app https://github.com/jmhbnz/docker-atlassian-jira --name jira

# Watch the progress
oc --kubeconfig ~/.kube/rosa --namespace jira logs --follow buildconfig/jira
#+end_src


Once the container image has built successfully, we can verify the jira instance is running by checking the pod status.

#+NAME: Check pod status
#+begin_src tmate
kubectl --kubeconfig ~/.kube/rosa --namespace jira get pods
#+end_src


* Expose jira deployment

With our jira instance now running within our cluster, we can create a ~route~ to expose it outside the cluster.

#+NAME: Expose jira deployment
#+begin_src tmate
oc --kubeconfig ~/.kube/rosa --namespace jira expose service jira
#+end_src


With our route created, let's retrieve it and perform the first-time setup for jira. This is currently a manual process involving obtaining a trial license from [[https://my.atlassian.com/product][atlassian]].

#+NAME: Retrieve jira route
#+begin_src tmate
echo http://$(oc --kubeconfig ~/.kube/rosa --namespace jira get route | grep apps.com | awk '{print $2}')
#+end_src

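Once the route resolves we can also confirm jira is responding over http before opening it in a browser. The check below assumes the route created by ~oc expose~ above is named ~jira~.

#+NAME: Check jira responds
#+begin_src tmate
# Request the jira route and print only the response headers
curl --silent --head http://$(oc --kubeconfig ~/.kube/rosa --namespace jira get route jira --output jsonpath='{.spec.host}')
#+end_src
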
@@ -0,0 +1,85 @@
#+TITLE: Deploying demo aap instance
#+AUTHOR: James Blair <jablair@redhat.com>
#+DATE: <2023-03-10 Fri 10:15>

This guide will outline the steps to follow to deploy a demo ansible automation platform instance to an existing kubernetes cluster. For our purposes that cluster will be an existing [[https://aws.amazon.com/rosa/][ROSA]] cluster running in AWS ~ap-southeast-1~.


* Login to cluster

As mentioned above, we have an existing OpenShift cluster to use for this demo. We will need to log in to the cli to automate the remainder of the aap setup.

#+NAME: Login to openshift
#+begin_src tmate
oc login --kubeconfig ~/.kube/rosa --token=<token> --server=<server>
#+end_src


* Create kubernetes namespace

Our first step is to create a kubernetes [[https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/][namespace]] for our aap deployment.

#+NAME: Create aap namespace
#+begin_src tmate
kubectl --kubeconfig ~/.kube/rosa create namespace aap
#+end_src


* Subscribe to aap operator

Once we have a namespace, we can create a ~Subscription~ custom resource to install the latest version of the Ansible Automation Platform [[https://kubernetes.io/docs/concepts/extend-kubernetes/operator/][operator]].

#+NAME: Subscribe to aap operator
#+begin_src tmate
cat << EOF | kubectl --kubeconfig ~/.kube/rosa apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform-operator
  namespace: aap
spec:
  channel: stable-2.3
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: aap-operator.v2.3.0-0.1677639985
EOF
#+end_src

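The operator can take a few minutes to install. Before creating any custom resources we can optionally check progress by inspecting the subscription and its resulting cluster service version.

#+NAME: Check operator install status
#+begin_src tmate
# The cluster service version phase should eventually report Succeeded
kubectl --kubeconfig ~/.kube/rosa --namespace aap get subscriptions.operators.coreos.com
kubectl --kubeconfig ~/.kube/rosa --namespace aap get clusterserviceversions.operators.coreos.com
#+end_src
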
* Create aap custom resource

Once the operator is installed, we can create an ~AutomationController~ custom resource as outlined below:

#+NAME: Create automation controller
#+begin_src tmate
cat << EOF | kubectl --kubeconfig ~/.kube/rosa apply -f -
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aap-demo
  namespace: aap
spec:
  create_preload_data: false
  route_tls_termination_mechanism: Edge
  garbage_collect_secrets: false
  ingress_type: Route
  loadbalancer_port: 80
  image_pull_policy: IfNotPresent
  projects_storage_size: 8Gi
  task_privileged: false
  projects_storage_access_mode: ReadWriteMany
  projects_persistence: false
  replicas: 1
  admin_user: admin
  loadbalancer_protocol: http
  nodeport_port: 30080
EOF
#+end_src

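The operator will now provision the controller, which usually takes a few minutes while the database and controller pods start. We can follow along by watching the pods in the ~aap~ namespace.

#+NAME: Watch aap pods
#+begin_src tmate
kubectl --kubeconfig ~/.kube/rosa --namespace aap get pods --watch
#+end_src
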
We can obtain the route to access the instance with the command below:

#+NAME: Retrieve aap route
#+begin_src tmate
echo https://$(oc --kubeconfig ~/.kube/rosa --namespace aap get route | grep apps.com | awk '{print $2}')
#+end_src

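To log in to the web console we also need the admin password. The operator normally stores the generated password in a secret named after the instance, so assuming the ~aap-demo~ name used above it can be read back as follows.

#+NAME: Retrieve aap admin password
#+begin_src tmate
# Decode the generated admin password for the aap-demo instance
kubectl --kubeconfig ~/.kube/rosa --namespace aap get secret aap-demo-admin-password --output jsonpath='{.data.password}' | base64 --decode
#+end_src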