Openshift Posts

Deploying Sonarqube to Openshift 4 by Using an Openshift Template

Sonarqube is an automatic code review tool that detects bugs, vulnerabilities, and code smells in your code. It fits into existing CI/CD pipelines as a static code analysis tool, providing a quality gate to make sure that your code and delivery meet a specific quality standard.

Deploying Sonarqube to Openshift is a fairly straightforward activity with multiple possible approaches. Here, I am deploying Sonarqube by using a simple Openshift Template. In short, an Openshift Template is a set of objects that can be parameterized and processed to produce a list of objects for creation by Openshift Container Platform. In other words, we are deploying Sonarqube together with its configuration and surrounding infrastructure.

In this article, I create two different Sonarqube deployments. The first uses an embedded H2 database, which is only recommended for non-production environments; the second uses a Postgresql database, which is suitable for production usage.

The YML below contains the first Sonarqube template. It deploys Sonarqube, sets up its Route, provisions PersistentVolumeClaims, and applies its configuration. Let's name it sonarqube-h2-db-template.yml.

apiVersion: template.openshift.io/v1
kind: Template
labels:
  template: sonarqube-h2-db-template
message: A Sonarqube service has been created in your project. You can log in using admin/admin.
metadata:
  annotations:
    description: |-
      Sonarqube service, with H2 DB.
      NOTE: Data survives restarts, but do not use this setup for production.
    openshift.io/display-name: SonarQube (H2 DB)
    openshift.io/documentation-url: https://docs.sonarqube.org/
    openshift.io/long-description: This template deploys a SonarQube server with an embedded H2 DB.
    tags: instant-app,sonarqube
  creationTimestamp: null
  name: sonarqube-h2-db
objects:
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    annotations:
      template.openshift.io/expose-uri: http://{.spec.host}{.spec.path}
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    to:
      kind: Service
      name: ${SONARQUBE_SERVICE_NAME}
    tls:
      termination: edge
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: ${SONARQUBE_SERVICE_NAME}
      spec:
        containers:
        - image: 'sonarqube:latest'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
              protocol: TCP
          livenessProbe:
            failureThreshold: 30
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 420
            timeoutSeconds: 3
          name: sonarqube
          readinessProbe:
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 3
            timeoutSeconds: 3
          volumeMounts:
            - mountPath: /opt/sonarqube/data
              name: ${SONARQUBE_SERVICE_NAME}-data
            - mountPath: /opt/sonarqube/logs
              name: ${SONARQUBE_SERVICE_NAME}-logs
            - mountPath: /opt/sonarqube/extensions
              name: ${SONARQUBE_SERVICE_NAME}-extensions
          resources:
            requests:
              memory: ${SONARQUBE_MEMORY_LIMITS}
            limits:
              memory: ${SONARQUBE_MEMORY_LIMITS}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${SONARQUBE_SERVICE_NAME}-data
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-data-pv
        - name: ${SONARQUBE_SERVICE_NAME}-logs
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-logs-pv
        - name: ${SONARQUBE_SERVICE_NAME}-extensions
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-extensions-pv
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: Service
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-data-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-logs-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-extensions-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
parameters:
- description: The name of the OpenShift Service exposed for the SonarQube container.
  displayName: SonarQube Service Name
  name: SONARQUBE_SERVICE_NAME
  value: sonar
- description: SonarQube container memory limits.
  displayName: Memory Limits
  name: SONARQUBE_MEMORY_LIMITS
  required: true
  value: 2Gi

Deploy the template to OCP by using the command below:

$ oc create -f sonarqube-h2-db-template.yml
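
To confirm the template is registered, we can list it in the current project (a quick check; templates created this way live in the project where you ran oc create):

$ oc get template sonarqube-h2-db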

We can then instantiate the template from the Openshift web console.

Or we can instantiate it by using an oc command instead:

$ oc new-app --template sonarqube-h2-db
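
The template parameters can also be overridden at instantiation time; for example (the values below are arbitrary, just to illustrate the -p flag):

$ oc new-app --template sonarqube-h2-db -p SONARQUBE_SERVICE_NAME=mysonar -p SONARQUBE_MEMORY_LIMITS=3Gi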

The second template deploys Sonarqube with a separate Postgresql database. For this we need to prepare the Sonarqube and Postgresql configurations, Services, and PersistentVolumeClaims.

apiVersion: template.openshift.io/v1
kind: Template
labels:
  template: sonarqube-pgsql-db-template
message: A Sonarqube service has been created in your project. You can log in using admin/admin.
metadata:
  annotations:
    description: |-
      Sonarqube service, with Postgresql DB.
      NOTE: Data survives restarts; suitable for production usage.
    openshift.io/display-name: SonarQube (Postgresql DB)
    openshift.io/documentation-url: https://docs.sonarqube.org/
    openshift.io/long-description: This template deploys a SonarQube server with a standalone Postgresql DB.
    tags: instant-app,sonarqube
  creationTimestamp: null
  name: sonarqube-pgsql-db
objects:
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    annotations:
      template.openshift.io/expose-uri: http://{.spec.host}{.spec.path}
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    to:
      kind: Service
      name: ${SONARQUBE_SERVICE_NAME}
    tls:
      termination: edge
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: ${SONARQUBE_SERVICE_NAME}
      spec:
        containers:
        - image: 'sonarqube:latest'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
              protocol: TCP
          livenessProbe:
            failureThreshold: 30
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 420
            timeoutSeconds: 3
          env:
            - name: SONAR_JDBC_URL
              value: jdbc:postgresql://${SONARQUBE_SERVICE_NAME}-pgsql:5432/sonar
            - name: SONAR_JDBC_USERNAME
              value: sonar
            - name: SONAR_JDBC_PASSWORD
              value: sonar
          name: sonarqube
          readinessProbe:
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 3
            timeoutSeconds: 3
          volumeMounts:
            - mountPath: /opt/sonarqube/data
              name: ${SONARQUBE_SERVICE_NAME}-data
            - mountPath: /opt/sonarqube/logs
              name: ${SONARQUBE_SERVICE_NAME}-logs
            - mountPath: /opt/sonarqube/extensions
              name: ${SONARQUBE_SERVICE_NAME}-extensions
          resources:
            requests:
              memory: ${SONARQUBE_MEMORY_LIMITS}
            limits:
              memory: ${SONARQUBE_MEMORY_LIMITS}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${SONARQUBE_SERVICE_NAME}-data
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-data-pv
        - name: ${SONARQUBE_SERVICE_NAME}-logs
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-logs-pv
        - name: ${SONARQUBE_SERVICE_NAME}-extensions
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-extensions-pv
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: Service
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-data-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-logs-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-extensions-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce

# postgresql starts here
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
    name: ${SONARQUBE_SERVICE_NAME}-pgsql
  spec:
    replicas: 1
    selector:
      name: ${SONARQUBE_SERVICE_NAME}-pgsql
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: ${SONARQUBE_SERVICE_NAME}-pgsql
      spec:
        containers:
        - env:
          - name: POSTGRESQL_USER
            value: sonar
          - name: POSTGRESQL_PASSWORD
            value: sonar
          - name: POSTGRESQL_DATABASE
            value: sonar
          image: 'image-registry.openshift-image-registry.svc:5000/openshift/postgresql:12'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
              - /usr/libexec/check-container
              - --live
            initialDelaySeconds: 120
            timeoutSeconds: 10
          name: postgresql
          ports:
          - containerPort: 5432
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /usr/libexec/check-container
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            limits:
              memory: 512Mi
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/pgsql/data
            name: ${SONARQUBE_SERVICE_NAME}-pgsql-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${SONARQUBE_SERVICE_NAME}-pgsql-data
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-pgsql-data
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-pgsql-data
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      template.openshift.io/expose-uri: postgres://{.spec.clusterIP}:{.spec.ports[?(.name=="postgresql")].port}
    name: ${SONARQUBE_SERVICE_NAME}-pgsql
  spec:
    ports:
    - name: postgresql
      nodePort: 0
      port: 5432
      protocol: TCP
      targetPort: 5432
    selector:
      name: ${SONARQUBE_SERVICE_NAME}-pgsql
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}

parameters:
- description: The name of the OpenShift Service exposed for the SonarQube container.
  displayName: SonarQube Service Name
  name: SONARQUBE_SERVICE_NAME
  value: sonar
- description: SonarQube container memory limits.
  displayName: Memory Limits
  name: SONARQUBE_MEMORY_LIMITS
  required: true
  value: 2Gi

Deploy the above template to OCP by using the command below:

$ oc create -f sonarqube-pgsql-db-template.yml
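
Then instantiate it the same way as the H2 variant, and watch both the Sonarqube and Postgresql pods come up:

$ oc new-app --template sonarqube-pgsql-db
$ oc get pods -w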

The code for the above templates can be found at the Github URL below:

https://github.com/edwin/sonarqube-on-openshift-4

Hope it helps, and have fun using Sonarqube.

Deploy a Java Application to Openshift by Using a Tekton Pipeline

Tekton is one of the CI/CD tools we can use for building and deploying applications; it provides a lightweight yet powerful and flexible cloud-native CI/CD framework. In this sample, I am going to demonstrate building and deploying a Java Spring Boot application to Openshift.

First we need to create two different Openshift Namespaces:

oc new-project edwin-pipeline
oc new-project edwin-deploy

So let’s start by creating a new PVC for storing our build artifacts. This PVC is shared between Tasks, and we’ll store the artifacts under a different folder per run, based on the PipelineRun’s uid variable, to prevent runs from overwriting one another.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tekton-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
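
Create the PVC in the pipeline namespace (assuming the YML above is saved as pvc.yml; the sample repository uses its own file names):

oc create -f pvc.yml -n edwin-pipeline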

And a Pipeline YML file for orchestrating our build and deployment steps:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-world-java-build-and-deploy
spec:
  params:
    - name: uid
      description: the uid
  workspaces:
    - name: task-pvc
  tasks:
  - name: git-clone
    taskRef:
      name: git-clone
    params:
    - name: app-git
      value: https://github.com/edwin/spring-boot-hello-world
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
  - name: build
    taskRef:
      name: mvn
    runAfter: ["git-clone"]
    params:
    - name: goal
      value: "package"
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
  - name: test
    taskRef:
      name: mvn
    runAfter: ["build"]
    params:
    - name: goal
      value: "test"
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
  - name: integration-test
    taskRef:
      name: mvn
    runAfter: ["build"]
    params:
    - name: goal
      value: "test"
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
  - name: deploy
    taskRef:
      name: deploy-and-clean
    runAfter: ["integration-test","test"]
    params:
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
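
Note that the test and integration-test Tasks both declare runAfter: ["build"], so they run in parallel, and the deploy Task waits for both of them to finish. If the tkn CLI is installed, we can inspect this graph directly:

tkn pipeline describe hello-world-java-build-and-deploy -n edwin-pipeline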

The above Pipeline references several Tasks. Let’s start with a Task that clones the code from Github:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone
spec:
  params:
    - name: app-git
      description: the git repo
    - name: uid
      description: uid
  workspaces:
    - name: task-pvc
      mountPath: /workspace/source 
  steps:
    - name: git-clone
      command: ["/bin/sh", "-c"]
      args:
        - | 
          set -e
          echo "git clone";
          mkdir /workspace/source/$(params.uid) && cd /workspace/source/$(params.uid);
          git clone $(params.app-git) /workspace/source/$(params.uid);
      image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven

And a simple Maven build Task:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mvn
spec:
  params:
    - name: goal
      description: mvn goal
    - name: uid
      description: uid
  workspaces:
    - name: task-pvc
      mountPath: /workspace/source 
  steps:
    - name: mvn
      command: ["/bin/sh", "-c"]
      args:
        - | 
          set -e
          echo "mvn something";
          cd /workspace/source/$(params.uid);
          mvn $(params.goal) -Dmaven.repo.local=/workspace/source/m2;
      image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven

The last Task performs the deployment and removes the build folder:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-and-clean
spec:
  params:
    - name: uid
      description: uid
  workspaces:
    - name: task-pvc
      mountPath: /workspace/source 
  steps:
    - name: deploy
      command: ["/bin/sh", "-c"]
      args:
        - | 
          set -e
          cd /workspace/source/$(params.uid) ;
          mkdir build-folder ;
          cp target/*.jar build-folder/ ;
          oc login --insecure-skip-tls-verify --token=my-openshift-token --server=https://api.my-openshift-url.com:6443 ;
          oc new-build  --name hello-world --binary=true -n edwin-deploy --image-stream=openjdk-11  || true ;
          oc start-build hello-world --from-dir=build-folder/. -n edwin-deploy --follow --wait ;
          oc new-app hello-world -n edwin-deploy || true ;
          oc expose svc/hello-world -n edwin-deploy || true ;
          cd / ;
          rm -Rf /workspace/source/$(params.uid) ;
      image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven
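
With the Tasks and Pipeline written, create them in the pipeline namespace as well (again, the file names here are placeholders):

oc create -f task-git-clone.yml -n edwin-pipeline
oc create -f task-mvn.yml -n edwin-pipeline
oc create -f task-deploy-and-clean.yml -n edwin-pipeline
oc create -f pipeline.yml -n edwin-pipeline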

And the last piece is a PipelineRun YML file for running the whole Pipeline:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: hello-world-run
spec:
  params:
    - name: uid
      value: $(context.pipelineRun.uid)
  pipelineRef:
    name: hello-world-java-build-and-deploy
  workspaces:
    - name: task-pvc
      persistentVolumeClaim:
        claimName: tekton-pvc

And run it by using the command below:

oc create -f 06.pipeline-run.yml -n edwin-pipeline
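
To watch the run’s progress, we can tail its logs with the tkn CLI (assuming it is installed; --last picks the most recent run):

tkn pipelinerun logs --last -f -n edwin-pipeline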

Code for this sample can be found at the Github link below:

https://github.com/edwin/java-app-and-tekton-pipeline-sample

We are using UBI8 OpenJDK 11 as the base image for running our Java app. We can import it into our OCP cluster by using the command below:

oc import-image openjdk-11 --from=registry.access.redhat.com/ubi8/openjdk-11 --confirm
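
To confirm the import succeeded, list the resulting ImageStream in the project you imported it into:

oc get is openjdk-11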

Finally, the result will look like this:

Thanks

P.S. The Openshift version used is 4.8.2, and I am using Openshift’s jenkins-agent-maven image for all builds and deployments. Feel free to use another image if needed.

Exposing the Openshift Prometheus API and Displaying It on External Monitoring Tools

A question came up during a discussion with my colleague regarding how we can monitor applications deployed on top of Openshift. Openshift has its own monitoring stack, but sometimes we need an external monitoring tool for our distributed applications, especially when they are deployed across multiple Openshift clusters.

In the end, the high-level concept is pretty much like this:

But first, in order to achieve this, we need to make sure that our thanos-querier endpoint is accessible by the external Grafana while remaining secure.

Before we go there, let’s start by creating an “mw” namespace and deploying a simple Java app there:

oc new-project mw

oc new-app registry.access.redhat.com/ubi8/openjdk-8~https://github.com/edwin/hello-world-fuse-on-ocp -n mw

Create a new ServiceAccount:

oc create sa ext-monitor -n mw

And give it the “cluster-monitoring-view” cluster role:

oc adm policy add-cluster-role-to-user cluster-monitoring-view -z ext-monitor -n mw

The next step is getting the ServiceAccount JWT token by using the command below:

oc sa get-token ext-monitor -n mw

It will generate a long JWT string; save it somewhere.
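
To verify the token works before wiring up Grafana, we can query the thanos-querier route directly (a quick sanity check; thanos-querier in the openshift-monitoring namespace is the default on OCP 4):

TOKEN=$(oc sa get-token ext-monitor -n mw)
THANOS_HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
curl -k -H "Authorization: Bearer $TOKEN" "https://$THANOS_HOST/api/v1/query?query=up"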

Next is setting up our own external monitoring tool by using Grafana, and logging in with the admin/admin credentials:

docker pull grafana/grafana

docker run -d -p 3000:3000 grafana/grafana

Create a new Data source, and select Prometheus as the Data source type.

Fill in the form, and put our thanos-querier URL as the HTTP URL.

Create a new HTTP header, putting Authorization as the key and “Bearer (your ServiceAccount JWT token)” as the value. We can add custom query parameters to define which namespace is to be monitored.

Press the Save & Test button afterwards.

The next step is creating a dashboard,

And an empty Panel,

Change the panel’s Data source to our newly created one, and run the query below in the Metrics browser field:

sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate{namespace='mw'}) by (pod)
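
Other queries work the same way; for example, this one (relying on the standard cadvisor metrics that Openshift monitoring exposes) charts per-pod memory usage in the same namespace:

sum(container_memory_working_set_bytes{namespace='mw', container!=''}) by (pod)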

The result should look like the image below.

FYI, in this tutorial I am using Openshift 4.8.

How to Solve Openshift “Failed to pull image, unauthorized: authentication required”

I recently ran into a unique error that happens when an application pulls an image from a different Openshift namespace. In this example, I am creating my application in “xyz-project” and trying to pull an image from “abc-project”. Here’s the complete error detail:

Failed to pull image "image-registry.openshift-image-registry.svc:5000/abc-project/image01@sha256:xxxxxxxxxxxx": 
rpc error: code = Unknown desc = Error reading manifest sha256:xxxxxxxxxxxx in 
image-registry.openshift-image-registry.svc:5000/abc-project/image01: unauthorized: authentication required

The solution is quite easy: we need to grant a specific access right so that “xyz-project” is able to pull images from “abc-project”:

oc policy add-role-to-user system:image-puller system:serviceaccount:xyz-project:default -n abc-project
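
To double-check that the grant took effect, oc can list who holds the relevant permission (pulling from the internal registry requires get on imagestreams/layers):

oc adm policy who-can get imagestreams/layers -n abc-project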

Hope it helps.

Get ImageStream Name and SHA from All DeploymentConfig within a Namespace on Openshift 4

There are times when we want to list the DeploymentConfigs within a Namespace and see which images each one uses. We can do that easily by using a simple oc command like the one below:

oc get dc -n <namespace> --no-headers -o template \
     --template='{{range .items}}{{.metadata.namespace}}{{"/"}}{{.metadata.name}}{{" - "}}{{(index .spec.template.spec.containers 0).image}}{{"\n"}}{{end}}'
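
An equivalent using jsonpath output, in case you prefer it over go-template syntax (same fields, different formatter; like the original, it only prints each DC’s first container):

oc get dc -n <namespace> -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} - {.spec.template.spec.containers[0].image}{"\n"}{end}'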