Programming Posts

Deploying SonarQube to OpenShift 4 by Using an OpenShift Template

SonarQube is an automatic code review tool that detects bugs, vulnerabilities, and code smells in your code. It fits in with existing CI/CD tools as a static code analysis tool, and provides a quality gate to make sure that your code and deliveries meet a specific quality standard.

Deploying SonarQube to OpenShift should be a fairly straightforward activity with multiple possible approaches. Here, I am deploying SonarQube by using a simple OpenShift Template. In short, an OpenShift Template is a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. In other words, we are deploying SonarQube together with its configuration and surrounding infrastructure.

In this article, I am creating two different SonarQube deployments. The first uses the embedded H2 database, which is only recommended for non-production environments; the second uses a PostgreSQL database, which is suitable for production usage.

The YAML below contains the first SonarQube template. It deploys SonarQube, sets up its Route, provisions PersistentVolumeClaims, and applies its configuration. Let's name it sonarqube-h2-db-template.yml

apiVersion: v1
kind: Template
labels:
  template: sonarqube-h2-db-template
message: A Sonarqube service has been created in your project. You can access using admin/admin.
metadata:
  annotations:
    description: |-
      Sonarqube service, with H2 DB.
      NOTE: Data will survive restarts, but don't use this for production usage.
    openshift.io/display-name: SonarQube (H2 DB)
    openshift.io/documentation-url: https://docs.sonarqube.org/
    openshift.io/long-description: This template deploys a SonarQube server with an embeddable H2 DB.
    tags: instant-app,sonarqube
  creationTimestamp: null
  name: sonarqube-h2-db
objects:
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      template.openshift.io/expose-uri: http://{.spec.host}{.spec.path}
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    to:
      kind: Service
      name: ${SONARQUBE_SERVICE_NAME}
    tls:
      termination: edge
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: ${SONARQUBE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          image: 'sonarqube:latest'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
              protocol: TCP
          livenessProbe:
            failureThreshold: 30
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 420
            timeoutSeconds: 3
          name: sonarqube
          readinessProbe:
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 3
            timeoutSeconds: 3
          volumeMounts:
            - mountPath: /opt/sonarqube/data
              name: ${SONARQUBE_SERVICE_NAME}-data
            - mountPath: /opt/sonarqube/logs
              name: ${SONARQUBE_SERVICE_NAME}-logs
            - mountPath: /opt/sonarqube/extensions
              name: ${SONARQUBE_SERVICE_NAME}-extensions
          resources:
            requests:
              memory: ${SONARQUBE_MEMORY_LIMITS}
            limits:
              memory: ${SONARQUBE_MEMORY_LIMITS}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${SONARQUBE_SERVICE_NAME}-data
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-data-pv
        - name: ${SONARQUBE_SERVICE_NAME}-logs
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-logs-pv
        - name: ${SONARQUBE_SERVICE_NAME}-extensions
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-extensions-pv
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: Service
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-data-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-logs-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-extensions-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
parameters:
- description: The name of the OpenShift Service exposed for the SonarQube container.
  displayName: SonarQube Service Name
  name: SONARQUBE_SERVICE_NAME
  value: sonar
- description: SonarQube container memory limits.
  displayName: Memory Limits
  name: SONARQUBE_MEMORY_LIMITS
  required: true
  value: 2Gi

Deploy the template to OCP by using the command below,

$ oc create -f sonarqube-h2-db-template.yml

Then we can instantiate the template from the OpenShift web console,

Or we can instantiate it by using an oc command instead,

$ oc new-app --template sonarqube-h2-db
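
In case the defaults need adjusting, the template parameters defined at the bottom of the YAML can be overridden at instantiation time with -p; for example (the values here are just illustrative),

$ oc new-app --template sonarqube-h2-db \
    -p SONARQUBE_SERVICE_NAME=sonar-dev \
    -p SONARQUBE_MEMORY_LIMITS=3Gi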

The second template deploys SonarQube with a separate PostgreSQL database. So we need to prepare both the SonarQube and PostgreSQL configurations, Services, and PersistentVolumeClaims. Let's name this one sonarqube-pgsql-db-template.yml

apiVersion: v1
kind: Template
labels:
  template: sonarqube-pgsql-db-template
message: A Sonarqube service has been created in your project. You can access using admin/admin.
metadata:
  annotations:
    description: |-
      Sonarqube service, with Postgresql DB.
      NOTE: Data will survive restarts; preferable for production usage.
    openshift.io/display-name: SonarQube (Postgresql DB)
    openshift.io/documentation-url: https://docs.sonarqube.org/
    openshift.io/long-description: This template deploys a SonarQube server with a standalone Postgresql DB.
    tags: instant-app,sonarqube
  creationTimestamp: null
  name: sonarqube-pgsql-db
objects:
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      template.openshift.io/expose-uri: http://{.spec.host}{.spec.path}
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    to:
      kind: Service
      name: ${SONARQUBE_SERVICE_NAME}
    tls:
      termination: edge
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: ${SONARQUBE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          image: 'sonarqube:latest'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
              protocol: TCP
          livenessProbe:
            failureThreshold: 30
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 420
            timeoutSeconds: 3
          env:
            - name: SONAR_JDBC_URL
              value: jdbc:postgresql://${SONARQUBE_SERVICE_NAME}-pgsql:5432/sonar
            - name: SONAR_JDBC_USERNAME
              value: sonar
            - name: SONAR_JDBC_PASSWORD
              value: sonar
          name: sonarqube
          readinessProbe:
            httpGet:
              path: /
              port: 9000
            initialDelaySeconds: 3
            timeoutSeconds: 3
          volumeMounts:
            - mountPath: /opt/sonarqube/data
              name: ${SONARQUBE_SERVICE_NAME}-data
            - mountPath: /opt/sonarqube/logs
              name: ${SONARQUBE_SERVICE_NAME}-logs
            - mountPath: /opt/sonarqube/extensions
              name: ${SONARQUBE_SERVICE_NAME}-extensions
          resources:
            requests:
              memory: ${SONARQUBE_MEMORY_LIMITS}
            limits:
              memory: ${SONARQUBE_MEMORY_LIMITS}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${SONARQUBE_SERVICE_NAME}-data
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-data-pv
        - name: ${SONARQUBE_SERVICE_NAME}-logs
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-logs-pv
        - name: ${SONARQUBE_SERVICE_NAME}-extensions
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-extensions-pv
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: Service
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}
  spec:
    ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
    selector:
      name: ${SONARQUBE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-data-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-logs-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-extensions-pv
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce

# postgresql starts here
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: "true"
    name: ${SONARQUBE_SERVICE_NAME}-pgsql
  spec:
    replicas: 1
    selector:
      name: ${SONARQUBE_SERVICE_NAME}-pgsql
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: ${SONARQUBE_SERVICE_NAME}-pgsql
      spec:
        containers:
        - capabilities: {}
          env:
          - name: POSTGRESQL_USER
            value: sonar
          - name: POSTGRESQL_PASSWORD
            value: sonar
          - name: POSTGRESQL_DATABASE
            value: sonar
          image: 'image-registry.openshift-image-registry.svc:5000/openshift/postgresql:12'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
              - /usr/libexec/check-container
              - --live
            initialDelaySeconds: 120
            timeoutSeconds: 10
          name: postgresql
          ports:
          - containerPort: 5432
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /usr/libexec/check-container
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            limits:
              memory: 512Mi
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/pgsql/data
            name: ${SONARQUBE_SERVICE_NAME}-pgsql-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${SONARQUBE_SERVICE_NAME}-pgsql-data
          persistentVolumeClaim:
            claimName: ${SONARQUBE_SERVICE_NAME}-pgsql-data
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${SONARQUBE_SERVICE_NAME}-pgsql-data
  spec:
    resources:
      requests:
        storage: 1Gi
    accessModes:
    - ReadWriteOnce
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      template.openshift.io/expose-uri: postgres://{.spec.clusterIP}:{.spec.ports[?(.name=="postgresql")].port}
    name: ${SONARQUBE_SERVICE_NAME}-pgsql
  spec:
    ports:
    - name: postgresql
      nodePort: 0
      port: 5432
      protocol: TCP
      targetPort: 5432
    selector:
      name: ${SONARQUBE_SERVICE_NAME}-pgsql
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}

parameters:
- description: The name of the OpenShift Service exposed for the SonarQube container.
  displayName: SonarQube Service Name
  name: SONARQUBE_SERVICE_NAME
  value: sonar
- description: SonarQube container memory limits.
  displayName: Memory Limits
  name: SONARQUBE_MEMORY_LIMITS
  required: true
  value: 2Gi

Deploy the above template to OCP by using the command below,

$ oc create -f sonarqube-pgsql-db-template.yml
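
As with the H2 variant, we can then instantiate the template from the web console or with an oc command, overriding parameters with -p where needed,

$ oc new-app --template sonarqube-pgsql-db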

The code for the above templates can be found at the GitHub URL below,

https://github.com/edwin/sonarqube-on-openshift-4

Hope it helps, and have fun using SonarQube.

Using Containerized Nexus as an Image Registry for Storing Docker Images

There are a lot of image registries to choose from when we are talking about Docker images, such as Quay, Docker Hub, or Nexus. In this write-up, we are creating a Docker image repository by using Nexus.

Let's start by installing Nexus on our system,

docker run -d -p 8081:8081 -p 7000:7000 --name nexus sonatype/nexus3

After starting the container, log in using the admin user; the initial generated password is located at /nexus-data/admin.password. We then need to update the admin password, and after that we can create a new Docker image repository.
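
Since Nexus is running inside a container, one way to read that generated password is with docker exec (the container name nexus comes from the docker run command above),

$ docker exec nexus cat /nexus-data/admin.password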

Create the new repository by using the docker (hosted) recipe, and open port 7000 for HTTP connections.
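
One note on the HTTP part: by default the Docker daemon only allows plain HTTP for registries on localhost, so localhost:7000 works as-is. If Nexus runs on a different host, that registry has to be listed under insecure-registries in /etc/docker/daemon.json (and the daemon restarted); a sketch, with a placeholder hostname:

{
  "insecure-registries": ["nexus.example.com:7000"]
}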

We can test logging in to our Nexus image repository by using the Docker command below,

$ docker login localhost:7000

Next, try pulling an image from an external registry, so we can push it into our newly created Nexus repository,

$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
59bf1c3509f3: Pull complete
Digest: sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300
Status: Downloaded newer image for alpine:latest

Tag it, and push it into the Nexus image repository,

$ docker tag alpine localhost:7000/dev/alpine

$ docker push localhost:7000/dev/alpine

And finally we can see our image in the Nexus repository.
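
As an extra check, we can remove the local copy and pull the image straight back from Nexus,

$ docker rmi localhost:7000/dev/alpine
$ docker pull localhost:7000/dev/alpine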

Scraping Data from the BPS Website Using Java

BPS (Badan Pusat Statistik) is a non-departmental government institute of Indonesia that is responsible for conducting statistical surveys. On its website there is a lot of data available, especially spatial and regional data such as provinces, zip codes, cities, and others.

One BPS website that contains interesting data is https://sig.bps.go.id/, and in this sample I am trying to crawl it and read its zip code data.

Let's start with the pom file,

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>ScrappingBPS</artifactId>
    <version>1.0</version>

    <properties>
        <java.version>11</java.version>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.squareup.okhttp3</groupId>
            <artifactId>okhttp</artifactId>
            <version>4.9.3</version>
        </dependency>
        <dependency>
            <groupId>org.mybatis</groupId>
            <artifactId>mybatis</artifactId>
            <version>3.5.5</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.25</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.13.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <version>2.15.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.15.0</version>
        </dependency>

        <dependency>
            <groupId>com.lmax</groupId>
            <artifactId>disruptor</artifactId>
            <version>3.4.4</version>
        </dependency>

    </dependencies>
</project>

Since I am using MyBatis, let's start by creating a SqlSessionFactory helper class.

package com.edw.config;

import com.edw.mapper.KodeposMapper;
import com.edw.mapper.RegionsMapper;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

import java.io.IOException;
import java.io.Reader;

public class MyBatisSqlSessionFactory {
    private static final SqlSessionFactory FACTORY;

    static {
        try {
            Reader reader = Resources.getResourceAsReader("configuration.xml");
            FACTORY = new SqlSessionFactoryBuilder().build(reader);
            FACTORY.getConfiguration().addMapper(KodeposMapper.class);
            
        } catch (IOException e) {
            throw new RuntimeException("Fatal Error. Cause: " + e, e);
        }
    }

    public static SqlSessionFactory getSqlSessionFactory() {
        return FACTORY;
    }
}
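
The configuration.xml that Resources.getResourceAsReader loads is not included above; a minimal version placed under src/main/resources would look roughly like this (the MySQL URL and credentials are placeholders):

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration
        PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
    <environments default="development">
        <environment id="development">
            <transactionManager type="JDBC"/>
            <dataSource type="POOLED">
                <property name="driver" value="com.mysql.cj.jdbc.Driver"/>
                <property name="url" value="jdbc:mysql://localhost:3306/db_kodepos"/>
                <property name="username" value="root"/>
                <property name="password" value="password"/>
            </dataSource>
        </environment>
    </environments>
</configuration>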

Next are a bean representing the table and a mapper interface.

package com.edw.bean;

import java.io.Serializable;

public class Kodepos implements Serializable {
    private Long id;
    private String kelurahan;
    private String kecamatan;
    private String kabupaten;
    private String provinsi;
    private String kodepos;

    public Kodepos() {
    }

    // other setter and getter
}
package com.edw.mapper;

import com.edw.bean.Kodepos;
import org.apache.ibatis.annotations.Insert;

public interface KodeposMapper {

    @Insert("INSERT INTO `db_kodepos`.`tbl_kodepos` " +
            "(`kelurahan`, `kecamatan`, `kabupaten`, `provinsi`, `kodepos`) " +
            "VALUES (#{kelurahan}, #{kecamatan}, #{kabupaten}, #{provinsi}, #{kodepos})")
    Integer insert(Kodepos kodepos);
}
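
The mapper assumes a db_kodepos.tbl_kodepos table already exists; a rough DDL sketch matching the columns used above (the types and sizes are guesses) would be:

CREATE TABLE `db_kodepos`.`tbl_kodepos` (
  `id` BIGINT NOT NULL AUTO_INCREMENT,
  `kelurahan` VARCHAR(255),
  `kecamatan` VARCHAR(255),
  `kabupaten` VARCHAR(255),
  `provinsi` VARCHAR(255),
  `kodepos` VARCHAR(10),
  PRIMARY KEY (`id`)
);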

And last is our main class. I am using OkHttp to do the HTTP requests and Jackson to parse the JSON responses into Java objects.

package com.edw;

import com.edw.bean.Kodepos;
import com.edw.config.MyBatisSqlSessionFactory;
import com.edw.mapper.KodeposMapper;
import com.fasterxml.jackson.databind.ObjectMapper;
import okhttp3.Call;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import org.apache.ibatis.session.SqlSession;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import java.io.IOException;
import java.util.*;
import java.util.concurrent.TimeUnit;

public class KodeposScrapping {

    private SqlSession sqlSession = null;
    private KodeposMapper kodeposMapper  = null;

    private static final String BASE_URL = "https://sig.bps.go.id";

    private Logger logger = LogManager.getLogger(KodeposScrapping.class);

    public KodeposScrapping() {
    }

    public static void main(String[] args) throws IOException {
        KodeposScrapping kodeposScrapping = new KodeposScrapping();
        kodeposScrapping.doScrapping();
    }

    private void doScrapping() throws IOException {
        getProvinsis();
    }

    /**
     *  <pre> curl "https://sig.bps.go.id/rest-drop-down/getwilayah" </pre>
     */
    private void getProvinsis() throws IOException {
        Request request = new Request.Builder()
                .url(BASE_URL + "/rest-drop-down/getwilayah")
                .get()
                .build();

        List<HashMap> hashMaps = doHttpCall(request);

        for (HashMap hashMap : hashMaps) {
            Kodepos kodepos = new Kodepos();
            kodepos.setProvinsi(hashMap.get("nama").toString());

            logger.info("start processing {}", hashMap.get("nama").toString());

            // get Kabupatens from Provinsi
            getKabupatens(hashMap.get("kode").toString(), kodepos);

            logger.info("done processing {}", hashMap.get("nama").toString());
        }
    }

    /**
     *  <pre> curl "https://sig.bps.go.id/rest-drop-down/getwilayah?level=kabupaten&parent=11" </pre>
     */
    private void getKabupatens(String parent, Kodepos kodepos) throws IOException {
        Request request = new Request.Builder()
                .url(BASE_URL + "/rest-drop-down/getwilayah?level=kabupaten&parent="+parent)
                .get()
                .build();

        List<HashMap> hashMaps = doHttpCall(request);

        for (HashMap hashMap : hashMaps) {
            kodepos.setKabupaten(hashMap.get("nama").toString());

            // get Kecamatans from Kabupaten
            getKecamatans(hashMap.get("kode").toString(), kodepos);
        }
    }

    /**
     *  <pre> curl "https://sig.bps.go.id/rest-drop-down/getwilayah?level=kecamatan&parent=1101" </pre>
     */
    private void getKecamatans(String parent, Kodepos kodepos) throws IOException {
        Request request = new Request.Builder()
                .url(BASE_URL + "/rest-drop-down/getwilayah?level=kecamatan&parent="+parent)
                .get()
                .build();

        List<HashMap> hashMaps = Collections.synchronizedList(doHttpCall(request));

        for (HashMap hashMap : hashMaps) {
            kodepos.setKecamatan(hashMap.get("nama").toString());
            getKelurahans(hashMap.get("kode").toString(), kodepos);
        }
    }

    /**
     *  <pre> curl "https://sig.bps.go.id/rest-bridging-pos/getwilayah?level=desa&parent=1101050" </pre>
     */
    private void getKelurahans(String parent, Kodepos kodepos) throws IOException {
        Request request = new Request.Builder()
                .url(BASE_URL + "/rest-bridging-pos/getwilayah?level=desa&parent="+parent)
                .get()
                .build();

        List<HashMap> hashMaps = Collections.synchronizedList(doHttpCall(request));

        for (final HashMap hashMap : hashMaps) {
            kodepos.setKelurahan(hashMap.get("nama_bps").toString());
            kodepos.setKodepos(hashMap.get("kode_pos").toString());

            insert(kodepos);
        }
    }

    private void insert(Kodepos kodepos) {
        try {
            sqlSession = MyBatisSqlSessionFactory.getSqlSessionFactory().openSession(true);
            kodeposMapper = sqlSession.getMapper(KodeposMapper.class);

            kodeposMapper.insert(kodepos);
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (sqlSession != null) {
                sqlSession.close();
            }
        }
    }

    private List<HashMap> doHttpCall(Request request) throws IOException {
        Call call = new OkHttpClient().newBuilder()
                .retryOnConnectionFailure(true)
                .connectTimeout(300, TimeUnit.SECONDS)
                .readTimeout(300, TimeUnit.SECONDS)
                .writeTimeout(300, TimeUnit.SECONDS).build().newCall(request);
        Response response = call.execute();

        ObjectMapper objectMapper = new ObjectMapper();
        List<HashMap> hashMaps = objectMapper.readValue(response.body().string(), List.class);

        response.close();
        return hashMaps;
    }
}

The code can be found in my GitHub repository,

https://github.com/edwin/bps-data-scrapper

Running JUnit Tests Sequentially

One problem that keeps showing up when I am doing unit testing is how to make my unit tests run sequentially across multiple test classes. Usually I need this so that one test class runs first to initialize all the data, and another test class runs last to delete all the generated data.

Usually, running multiple test classes looks like the image below; the order is unpredictable and can differ between executions.

To make the order sequential, the trick is to use JUnit version 5.8.0 at a minimum.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>sequential-unit-test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <java.version>11</java.version>
        <spring-boot-bom.version>2.3.6.Final-redhat-00001</spring-boot-bom.version>
        <junit-version>5.8.0</junit-version>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>

		<!-- ....  -->

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
            <version>${junit-version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <version>${junit-version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.platform</groupId>
            <artifactId>junit-platform-commons</artifactId>
            <version>1.8.1</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>json-path</artifactId>
            <scope>test</scope>
        </dependency>
		
		<!-- ....  -->

    </dependencies>
</project>

Then create a junit-platform.properties file under src/test/resources,

junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$DisplayName
junit.jupiter.testmethod.order.default=org.junit.jupiter.api.MethodOrderer$DisplayName

And the result is something like the image below,

The full code for this sample can be found at the GitHub link below.

https://github.com/edwin/sequential-unit-testing

P.S.
I am using the tests' Display Name as the orderer, so all my test classes are ordered ascending by display name.

@DisplayName("03. Class Run Third")
public class SomewhatUnitTest {


    @Test
    @DisplayName("01. Test First")
    public void testSomething() throws Exception {
        assertTrue(true);
    }
}

However, there are multiple other ways of doing the ordering, such as by class name or by the @Order annotation (a quick sketch of the latter follows the link). Details can be seen at the URL below,

https://junit.org/junit5/docs/snapshot/api/org.junit.jupiter.api/org/junit/jupiter/api/ClassOrderer.html
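
Switching to @Order-based ordering is a matter of pointing the class orderer at OrderAnnotation in junit-platform.properties,

junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$OrderAnnotation

and annotating each test class with an explicit order value; a rough sketch (the class name is just an example):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;

@Order(1)
public class RunsFirstTest {

    @Test
    public void testSomething() throws Exception {
        assertTrue(true);
    }
}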

Deploying a Java Application to OpenShift by Using a Tekton Pipeline

Tekton is one of the CI/CD tools that we can use for building and deploying applications; it provides a lightweight yet powerful and flexible cloud-native CI/CD framework. For this sample, I am going to demonstrate building and deploying a Java Spring Boot application to OpenShift.

First we need to create two different OpenShift namespaces, one for the pipeline and one for the deployed application,

oc new-project edwin-pipeline
oc new-project edwin-deploy

Let's start by creating a new PVC for storing our build artifacts. This PVC is shared between Tasks, and we'll store the artifacts under a different folder per run, based on the PipelineRun's uid parameter, to prevent runs from overlapping with one another.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tekton-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi

And a Pipeline YAML file for orchestrating our build and deployment steps.

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: hello-world-java-build-and-deploy
spec:
  params:
    - name: uid
      description: the uid
  workspaces:
    - name: task-pvc
  tasks:
  - name: git-clone
    taskRef:
      name: git-clone
    params:
    - name: app-git
      value: https://github.com/edwin/spring-boot-hello-world
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
  - name: build
    taskRef:
      name: mvn
    runAfter: ["git-clone"]
    params:
    - name: goal
      value: "package"
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
  - name: test
    taskRef:
      name: mvn
    runAfter: ["build"]
    params:
    - name: goal
      value: "test"
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc
  - name: integration-test
    taskRef:
      name: mvn
    runAfter: ["build"]
    params:
    - name: goal
      value: "test"
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc      
  - name: deploy
    taskRef:
      name: deploy-and-clean
    runAfter: ["integration-test","test"]
    params:
    - name: uid
      value: $(params.uid)
    workspaces:
    - name: task-pvc
      workspace: task-pvc

The above Pipeline references several Tasks. Let's start with the Task that clones the code from GitHub.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone
spec:
  params:
    - name: app-git
      description: the git repo
    - name: uid
      description: uid
  workspaces:
    - name: task-pvc
      mountPath: /workspace/source 
  steps:
    - name: git-clone
      command: ["/bin/sh", "-c"]
      args:
        - | 
          set -e -o
          echo "git clone";
          mkdir /workspace/source/$(params.uid) && cd /workspace/source/$(params.uid);
          git clone $(params.app-git) /workspace/source/$(params.uid);
      image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven

And a simple Maven build Task,

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mvn
spec:
  params:
    - name: goal
      description: mvn goal
    - name: uid
      description: uid
  workspaces:
    - name: task-pvc
      mountPath: /workspace/source 
  steps:
    - name: mvn
      command: ["/bin/sh", "-c"]
      args:
        - | 
          set -e -o
          echo "mvn something";
          cd /workspace/source/$(params.uid);
          mvn $(params.goal) -Dmaven.repo.local=/workspace/source/m2;
      image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven

The last Task involved does the deployment and removes the build folder,

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-and-clean
spec:
  params:
    - name: uid
      description: uid
  workspaces:
    - name: task-pvc
      mountPath: /workspace/source 
  steps:
    - name: git-clone
      command: ["/bin/sh", "-c"]
      args:
        - | 
          set -e -o
          cd /workspace/source/$(params.uid) ;
          mkdir build-folder ;
          cp target/*.jar build-folder/ ;
          oc login --insecure-skip-tls-verify --token=my-openshift-token --server=https://api.my-openshift-url.com:6443 ;
          oc new-build  --name hello-world --binary=true -n edwin-deploy --image-stream=openjdk-11  || true ;
          oc start-build hello-world --from-dir=build-folder/. -n edwin-deploy --follow --wait ;
          oc new-app hello-world -n edwin-deploy || true ;
          oc expose svc/hello-world -n edwin-deploy || true ;
          cd / ;
          rm -Rf /workspace/source/$(params.uid) ;
      image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven

And lastly, a PipelineRun YAML file for running the whole Pipeline,

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: hello-world-run
spec:
  params:
    - name: uid
      value: $(context.pipelineRun.uid)
  pipelineRef:
    name: hello-world-java-build-and-deploy
  workspaces:
    - name: task-pvc
      persistentVolumeClaim:
        claimName: tekton-pvc

And run it by using the command below,

oc create -f 06.pipeline-run.yml -n edwin-pipeline
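
Note that the PVC, Tasks, and Pipeline above need to be created in the edwin-pipeline namespace first (oc create -f <file> -n edwin-pipeline for each YAML). Once the PipelineRun is created, we can check its progress with oc, or follow the logs if the tkn CLI is installed,

oc get pipelinerun -n edwin-pipeline
tkn pipelinerun logs --last -f -n edwin-pipeline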

The code for this sample can be found at the GitHub link below,

https://github.com/edwin/java-app-and-tekton-pipeline-sample

We are using the UBI8 OpenJDK 11 image as the base image for running our Java app. We can import it into our OCP cluster by using the command below,

oc import-image openjdk-11 --from=registry.access.redhat.com/ubi8/openjdk-11 --confirm

Finally, the result will look like this,

Thanks

P.S. The OpenShift version used is 4.8.2, and I am using OpenShift's jenkins-agent-maven image to do all builds and deployments. Feel free to use another image if needed.