Java Posts

Securing Quarkus Metric API

Usually I create a metrics API to display statistics and various metrics that are used to measure application health. There are various tools for capturing and displaying these statistics; one example is Prometheus combined with Grafana.

In this example we will not go into much detail about Prometheus and Grafana, but rather focus on how to provide those metrics on top of Quarkus while securing them, so that no malicious user can easily access the metrics API.

Let's start with a simple pom file,

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>QuarkusPrometheus</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <quarkus.version>1.6.1.Final</quarkus.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <surefire-plugin.version>2.22.1</surefire-plugin.version>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>io.quarkus</groupId>
                <artifactId>quarkus-bom</artifactId>
                <version>${quarkus.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-resteasy</artifactId>
        </dependency>

        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-metrics</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-health</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-elytron-security-properties-file</artifactId>
        </dependency>

        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-junit5</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${surefire-plugin.version}</version>
                <configuration>
                    <systemProperties>
                        <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
                    </systemProperties>
                </configuration>
            </plugin>
            <plugin>
                <groupId>io.quarkus</groupId>
                <artifactId>quarkus-maven-plugin</artifactId>
                <version>${quarkus.version}</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>build</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

And a very simple application.properties for storing all my configuration. Here we can see that we are defining a specific role for accessing our metrics endpoint.

#port
quarkus.http.port=8082

#security
quarkus.security.users.embedded.enabled=true
quarkus.security.users.embedded.plain-text=true
quarkus.security.users.embedded.users.admin=password
quarkus.security.users.embedded.roles.admin=prom-role

quarkus.http.auth.policy.prom-policy.roles-allowed=prom-role

quarkus.http.auth.permission.prom-roles.policy=prom-policy
quarkus.http.auth.permission.prom-roles.paths=/metrics

Run it by using the command below,

mvn compile quarkus:dev

Try opening our metrics URL directly,

We can try to log in by using a specific username and password,

admin / password
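From the command line, the same check can be sketched with curl. The commented requests below assume the application is running locally on port 8082, as configured above; what curl's -u flag sends is just an HTTP Basic header built from base64("user:password"):

```shell
# HTTP Basic credentials are base64("user:password"); this is the header
# value that curl -u admin:password would send.
AUTH="Basic $(printf 'admin:password' | base64)"
echo "$AUTH"
# -> Basic YWRtaW46cGFzc3dvcmQ=

# With the app running locally on port 8082 (as configured above), the
# secured endpoint can then be queried like this (not executed here):
#   curl -u admin:password http://localhost:8082/metrics
#   curl -H "Authorization: $AUTH" http://localhost:8082/metrics
```

An anonymous request to /metrics should be rejected with HTTP 401, while the authenticated one returns the metrics payload.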

If the login is successful, we can see this view,

One interesting thing is that Kubernetes's liveness and readiness probes can use different URLs (/health/live and /health/ready, served by quarkus-smallrye-health) without having to use any credentials at all, since the security policy above only covers the /metrics path.

The source code for this example can be downloaded from the URL below,

https://github.com/edwin/quarkus-secure-prometheus-api

Have fun with Quarkus :-)


A Simple Load Testing Pipeline on Openshift 4 and Jenkins

There's one thing that needs to be done before deploying your app to a production environment, and that is ensuring that your app is able to perform well under a high transaction load. One way to achieve that is by doing load testing and stress testing internally before deploying to production, but load testing is often done at the end of the development phase, with not much time left for developers to do performance tuning. Therefore the better approach is "shifting left" both the load and stress testing phases to an earlier phase, namely the development phase.

The concept in this blog is doing a load test on a temporarily deployed application, with a maximum of one percent acceptable failures. Why do I need to deploy the application first before doing a load test? Because I'm trying to simulate exactly the same conditions as production, where each application is a standalone pod with a specific memory and CPU allocation.

Everything is automated, monitored and managed through a Jenkins pipeline, with each specific load testing scenario created separately in the regular JMeter desktop UI, saved, and mapped to a specific application. The details can be seen in the image below, where scenario 1 is a scenario created for application 1.

The hard part is creating a JMeter project that is able to consume different scenarios, with parameterized threads and testing endpoints. That's why I'm leveraging jmeter-maven-plugin for this: it's lightweight and has dynamic configuration.

It consists of only one pom file with multiple parameterized fields,

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>JMeterLoadTesting</artifactId>
    <version>1.0-SNAPSHOT</version>
    <name>JMeterLoadTesting</name>
    <description>A Load Testing tool</description>

    <properties>
        <java.version>11</java.version>
    </properties>

    <dependencies>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>com.lazerycode.jmeter</groupId>
                <artifactId>jmeter-maven-plugin</artifactId>
                <version>3.1.0</version>
                <executions>
                    <execution>
                        <id>configuration</id>
                        <goals>
                            <goal>configure</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>jmeter-tests</id>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>jmeter</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>jmeter-check-results</id>
                        <goals>
                            <goal>results</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <testFilesIncluded>
                        <jMeterTestFile>${testfile}</jMeterTestFile>
                    </testFilesIncluded>
                    <propertiesJMeter>
                        <threads>${threads}</threads>
                        <rampup>${rampup}</rampup>
                        <loops>${loops}</loops>
                        <url>${url}</url>
                        <port>${port}</port>
                    </propertiesJMeter>
                    <errorRateThresholdInPercent>1</errorRateThresholdInPercent>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Next we need to create a JMeter test scenario: a simple HTTP GET to the root URL. Save it as test01.jmx and put it in the src/test/jmeter folder so that jmeter-maven-plugin can detect the scenario. Inside the .jmx file, the parameterized values are typically read with JMeter's property function, e.g. ${__P(threads)}.

We can test our JMeter script with the command below. In this example we are running test01.jmx, which performs 25 hits (5 threads x 5 loops) within a 5-second ramp-up window.

mvn clean verify -Dthreads=5 -Dloops=5 -Drampup=5 \
 -Durl=localhost -Dport=8080 -Dtestfile=test01.jmx
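For a simple single-sampler plan like this, the total request count is threads multiplied by loops, while rampup only controls how quickly the threads start. A quick sanity check of the profile above:

```shell
# 5 threads each executing the scenario 5 times -> 25 requests in total,
# with all threads started within the 5-second ramp-up window.
THREADS=5 LOOPS=5
echo "total requests: $((THREADS * LOOPS))"
# -> total requests: 25
```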

The next, more complicated task is to create a simple Jenkins pipeline script to run this. It needs to be able to build and deploy the app on a temporary pod, run the load test, and clean up all resources once the load test is done.

node('maven2') {
    def appname = "app-loadtest-${env.BUILD_NUMBER}"
    try {
        stage ('pull code') {
            sh "git clone https://github.com/edwin/app-loadtest.git source"
        }
        stage ('deploy to ocp') {
            dir("source") {
                sh "oc new-build jenkins2/openjdk-11-rhel7 --name=${appname} --binary "
                sh "oc start-build ${appname} --from-dir=. --follow --wait"
                sh "oc new-app --docker-image=image-registry.openshift-image-registry.svc:5000/jenkins2/${appname}:latest --name=${appname} || true"
                sh "oc set resources dc ${appname} --limits=cpu=500m,memory=1024Mi --requests=cpu=200m,memory=256Mi"
            }
        }
        stage ('do load test') {
            sh "git clone https://github.com/edwin/jmeter-loadtesting.git load"
            dir("load") {
                // 5 threads x 5 loops in 5 seconds
                sh "mvn clean verify -Dthreads=5 -Dloops=5 -Drampup=5 -Durl=${appname} -Dport=8080 -Dtestfile=test01.jmx"
            }
        }
    } catch (error) {
       throw error
    } finally {
        stage('housekeeping') {
            sh "oc delete svc ${appname}"
            sh "oc delete bc ${appname}"
            sh "oc delete is ${appname}"
            sh "oc delete dc ${appname}"
        }
    }
}

If we run the pipeline, we can see that it spawns an application pod. We can check whether the application runs properly by opening a terminal directly inside it.

The result on the Jenkins dashboard will look like this,

As for the load test results, we can see those in our Jenkins logs.

All code is available on GitHub,

https://github.com/edwin/app-loadtest

https://github.com/edwin/jmeter-loadtesting

So, have fun with Jenkins and JMeter :)


Creating A Simple Java Database Integration Test with Openshift 4 and Jenkins Pipeline

During the testing phase, there are times when we want to run automated tests against a real, temporary database. For example, if my database in the production environment is MySQL, it means I need to have exactly the same MySQL for testing, with the same version and structure. And one of the most important things is that the test database's lifespan is only as long as the test case's lifespan, which means that once the test is done, whether it succeeded or failed, the temporary database shall be destroyed.

There are multiple ways of achieving this: we can use the database sidecar pattern, install a MySQL service on our Jenkins slave base image, or create a temporary MySQL pod on our OpenShift cluster specifically for testing purposes. The last approach is the one I chose, and I will share how to achieve it in this blog.

Let's start by creating a very simple Java web app with Spring Boot and JUnit. It is basically a simple Java app; the only difference is that the database URL for testing is not hardcoded, but parameterized.

spring.datasource.url=jdbc:mysql://${MYSQL_URL}:3306/db_test
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.username=user1
spring.datasource.password=password
spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect

spring.jpa.hibernate.ddl-auto=update
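The MYSQL_URL placeholder is resolved at test time, so the same test suite can point at whatever database service exists for the current build. A small sketch of how a per-build service name (the build number here is a made-up example) ends up in the JDBC URL:

```shell
# Hypothetical build number; in the pipeline Jenkins provides env.BUILD_NUMBER.
BUILD_NUMBER=42
DB_SERVICE="mysql${BUILD_NUMBER}"

# This is the value that later gets passed as -DMYSQL_URL=... and lands in
# spring.datasource.url:
echo "jdbc:mysql://${DB_SERVICE}:3306/db_test"
# -> jdbc:mysql://mysql42:3306/db_test
```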

and a simple integration test,

package com.edw.controller;

import com.edw.entity.Account;
import com.edw.repository.AccountRepository;
import io.restassured.RestAssured;
import org.apache.http.HttpStatus;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.web.server.LocalServerPort;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static io.restassured.RestAssured.given;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@DirtiesContext
public class AccountControllerIT {
    @LocalServerPort
    private int port;

    @Autowired
    private AccountRepository accountRepository;

    @Before
    public void setup() {
        RestAssured.port = this.port;

        accountRepository.delete(new Account(10));

        Account account = new Account();
        account.setId(10);
        account.setAccountNumber("ten ten");
        accountRepository.save(account);
    }

    @Test
    public void getSuccess() throws Exception {
        given()
                .when()
                .get("/10")
                .then()
                .assertThat()
                .statusCode(HttpStatus.SC_OK);
    }

    @Test
    public void getFailed() throws Exception {
        given()
                .when()
                .get("/7")
                .then()
                .assertThat()
                .statusCode(HttpStatus.SC_INTERNAL_SERVER_ERROR);
    }
}

Once created, the next step is to create a Jenkins setup for the code build and deployment pipeline. I'm using a simple Maven image which comes with OCP,

The next step is to create a pipeline to spawn the database, run the integration tests, build the app, and destroy the database once everything is completed. One thing I need to do is create a unique database service per build; this prevents database overlap between different builds and maintains test isolation. That is the reason why I append the build number to every temporary database service. And I inject the database URL through a Maven build parameter in order to make sure that the testing database points to my newly created database.

node('maven') {
    try {
        stage ('pull code') {
            sh "git clone https://github.com/edwin/springboot-integration-test.git source"
        }
        stage('spawn db') {
            sh "oc new-app mysql-ephemeral --name mysql -p MYSQL_USER=user1 -p MYSQL_PASSWORD=password -p MYSQL_DATABASE=db_test -p DATABASE_SERVICE_NAME=mysql${env.BUILD_NUMBER}"
            
            // wait until db is ready
            sh """
            sleep 10
            while oc get po --field-selector=status.phase=Running | grep mysql${env.BUILD_NUMBER}-1-deploy; do
                sleep 2
            done
            """
        }
        stage ('test') {
            dir("source") {
                sh "mvn verify -DMYSQL_URL=mysql${env.BUILD_NUMBER}"
            }
        }
        stage ('build') {
            dir("source") {
                sh "mvn clean package -DskipTests=true"
            }
        }
    } catch (error) {
       throw error
    } finally {
        stage('destroy db') {
            sh "oc delete dc mysql${env.BUILD_NUMBER}"
            sh "oc delete svc mysql${env.BUILD_NUMBER}"
            sh "oc delete secret mysql${env.BUILD_NUMBER}"
        }
    }    
}

It will generate this output on the Jenkins dashboard,

If we check the content of our database while the test is running, we can see that a table is created and a row is automatically inserted there for testing purposes.

And after the test and build are done, we can see that the database is automatically deleted from our OpenShift cluster.

So basically it is very simple to do an integration test on OpenShift 4 and Jenkins, and the code for testing is available in my GitHub repository.

https://github.com/edwin/springboot-integration-test

Creating a Rest API Call with WorkItemHandler on Red Hat Process Automation Manager

Usually we use tasks for showing the steps of a process on top of Red Hat Process Automation Manager (RHPAM). But most of the time we need a customized task involved; that's where a WorkItemHandler comes in handy.
For this demo, I'm trying to create a simple WorkItemHandler for a custom task on top of RHPAM which gets a response from a third-party API provider.

So here is pretty much the raw concept,

As usual, to begin we need to create a pom.xml file, and it is important to set the "scope" of each dependency to provided.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>AnimeWorkItemHandler</artifactId>
    <version>1.0.4</version>
    <description>a simple rest api to be used within RHPAM</description>
    <packaging>jar</packaging>


    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <distributionManagement>
        <repository>
            <id>repo-custom</id>
            <url>http://nexus:8081/repository/pam/</url>
        </repository>
        <snapshotRepository>
            <id>repo-custom</id>
            <url>http://nexus:8081/repository/pam/</url>
        </snapshotRepository>
    </distributionManagement>

    <dependencies>
        <dependency>
            <groupId>org.jbpm</groupId>
            <artifactId>jbpm-flow</artifactId>
            <version>7.38.0.Final</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-core-asl</artifactId>
            <version>1.9.9</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.9.9</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-xc</artifactId>
            <version>1.9.9</version>
            <scope>provided</scope>
        </dependency>

    </dependencies>
</project>

and a simple Java class to do a simple REST API call,

package com.edw;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.HashMap;
import java.util.List;

import org.codehaus.jackson.map.ObjectMapper;
import org.drools.core.process.instance.WorkItemHandler;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemManager;

public class AnimeWorkItemHandler implements WorkItemHandler {
    public void abortWorkItem(WorkItem wi, WorkItemManager wim) {
        wim.abortWorkItem(wi.getId());
    }

    // will try fire to https://jikan.moe/ api
    public void executeWorkItem(WorkItem wi, WorkItemManager wim) {
        String name = (String) wi.getParameter("name");

        String nameResponse = "";
        String imageUrl = "";

        try {
            // api endpoint = https://api.jikan.moe/v3/search/character?q=???&limit=1
            URL url = new URL(String.format("https://api.jikan.moe/v3/search/character?q=%s&limit=1", name));

            URLConnection urlConnection = url.openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(
                            urlConnection.getInputStream()));
            String inputLine;
            StringBuffer stringBuffer = new StringBuffer();
            while ((inputLine = in.readLine()) != null) {
                stringBuffer.append(inputLine);
            }

            ObjectMapper objectMapper = new ObjectMapper();
            HashMap jsonResult = objectMapper.readValue(stringBuffer.toString(), HashMap.class);
            List<HashMap> results = (List<HashMap>) jsonResult.get("results");

            nameResponse = (String) results.get(0).get("name");
            imageUrl = (String) results.get(0).get("image_url");

            in.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        HashMap result = new HashMap();
        result.put("nameResponse", nameResponse);
        result.put("imageUrl", imageUrl);
        wim.completeWorkItem(wi.getId(), result);
    }
}

Once created, we need to build it by using a Maven command,

mvn clean package

After the jar has been built, the next step is to upload that jar through your Business Central,

If the upload is successful, we can see the uploaded jar file within our Nexus,

Next let's go to the RHPAM dashboard and create a work item definition,

And put in these values; make sure that the contents of "parameters" and "results" are the same as in your Java class, which in my case are "name", "nameResponse" and "imageUrl".

[
  [
    "name" : "AnimeWorkItemHandler", 
    "parameters" : [ 
        "name" : new StringDataType() 
    ], 
    "results" : [ 
        "nameResponse" : new StringDataType(),
        "imageUrl" : new StringDataType() 
    ], 
    "displayName" : "AnimeWorkItemHandler", 
    "icon" : "" 
  ]
]

The next step, and perhaps the most important one, is to include our newly created jar file in the RHPAM project. We can do so by going to the "Settings" tab and creating a new Work Item Handler,

And import the library and dependencies needed,

The last step is adding our new WorkItemHandler to our existing workflow,

So as you can see, it's not that difficult to create a custom task on RHPAM. The full code for this project can be accessed on my GitHub page,

https://github.com/edwin/anime-work-item-handler

Securing Connection Between Pods in Openshift with SSL

In this post, I'm trying to create a simple microservices application on top of OpenShift 3.11, where each service makes a simple secure connection to the others by using a self-signed SSL certificate managed by OpenShift.

The goal of having OpenShift manage the SSL certificate through an OpenShift Secret is to allow a rolling or rotating certificate on each service, triggered from OpenShift without having to replace the SSL certificate on each service manually.

First, generate a p12 certificate by using keytool,

keytool -genkey -alias edw \
	-keystore edw.p12 -storetype PKCS12 \
	-keyalg RSA -storepass password \
	-validity 730 -keysize 4096
What is your first and last name?
  [Unknown]:  Edwin
What is the name of your organizational unit?
  [Unknown]:  Company 01
What is the name of your organization?
  [Unknown]:  IT
What is the name of your City or Locality?
  [Unknown]:  Jakarta
What is the name of your State or Province?
  [Unknown]:  Jakarta
What is the two-letter country code for this unit?
  [Unknown]:  ID
Is CN=Edwin, OU=Company 01, O=IT, L=Jakarta, ST=Jakarta, C=ID correct?
  [no]:  yes

Next, create two Java projects which are connected to one another,

https://github.com/edwin/ssl-pods-example
https://github.com/edwin/ssl-pods-example-2

There are several parts of the code that need mentioning.

First, make sure the HTTPS option is active in application.properties, include our p12 certificate, and make the certificate password a parameter. This parameter will later be injected as an environment variable on OpenShift.

server.ssl.key-store-type=PKCS12
server.ssl.key-store=cert/edw.p12
server.ssl.key-store-password=${SSLPASSWORD}
server.ssl.key-alias=edw

server.port=8443
server.ssl.enabled=true

And next, because we are using a custom certificate, don't forget to include it in our RestTemplate.

@Configuration
public class MyRestTemplate {

    @Value("${server.ssl.key-store}")
    private String sslKeyStore;

    @Value("${server.ssl.key-store-password}")
    private String sslPassword;

    @Bean
    public RestTemplate restTemplate() throws Exception {
        KeyStore clientStore = KeyStore.getInstance("PKCS12");
        clientStore.load(new FileInputStream(sslKeyStore), sslPassword.toCharArray());

        SSLContext sslContext = SSLContextBuilder
                .create()
                .loadTrustMaterial(clientStore, new TrustSelfSignedStrategy())
                .build();
        SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);
        HttpClient httpClient = HttpClients.custom()
                .setSSLSocketFactory(socketFactory)
                .build();
        HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory(httpClient);

        return new RestTemplate(factory);
    }
}

Deploy those two applications to OpenShift,

oc new-app registry.access.redhat.com/openjdk/openjdk-11-rhel7~https://github.com/edwin/ssl-pods-example

oc new-app registry.access.redhat.com/openjdk/openjdk-11-rhel7~https://github.com/edwin/ssl-pods-example-2

Deploy the certificate as an OCP Secret and mount it as a volume in our applications,

oc create secret generic cert --from-file=cert/edw.p12

oc set volume dc ssl-pods-example --add -t secret -m /deployments/cert --name cert --secret-name cert
oc set volume dc ssl-pods-example-2 --add -t secret -m /deployments/cert --name cert --secret-name cert

And deploy our certificate password as an OCP Secret, injecting it as an environment variable into our applications,

oc create secret generic sslpassword --from-literal=SSLPASSWORD=password

oc set env dc ssl-pods-example --from=secret/sslpassword 
oc set env dc ssl-pods-example-2 --from=secret/sslpassword 
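Inside the pod, Spring resolves the ${SSLPASSWORD} placeholder in application.properties from that injected environment variable. A quick local sketch of the same lookup (the value here mirrors the secret created above):

```shell
# Simulate what "oc set env dc ... --from=secret/sslpassword" provides
# inside the container:
export SSLPASSWORD=password

# server.ssl.key-store-password=${SSLPASSWORD} resolves to this value
# when the application starts:
echo "key-store password resolves to: ${SSLPASSWORD}"
# -> key-store password resolves to: password
```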

After everything is deployed on OCP, the next step is to give our application a route. I'm using the re-encrypt method to ensure end-to-end encryption within the app. In order to do so, we need to include our application's CA certificate as our route's destination CA certificate. We can export the certificate from the p12 file using this command,

keytool -exportcert -keystore edw.p12 -storetype PKCS12 -storepass password -alias edw -file edw.crt -rfc

And paste the certificate into our route,

The end result will look like the image below,

And as you can see, we are using a certificate from end to end to secure our connection.
