Openshift Posts

A Simple Load Testing Pipeline on Openshift 4 and Jenkins

There's one thing that needs to be done before deploying your app to a production environment, and that is ensuring your app is able to perform well under a high transaction load. One way to achieve that is by doing load testing and stress testing internally before deploying to production, but load testing is often done at the very end of the development phase, leaving developers little time for performance tuning. The better approach is to "shift left" both load and stress testing to an earlier phase, starting from development.

The concept in this blog is to run a load test against a temporarily deployed application, with a maximum of one percent acceptable failures. Why do I need to deploy the application first before load testing? Because I'm trying to simulate the exact same conditions as production, where each application is a standalone pod with a specific memory and CPU allocation.

Everything is automated, monitored and managed through a Jenkins pipeline, with each specific load testing scenario created separately in the regular JMeter desktop UI, saved, and mapped to a specific application. The details can be seen in the image below, where scenario 1 is a scenario created for application 1.

The hard part is creating a JMeter application that is able to consume different scenarios, with parameterized threads and testing endpoints. That's why I'm leveraging jmeter-maven-plugin for this: it's lightweight and has a dynamic configuration.

It consists of only one pom file with multiple parameterized fields,

<?xml version="1.0" encoding="UTF-8"?>
<!-- a sketch of the pom; groupId, artifactId and the plugin version are placeholders -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>load-testing-tool</artifactId>
    <version>1.0.0</version>
    <description>A Load Testing tool</description>

    <build>
        <plugins>
            <plugin>
                <groupId>com.lazerycode.jmeter</groupId>
                <artifactId>jmeter-maven-plugin</artifactId>
                <version>3.1.0</version>
                <executions>
                    <execution>
                        <id>jmeter-tests</id>
                        <goals>
                            <goal>jmeter</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <testFilesIncluded>
                        <jMeterTestFile>${testfile}</jMeterTestFile>
                    </testFilesIncluded>
                    <!-- forward the -D values to the jmx script as JMeter user properties -->
                    <propertiesUser>
                        <threads>${threads}</threads>
                        <loops>${loops}</loops>
                        <rampup>${rampup}</rampup>
                        <url>${url}</url>
                        <port>${port}</port>
                    </propertiesUser>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Next, we need to create a JMeter test scenario: a simple HTTP GET to the root URL. Save it as test01.jmx and put it in the src/test/jmeter folder so that jmeter-maven-plugin can detect the scenario.

We can test our JMeter script with the command below. In this example we are running test01.jmx, which performs 25 hits (5 threads x 5 loops) over a 5-second ramp-up.

mvn clean verify -Dthreads=5 -Dloops=5 -Drampup=5 \
 -Durl=localhost -Dport=8080 -Dtestfile=test01.jmx

The next, more complicated, task is to create a simple Jenkins pipeline script to run this. It needs to be able to build and deploy the app on a temporary pod, run the load test, and clean up all resources once the load test is done.

node('maven2') {
    def appname = "app-loadtest-${env.BUILD_NUMBER}"
    try {
        stage ('pull code') {
            sh "git clone source"
        }
        stage ('deploy to ocp') {
            dir("source") {
                sh "oc new-build jenkins2/openjdk-11-rhel7 --name=${appname} --binary"
                sh "oc start-build ${appname} --from-dir=. --follow --wait"
                sh "oc new-app --docker-image=image-registry.openshift-image-registry.svc:5000/jenkins2/${appname}:latest --name=${appname} || true"
                sh "oc set resources dc ${appname} --limits=cpu=500m,memory=1024Mi --requests=cpu=200m,memory=256Mi"
            }
        }
        stage ('do load test') {
            sh "git clone load"
            dir("load") {
                // 5 threads x 5 loops with a 5 second ramp-up
                sh "mvn clean verify -Dthreads=5 -Dloops=5 -Drampup=5 -Durl=${appname} -Dport=8080 -Dtestfile=test01.jmx"
            }
        }
    } catch (error) {
        throw error
    } finally {
        stage('housekeeping') {
            sh "oc delete svc ${appname}"
            sh "oc delete bc ${appname}"
            sh "oc delete is ${appname}"
            sh "oc delete dc ${appname}"
        }
    }
}

If we run the pipeline, we can see that it spawns an application pod. We can check whether the application runs properly by opening a terminal directly inside it.

The result on the Jenkins Dashboard will look like this,

As for the load test results, we can see them in the Jenkins logs,

All the code is available on GitHub,

So, have fun with Jenkins and JMeter 🙂

Error “verifying certificate: x509: certificate signed by unknown authority” When Creating HTTPS on Openshift

I had this error when creating a re-encrypt secure route on Openshift,

spec.tls.certificate: Invalid value: "redacted certificate data": error verifying certificate: x509: certificate signed by unknown authority
reason: ExtendedValidationFailed

This is the oc command that I'm using,

oc create route reencrypt my-https --service my-gateway-https --key private.key --cert server.pem --cacert cacert.pem -n my-project

This error happened because I was using an incorrect certificate for re-encryption. Running the certificate through a certificate decoder shows what's wrong with it,

The correct certificate should show this,

Regenerating my certificate fixed the issue.
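A quick way to catch this before handing the files to oc is to verify the chain locally with openssl. Below is a self-contained sketch: it generates a throwaway CA and a server certificate signed by it (the file names mirror the oc command above, but the contents are made up for illustration), then checks the chain the same way the router will.

```shell
# throwaway CA: cacert.pem / ca.key (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out cacert.pem \
    -subj "/CN=my-ca" -days 1
# server key + CSR, then sign it with the CA above
openssl req -newkey rsa:2048 -nodes -keyout private.key -out server.csr \
    -subj "/CN=my-gateway"
openssl x509 -req -in server.csr -CA cacert.pem -CAkey ca.key \
    -CAcreateserial -out server.pem -days 1
# verify the chain; a cert signed by a CA missing from cacert.pem fails here
openssl verify -CAfile cacert.pem server.pem
```

If this verification fails on your real files, the route creation will fail with the same x509 error, so it's worth running before every oc create route reencrypt.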

Migrating ReplicationController Between Openshift Cluster

For this scenario, I'm trying to migrate a ReplicationController (RC) between two OCP clusters on different versions: one OCP is on version 3.x, while the other is on 4.x.
It's actually quite tricky. This is the first method I tried, a simple export on OCP 3,

oc get rc -o yaml -n projectname --export > rc.yaml

And do a simple import on OCP 4

oc create -n projectname -f rc.yaml

But some errors happened,

Error from server (Forbidden): replicationcontrollers "rc-1" is forbidden: 
cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>

It seems that despite my using the --export parameter, the previous DC's uid is somehow still exported into rc.yaml,

  metadata:
    labels:
      app: HelloWorld
      group: com.redhat.edw helloworld
      provider: fabric8
      version: "1.0"
    name: helloworld-5
    namespace: project
    ownerReferences:
    - apiVersion:
      blockOwnerDeletion: true
      controller: true
      kind: DeploymentConfig
      name: helloworld
      uid: 8a96de62-9be4-11ea-a05c-0a659b38d468
    resourceVersion: "65350349"
    selfLink: /api/v1/namespaces/project/replicationcontrollers/helloworld-5
    uid: 257eda84-9be7-11ea-a05c-0a659b38d468

The solution is to remove the ownerReferences tag from the yaml,

sed -i '/ownerReferences/,+6 d' rc.yaml

Openshift will regenerate the ownerReferences tag once the RC is successfully imported into the new project.
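To sanity-check what that sed expression deletes, here is a self-contained sketch against a toy metadata block (the file contents below are made up for illustration): the address range matches the ownerReferences line plus the 6 lines that follow it.

```shell
# build a toy rc.yaml containing an ownerReferences block (values are fake)
cat > rc.yaml <<'EOF'
metadata:
  name: helloworld-5
  ownerReferences:
  - apiVersion: apps.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: DeploymentConfig
    name: helloworld
    uid: 8a96de62-9be4-11ea-a05c-0a659b38d468
  resourceVersion: "65350349"
EOF
# delete the ownerReferences line plus the 6 lines after it, in place
sed -i '/ownerReferences/,+6 d' rc.yaml
cat rc.yaml
```

After the sed run only the name and resourceVersion lines survive, which is exactly the shape that imports cleanly into the new cluster.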

But another problem arises: despite having successfully imported all my RCs, they do not show up when I run the oc get rc command. The culprit is revisionHistoryLimit; removing it from our DC solves the problem.

oc patch dc helloworld -n project --type json --patch '[{ "op": "remove", "path": "/spec/revisionHistoryLimit" }]'

Migrating Image Stream from One Openshift Image Registry to Another Image Registry with Skopeo

I have a requirement where I need to move all images from the Image Registry on Openshift 3 to the Image Registry on Openshift 4. There are a lot of ways to do it, such as mounting the same disk on both Openshift instances, or moving images manually using docker pull, tag and push.

After brainstorming for quite some time, I came up with a solution: using Skopeo as the tool for the image migration. It's a very convenient tool for copying images from one image registry to another.

It is actually a very simple script. First we need to capture all images within every OCP3 project,

oc get project -o template --template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | while read line
do
	oc get imagestreamtag -n $line -o template \
		--template='{{range .items}}{{.metadata.namespace}}{{"/"}}{{.metadata.name}}{{"\n"}}{{end}}' >> images.txt
done

Use these commands to capture your OCP username and token,

# capturing your username
oc whoami

# capturing your token
oc whoami -t

Then we need to iterate over the contents of the generated file, using the username and token obtained from the previous commands.

cat images.txt | while read line
do
	# the destination hostname below is a placeholder for the OCP4 registry route
	skopeo copy --src-creds ocp3username:ocp3token --src-tls-verify=false \
		--dest-creds ocp4username:ocp4token --dest-tls-verify=false \
		docker://docker-registry-from.ocp3/$line \
		docker://docker-registry-to.ocp4/$line
done

After all is done, what's left is a simple validation on the OCP4 side to count how many images have been migrated,

oc get imagestreamtag --no-headers | wc -l

Securing Connection Between Pods in Openshift with SSL

In this post, I'm trying to create a simple microservices application on top of Openshift 3.11, where each service makes a secure connection to the others using a self-signed SSL certificate managed by Openshift.

The point of having Openshift manage the SSL certificate through an Openshift Secret is to be able to roll or rotate the certificate on every service, triggered from Openshift, without having to replace the SSL certificate in each service manually.

First, generate a p12 certificate using keytool,

keytool -genkey -alias edw \
	-keystore edw.p12 -storetype PKCS12 \
	-keyalg RSA -storepass password \
	-validity 730 -keysize 4096
What is your first and last name?
  [Unknown]:  Edwin
What is the name of your organizational unit?
  [Unknown]:  Company 01
What is the name of your organization?
  [Unknown]:  IT
What is the name of your City or Locality?
  [Unknown]:  Jakarta
What is the name of your State or Province?
  [Unknown]:  Jakarta
What is the two-letter country code for this unit?
  [Unknown]:  ID
Is CN=Edwin, OU=Company 01, O=IT, L=Jakarta, ST=Jakarta, C=ID correct?
  [no]:  yes

Next is creating two Java projects which are connected to one another,

There are several parts of the code that need mentioning.

First is making sure the https option is active, our p12 certificate is included, and the certificate password is parameterized. This parameter will later be injected as an environment variable on Openshift.
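In Spring Boot terms this boils down to a few standard server.ssl entries in application.properties. A sketch (the relative path assumes the working directory is /deployments, where the secret is mounted later in this post, and SSLPASSWORD is the environment variable injected from the sslpassword secret):

```properties
# standard Spring Boot SSL settings; path and alias follow this post's example
server.port=8443
server.ssl.key-store=cert/edw.p12
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=edw
# resolved from the SSLPASSWORD environment variable injected by Openshift
server.ssl.key-store-password=${SSLPASSWORD}
```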



And next, because we are using a custom certificate, don't forget to include it in the RestTemplate.

@Configuration
public class MyRestTemplate {

    // keystore location and password, injected from configuration
    @Value("${server.ssl.key-store}")
    private String sslKeyStore;

    @Value("${server.ssl.key-store-password}")
    private String sslPassword;

    @Bean
    public RestTemplate restTemplate() throws Exception {
        KeyStore clientStore = KeyStore.getInstance("PKCS12");
        clientStore.load(new FileInputStream(sslKeyStore), sslPassword.toCharArray());

        // trust our own self-signed certificate
        SSLContext sslContext = SSLContextBuilder.create()
                .loadTrustMaterial(clientStore, new TrustSelfSignedStrategy())
                .build();
        // pods call each other by service name, so skip hostname verification
        SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);
        HttpClient httpClient = HttpClients.custom()
                .setSSLSocketFactory(socketFactory)
                .build();
        HttpComponentsClientHttpRequestFactory factory = new HttpComponentsClientHttpRequestFactory(httpClient);

        return new RestTemplate(factory);
    }
}

Deploy those two applications to Openshift,

oc new-app

oc new-app

Deploy the certificate as an OCP Secret and mount it as a volume on both applications,

oc create secret generic cert --from-file=cert\edw.p12

oc set volume dc ssl-pods-example --add -t secret -m /deployments/cert --name cert --secret-name cert
oc set volume dc ssl-pods-example-2 --add -t secret -m /deployments/cert --name cert --secret-name cert

And store our certificate password as an OCP Secret as well, injecting it as an environment variable into both applications,

oc create secret generic sslpassword --from-literal=SSLPASSWORD=password

oc set env dc ssl-pods-example --from=secret/sslpassword 
oc set env dc ssl-pods-example-2 --from=secret/sslpassword 

After everything is deployed on OCP, the next step is to create a route for our application. I'm using the re-encrypt method to ensure end-to-end encryption within the app. To do so, we need to include our application's CA certificate as the route's destination CA certificate. We can export that certificate from the p12 file using this command,

keytool -exportcert -keystore edw.p12 -storetype PKCS12 -storepass password -alias edw -file edw.crt -rfc

And paste the certificate into our route,

The end result will look like the image below,

And as you can see, certificates are used from end to end to secure the connection.