Openshift Posts

Building Containerized Images on Openshift 4 and Pushing the Result to a Third Party Image Registry

Sometimes in our pipeline, we need to build a Docker image based on a specific Dockerfile and push the result to an external Image Registry such as Quay, Docker Hub or even an on-premise Nexus or JFrog.

In this example, I'm simulating building a simple Java application, containerizing it, and pushing it to Quay. The rough concept can be seen below,

1. Jenkins pull latest java code from Github, do testing and Maven build
2. Containerizing Maven build result and push it to Quay
3. Openshift Pre-Prod and Prod will pull from Quay, if build result is considered stable enough

For this example, I'm using a simple Dockerfile,

FROM registry.access.redhat.com/ubi8/ubi-minimal:8.0

MAINTAINER Muhammad Edwin < edwin at redhat dot com >

LABEL BASE_IMAGE="registry.access.redhat.com/ubi8/ubi-minimal:8.0"
LABEL JAVA_VERSION="11"

RUN microdnf install --nodocs java-11-openjdk-headless && microdnf clean all

WORKDIR /work/
COPY target/*.jar /work/application.jar

EXPOSE 8080
CMD ["java", "-jar", "application.jar"]
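
The Dockerfile can also be sanity-checked locally before wiring it into a pipeline. A minimal sketch, assuming Podman is available and mvn clean package has already produced the jar under target/,

# build the image from the Dockerfile in the current directory
podman build -t hello-world-java-docker .

# run it locally, exposing the application on port 8080
podman run --rm -p 8080:8080 hello-world-java-docker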

And build it in a Jenkins pipeline; in this example I'm pushing to Quay,

node('maven') {
    stage ('pull code') {
        sh "git clone https://github.com/edwin/hello-world-java-docker.git source"
    }
    stage ('mvn build') {
        dir("source") {
            sh "mvn clean package"
        }
    }
    stage ('build and push') {
        dir("source") {
            sh "oc new-build --strategy docker --name=hello-world-java-docker \
                        --binary --to-docker \
                        --to=quay.io/edwinkun/hello-world-java-docker || true"
            sh "oc start-build hello-world-java-docker --from-dir=. --follow --wait "
        }
    }
}
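
While the pipeline is running, the same build can also be monitored from the oc CLI, assuming you are logged in to the project where the build runs,

# list the builds created by the BuildConfig
oc get builds

# stream the logs of the latest build
oc logs -f bc/hello-world-java-docker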

One thing to remember is that we need to register our Quay credentials so the build is able to push there, and link them to the builder service account. We can achieve that with these commands,

oc create secret docker-registry --docker-server=quay.io \
	--docker-username=edwinkun --docker-password=******* \
	--docker-email=unused \
	quay-login

oc secrets link builder quay-login
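
To double-check that the credentials are in place before triggering a build, the secret should exist in the project and show up under the builder service account. A quick verification,

# confirm the secret exists in the current project
oc get secret quay-login

# confirm it is linked to the builder service account
oc describe sa builder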

Run our Jenkins pipeline and we can see the result on the Jenkins dashboard,

When it completes successfully, the pipeline log will look like this,

And lastly we can see that the containerized image has been successfully pushed to Quay.

Code for the above example can be found at this Github link,

https://github.com/edwin/hello-world-java-docker

A Simple Load Testing Pipeline on Openshift 4 and Jenkins

There's one thing that needs to be done before deploying your app to a production environment, and that is ensuring that your app is able to perform well under a high transaction load. One way to achieve that is by doing load testing and stress testing internally before deploying to production, but load testing is often done at the end of the development phase, with not much time left for developers to do performance tuning. Therefore the better approach is to “shift left” both the load and stress testing phases to an earlier stage, starting from the development phase.

The concept in this blog is doing load testing on a temporarily deployed application, with a maximum acceptable failure rate of one percent. Why do I need to deploy the application first before doing load testing? Because I'm trying to simulate the exact same conditions as production, where each application is a standalone pod with a specific memory and CPU allocation.

Everything is automated, monitored and managed through a Jenkins pipeline, with a specific load testing scenario created separately in the regular JMeter desktop UI, saved and mapped to a specific application. The detail can be seen in the image below, where scenario 1 is a scenario created for application 1.

The hard part is creating a JMeter setup that is able to consume different scenarios, with parameterized threads and testing endpoints. That's why I'm leveraging jmeter-maven-plugin for this, because it's lightweight and has a dynamic configuration.

It consists of only one pom file with multiple parameterized fields,

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>JMeterLoadTesting</artifactId>
    <version>1.0-SNAPSHOT</version>
    <name>JMeterLoadTesting</name>
    <description>A Load Testing tool</description>

    <properties>
        <java.version>11</java.version>
    </properties>

    <dependencies>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>com.lazerycode.jmeter</groupId>
                <artifactId>jmeter-maven-plugin</artifactId>
                <version>3.1.0</version>
                <executions>
                    <execution>
                        <id>configuration</id>
                        <goals>
                            <goal>configure</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>jmeter-tests</id>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>jmeter</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>jmeter-check-results</id>
                        <goals>
                            <goal>results</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <testFilesIncluded>
                        <jMeterTestFile>${testfile}</jMeterTestFile>
                    </testFilesIncluded>
                    <propertiesJMeter>
                        <threads>${threads}</threads>
                        <rampup>${rampup}</rampup>
                        <loops>${loops}</loops>
                        <url>${url}</url>
                        <port>${port}</port>
                    </propertiesJMeter>
                    <errorRateThresholdInPercent>1</errorRateThresholdInPercent>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Next we need to create a JMeter test scenario, a simple HTTP GET to the root URL. Save it as test01.jmx and put it in the src/test/jmeter folder so that jmeter-maven-plugin can detect this scenario.
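
A small sketch of placing the saved scenario, assuming the plugin's default src/test/jmeter test-file directory (the source path of the .jmx is a placeholder),

# jmeter-maven-plugin picks up .jmx files from src/test/jmeter by default
mkdir -p src/test/jmeter
cp /path/to/test01.jmx src/test/jmeter/

Inside the .jmx itself, the values passed through propertiesJMeter are typically read with JMeter's __P function, for example ${__P(threads)} in the Thread Group settings.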

We can test our JMeter script with the command below; in this example we are running test01.jmx, which does 25 hits (5 threads x 5 loops) with a 5 second ramp-up.

mvn clean verify -Dthreads=5 -Dloops=5 -Drampup=5 \
 -Durl=localhost -Dport=8080 -Dtestfile=test01.jmx
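
If the error rate stays below the 1% threshold set in the pom, the verify phase passes. The raw result files can then be inspected; assuming the plugin's default output location, that would be,

# result files (.csv/.jtl) written by jmeter-maven-plugin
ls target/jmeter/results/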

The next complicated task is to create a simple Jenkins pipeline script to run this. It needs to be able to build and deploy the app on a temporary pod, do the load testing, and clean up all resources once the load testing is done.

node('maven2') {
    def appname = "app-loadtest-${env.BUILD_NUMBER}"
    try {
        stage ('pull code') {
            sh "git clone https://github.com/edwin/app-loadtest.git source"
        }
        stage ('deploy to ocp') {
            dir("source") {
                sh "oc new-build jenkins2/openjdk-11-rhel7 --name=${appname} --binary "
                sh "oc start-build ${appname} --from-dir=. --follow --wait"
                sh "oc new-app --docker-image=image-registry.openshift-image-registry.svc:5000/jenkins2/${appname}:latest --name=${appname} || true"
                sh "oc set resources dc ${appname} --limits=cpu=500m,memory=1024Mi --requests=cpu=200m,memory=256Mi"
            }
        }
        stage ('do load test') {
            sh "git clone https://github.com/edwin/jmeter-loadtesting.git load"
            dir("load") {
                // 5 threads x 5 loops in 5 seconds
                sh "mvn clean verify -Dthreads=5 -Dloops=5 -Drampup=5 -Durl=${appname} -Dport=8080 -Dtestfile=test01.jmx"
            }
        }
    } catch (error) {
       throw error
    } finally {
        stage('housekeeping') {
            sh "oc delete svc ${appname}"
            sh "oc delete bc ${appname}"
            sh "oc delete is ${appname}"
            sh "oc delete dc ${appname}"
        }
    }
}

If we run the pipeline, we can see that it will spawn an application pod. We can check whether the application runs properly or not by opening a terminal directly inside it.
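
A manual check could look like this, assuming BUILD_NUMBER is 1 so the temporary app is named app-loadtest-1,

# find the temporary application pod (oc new-app labels it with app=<name>)
oc get pods -l app=app-loadtest-1

# open a terminal inside the pod and probe the endpoint locally
oc rsh <pod-name>
curl localhost:8080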

The result on the Jenkins dashboard will look like this,

As for the load test results, we can see those in our Jenkins logs,

All code is available on Github,

https://github.com/edwin/app-loadtest

https://github.com/edwin/jmeter-loadtesting

So, have fun with Jenkins and JMeter :)

Error “verifying certificate: x509: certificate signed by unknown authority” When Creating HTTPS on Openshift

I had this error when creating a reencrypt secure route on Openshift,

spec.tls.certificate: Invalid value: "redacted certificate data": error verifying certificate: x509: certificate signed by unknown authority
reason: ExtendedValidationFailed

This is the oc command that I'm using,

oc create route reencrypt my-https --service my-gateway-https --key private.key --cert server.pem --cacert cacert.pem -n my-project

This error is due to me using an incorrect certificate for reencryption. Using a certificate decoder shows what's wrong with my certificate,

The correct certificate should show this,

Regenerating my certificate fixed this issue.
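
Besides an online certificate decoder, the same check can be done locally with openssl against the files passed to oc create route,

# show who the server certificate was issued to and who issued it
openssl x509 -in server.pem -noout -subject -issuer

# verify the server certificate chains up to the CA passed via --cacert
openssl verify -CAfile cacert.pem server.pem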

Migrating ReplicationControllers Between Openshift Clusters

For this scenario, I'm trying to migrate replication controllers (RC) between two OCP clusters with different versions. One OCP is on version 3.x, while the other one is on 4.x.
So, it's actually quite tricky. This is the first method that I tried, a simple export on OCP 3,

oc get rc -o yaml -n projectname --export > rc.yaml

And do a simple import on OCP 4

oc create -n projectname -f rc.yaml

But an error happens,

Error from server (Forbidden): replicationcontrollers "rc-1" is forbidden: 
cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>

It seems like despite using the --export parameter, the previous DC's ownerReference and uid are somehow still exported into rc.yaml,

    labels:
      app: HelloWorld
      group: com.redhat.edw
      openshift.io/deployment-config.name: helloworld
      provider: fabric8
      version: "1.0"
    name: helloworld-5
    namespace: project
    ownerReferences:
    - apiVersion: apps.openshift.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: DeploymentConfig
      name: helloworld
      uid: 8a96de62-9be4-11ea-a05c-0a659b38d468
    resourceVersion: "65350349"
    selfLink: /api/v1/namespaces/project/replicationcontrollers/helloworld-5
    uid: 257eda84-9be7-11ea-a05c-0a659b38d468

The solution is to remove the ownerReferences tag from the yaml,

sed -i '/ownerReferences/,+6 d' rc.yaml

The ownerReferences tag will be regenerated once the RC is successfully imported into the new project.
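
If jq is available, the same cleanup can also be done at export time instead of editing the file afterwards; a sketch of the equivalent flow,

# export as JSON and drop the ownerReferences block from every RC
oc get rc -o json -n projectname --export \
    | jq 'del(.items[].metadata.ownerReferences)' > rc.json
oc create -n projectname -f rc.json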


But another problem arises. Despite having successfully imported all my RCs, they are not showing up when I do an oc get rc command. The culprit is revisionHistoryLimit; removing it from our DC solves the problem.

oc patch  dc helloworld -n project --type json --patch '[{ "op": "remove", "path": "/spec/revisionHistoryLimit" }]'

Migrating Image Stream from One Openshift Image Registry to Another Image Registry with Skopeo

I have a requirement where I need to move all images from the Image Registry on Openshift 3 to the Image Registry on Openshift 4. There are a lot of ways to do it, such as mounting the same disk to multiple Openshift instances, or moving the images manually using docker pull, tag and then push.

After brainstorming for quite some time, I came up with the solution of using Skopeo to do the image migration. It's a very convenient tool for copying images from one image registry to another.

It is actually a very simple script. First we need to capture all images within every OCP3 project,

# collect every namespace/name:tag pair, appending across all projects
oc get project -o template --template='{{range.items}}{{.metadata.name}}{{"\n"}}{{end}}' | while read line
do
	oc get imagestreamtag -n $line -o template \
		--template='{{range.items}}{{.metadata.namespace}}{{"/"}}{{.metadata.name}}{{"\n"}}{{end}}' >> images.txt
done
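
Before starting the copy, it's worth sanity-checking the generated list,

# every line should look like <namespace>/<imagestream>:<tag>
wc -l images.txt
head images.txt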

Use these commands to capture your OCP username and token,

# capturing your username
oc whoami 

# capturing your token
oc whoami -t

And then we need to iterate over the content of the generated file, using the usernames and tokens we got from the previous commands.

cat images.txt | while read line
do
	skopeo copy  --src-creds ocp3username:ocp3token --src-tls-verify=false \
		--dest-creds ocp4username:ocp4token  --dest-tls-verify=false \
		docker://docker-registry-from.ocp3/$line \
		docker://image-registry-target.apps.ocp4/$line
done

After all is done, what is left is a simple validation to count how many images have been migrated.

 oc get imagestreamtag --no-headers | wc -l
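
The same count can be repeated per project and compared between the source and target clusters; a small sketch reusing the earlier project loop,

# count image stream tags in every project
oc get project -o template --template='{{range.items}}{{.metadata.name}}{{"\n"}}{{end}}' | while read line
do
	echo -n "$line: "
	oc get imagestreamtag -n $line --no-headers 2>/dev/null | wc -l
done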