Openshift Posts

Deploying a New Application and Building It Using the Openshift S2I Feature and a Custom Base Image

There are lots of ways to deploy apps to Openshift, and one of them is the oc new-app command. Here we are going to create a new app using that command, but specify a custom base image for it. For this example, I'm using an OpenJDK 11 on RHEL 7 base image.

The command is quite easy; run it from your code folder:

D:\source> oc new-app --name=spring-boot-2 --image-stream=openjdk-11-rhel7:latest --binary=true

D:\source> oc start-build spring-boot-2 --from-dir=.

It will create a BuildConfig named spring-boot-2,

D:\source> oc get bc spring-boot-2
NAME            TYPE      FROM      LATEST
spring-boot-2   Source    Binary    3

We can see the details of our BuildConfig by running this command,

D:\source> oc describe bc spring-boot-2

Strategy:       Source
From Image:     ImageStreamTag openjdk-11-rhel7:latest
Output to:      ImageStreamTag spring-boot-2:latest
Binary:         provided on build
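In YAML terms, that BuildConfig corresponds to roughly the following (a sketch reconstructed from the describe output above; only the relevant fields are shown):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: spring-boot-2
spec:
  source:
    type: Binary          # source is uploaded at build time (--from-dir)
  strategy:
    type: Source          # S2I build
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: openjdk-11-rhel7:latest   # the custom base image
  output:
    to:
      kind: ImageStreamTag
      name: spring-boot-2:latest        # where the built image is pushed
```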

And if we have some code changes and want to redeploy, we can run this command:

D:\source> oc start-build spring-boot-2 --from-dir=.

It will rebuild the whole image using the new code uploaded from the existing source directory.

[Openshift] Adding a NodeSelector on the Fly

This is a code snippet for adding a specific nodeSelector to an rc (ReplicationController):

oc patch rc simple-helloworld -p '{"spec": {"template": {"spec": {"nodeSelector": {"infra": "my-infra-node"}}}}}' -n dev

The same patch should work for other types, such as Deployment or DeploymentConfig.
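After the patch, the rc's pod template roughly contains the following (a sketch; only the relevant fields are shown):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        infra: my-infra-node   # pods will only schedule on nodes with this label
```

Note that a ReplicationController's template only applies to pods created after the patch; existing pods are not rescheduled until they are recreated.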

[Openshift] Changing revisionHistoryLimit on DeploymentConfig using OC Command

DeploymentConfigs on Openshift (and Deployments on Kubernetes) have a revisionHistoryLimit field, which controls how many old revisions a DeploymentConfig should keep. By default it stores the last 10 versions of the application deployment, but sometimes we need to store fewer revisions to save storage space. Therefore we set a hard limit on the number of revisions kept.

Given the YAML structure on the DC below,
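A minimal sketch of the relevant part of the spec (metadata trimmed; the name is taken from the command below):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: starter-v0
spec:
  replicas: 1
  # how many old ReplicationControllers to keep around for rollbacks
  revisionHistoryLimit: 10
```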

We can change this directly in the deploymentconfig's YAML file, but for those of you who are allergic to YAML (such as me), the oc command is much more convenient. This is how I change an existing deploymentconfig's configuration by utilizing the oc patch command:

oc patch dc starter-v0 -p '{"spec":{"revisionHistoryLimit":2}}'

It will reduce the number of stored revisions to the two previous versions,

And this is what happens when it's changed to one version,

oc patch dc starter-v0 -p '{"spec":{"revisionHistoryLimit":1}}'

How to Display How Many Images are Available on Our Openshift Image Registry

Openshift is a very convenient platform: not only does it provide an enterprise Kubernetes cluster, it also provides its own image registry bundled within it, so we can push images and deploy them to a namespace within our cluster in a timely manner. But there are times when I need to count how many images reside in my existing Openshift cluster. After googling for quite some time, I found the solution and am writing it down here.

First we need to check our Openshift image registry URL,

C:\>oc project default
Already on project "default" on server "".

C:\>oc get route
NAME               HOST/PORT   PATH   SERVICES           PORT       TERMINATION   WILDCARD
docker-registry                       docker-registry    5000-tcp   reencrypt     None
registry-console                      registry-console   <all>      passthrough   None

The next step is to log in to our oc cluster using this command, with the right username and password:

oc login 

And get the oc login token:

oc whoami -t

Use both the username and the token to do a simple curl to your docker registry URL,

C:\>curl -X GET -k -u <my-username>:<my-token>

The result of that API call contains the list of images available on your Openshift Image Registry.
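The response body is JSON, so counting the images is then just a matter of parsing it. A minimal sketch in Python, assuming the registry exposes the standard Docker Registry v2 `/v2/_catalog` endpoint (the repository names below are made up for illustration):

```python
import json

# Example JSON body as returned by GET /v2/_catalog on a
# Docker Registry v2 compatible registry.
sample_response = '''
{"repositories": ["dev/spring-boot-2",
                  "dev/starter-v0",
                  "openshift/openjdk-11-rhel7"]}
'''

catalog = json.loads(sample_response)
image_count = len(catalog["repositories"])
print(image_count)  # -> 3
```

Note that `/v2/_catalog` is paginated on large registries (the `n` and `last` query parameters), so a single request may not list everything.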

Monitoring Kafka Topics with Dockerized Kafka Manager

Yesterday, Dimas (one of my colleagues) asked me how to monitor Kafka running on top of Openshift using a tool accessible through the browser.

One of the tools I recommend is Kafka Manager, which we can download from the URL below,

Let's start from the beginning: how to start Zookeeper and the Kafka server, then simulate a simple produce and consume, and monitor it using Kafka Manager.

First, download Kafka from the Apache site, extract it, and open the bin folder. We need Zookeeper to start before anything else. FYI, for this example I'm using Windows 10 as my primary operating system, so the commands below may differ depending on which operating system you are using.

cd D:\software\kafka_2.13-2.4.0\bin\windows
zookeeper-server-start.bat ..\..\config\zookeeper.properties

And run Kafka Server afterwards,

kafka-server-start.bat ..\..\config\server.properties

Create a topic,

kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic my-testing-topic

Try producing a simple echo message using the Kafka console producer,

kafka-console-producer.bat --broker-list localhost:9092 --topic my-testing-topic

And listen for the sent message using the Kafka console consumer,

kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic my-testing-topic --from-beginning

If you only want to get new messages, ignoring old ones, just remove the --from-beginning parameter. And use the --offset parameter to start from a specific offset.
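Conceptually, the difference between --from-beginning, the default, and --offset behaves like this (a toy illustration of offsets as list indices, not the actual Kafka client):

```python
# Toy model of one topic partition's log: the offset of a message
# is simply its index in the list.
log = ["msg-0", "msg-1", "msg-2"]

# --from-beginning: start at offset 0 and read everything.
from_beginning = log[0:]

# default: start at the end of the log, so only messages produced
# after the consumer starts are seen.
latest = log[len(log):]

# --offset 1: start reading from a specific offset.
from_offset_1 = log[1:]

print(from_beginning)  # -> ['msg-0', 'msg-1', 'msg-2']
print(latest)          # -> []
print(from_offset_1)   # -> ['msg-1', 'msg-2']
```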

Next is running Kafka Manager using a Docker command. FYI, the IP used below is my laptop's IP.

docker run --network host --add-host=moby: --add-host DESKTOP: -p 9000:9000 -e ZK_HOSTS=""  kafkamanager/kafka-manager

After Kafka Manager has successfully started, we can browse our Kafka by opening it in the browser,