
Stress Testing Red Hat AMQ Broker or ActiveMQ Artemis Using JMeter

Red Hat AMQ Broker (or its open-source upstream, Apache ActiveMQ) is a flexible, high-performance messaging platform that delivers information reliably, enabling real-time integration. It can handle a large number of messages concurrently, up to roughly 21,000-22,000 messages per second, as per the link below.

https://activemq.apache.org/performance.html

In this article I am trying to see how many transactions a single AMQ broker can handle, using a JMeter script. Let's start by installing ActiveMQ Artemis from Docker,

docker run -it --rm \
  -p 8161:8161 \
  -p 61616:61616 \
  -e ARTEMIS_USERNAME=admin \
  -e ARTEMIS_PASSWORD=password \
  vromero/activemq-artemis

The next step is downloading JMeter and creating a test plan. But before that, we need to download artemis-jms-client and put it in our JMeter_FOLDER/lib/ext folder.
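
The client jar is published on Maven Central under org.apache.activemq as artemis-jms-client-all. A small sketch for locating it; the version number (2.19.1) and the helper name are my own choices, so match the version to your broker:

```shell
# Hypothetical helper: print the Maven Central URL of the artemis-jms-client-all jar
# for a given version.
artemis_client_url() {
  local v="$1"
  echo "https://repo1.maven.org/maven2/org/apache/activemq/artemis-jms-client-all/${v}/artemis-jms-client-all-${v}.jar"
}

# Download into JMeter's lib/ext (adjust the JMeter folder to your install):
# curl -L -o "JMeter_FOLDER/lib/ext/artemis-jms-client-all-2.19.1.jar" "$(artemis_client_url 2.19.1)"
```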

Next, open JMeter and create a thread group; for this example I am creating 2 loops, with 2 threads each.

Create a JMS Publisher with the parameters below,

Initial Context Factory : org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
Provider URL : tcp://localhost:61616
User Authorization : admin
Password Authorization : password
Connection Factory : ConnectionFactory
Destination : dynamicQueues/queue01

JMSCorrelationID : ${__counter(FALSE,)} with class of value java.lang.String


Create a JMS Subscriber with the parameters below,

Initial Context Factory : org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
Provider URL : tcp://localhost:61616
User Authorization : admin
Password Authorization : password
Connection Factory : ConnectionFactory
Destination : dynamicQueues/queue01


Run JMeter, and we can see the results in both the View Results Tree and the Summary Report,
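
The Summary Report already shows throughput, but it can also be recomputed offline from the JTL result file. A rough sketch, assuming the default CSV JTL layout (a header row, then timeStamp in milliseconds as the first column); jtl_throughput and results.jtl are my own names:

```shell
# jtl_throughput FILE — rough messages/second over the wall-clock span of a CSV JTL.
# Assumes the default JTL layout: header row, timeStamp (ms) as column 1.
jtl_throughput() {
  awk -F, 'NR > 1 { n++
                    if (min == "" || $1 < min) min = $1
                    if ($1 > max) max = $1 }
           END     { span = (max - min) / 1000
                     if (span <= 0) span = 1
                     printf "%.1f msg/s\n", n / span }' "$1"
}

# Usage, after a run that wrote results.jtl:
# jtl_throughput results.jtl
```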

Have fun stress testing AMQ 🙂

Deploying Apache Tomcat 8 on Top of Red Hat UBI 8 and Java 11

UBI (Universal Base Image) 8 is a set of OCI-compliant container base operating system images, with complementary runtime languages and packages, that are freely redistributable. Like previous base images, they are built from portions of Red Hat Enterprise Linux. UBI images can be obtained from the Red Hat Container Catalog, and can be built and deployed anywhere.

In this sample I am trying to set up Apache Tomcat 8 (version 8.0.5, to be precise) and deploy it on top of a UBI 8 base image with JDK 11 installed. After that, I'll deploy a simple hello-world Java application on top of Tomcat 8.

So, let's start with a simple Java application,

git clone https://github.com/edwin/hello-world-jboss-eap.git

and build it into a WAR file,

mvn clean package

Create a Dockerfile that installs Apache Tomcat 8 and copies our WAR file into Tomcat's webapps folder,

FROM registry.access.redhat.com/ubi8/openjdk-11

RUN curl -k https://archive.apache.org/dist/tomcat/tomcat-8/v8.0.5/bin/apache-tomcat-8.0.5.tar.gz -L -o "/tmp/apache-tomcat-8.0.5.tar.gz" \
    && tar -xf "/tmp/apache-tomcat-8.0.5.tar.gz" -C /deployments/ ;

# patch catalina.sh manually, otherwise JDK 11 fails to start with the error "Endorsed standards and standalone APIs in modular form will be supported via the concept of upgradeable modules"
RUN sed -i 's/endorsed/e/g' /deployments/apache-tomcat-8.0.5/bin/catalina.sh

COPY HelloWorld.war /deployments/apache-tomcat-8.0.5/webapps/HelloWorld.war

EXPOSE 8080 
CMD ["/deployments/apache-tomcat-8.0.5/bin/catalina.sh", "run"]
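
As background on the sed hack above: catalina.sh in Tomcat 8.0.x still passes -Djava.endorsed.dirs to the JVM, and JDK 9+ aborts when that property is set. Rewriting the string "endorsed" to "e" turns it into a harmless, unknown system property. A quick illustration of the substitution:

```shell
# The same substitution applied to the offending JVM flag from catalina.sh:
echo '-Djava.endorsed.dirs="$CATALINA_HOME/endorsed"' | sed 's/endorsed/e/g'
# → -Djava.e.dirs="$CATALINA_HOME/e"
```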

Build it,

docker build -t ubi8-with-tomcat8 .

And run,

docker run -p 8080:8080 ubi8-with-tomcat8

We can see the result in the browser,


If we want to serve the application over HTTPS, some modifications are needed. First we need to create a keystore, giving changeit as its password.

keytool -genkey -alias tomcat -keyalg RSA -keystore my-release-key.keystore

Edit Tomcat's server.xml, adding the lines below,

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               keystoreFile="/deployments/my-release-key.keystore" keystorePass="changeit"
               clientAuth="false" sslProtocol="TLS" />

Make some minor modifications to the Dockerfile,

FROM registry.access.redhat.com/ubi8/openjdk-11

RUN curl -k https://archive.apache.org/dist/tomcat/tomcat-8/v8.0.5/bin/apache-tomcat-8.0.5.tar.gz -L -o "/tmp/apache-tomcat-8.0.5.tar.gz" \
    && tar -xf "/tmp/apache-tomcat-8.0.5.tar.gz" -C /deployments/ ;

# patch catalina.sh manually, otherwise JDK 11 fails to start with the error "Endorsed standards and standalone APIs in modular form will be supported via the concept of upgradeable modules"
RUN sed -i 's/endorsed/e/g' /deployments/apache-tomcat-8.0.5/bin/catalina.sh

COPY server.xml /deployments/apache-tomcat-8.0.5/conf/server.xml
COPY my-release-key.keystore /deployments/my-release-key.keystore
COPY HelloWorld.war /deployments/apache-tomcat-8.0.5/webapps/HelloWorld.war

EXPOSE 8080 8443
CMD ["/deployments/apache-tomcat-8.0.5/bin/catalina.sh", "run"]

Open the browser to see the result,

Have fun with UBI8

Run Multiple Containerized Keycloak Instances Behind an Apache HTTPD Proxy

In this session I am trying to create a sample setup with a high-availability cluster of multiple Keycloak instances located behind an HTTPD reverse proxy. HTTPD does round-robin requests to the two Keycloak instances behind it, showing the capability of session sharing between different Keycloak instances.

I am using a containerized Keycloak image and running the instances with Docker, to simulate running more than one Keycloak instance with different IPs.

The concept is pretty much like the image below,

The first thing needed is setting up a database. Here I am using MySQL, although Keycloak is able to connect to different types of databases. For this sample, the database is installed on my host laptop rather than in a container.

CREATE USER 'keycloak'@'%' IDENTIFIED BY 'password';
CREATE DATABASE keycloak_db;
GRANT ALL PRIVILEGES ON keycloak_db.* TO 'keycloak'@'%';

The next step is running two Keycloak instances by executing the commands below; here I am putting 192.168.56.1 as the IP of my host machine.

docker run -p 8081:8080 -e PROXY_ADDRESS_FORWARDING=true \
	-e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
	-e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
	--add-host=HOST:192.168.56.1 jboss/keycloak


docker run -p 8082:8080 -e PROXY_ADDRESS_FORWARDING=true \
	-e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
	-e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
	--add-host=HOST:192.168.56.1 jboss/keycloak

The good thing about this Keycloak image is that, by default, it runs standalone-ha.xml, so the instances automatically form a cluster when run locally at the same time. This can be seen in Keycloak's log,

06:20:53,102 INFO  [org.infinispan.CLUSTER] (non-blocking-thread--p8-t4) 
	[Context=offlineClientSessions] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,102 INFO  [org.infinispan.CLUSTER] (thread-35,ejb,a629f48aafa9) 
	[Context=sessions] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,103 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,a629f48aafa9) 
	[Context=work] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,114 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,a629f48aafa9) 
	[Context=loginFailures] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,121 INFO  [org.infinispan.CLUSTER] (thread-31,ejb,a629f48aafa9) 
	[Context=actionTokens] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11

The last step is setting up HTTPD as a reverse proxy with load-balancing capability. We can achieve that by editing the httpd.conf file; for this sample we are using the round-robin mechanism lbmethod=byrequests.

<VirtualHost *:80>
	ServerName localhost
	ProxyRequests Off
	ProxyPreserveHost On
  
	<Proxy "balancer://mycluster">
		BalancerMember http://localhost:8081
		BalancerMember http://localhost:8082
	
		ProxySet lbmethod=byrequests failontimeout=on
	 
	</Proxy>
  
	ProxyPass / balancer://mycluster/
	ProxyPassReverse / balancer://mycluster/
</VirtualHost>

In order to activate the load-balancer and HTTP-proxying features in Apache HTTPD, several modules need to be uncommented in the httpd.conf file, such as proxy_balancer_module and proxy_http_module.
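
Those LoadModule lines can be uncommented by hand or scripted; below is a sketch assuming an httpd 2.4 layout. The enable_proxy_modules name is my own, and note that mod_proxy_balancer also needs slotmem_shm and an lbmethod module to be loaded:

```shell
# Hypothetical helper: uncomment the LoadModule lines the balancer setup needs
# in the given httpd.conf. Module list assumes httpd 2.4.
enable_proxy_modules() {
  sed -i \
    -e 's|^#\(LoadModule proxy_module \)|\1|' \
    -e 's|^#\(LoadModule proxy_http_module \)|\1|' \
    -e 's|^#\(LoadModule proxy_balancer_module \)|\1|' \
    -e 's|^#\(LoadModule lbmethod_byrequests_module \)|\1|' \
    -e 's|^#\(LoadModule slotmem_shm_module \)|\1|' \
    "$1"
}

# Usage (path varies per distro):
# enable_proxy_modules /etc/httpd/conf/httpd.conf
```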

Restart httpd and open the browser, and we can see that Keycloak is working well.

We can also simulate a failover scenario, where one instance suddenly stops, by killing one Keycloak instance with the docker kill command. Have fun with Keycloak 😀

Note.
For better performance, a sticky-session approach is recommended over a round-robin one.

https://www.keycloak.org/docs/latest/server_installation/#sticky-sessions

How to Solve Openshift “Failed to pull image, unauthorized: authentication required”

I recently got a unique error, which happens when my application pulls an image from a different Openshift namespace. In this example, I am creating my application in “xyz-project” and trying to pull an image from “abc-project”. Here is the complete error detail,

Failed to pull image "image-registry.openshift-image-registry.svc:5000/abc-project/image01@sha256:xxxxxxxxxxxx": 
rpc error: code = Unknown desc = Error reading manifest sha256:xxxxxxxxxxxx in 
image-registry.openshift-image-registry.svc:5000/abc-project/image01: unauthorized: authentication required

The solution is quite easy: we need to grant a specific access right so that “xyz-project” is able to pull images from “abc-project”.

oc policy add-role-to-user system:image-puller system:serviceaccount:xyz-project:default -n abc-project
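
The subject string is easy to get backwards (the service account lives in the pulling project, while -n names the project that owns the image), so a tiny wrapper that just prints the command can help; image_puller_grant is my own helper name, not part of oc. Alternatively, oc policy add-role-to-group system:image-puller system:serviceaccounts:xyz-project -n abc-project grants the role to every service account in the namespace at once.

```shell
# Hypothetical helper: print the oc command that lets the default service account
# in CONSUMER_NS ($1) pull images from IMAGE_NS ($2).
image_puller_grant() {
  echo "oc policy add-role-to-user system:image-puller system:serviceaccount:$1:default -n $2"
}

# Review the command it prints, then run it against your cluster:
# image_puller_grant xyz-project abc-project
```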

Hope it helps.

How to Remove Docker’s Dangling Image on Windows 10

There are times when I need to clean all unused, dangling Docker images on my laptop. Previously I was running the command below,

docker image prune -a

But it seems that although almost all images are gone, I can still see some images occupying my laptop's disk. So I have to do something to delete them.

The workaround is quite easy: run PowerShell as Administrator and execute the command below,

docker rmi $(docker images --quiet --filter "dangling=true") -f
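
For what it's worth, the same ID list can also be scraped from plain docker images table output when --quiet is unavailable in an older client; this sketch assumes a POSIX shell (e.g. Git Bash or WSL on Windows) and the default table layout, and image_ids is my own helper name:

```shell
# Hypothetical helper: print the IMAGE ID column (3rd field) of `docker images`
# table output, skipping the header row.
image_ids() {
  awk 'NR > 1 { print $3 }'
}

# Usage:
# docker images --filter "dangling=true" | image_ids | xargs docker rmi -f
```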

Executing that command generates this log,

And I can see that my hard disk now has more free space 🙂