Create a Protected JBoss EAP UDP Cluster with Authentication

This is the second part of my previous article about how to run multiple containerized Keycloak instances and make them able to communicate with one another through the UDP protocol. But that approach has a problem: what if an unauthorized JBoss EAP instance suddenly joins the cluster and does malicious things such as intercepting messages or even deleting clustered caches? To prevent this, JBoss EAP has a mechanism called the AUTH protocol, which means only JBoss EAP instances that hold a specific credential can join the cluster group.

So let's try to simulate this with a single containerized Keycloak instance,

docker run -p 8081:8080 -e PROXY_ADDRESS_FORWARDING=true \
 -e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
 -e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
 --add-host=HOST:192.168.56.1 jboss/keycloak

Check the running container's containerId by using the docker ps command, and copy the standalone-ha.xml file to our host folder. For this example, our containerId would be 3cdab1375336.
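To make the id easier to spot, docker ps output can be trimmed down with its standard --format flag (the choice of columns here is just my own),

# show only the columns needed to pick the right containerId
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Status}}"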

docker cp 3cdab1375336:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml .

Edit our standalone-ha.xml and add this part. I'm using password123 as the cluster's password, which means a JBoss EAP instance can only join the cluster when it holds the same password.

<stack name="udp">
	<transport type="UDP" socket-binding="jgroups-udp"/>
	<protocol type="PING"/>
	<protocol type="MERGE3"/>
	<socket-protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
	<protocol type="FD_ALL"/>
	<protocol type="VERIFY_SUSPECT"/>
	<protocol type="pbcast.NAKACK2"/>
	<protocol type="UNICAST3"/>
	<protocol type="pbcast.STABLE"/>
	<protocol type="AUTH">
		<property name="auth_class">org.jgroups.auth.MD5Token</property>
		<property name="auth_value">password123</property>
		<property name="token_hash">SHA</property>
	</protocol>
	<protocol type="pbcast.GMS"/>
	<protocol type="UFC"/>
	<protocol type="MFC"/>
	<protocol type="FRAG3"/>
</stack>
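Conceptually, MD5Token sends a hash of auth_value along with every join request, and existing members compare it against their own. With token_hash set to SHA, the exchanged token roughly corresponds to this sketch (an illustration only, not something the server runs),

# rough illustration: the token exchanged is a hex digest of the password
echo -n "password123" | sha1sum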

Create a Dockerfile,

FROM jboss/keycloak
COPY standalone-ha.xml /opt/jboss/keycloak/standalone/configuration/

Re-build the image,

docker build -t mykeycloak .
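To double check that the build succeeded, the new tag should now show up locally,

# mykeycloak is the tag given in the build command above
docker images mykeycloak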

And run the newly modified image,

docker run -p 8081:8080 -e PROXY_ADDRESS_FORWARDING=true  \
 -e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak"  \
 -e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db"  \
 --add-host=HOST:192.168.56.1 mykeycloak
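To actually watch the authenticated cluster form, we can start a second container from the same image on another port, mirroring the two-instance setup from the previous article,

docker run -p 8082:8080 -e PROXY_ADDRESS_FORWARDING=true \
 -e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
 -e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
 --add-host=HOST:192.168.56.1 mykeycloak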

If we try to run the original image, which does not have any AUTH password at all, an error occurs showing that the corresponding JBoss EAP instance is unable to join the cluster.

07:23:11,866 WARN  [org.jgroups.protocols.UNICAST3] (thread-7,ejb,0f9a0d05f1eb) 
JGRP000039: 0f9a0d05f1eb: failed to deliver OOB message [ebf69c82cffd to 0f9a0d05f1eb, 0 bytes, flags=OOB|INTERNAL]: 
java.lang.IllegalStateException: found GmsHeader[JOIN_REQ]: mbr=ebf69c82cffd from ebf69c82cffd but no AUTH header

The sample code for this article can be seen at the github page below,

https://github.com/edwin/jboss-eap-clustered-with-auth

Have fun using JBoss EAP :)

Run Multiple Containerized Keycloak Instances Behind an Apache HTTPD Proxy

In this session I'm trying to create a sample setup where I have a High-Availability cluster of multiple Keycloak instances located behind an HTTPD Reverse Proxy. HTTPD will round-robin requests between the two Keycloak instances behind it, showing the capability of session sharing between different Keycloak instances.

I'm using a containerized Keycloak image and running it with Docker to simulate a condition of running more than one Keycloak instance with different IPs.

The concept is pretty much like the image below,

The first thing needed is setting up a database. Here I'm using MySQL, although Keycloak is able to connect to different types of databases. And for this sample, the database is installed on my host laptop, not in a container.

CREATE USER 'keycloak'@'%' IDENTIFIED BY 'password';
CREATE DATABASE keycloak_db;
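Depending on how MySQL is set up, the new user may also need explicit privileges on that schema before Keycloak can use it, for example,

-- assumption: keycloak only needs rights on its own schema
GRANT ALL PRIVILEGES ON keycloak_db.* TO 'keycloak'@'%';
FLUSH PRIVILEGES;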

The next step is running two Keycloak instances by executing the below commands; here I'm putting 192.168.56.1 as the IP of my host machine.

docker run -p 8081:8080 -e PROXY_ADDRESS_FORWARDING=true \
	-e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
	-e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
	--add-host=HOST:192.168.56.1 jboss/keycloak

docker run -p 8082:8080 -e PROXY_ADDRESS_FORWARDING=true \
	-e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
	-e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
	--add-host=HOST:192.168.56.1 jboss/keycloak

The good thing about this Keycloak image is that by default it runs standalone-ha.xml, so multiple instances automatically form a cluster when run locally at the same time. This can be seen in Keycloak's log,

06:20:53,102 INFO  [org.infinispan.CLUSTER] (non-blocking-thread--p8-t4) 
	[Context=offlineClientSessions] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,102 INFO  [org.infinispan.CLUSTER] (thread-35,ejb,a629f48aafa9) 
	[Context=sessions] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,103 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,a629f48aafa9) 
	[Context=work] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,114 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,a629f48aafa9) 
	[Context=loginFailures] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,121 INFO  [org.infinispan.CLUSTER] (thread-31,ejb,a629f48aafa9) 
	[Context=actionTokens] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11

The last step is creating an HTTPD setup for a reverse proxy with load-balancing capability. We can achieve that by editing the httpd.conf file; for this sample we are using the round-robin mechanism of lbmethod=byrequests.

<VirtualHost *:80>
	ServerName localhost
	ProxyRequests Off
	ProxyPreserveHost On
  
	<Proxy "balancer://mycluster">
		BalancerMember http://localhost:8081
		BalancerMember http://localhost:8082
	
		ProxySet lbmethod=byrequests failontimeout=on
	 
	</Proxy>
  
	ProxyPass / balancer://mycluster/
	ProxyPassReverse / balancer://mycluster/
</VirtualHost>

In order to activate the load-balancer and HTTP-proxying features in Apache HTTPD, several modules need to be uncommented in the httpd.conf file, such as proxy_balancer_module and proxy_http_module, as shown below.
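Concretely, that means making sure lines like these are uncommented (the exact list can vary between httpd builds, but the balancer also needs a slotmem provider and the chosen lbmethod module),

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so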

Restart httpd and open the proxy address in a browser, and we can see that Keycloak is working well.
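For instance, on a systemd-based Linux host (an assumption; the exact restart command differs per platform),

# restart httpd so the new configuration is picked up
sudo systemctl restart httpd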

We can also simulate a failover scenario where one instance suddenly stops, by killing one Keycloak instance using the docker kill command, for example,
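# <containerId> is a placeholder; take the real one from docker ps.
# The proxy should keep serving sessions from the surviving instance.
docker kill <containerId>

Have fun with Keycloak :D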

Notes.
For better performance, a sticky-session approach is recommended over a round-robin one.

https://www.keycloak.org/docs/latest/server_installation/#sticky-sessions
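As a sketch of what that could look like with mod_proxy_balancer (the route names and the ROUTEID cookie follow the cookie-based example in the Apache docs; the Header directive additionally needs mod_headers enabled),

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy "balancer://mycluster">
	BalancerMember http://localhost:8081 route=node1
	BalancerMember http://localhost:8082 route=node2
	ProxySet stickysession=ROUTEID
</Proxy>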

How to Solve Openshift “Failed to pull image, unauthorized: authentication required”

Just recently I got a unique error; it happens when my application pulls an image from a different Openshift namespace. In this example, I'm creating my application in “xyz-project” and trying to pull an image from “abc-project”. Here's the complete error detail,

Failed to pull image "image-registry.openshift-image-registry.svc:5000/abc-project/image01@sha256:xxxxxxxxxxxx": 
rpc error: code = Unknown desc = Error reading manifest sha256:xxxxxxxxxxxx in 
image-registry.openshift-image-registry.svc:5000/abc-project/image01: unauthorized: authentication required

The solution for this is quite easy: we need to grant a specific access right in order for “xyz-project” to be able to pull images from “abc-project”.

oc policy add-role-to-user system:image-puller system:serviceaccount:xyz-project:default -n abc-project
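Afterwards we can verify that the binding exists in the source project,

# the binding created above is named after the image-puller role
oc get rolebindings -n abc-project | grep image-puller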

Hope it helps.

Get ImageStream Name and SHA from All DeploymentConfig within a Namespace on Openshift 4

There are times when we want to display the list of DCs within one Namespace and see what images are involved in them. We can do that easily by using a simple OC command like the one below,

oc get dc -n <namespace> --no-headers -o template \
	--template='{{range .items}}{{.metadata.namespace}}{{"/"}}{{.metadata.name}}{{" - "}}{{(index .spec.template.spec.containers 0).image}}{{"\n"}}{{end}}'

Delete All Pods Within a Specific Project or Namespace in Openshift 4

Sometimes pods get stuck in our namespace and can't be deleted within a specific timeframe, so we need to delete them forcefully; this becomes problematic when we have, say, hundreds of them. Deleting all of them is actually quite easy; all we have to do is run the command below.

oc delete pod -n youropenshiftprojectname --grace-period=0 --force --all
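As a hypothetical variant, we can also force-delete only the pods stuck in a particular state instead of everything,

# assumption: STATUS is the third column of oc get pods output
oc get pods -n youropenshiftprojectname --no-headers | awk '$3=="Terminating" {print $1}' \
	| xargs -r oc delete pod -n youropenshiftprojectname --grace-period=0 --force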