Programming Posts

Run Multiple Containerized Keycloak Instances Behind an Apache HTTPD Proxy

In this post I'm setting up a sample high-availability cluster of multiple Keycloak instances located behind an Apache HTTPD reverse proxy. HTTPD will round-robin requests across the two Keycloak instances behind it, demonstrating that sessions are shared between the different Keycloak instances.

I'm using a containerized Keycloak image and running the instances with Docker to simulate running more than one Keycloak instance, each with a different IP.

The concept is pretty much like the image below.

The first thing needed is setting up a database. Here I'm using MySQL, although Keycloak is able to connect to different types of databases. For this sample the database is installed on my host laptop rather than in a container.

CREATE USER 'keycloak'@'%' IDENTIFIED BY 'password';
CREATE DATABASE keycloak_db;
GRANT ALL PRIVILEGES ON keycloak_db.* TO 'keycloak'@'%';
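
To double-check that remote connections work (the Keycloak containers will connect over the network), a quick test from the host is useful, assuming the mysql client is installed:

# should prompt for the password and open the keycloak_db schema
mysql -h 192.168.56.1 -P 3306 -u keycloak -p keycloak_db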

The next step is running two Keycloak instances with the commands below; here 192.168.56.1 is the IP of my host machine.

docker run -p 8081:8080 -e PROXY_ADDRESS_FORWARDING=true \
	-e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
	-e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
	--add-host=HOST:192.168.56.1 jboss/keycloak


docker run -p 8082:8080 -e PROXY_ADDRESS_FORWARDING=true \
	-e DB_VENDOR="mysql" -e DB_ADDR="192.168.56.1" -e DB_USER="keycloak" \
	-e DB_PASSWORD="password" -e DB_PORT="3306" -e DB_DATABASE="keycloak_db" \
	--add-host=HOST:192.168.56.1 jboss/keycloak
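
Before putting the proxy in front, it doesn't hurt to verify that both containers answer on their mapped ports; the /auth context path below assumes the default jboss/keycloak (WildFly-based) image:

# both should return an HTTP response from the Keycloak welcome page
curl -I http://localhost:8081/auth/
curl -I http://localhost:8082/auth/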

The good thing about this Keycloak image is that by default it runs standalone-ha.xml, so the instances automatically form a cluster when run locally at the same time. This can be seen in Keycloak's log:

06:20:53,102 INFO  [org.infinispan.CLUSTER] (non-blocking-thread--p8-t4) 
	[Context=offlineClientSessions] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,102 INFO  [org.infinispan.CLUSTER] (thread-35,ejb,a629f48aafa9) 
	[Context=sessions] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,103 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,a629f48aafa9) 
	[Context=work] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,114 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,a629f48aafa9) 
	[Context=loginFailures] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11
06:20:53,121 INFO  [org.infinispan.CLUSTER] (thread-31,ejb,a629f48aafa9) 
	[Context=actionTokens] ISPN100010: Finished rebalance with members [a629f48aafa9, 82298394e158], topology id 11

The last step is creating an HTTPD configuration for a reverse proxy with load-balancing capability. We can achieve that by editing the httpd.conf file; for this sample we are using the round-robin mechanism lbmethod=byrequests.

<VirtualHost *:80>
	ServerName localhost
	ProxyRequests Off
	ProxyPreserveHost On
  
	<Proxy "balancer://mycluster">
		BalancerMember http://localhost:8081
		BalancerMember http://localhost:8082
	
		ProxySet lbmethod=byrequests failontimeout=on
	 
	</Proxy>
  
	ProxyPass / balancer://mycluster/
	ProxyPassReverse / balancer://mycluster/
</VirtualHost>

In order to activate the load balancer and HTTP proxying features on Apache HTTPD, several modules need to be uncommented in the httpd.conf file, such as proxy_balancer_module and proxy_http_module.
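
On a stock httpd.conf that usually means uncommenting LoadModule lines roughly like the ones below (module file paths vary by distribution); mod_proxy, mod_slotmem_shm and mod_lbmethod_byrequests are also needed for the balancer and the byrequests policy:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so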

Restart httpd and open the proxy address (http://localhost) in a browser, and we can see that Keycloak is working well.

We can also simulate a failover scenario where one instance suddenly stops by killing one Keycloak instance with the docker kill command. Have fun with Keycloak 😀
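
For example (the container ID is whatever docker ps reports for one of the two instances):

# find one of the two Keycloak containers and kill it
docker ps
docker kill <container-id>
# requests through the proxy keep being served by the surviving instance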

Notes.
For better performance, a sticky-session approach is recommended over a plain round-robin one.

https://www.keycloak.org/docs/latest/server_installation/#sticky-sessions
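
As a rough sketch only: with mod_proxy_balancer, stickiness can key on Keycloak's AUTH_SESSION_ID cookie. The node1/node2 routes here are assumptions and only take effect if each Keycloak instance is started with a matching jboss.node.name, as described in the documentation above:

<Proxy "balancer://mycluster">
	BalancerMember http://localhost:8081 route=node1
	BalancerMember http://localhost:8082 route=node2

	ProxySet stickysession=AUTH_SESSION_ID lbmethod=byrequests
</Proxy>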

How to Solve Openshift “Failed to pull image, unauthorized: authentication required”

Just recently I got a unique error; it happens when my application pulls an image from a different Openshift namespace. In this example, I'm creating my application in “xyz-project” and trying to pull an image from “abc-project”. Here's the complete error detail:

Failed to pull image "image-registry.openshift-image-registry.svc:5000/abc-project/image01@sha256:xxxxxxxxxxxx": 
rpc error: code = Unknown desc = Error reading manifest sha256:xxxxxxxxxxxx in 
image-registry.openshift-image-registry.svc:5000/abc-project/image01: unauthorized: authentication required

The solution is quite easy: we need to grant a specific access right so that “xyz-project” is able to pull images from “abc-project”.

oc policy add-role-to-user system:image-puller system:serviceaccount:xyz-project:default -n abc-project
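
If every service account in “xyz-project” needs to pull from “abc-project” (not just the default one), the role can also be granted to the whole service-account group:

oc policy add-role-to-group system:image-puller system:serviceaccounts:xyz-project -n abc-project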

Hope it helps.

Get ImageStream Name and SHA from All DeploymentConfig within a Namespace on Openshift 4

There are times when we want to list all DeploymentConfigs within one namespace and see which images are involved in each of them. We can do that easily with a simple oc command like the one below:

oc get dc -n <namespace> --no-headers -o template \
     --template='{{range .items}}{{.metadata.namespace}}{{"/"}}{{.metadata.name}}{{" - "}}{{(index .spec.template.spec.containers 0).image}}{{"\n"}}{{end}}'
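
An alternative that some may find easier to read uses jsonpath output instead of a Go template (same fields, same result):

oc get dc -n <namespace> -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} - {.spec.template.spec.containers[0].image}{"\n"}{end}'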

Delete All Pods Within a Specific Project or Namespace in Openshift 4

Sometimes pods get stuck in our namespace and cannot be deleted within a reasonable timeframe, so we need to delete them forcefully; that becomes problematic when we have hundreds of them. Deleting all of them is actually quite easy, all we have to do is run the command below.

oc delete pod -n youropenshiftprojectname --grace-period=0 --force --all
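
The same flags also work for a single stuck pod, if only one of them refuses to terminate (the pod name below is a placeholder):

oc delete pod <pod-name> -n youropenshiftprojectname --grace-period=0 --force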

How to Handle CORS in Red Hat Process Automation Manager

One feature of Red Hat Process Automation Manager (RHPAM) is the ability to provide API endpoints which can be accessed from multiple applications. But when we call them directly from JavaScript within a browser, sometimes we get a CORS error.

The workaround is quite easy: either we add CORS headers to the RHPAM API responses, or we change the call method from a browser call into a server-to-server call. For this article I'll go with the first approach, adding CORS headers to the API that RHPAM provides.

Luckily our RHPAM deployment is on top of Openshift, and adding the key-value parameters below to the RHPAM kie-server's DeploymentConfig is enough to solve the CORS issue.

         - name: FILTERS
           value: "AC_ALLOW_ORIGIN,AC_ALLOW_METHODS,AC_ALLOW_HEADERS,AC_ALLOW_CREDENTIALS,AC_MAX_AGE"
         - name: AC_ALLOW_ORIGIN_FILTER_RESPONSE_HEADER_NAME
           value: "Access-Control-Allow-Origin"
         - name: AC_ALLOW_ORIGIN_FILTER_RESPONSE_HEADER_VALUE
           value: "*"
         - name: AC_ALLOW_METHODS_FILTER_RESPONSE_HEADER_NAME
           value: "Access-Control-Allow-Methods"
         - name: AC_ALLOW_METHODS_FILTER_RESPONSE_HEADER_VALUE
           value: "POST,GET,OPTIONS,PUT"
         - name: AC_ALLOW_HEADERS_FILTER_RESPONSE_HEADER_NAME
           value: "Access-Control-Allow-Headers"
         - name: AC_ALLOW_HEADERS_FILTER_RESPONSE_HEADER_VALUE
           value: "*"
         - name: AC_ALLOW_CREDENTIALS_FILTER_RESPONSE_HEADER_NAME
           value: "Access-Control-Allow-Credentials"
         - name: AC_ALLOW_CREDENTIALS_FILTER_RESPONSE_HEADER_VALUE
           value: "true"
         - name: AC_MAX_AGE_FILTER_RESPONSE_HEADER_NAME
           value: "Access-Control-Max-Age"
         - name: AC_MAX_AGE_FILTER_RESPONSE_HEADER_VALUE
           value: "86400"