Server Configuration Posts

How to Connect to Red Hat AMQ using Artemis and SSL

We can protect our AMQ endpoint with SSL, preventing anyone who does not have the right certificate from connecting to our AMQ server's endpoint.

To do so, first we need to create a simple certificate to be used by our AMQ server with a keytool command,

keytool -genkey -alias artemists -keyalg RSA -sigalg SHA1withRSA -keystore artemis.ts -keysize 2048

It will generate a new file, artemis.ts.

Next, generate a new keystore,

keytool -genkey -v -keystore broker02.ks -alias broker02 -sigalg SHA1withRSA  -keyalg RSA -keysize 2048 -validity 10000

Use the string “password” whenever you are asked for a password while generating those two files.

Then reference those files in our AMQ broker.xml configuration,

<acceptor name="core">
tcp://0.0.0.0:61617?protocols=CORE;sslEnabled=true;
	keyStorePath=D:/tmp/broker02.ks;keyStorePassword=password;
	trustStorePath=D:/tmp/artemis.ts;trustStorePassword=password;
	enabledCipherSuites=SSL_RSA_WITH_RC4_128_SHA,SSL_DH_anon_WITH_3DES_EDE_CBC_SHA,RSA;enabledProtocols=TLSv1,TLSv1.1,TLSv1.2;
	sslProvider=JDK;sniHost=localhost;anycastPrefix=jms.queue.;multicastPrefix=jms.topic;tcpSendBufferSize=1048576;
	tcpReceiveBufferSize=1048576;useEpoll=true;
	amqpCredits=1000;amqpMinCredits=300
</acceptor>

Start our server,

artemis run

We can connect using the Artemis client with the command below,

artemis producer --url tcp://127.0.0.1:61617?sslEnabled=true --message-count 1

The first time you connect, it will show an error on your client’s side,

Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
        at java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
        at java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297)
        at java.base/sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:380)

And on your server’s side,

Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131) [java.base:]
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117) [java.base:]
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308) [java.base:]
        at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:285) [java.base:]
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:181) [java.base:]
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:164) [java.base:]
        at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:685) [java.base:]

This means the client needs to trust your server’s certificate when making requests, and there are multiple ways of doing that. The simplest is putting your server’s certificate into your local client’s trust store.

keytool -exportcert -alias artemists -keystore artemis.ts -file artemis.cer
keytool -exportcert -alias broker02 -keystore broker02.ks -file broker02.cer

keytool -import -alias artemists -file artemis.cer -cacerts
keytool -import -alias broker02 -file broker02.cer -cacerts

Use the password “changeit” (the default cacerts password) while importing your certificates into the client’s cacerts.

When you re-run the Artemis producer, it should give a successful message like this,

\bin>artemis producer --url tcp://127.0.0.1:61617?sslEnabled=true --message-count 1
Producer ActiveMQQueue[TEST], thread=0 Started to calculate elapsed time ...

Producer ActiveMQQueue[TEST], thread=0 Produced: 1 messages
Producer ActiveMQQueue[TEST], thread=0 Elapsed time in second : 0 s
Producer ActiveMQQueue[TEST], thread=0 Elapsed time in milli second : 27 milli seconds
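
For application code there is another option besides importing into cacerts: the Artemis JMS client can take the trust store directly in the connection URL via trustStorePath and trustStorePassword. Below is a minimal sketch, assuming the artemis-jms-client dependency is on the classpath and that broker02.cer has been imported into a client-side trust store named client.ts (a name made up for this example); if you already imported the certificate into cacerts as above, the two trustStore parameters can simply be dropped.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SslProducerExample {

	public static void main(String[] args) throws Exception {
		// Same SSL acceptor as the CLI example; client.ts is a hypothetical
		// trust store containing the exported broker02.cer certificate.
		ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
				"tcp://127.0.0.1:61617?sslEnabled=true"
						+ "&trustStorePath=D:/tmp/client.ts&trustStorePassword=password");

		try (Connection connection = factory.createConnection()) {
			Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
			Queue queue = session.createQueue("TEST");
			MessageProducer producer = session.createProducer(queue);
			producer.send(session.createTextMessage("hello over SSL"));
			System.out.println("Produced 1 message over SSL");
		}
	}
}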

Connecting From Jenkins to Git Repository such as Gitlab, without Plugins

Basically, all you need is your repository URL, username, and password.

First, add your Git username and password to the Jenkins credentials store.

Create a username-and-password credential and give it the ID “devops”.

Use this Jenkins pipeline script to execute Git commands,

node('maven') {
	stage('Clone') {
		// skip SSL verification, useful for self-signed Git server certificates
		sh "git config --global http.sslVerify false"
		// clone the branch using the "devops" credentials created above
		git branch:"branch-01", url:"https://git.mygit.id/git/something/project.git", credentialsId:'devops'
	}
}

Reading Original IP on Keycloak when Installed Behind a Reverse Proxy

Keycloak, or Red Hat Single Sign-On, can capture the IP address of every request that connects to it. But there are scenarios where Keycloak sits behind a reverse proxy, and in that case Keycloak captures the reverse proxy’s IP instead of the original requester’s IP.

The workaround is actually quite simple, although the exact XML file differs depending on your server: add the proxy-address-forwarding attributes below to the default-server tag.

<server name="default-server">
	<http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"
		proxy-address-forwarding="true" />
	<https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"
		proxy-address-forwarding="true" />
	<host name="default-host" alias="localhost">
		<location name="/" handler="welcome-content"/>
		<http-invoker security-realm="ApplicationRealm"/>
	</host>
</server>
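
With proxy-address-forwarding enabled, Undertow rewrites the peer address from the X-Forwarded-For header sent by the reverse proxy, so anything that reads the request’s remote address sees the original client IP; this of course only works if the proxy actually sets that header. A small sketch of the effect, using a hypothetical servlet just for illustration:

import java.io.IOException;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet, only here to show what the listener setting changes.
@WebServlet("/whoami")
public class ClientIpServlet extends HttpServlet {

	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
		// With proxy-address-forwarding="true", getRemoteAddr() returns the
		// original client IP taken from X-Forwarded-For, not the proxy's IP.
		resp.getWriter().println("Remote address : " + req.getRemoteAddr());
		// The raw header is still available if you want to log the full proxy chain.
		resp.getWriter().println("X-Forwarded-For: " + req.getHeader("X-Forwarded-For"));
	}
}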

Starting JBoss EAP or Wildfly with a Specific XML Configuration

We can run EAP or WildFly with a specific XML configuration, not just the default one. For example, say we have a new configuration named standalone-full-ha_1.xml and want to run EAP based on it. First, put the corresponding XML in the directory below,

%WILDFLY_DIRECTORY%\standalone\configuration

And run using the command below,

standalone.bat -c standalone-full-ha_1.xml
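
For reference, the Linux equivalent uses the standalone.sh script from the same bin directory,

./standalone.sh -c standalone-full-ha_1.xml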



Monitoring Kafka Topics with Dockerized Kafka Manager

Yesterday, Dimas (one of my colleagues) asked me how to monitor Kafka running on top of OpenShift using a tool accessible through the browser.

One of the tools I recommend is Kafka Manager, which we can download from the URL below,

https://github.com/yahoo/kafka-manager

Let’s start from the beginning: starting Zookeeper and the Kafka server, simulating a simple produce and consume, and monitoring it with Kafka Manager.

First, download Kafka from the Apache site, extract it, and open the bin folder. Zookeeper needs to start before anything else. FYI, for this example I am using Windows 10 as my primary operating system, so the commands below may differ depending on which operating system you are using.

cd D:\software\kafka_2.13-2.4.0\bin\windows
zookeeper-server-start.bat ..\..\config\zookeeper.properties

And run the Kafka server afterwards,

kafka-server-start.bat ..\..\config\server.properties

Create a topic,

kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic my-testing-topic
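
To double-check that the topic was created, it can be described with the same tool,

kafka-topics.bat --describe --bootstrap-server localhost:9092 --topic my-testing-topic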

Try producing a simple message using the Kafka console producer,

kafka-console-producer.bat --broker-list localhost:9092 --topic my-testing-topic
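
The same produce can be done from code with the plain Kafka clients library. A minimal sketch, assuming the org.apache.kafka:kafka-clients dependency is available:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TestingTopicProducer {

	public static void main(String[] args) {
		Properties props = new Properties();
		props.put("bootstrap.servers", "localhost:9092");
		props.put("key.serializer", StringSerializer.class.getName());
		props.put("value.serializer", StringSerializer.class.getName());

		try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
			// send one message to the topic created above
			producer.send(new ProducerRecord<>("my-testing-topic", "hello from java"));
		}
	}
}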

And listen for the sent message using the Kafka console consumer,

kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic my-testing-topic --from-beginning

If you only want to get new messages, ignoring old ones, just remove the “--from-beginning” parameter. Use the “--offset” parameter (together with “--partition”) to read from a specific offset.
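
The consuming side can also be sketched in code; the example below mirrors the console consumer, with auto.offset.reset=earliest playing the role of --from-beginning for a group that has no committed offsets yet (again assuming the kafka-clients dependency):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TestingTopicConsumer {

	public static void main(String[] args) {
		Properties props = new Properties();
		props.put("bootstrap.servers", "localhost:9092");
		props.put("group.id", "my-testing-group");
		// same effect as --from-beginning on the console consumer
		props.put("auto.offset.reset", "earliest");
		props.put("key.deserializer", StringDeserializer.class.getName());
		props.put("value.deserializer", StringDeserializer.class.getName());

		try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
			consumer.subscribe(Collections.singletonList("my-testing-topic"));
			while (true) {
				ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
				for (ConsumerRecord<String, String> record : records) {
					System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
				}
			}
		}
	}
}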

Next is running Kafka Manager using a Docker command. FYI, 192.168.1.20 is my laptop’s IP.

docker run --network host --add-host=moby:192.168.1.20 --add-host DESKTOP:192.168.1.20 -p 9000:9000 -e ZK_HOSTS="192.168.1.20:2181"  kafkamanager/kafka-manager

After Kafka Manager has started successfully, we can browse our Kafka cluster by opening port 9000 (the port mapped in the Docker command above) through the browser.
