Java Posts

Deploying a Spring Boot Application to OpenShift with Java 11 S2I and a Jenkins Pipeline

OpenShift provides a base Java S2I image that we can use to run our Java application. Unfortunately, the default Java base image still uses Java 8. In this example, I am doing a simple deployment to OpenShift, but with Java 11 as the base image instead of Java 8.

First we need to import a Java 11 S2I image from the Red Hat registry; it will become the base image that runs our executable Java app.

oc import-image my-project/openjdk-11-rhel7 --from=registry.redhat.io/openjdk/openjdk-11-rhel7 --confirm
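
We can verify that the image stream was imported into the project (my-project here, matching the command above),

oc get is -n my-project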

Next, we create a very simple pipeline script to build and deploy our app using a Jenkins Pipeline.

node('maven') {
	stage('Clone') {
		// disable TLS verification so the clone works with self-signed certificates
		sh "git config --global http.sslVerify false"
		sh "git clone https://github.com/edwin/hello-world.git"
	}
	stage('Build') {
		sh "mvn -v"
		sh "mvn clean package -f hello-world/pom.xml"

		// locate the built artifact (jar or war) and copy it to a fixed name for the binary build
		def jarFile = sh(returnStdout: true, script: 'find hello-world/target -maxdepth 1 -regextype posix-extended -regex ".+\\.(jar|war)\$" | head -n 1').trim()
		sh "cp ${jarFile} app.jar"
	}
	stage('Deploy') {
		// '|| true' keeps re-runs idempotent; these commands fail once the objects already exist
		sh "oc new-build --name hello-world --binary -n my-project --image-stream=my-project/openjdk-11-rhel7  || true"
		sh "oc start-build hello-world --from-file=app.jar -n my-project --follow --wait"
		sh "oc new-app hello-world || true"
		sh "oc expose svc/hello-world || true"
	}
}

Creating a Jenkins Slave Image with Maven 3.6, Java 11 and Skopeo

OpenShift has a default Maven Jenkins slave image, but unfortunately it is built on top of Java 8. And on the project I am currently working on, I need a custom Jenkins slave with Java 11 and the ability to move images between image registries. Therefore I created a custom Dockerfile containing Skopeo, Maven 3.6.3 and Java 11. Below is the Dockerfile I created,

FROM openshift/jenkins-slave-base-centos7:v3.11

MAINTAINER Muhammad Edwin < edwin at redhat dot com >


ENV MAVEN_VERSION=3.6.3 \
    PATH=$PATH:/opt/maven/bin

# install skopeo
RUN yum install skopeo -y && yum clean all

# install java
RUN curl -L --output /tmp/jdk.tar.gz https://download.java.net/java/GA/jdk11/9/GPL/openjdk-11.0.2_linux-x64_bin.tar.gz && \
	tar zxf /tmp/jdk.tar.gz -C /usr/lib/jvm && \
	rm /tmp/jdk.tar.gz && \
	update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk-11.0.2/bin/java 20000 --family java-1.11-openjdk.x86_64 && \
	update-alternatives --set java /usr/lib/jvm/jdk-11.0.2/bin/java
	
# Install Maven
RUN curl -L --output /tmp/apache-maven-bin.zip  https://www-eu.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.zip && \
    unzip -q /tmp/apache-maven-bin.zip -d /opt && \
    ln -s /opt/apache-maven-${MAVEN_VERSION} /opt/maven && \
    rm /tmp/apache-maven-bin.zip && \
    mkdir -p $HOME/.m2

RUN chown -R 1001:0 $HOME && chmod -R g+rw $HOME

COPY run-jnlp-client /usr/local/bin/

USER 1001

Build it by using this command,

docker build -t jenkins-slave-skopeo-jdk11-new -f skopeo-jdk11.dockerfile .
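
Then tag and push the image to a registry that OpenShift can reach; here I use my Docker Hub namespace, which the import command below pulls from,

docker tag jenkins-slave-skopeo-jdk11-new docker.io/edwinkun/jenkins-slave-skopeo-jdk11-new
docker push docker.io/edwinkun/jenkins-slave-skopeo-jdk11-new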

And import the image into OpenShift,

oc import-image docker.io/edwinkun/jenkins-slave-skopeo-jdk11-new --confirm

Register the image on Jenkins as a new slave pod template with the label maven.

And try it with a simple pipeline,

node('maven') {
	stage('Clone') {
		sh "git config --global http.sslVerify false"
		sh "git clone https://github.com/edwin/hello-world.git"
	}
	stage('Build') {
		// mvn -v prints the Maven and Java versions, confirming the slave runs Java 11
		sh "mvn -v"
		sh "mvn clean package -f hello-world/pom.xml"
	}
}

This is the result,

Detailed code can be found on my GitHub page, https://github.com/edwin/jenkins-slave-maven-jdk11-skopeo


Starting JBoss EAP or WildFly with a Specific XML Configuration

We can run EAP or WildFly with a specific XML configuration, not just the default one. For example, say we have a new configuration named standalone-full-ha_1.xml and want to run EAP with it. First we need to put the corresponding XML in the directory below,

%WILDFLY_DIRECTORY%\standalone\configuration

And run it using the command below,

standalone.bat -c standalone-full-ha_1.xml
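
Or, on Linux, the equivalent with the shell launcher,

./standalone.sh -c standalone-full-ha_1.xml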



Does JBoss EAP's "org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker" Do a "select 1 from dual"?

TL;DR
No, it is NOT.

Longer version:
JBoss EAP and WildFly have their own internal validator that checks whether a connection to an Oracle database is still active before handing it out; it resides in the class below.

org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker

Why am I interested in the OracleValidConnectionChecker class? Because several days ago I got a critical question about it from Habiburrokhman Sjarbini, my fellow Red Hat Platform Consultant: will the OracleValidConnectionChecker embedded in JBoss EAP do a "select 1 from dual" query?

I had to decompile quite a few classes to see what is running under the hood.

Here is what OracleValidConnectionChecker looks like,
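
A minimal sketch of the decompiled class, reconstructed from memory rather than copied from the IronJacamar source; the details differ, but the mechanism is the point: it contains no SQL query at all, and instead reflectively calls pingDatabase() on the Oracle connection.

import java.io.Serializable;
import java.lang.reflect.Method;
import java.sql.Connection;
import java.sql.SQLException;
import org.jboss.jca.adapters.jdbc.ValidConnectionChecker;

public class OracleValidConnectionChecker implements ValidConnectionChecker, Serializable {

	public SQLException isValidConnection(Connection c) {
		try {
			// reflectively invoke oracle.jdbc.OracleConnection#pingDatabase()
			Method ping = c.getClass().getMethod("pingDatabase");
			ping.setAccessible(true);
			Integer status = (Integer) ping.invoke(c);
			// pingDatabase returns 0 when the database answered the ping
			if (status == null || status.intValue() != 0) {
				return new SQLException("pingDatabase failed, status=" + status);
			}
		} catch (Exception e) {
			return new SQLException("pingDatabase failed: " + e.getMessage());
		}
		// returning null tells the pool that the connection is valid
		return null;
	}
}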

Against an Oracle database, it calls "oracle.jdbc.driver.OracleConnection" and invokes the "pingDatabase" method. Let's check what the "pingDatabase" method contains,

OracleConnection calls its superclass, OracleConnectionWrapper, and invokes its pingDatabase method. The call bounces here and there until it finally reaches the doPingDatabase method of the oracle.jdbc.driver.PhysicalConnection class.

And as you can see, it runs a "SELECT 'x' FROM DUAL" query, not "select 1 from dual". :)
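
In essence, the method boils down to something like this; a rough approximation of the decompiled output, not the Oracle driver source verbatim:

// approximation of oracle.jdbc.driver.PhysicalConnection#doPingDatabase
private void doPingDatabase() throws SQLException {
	Statement stmt = createStatement();
	try {
		// the validation query the driver actually runs
		stmt.executeQuery("SELECT 'x' FROM DUAL");
	} finally {
		stmt.close();
	}
}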

Jar versions:

ojdbc8.jar
ironjacamar-jdbc-1.4.11.Final.jar

Removing Bucket Name as Subdomain when Using AmazonS3Client Android Library

In a simple Java desktop app, uploading an image to an S3-compatible API service is very simple. Each bucket is represented as a folder in the URL path, so no special approach is needed. Here is how it is done,

import java.io.File;
import org.junit.Test;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.AnonymousAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class AWSTestCase {
	private final AmazonS3 client;

	public AWSTestCase() {
		AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://localhost:8082/", "us-west-2");
		client = AmazonS3ClientBuilder.standard()
				// path-style access keeps the bucket in the URL path instead of the subdomain
				.withPathStyleAccessEnabled(true)
				.withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
				.withEndpointConfiguration(endpoint)
				.withClientConfiguration(new ClientConfiguration().withProtocol(Protocol.HTTP))
				.build();
	}

	@Test
	public void testUploadToBucket() {
		client.putObject("bucket01", "jim.png", new File("d:\\jim.png"));
	}
}

It will upload to the URL below,

http://localhost:8082/bucket01

But the same approach does not work on Android,

File file = new File(Environment.getExternalStorageDirectory(), attachmentModel.getLocalPath());

AWSCredentials credentials = new BasicAWSCredentials(ConstantCommon.ACCESS_KEY, ConstantCommon.SECRET_KEY);
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);

// the Android client defaults to virtual-hosted-style addressing,
// which moves the bucket name into the hostname
AmazonS3Client client = new AmazonS3Client(credentials, clientConfig);
client.setEndpoint("http://localhost:8082/");
client.putObject("bucket01", attachmentModel.getRemotePath(), file);

It gives this error,

com.amazonaws.AmazonClientException: Unable to execute HTTP request: bucket01.localhost

The app tries to upload to the URL below, because the bucket name is treated as a subdomain,

http://bucket01.localhost:8082/

The workaround is quite easy: set the bucket name to an empty string and append the bucket name to the endpoint as if it were a folder.

public AWSTestCase() {
	client = new AmazonS3Client(new AnonymousAWSCredentials());
	// the bucket name becomes part of the endpoint path...
	client.setEndpoint("http://localhost:8082/bucket01");
}

@Test
public void testUploadToBucket() {
	// ...and the bucket parameter stays empty, so nothing is turned into a subdomain
	client.putObject("", "jim.png", new File("d:\\jim.png"));
}
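
Alternatively, forcing path-style access should achieve the same thing without the empty bucket name trick. This is an unverified sketch: it assumes the SDK version below still ships com.amazonaws.services.s3.S3ClientOptions with a setPathStyleAccess option, as the desktop SDK does.

// assumption: S3ClientOptions with setPathStyleAccess exists in this Android SDK version;
// path-style access keeps the bucket in the URL path instead of the subdomain
AmazonS3Client client = new AmazonS3Client(credentials, clientConfig);
client.setEndpoint("http://localhost:8082/");
client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
client.putObject("bucket01", attachmentModel.getRemotePath(), file);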

The content of my Gradle file,

    implementation 'com.amazonaws:aws-android-sdk-s3:2.9.2'
    implementation 'com.amazonaws:aws-android-sdk-core:2.16.5'