Error “No marshaller registered for object of Java type” in Infinispan

I had this error when trying to put a Java bean into Infinispan 15:

java.lang.IllegalArgumentException: No marshaller registered for object of Java type com.edw.model.User : com.edw.model.User@1a8d9c94
	at org.infinispan.protostream.impl.SerializationContextImpl.getMarshallerDelegate(SerializationContextImpl.java:517) ~[protostream-5.0.4.Final.jar:5.0.4.Final]
	at org.infinispan.protostream.WrappedMessage.writeCustomObject(WrappedMessage.java:300) ~[protostream-5.0.4.Final.jar:5.0.4.Final]
	at org.infinispan.protostream.WrappedMessage.writeMessage(WrappedMessage.java:250) ~[protostream-5.0.4.Final.jar:5.0.4.Final]
	at org.infinispan.protostream.WrappedMessage.write(WrappedMessage.java:243) ~[protostream-5.0.4.Final.jar:5.0.4.Final]
	at org.infinispan.protostream.ProtobufUtil.toWrappedByteBuffer(ProtobufUtil.java:152) ~[protostream-5.0.4.Final.jar:5.0.4.Final]
	at org.infinispan.commons.marshall.ImmutableProtoStreamMarshaller.objectToBuffer(ImmutableProtoStreamMarshaller.java:55) ~[infinispan-commons-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.commons.marshall.AbstractMarshaller.objectToByteBuffer(AbstractMarshaller.java:70) ~[infinispan-commons-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.client.hotrod.marshall.MarshallerUtil.obj2bytes(MarshallerUtil.java:117) ~[infinispan-client-hotrod-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.client.hotrod.DataFormat$DataFormatImpl.valueToBytes(DataFormat.java:92) ~[infinispan-client-hotrod-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.client.hotrod.DataFormat.valueToBytes(DataFormat.java:211) ~[infinispan-client-hotrod-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.client.hotrod.impl.RemoteCacheImpl.valueToBytes(RemoteCacheImpl.java:628) ~[infinispan-client-hotrod-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.client.hotrod.impl.RemoteCacheImpl.putAsync(RemoteCacheImpl.java:315) ~[infinispan-client-hotrod-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:196) ~[infinispan-client-hotrod-15.0.7.Final.jar:15.0.7.Final]
	at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:186) ~[infinispan-client-hotrod-15.0.7.Final.jar:15.0.7.Final]

It happens because my Java bean (com.edw.model.User) does not have any marshaller registered. This is what my configuration file looks like:

@Configuration
public class InfinispanConfiguration {
    @Bean
    public RemoteCacheManager remoteCacheManager() {
        return new RemoteCacheManager(
                new org.infinispan.client.hotrod.configuration.ConfigurationBuilder()
                        .addServers("localhost:11222")
                        .security().authentication().username("admin2").password("password")
                        .clientIntelligence(ClientIntelligence.HASH_DISTRIBUTION_AWARE)
                        .marshaller(ProtoStreamMarshaller.class)
                        .build());
    }
}

And everything works well after I register a marshaller for the User bean:

@Configuration
public class InfinispanConfiguration {
    @Bean
    public RemoteCacheManager remoteCacheManager() {
        return new RemoteCacheManager(
                new org.infinispan.client.hotrod.configuration.ConfigurationBuilder()
                        .addServers("localhost:11222")
                        .security().authentication().username("admin2").password("password")
                        .clientIntelligence(ClientIntelligence.HASH_DISTRIBUTION_AWARE)
                        .marshaller(ProtoStreamMarshaller.class)
                        .addContextInitializer(new UserIndexSchemaInitializerImpl())
                        .build());
    }
}

UserIndexSchemaInitializerImpl is generated code produced from an interface that extends the SerializationContextInitializer interface.
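For context, here is a minimal sketch of what the annotated bean and the initializer interface might look like with ProtoStream 5 (which ships with Infinispan 15); the User fields are assumptions for illustration, and the protostream-processor annotation processor must be on the build path so it can generate UserIndexSchemaInitializerImpl from the interface.

// src/main/java/com/edw/model/User.java (hypothetical sketch; field names are assumptions)
package com.edw.model;

import org.infinispan.protostream.annotations.ProtoFactory;
import org.infinispan.protostream.annotations.ProtoField;

public class User {

    private final String name;

    @ProtoFactory
    public User(String name) {
        this.name = name;
    }

    @ProtoField(number = 1)
    public String getName() {
        return name;
    }
}

// src/main/java/com/edw/model/UserIndexSchemaInitializer.java
// The annotation processor generates UserIndexSchemaInitializerImpl from this interface;
// GeneratedSchema extends SerializationContextInitializer.
package com.edw.model;

import org.infinispan.protostream.GeneratedSchema;
import org.infinispan.protostream.annotations.ProtoSchema;

@ProtoSchema(includeClasses = User.class, schemaPackageName = "com.edw.model")
public interface UserIndexSchemaInitializer extends GeneratedSchema {
}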

The full code for this can be found in my GitHub repository:

https://github.com/edwin/spring-boot-2-and-infinispan-15

Deploying Red Hat Data Grid 8 to OpenShift 4 Using an Operator and Setting Up a Persistent Cache

Red Hat Data Grid, or Infinispan in its open source version, is a distributed cache and key-value NoSQL data store developed by Red Hat that can be used as an embedded library or as a standalone server. It can also be deployed easily to a container platform like OpenShift by using an Operator or a Helm chart. In this example, we are deploying Red Hat Data Grid 8.4 to OpenShift by using the Operator.

First, let's start by creating a namespace dedicated to Data Grid:

$ oc new-project datagrid-ns

Then create two new YAML files. The first one is datagrid-operator.yaml:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
 name: datagrid
 namespace: datagrid-ns

and the second one is datagrid-subscription.yaml:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
 name: datagrid-operator
 namespace: datagrid-ns
spec:
 channel: 8.4.x
 installPlanApproval: Manual
 name: datagrid
 source: redhat-operators
 sourceNamespace: openshift-marketplace

Apply them:

$ oc apply -f datagrid-operator.yaml
$ oc apply -f datagrid-subscription.yaml

This generates a new datagrid item on the “Installed Operators” page.

Select “Upgrade” and “Approve” to install the Data Grid Operator.

A successful installation looks like this:

Next is creating a new Infinispan cluster. This is for sample purposes only, so we disable TLS on the endpoint and use very minimal CPU and memory:

kind: Infinispan
apiVersion: infinispan.org/v1
metadata:
  name: datagrid-cluster
  namespace: datagrid-ns
spec:
  replicas: 1
  security:
    endpointEncryption:
      type: None
  container:
    cpu: "500m:100m"
    memory: "1Gi:500Mi"

A successful configuration generates pods like this:

Next is to expose its endpoint so it is accessible externally:

$ oc create route edge --service datagrid-cluster --hostname=datagrid.apps-crc.testing

The result looks like this:

We can log in by using the credentials stored in an OpenShift Secret. After a successful login, we can see the Data Grid dashboard below.

Now let's focus on the Java part. For this, we are using Spring Boot and Java 17, as defined in our pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>spring-boot-with-datagrid-operator</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>17</maven.compiler.source>
        <maven.compiler.target>17</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>

        <version.infinispan>14.0.2.Final</version.infinispan>
        <version.protostream>4.6.2.Final</version.protostream>
        <version.spring.boot3>3.0.4</version.spring.boot3>

        <start-class>com.edw.Main</start-class>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.infinispan</groupId>
                <artifactId>infinispan-bom</artifactId>
                <version>${version.infinispan}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-parent</artifactId>
                <version>${version.spring.boot3}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>


    <dependencies>
        <!-- Spring Boot -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <!-- infinispan -->
        <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-spring-boot-starter-remote</artifactId>
        </dependency>
        <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-jboss-marshalling</artifactId>
        </dependency>

        <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-remote-query-client</artifactId>
        </dependency>
        <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-client-hotrod</artifactId>
        </dependency>
        <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-api</artifactId>
        </dependency>

        <dependency>
            <groupId>org.infinispan</groupId>
            <artifactId>infinispan-spring-boot-starter-embedded</artifactId>
        </dependency>

        <dependency>
            <groupId>javax.transaction</groupId>
            <artifactId>jta</artifactId>
            <version>1.1</version>
        </dependency>

        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <version>${version.spring.boot3}</version>
                <configuration>
                    <layout>JAR</layout>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>repackage</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>

A Java file for the configuration:

package com.edw.configuration;

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.jboss.marshalling.commons.GenericJBossMarshaller;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InfinispanConfiguration {
    @Bean
    public RemoteCacheManager remoteCacheManager() {
        return new RemoteCacheManager(
                new ConfigurationBuilder()
                        .addServers("datagrid-cluster.datagrid-ns.svc.cluster.local:11222")
                        .security().authentication().username("developer").password("password")
                        .clientIntelligence(ClientIntelligence.HASH_DISTRIBUTION_AWARE)
                        .marshaller(new GenericJBossMarshaller())
                        .addJavaSerialWhiteList(".*")
                        .build());
    }
}

And this is perhaps the most important class, where we define our caches. For this example, we create two different caches: one is persistent and the other one is not.

package com.edw.helper;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.configuration.XMLStringConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.HashMap;
import java.util.UUID;

@Service
public class CacheHelper {

    private RemoteCacheManager cacheManager;

    @Autowired
    public CacheHelper (RemoteCacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    public void populate() {
        // build cache without persistence
        RemoteCache cacheWithoutPersistence = cacheManager.administration().getOrCreateCache("cache-without-persistence",
                new XMLStringConfiguration("<distributed-cache name=\"cache-without-persistence\" mode=\"ASYNC\">\n" +
                        "\t<encoding media-type=\"application/x-jboss-marshalling\" />\n" +
                        "</distributed-cache>")
        );
        for (int i = 0; i < 50; i++) {
            cacheWithoutPersistence.put("key"+i, UUID.randomUUID().toString());
        }

        // build cache with persistence
        RemoteCache cacheWithPersistence = cacheManager.administration().getOrCreateCache("cache-with-persistence",
                new XMLStringConfiguration("<distributed-cache name=\"cache-with-persistence\" mode=\"ASYNC\">\n" +
                        "\t<encoding media-type=\"application/x-jboss-marshalling\" />\n" +
                        "\t<persistence passivation=\"false\">\n" +
                        "\t\t<file-store>\n" +
                        "\t\t  <index path=\"/opt/infinispan/server/data\" />\n" +
                        "\t\t  <data path=\"/opt/infinispan/server/data\" />\n" +
                        "\t\t</file-store>\n" +
                        "\t</persistence>\n" +
                        "</distributed-cache>")
        );
        for (int i = 0; i < 50; i++) {
            cacheWithPersistence.put("key"+i, UUID.randomUUID().toString());
        }
    }

    public HashMap getCacheWithoutPersistence() {
        RemoteCache<String, String> cache = cacheManager.getCache("cache-without-persistence");

        HashMap hashMap = new HashMap();
        for (Object key : cache.keySet()) {
            hashMap.put(key, cache.get(key));
        }
        return hashMap;
    }

    public HashMap getCacheWithPersistence() {
        RemoteCache<String, String> cache = cacheManager.getCache("cache-with-persistence");

        HashMap hashMap = new HashMap();
        for (Object key : cache.keySet()) {
            hashMap.put(key, cache.get(key));
        }
        return hashMap;
    }
}

And one controller file:

package com.edw.controller;

import com.edw.helper.CacheHelper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.HashMap;

@RestController
public class IndexController {

    private CacheHelper cacheHelper;

    @Autowired
    public IndexController (CacheHelper cacheHelper) {
        this.cacheHelper = cacheHelper;
    }

    @GetMapping(path = "/cache-without-persistence")
    public HashMap cacheWithoutPersistence() {
        return cacheHelper.getCacheWithoutPersistence();
    }

    @GetMapping(path = "/cache-with-persistence")
    public HashMap cacheWithPersistence() {
        return cacheHelper.getCacheWithPersistence();
    }

    @GetMapping(path = "/populate")
    public HashMap populate() {
        cacheHelper.populate();
        return new HashMap() {{
            put("status", "success");
        }};
    }
}

After that we can build and deploy our Java application to OpenShift.

Trigger the populate endpoint on our Spring Boot application to generate the caches and their contents:

$ curl -kv http://localhost:8080/populate
*   Trying ::1:8080...
* Connected to localhost (::1) port 8080 (#0)
> GET /populate HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.76.1
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 
< Content-Type: application/json
< Transfer-Encoding: chunked
< Date: Sat, 14 Sep 2024 17:22:03 GMT
< 
* Connection #0 to host localhost left intact
{"status":"success"}

We can check the content of each cache store.

Now let's try to delete the Data Grid pod and see whether the cache data is gone or not:

$ oc delete po --grace-period=0 --force datagrid-cluster-0

We can see that the cache-with-persistence cache store still has its data,

while the cache-without-persistence cache store has no data at all.

The source code for this tutorial can be found here:

https://github.com/edwin/spring-boot-with-datagrid-operator

Java and Sonarqube Integration with Maven

Sonarqube is a recommended tool that we can leverage for code quality scanning, and we can integrate it seamlessly with our Java build tool, Maven. Having it integrated makes scanning easier, since we can run the scan before building our application while enforcing a quality gate to prevent low-quality code from going to the next phase.

Integrating Sonarqube into our application is actually quite simple, since a pom.xml configuration is sufficient. We can define all the required settings in the “properties” tag:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edw</groupId>
    <artifactId>java-sonarqube</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>21</maven.compiler.source>
        <maven.compiler.target>21</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>

        <!-- sonar -->
        <sonar.language>java</sonar.language>
        <sonar.java.coveragePlugin>jacoco</sonar.java.coveragePlugin>
        <sonar.dynamicAnalysis>reuseReports</sonar.dynamicAnalysis>
        <sonar.coverage.jacoco.xmlReportPaths>${project.basedir}/target/site/jacoco/jacoco.xml</sonar.coverage.jacoco.xmlReportPaths>
        <sonar.jacoco.reportsPaths>${project.build.directory}/jacoco.exec</sonar.jacoco.reportsPaths>
        <sonar.tests>src/test/java</sonar.tests>
        <sonar.host.url>${SONAR_URL}</sonar.host.url>
        <sonar.projectKey>${SONAR_PROJECT_KEY}</sonar.projectKey>
        <sonar.projectName>${SONAR_PROJECT_NAME}</sonar.projectName>
        <sonar.token>${SONAR_TOKEN}</sonar.token>
        <sonar.scm.disabled>true</sonar.scm.disabled>
    </properties>
    
</project>

and run it with the command below. Note that the jacoco.xml path referenced above assumes the jacoco-maven-plugin produces its XML report during the build.

$ mvn clean verify sonar:sonar \
        -DSONAR_URL=http://xxxxx \
        -DSONAR_PROJECT_KEY=xxxxx \
        -DSONAR_PROJECT_NAME=xxxxx \
        -DSONAR_TOKEN=xxxxx

A much more detailed code sample can be found in the GitHub repository below:

https://github.com/edwin/java-sonarqube

Configuring Spring Boot 3 Behind Nginx Reverse Proxy

Recently I got a unique use case where the application behind Nginx redirects requests to a different port than the one served by Nginx. Based on the image below, we can see that the user accesses port 8080 on Nginx but gets an HTTP 302 redirect to port 8081, which is not exposed by Nginx.

The workaround is quite straightforward: we can set this configuration in Spring Boot's application properties,

server.forward-headers-strategy=FRAMEWORK

while having this configuration in nginx.conf:

        location / {
                proxy_pass http://127.0.0.1:8081;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-Port  $server_port;
                proxy_set_header X-Forwarded-Host  $host;

                proxy_busy_buffers_size 512k;
                proxy_buffers 8 512k;
                proxy_buffer_size 256k;
                proxy_read_timeout 1800;
                proxy_connect_timeout 1800;
                proxy_send_timeout 1800;
                client_max_body_size 50M;
                proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
                proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
                proxy_ssl_ciphers HIGH:!aNULL:!MD5;
                proxy_ssl_verify off;
                proxy_set_header cookie $http_cookie;

                port_in_redirect off;
                absolute_redirect off;
        }
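To see the effect of the property, a hypothetical controller like the sketch below (not part of the linked project; the package name is an assumption) can be used. With server.forward-headers-strategy=FRAMEWORK, Spring registers a ForwardedHeaderFilter, so the Location header of the 302 is built from the X-Forwarded-Host, X-Forwarded-Port, and X-Forwarded-Proto headers set by Nginx above instead of the backend port 8081.

package com.edw.controller;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class RedirectTestController {

    // Returns a 302 redirect; with the FRAMEWORK strategy the Location header
    // reflects the host and port the client used on Nginx, not port 8081.
    @GetMapping("/redirect-test")
    public String redirectTest() {
        return "redirect:/landing";
    }
}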

We can use this Java project for testing,

https://github.com/edwin/spring-3-keycloak

Creating a Java 21 Runtime on top of a UBI Base Image

Just recently I had a request to create a custom image based on UBI, but extendable, meaning custom packages can be installed on it for additional functionality.

For this sample, I'm using the latest UBI 9 version, which is 9.4. We can find our Dockerfile below:

FROM registry.access.redhat.com/ubi9/ubi-minimal:9.4

LABEL BASE_IMAGE="registry.access.redhat.com/ubi9/ubi-minimal:9.4"
LABEL JAVA_VERSION="21"

ENV LANGUAGE='en_US:en'
ENV TZ='Asia/Jakarta'

RUN microdnf install -y --nodocs java-21-openjdk-headless  \
    && microdnf clean all  \
    && echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security

WORKDIR /work/

COPY --chown=185 target/quarkus-app/lib/ /work/lib/
COPY --chown=185 target/quarkus-app/*.jar /work/application.jar
COPY --chown=185 target/quarkus-app/app/ /work/app/
COPY --chown=185 target/quarkus-app/quarkus/ /work/quarkus/

ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -XX:TieredStopAtLevel=1 -noverify -XX:+UseShenandoahGC -XX:+AlwaysPreTouch -XX:+UseNUMA -Xlog:gc*,safepoint=debug:file=/tmp/gc.log.%p:time,uptime:filecount=5,filesize=50M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/"

EXPOSE 8080
USER 185

CMD java $JAVA_OPTS -jar application.jar

Based on the script above, we are using UBI 9, installing the Java 21 runtime using microdnf, and setting a custom JAVA_OPTS variable.
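As a quick sanity check of the resulting runtime, a tiny hypothetical class like the one below (not part of the original repository) can be run inside the image to confirm the Java version and the timezone set by the TZ variable.

public class RuntimeCheck {
    public static void main(String[] args) {
        // Expect a 21.x version from the java-21-openjdk-headless package
        System.out.println("Java runtime : " + Runtime.version());
        // Expect Asia/Jakarta, coming from the TZ environment variable in the Dockerfile
        System.out.println("Timezone     : " + java.util.TimeZone.getDefault().getID());
    }
}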

The code can be found at this URL:

https://github.com/edwin/quarkus-and-java21