Spring and Infinispan Integration on Wildfly Cluster

The Spring framework provides a caching abstraction API that makes it easy to use embeddable caching frameworks like EHCache, Infinispan and Hazelcast in a consistent way. But sometimes it is also necessary to integrate with the underlying application server’s caching framework, especially in a clustered, distributed environment. In this post I will show how a Spring Boot application, deployed on clustered Wildfly application servers (version 10.1.0.Final), can integrate with the distributed Infinispan cache. Below is an illustration:


[Illustration: spring-wildfly-infinispan – Spring Boot application on a Wildfly cluster using the distributed Infinispan cache]

Wildfly already packs the Infinispan caching framework as a subsystem, which applications can leverage to cache shared or private objects. So let’s first create a custom replicated-cache container which we can use to share cached objects across a cluster of Wildfly servers. In order to use a replicated cache, each Wildfly instance needs to be started with the full-ha profile via the standalone-full-ha.xml configuration file. Below is what we need to add to this configuration file in order to create a replicated cache container:

<cache-container name="mycache" aliases="ha-partition" default-cache="default" module="org.wildfly.clustering.server">
    <transport lock-timeout="60000"/>
    <replicated-cache name="default" mode="SYNC">
        <transaction mode="BATCH"/>
    </replicated-cache>
</cache-container>

We can look up this cache container using the following JNDI name:

java:jboss/infinispan/container/mycache

In the Spring Boot application we need to add the following dependency in order to look up and use the Infinispan cache.

<dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-clustering-infinispan-extension</artifactId>
    <version>10.1.0.Final</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.jboss.metadata</groupId>
            <artifactId>jboss-metadata-common</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.jboss.metadata</groupId>
            <artifactId>jboss-metadata-ejb</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.jboss.metadata</groupId>
            <artifactId>jboss-metadata-ear</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.jboss.openjdk-orb</groupId>
            <artifactId>openjdk-orb</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Also, because the dependency is "provided" by the application server, we need to add a Dependencies manifest entry (a Dependencies: header in the war’s MANIFEST.MF, which Wildfly’s module class loader reads) so that we have access to all the required module classes at runtime. In order to achieve this we’ll add the following plugin configuration to the pom file.

<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <archive>
            <manifestEntries>
                <Dependencies>org.infinispan, org.infinispan.commons, org.jboss.as.clustering.infinispan export</Dependencies>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>

Now the application is all set up to look up and use the distributed Infinispan cache. Below is how we can do this:

import org.infinispan.Cache;
import org.jboss.as.clustering.infinispan.DefaultCacheContainer;
import org.springframework.jndi.JndiTemplate;

String cacheContainerJndi = "java:jboss/infinispan/container/mycache";

// Look up the Wildfly-managed cache container from JNDI (may throw javax.naming.NamingException)
DefaultCacheContainer container = (DefaultCacheContainer) new JndiTemplate().lookup(cacheContainerJndi);
Cache<String, String> cache = container.getCache();

cache.put("mykey", "myvalue");
String val = cache.get("mykey");
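
In a Spring Boot application it is usually neater to wrap this lookup in a configuration class and expose the cache as a bean. Below is a minimal sketch of such a configuration, assuming the same JNDI name as above; the class and bean names are illustrative and not part of any Wildfly or Infinispan API:

import javax.naming.NamingException;

import org.infinispan.Cache;
import org.jboss.as.clustering.infinispan.DefaultCacheContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jndi.JndiTemplate;

@Configuration
public class CacheConfig {

    private static final String CACHE_CONTAINER_JNDI = "java:jboss/infinispan/container/mycache";

    // Look up the Wildfly-managed cache container once and expose its default (replicated) cache as a bean
    @Bean
    public Cache<String, String> defaultCache() throws NamingException {
        DefaultCacheContainer container =
                (DefaultCacheContainer) new JndiTemplate().lookup(CACHE_CONTAINER_JNDI);
        return container.getCache();
    }
}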

The complete Spring Boot application is available on GitHub.
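
To give an idea of how such a cache bean might then be consumed in application code, here is a purely illustrative service that injects it; the class and method names are made up for this example:

import org.infinispan.Cache;
import org.springframework.stereotype.Service;

@Service
public class SharedTokenService {

    private final Cache<String, String> cache;

    // The defaultCache bean from the configuration sketch above is injected by type
    public SharedTokenService(Cache<String, String> cache) {
        this.cache = cache;
    }

    public void store(String key, String value) {
        // With a replicated cache in SYNC mode, the entry becomes visible on every node in the cluster
        cache.put(key, value);
    }

    public String retrieve(String key) {
        return cache.get(key);
    }
}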

This is just a simple use case that shows how we can use Wildfly’s distributed cache in applications. In the next blog entry I will demonstrate how this distributed Infinispan cache can be used to integrate CAS Single Sign-On in order to effectively achieve SSO/SLO in a clustered/distributed Wildfly environment.

10 Comments

Hi Kamran,

I followed your instructions and pulled the source code from GitHub to test, but it failed. I got the error below, could you please help?

"WFLYCTL0080: Failed services" => {"jboss.undertow.deployment.default-server.default-host./spring-wildfly-infinispan" => "org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./spring-wildfly-infinispan: java.lang.RuntimeException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultCache' defined in org.kamranzafar.spring.infinispan.Application: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.infinispan.Cache]: Factory method 'defaultCache' threw exception; nested exception is javax.naming.NameNotFoundException: infinispan/container/mycache [Root exception is java.lang.IllegalStateException]
Caused by: java.lang.RuntimeException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultCache' defined in org.kamranzafar.spring.infinispan.Application: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.infinispan.Cache]: Factory method 'defaultCache' threw exception; nested exception is javax.naming.NameNotFoundException: infinispan/container/mycache [Root exception is java.lang.IllegalStateException]
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'defaultCache' defined in org.kamranzafar.spring.infinispan.Application: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.infinispan.Cache]: Factory method 'defaultCache' threw exception; nested exception is javax.naming.NameNotFoundException: infinispan/container/mycache [Root exception is java.lang.IllegalStateException]
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.infinispan.Cache]: Factory method 'defaultCache' threw exception; nested exception is javax.naming.NameNotFoundException: infinispan/container/mycache [Root exception is java.lang.IllegalStateException]
Caused by: javax.naming.NameNotFoundException: infinispan/container/mycache [Root exception is java.lang.IllegalStateException]
Caused by: java.lang.IllegalStateException"},
"WFLYCTL0412: Required services that are not installed:" => ["jboss.undertow.deployment.default-server.default-host./spring-wildfly-infinispan"],
"WFLYCTL0180: Services with missing/unavailable dependencies" => undefined

You need to start Wildfly with the full-ha profile, like:

./standalone.sh --server-config=standalone-full-ha.xml

Thanks Kamran,

Sorry, I missed that information in the previous post.
May I know, if I run in domain cluster mode with the ha profile, what part do I need to change?

Hi Ricky, you can specify the profile in the server-group of your domain config, like below:

<server-group name="main-server-group" profile="full-ha">
    <socket-binding-group ref="full-ha-sockets"/>
    <deployments>
        <deployment name="foo.war_v1" runtime-name="foo.war" />
        <deployment name="bar.ear" runtime-name="bar.ear" />
    </deployments>
</server-group>

Dear Kamran,

It finally works after I changed the profile to full-ha and the socket-binding-group to full-ha-sockets. Thanks a lot for the information you gave. I just wonder what the difference between ha and full-ha is; I will compare the configs in detail later.

AFAIK, full-ha includes JMS (messaging), which ha doesn’t have, according to the JBoss documentation…

Hi,
Excellent article!

I am just curious, does making use of a caching solution such as this solve any issues with the fact that by default Spring beans are singletons? If you have a cluster of 2 Wildfly instances with a Spring webapp deployed (which contains the singletons), and multiple requests from separate users are sent to the cluster for a singleton service, how will the cluster decide which instance (server 1 or 2) will serve the request (round robin?), and how will state be preserved across the two instances of the singleton bean? I know beans should not keep state, but what if this were the case?

Hi, you normally have a load balancer (like Apache, nginx etc.) that routes the requests between the nodes of your cluster based on its configuration. The cluster replicates the HTTP sessions on all the nodes, so any change in the session will be replicated across all nodes. Like you said, it is not a good idea to keep state in singletons outside of the HTTP session, because those state changes will be local and not replicated.

Hi,

I’ve set up two nodes and one master on 3 machines with HTTP mod_cluster. I used a cluster demo application from the internet to test session failover with the full-ha profile. Session replication is not working with the following steps:
machine A node 1 shutdown -> session fails over to machine B node 2 (works), machine A starts (add something to the session) -> machine B node 2 shutdown -> machine A fails to get the session.
Is this normal or am I missing something?

Thanks for sharing, BR.
