Securing Hazelcast (tcp) traffic with Stunnel

Hazelcast is a distributed in-memory data grid, which allows data to be shared evenly among nodes in clustered environments. The open source version of Hazelcast does not support encryption at the transport or even at the cache level. So in order to secure traffic in a Hazelcast cluster, we would need to extend it with code changes, which may not always be an option.

There is however another way to secure the transport by using stunnel. As the official documentation states, “Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs’ code”. This sounds like a perfect option in our case, and allows us to decouple transport encryption from our Hazelcast applications.

Continue reading at Medium…

Docker and iptables firewall

Docker works perfectly fine when no firewall is running on the host machine. Without a firewall, Docker containers can communicate with each other and with the outside world. But with a firewall we need to set up some rules in order to allow traffic to and from the Docker interface. Below is how we can configure the firewall for Docker on a CentOS server.

First, let us find the Docker interface and its IP address using the ifconfig command:
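On a default install the bridge interface is typically named docker0; the interface name and addresses will of course vary per host:

```shell
# Show the Docker bridge interface and its IP (commonly docker0 / 172.17.0.1)
ifconfig docker0
```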

Here the interface name is docker0. Now we can set up firewall rules using the iptables command:
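A minimal sketch of such rules, assuming the bridge is docker0, the external interface is eth0 and the default Docker subnet 172.17.0.0/16 (adjust all three to your setup):

```shell
# Allow container traffic out to the external interface, and replies back in
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow containers to talk to each other over the bridge
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT

# Masquerade container traffic leaving the host
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```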

With these rules in place, Docker containers can now talk to each other and to the outside world.

If you use CSF (ConfigServer Firewall), a custom chain with these rules can be added to a file, like below:
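For example, the rules could go into CSF's csfpost.sh hook script, which CSF runs after loading its own rules (the path and interface names below are assumptions; adjust to your setup):

```shell
# /etc/csf/csfpost.sh -- assumed CSF post-rules hook location
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```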

And then reload the firewall rules using the command below:
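With CSF this is done with:

```shell
csf -r
```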

Atomic Updates on MongoDB with Spring Data

MongoDB provides operators for atomic updates in addition to the normal insert and modify functionality. These operators include $inc, $set and $unset, and are particularly useful for keeping counters and for thread-safe field updates, which helps resolve data-related concurrency issues in web and multi-threaded applications. If persistence is required and raw performance is not critical, these operators can even replace single-threaded caching solutions like Redis by providing persistent atomic operations. Use cases include real-time web analytics, leaderboards, rate limiting, etc.

Spring Data provides simple constructs to easily perform these atomic operations on Mongo documents. Below is a sample document; let's update it atomically.
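A hypothetical page-statistics document, for illustration (the field names are ones we'll update below):

```json
{
    "_id": "home-page",
    "hits": 10,
    "inactive": true
}
```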

Below is the Spring Data code snippet for querying and updating documents. The $inc operator increments or decrements a field value, $set updates a field value or adds the field to the document if it doesn't exist, and $unset removes a field from the document.
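A minimal sketch using MongoTemplate; the collection name, field names and service class are illustrative:

```java
import java.util.Date;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class PageStatsService {

    private final MongoTemplate mongoTemplate;

    public PageStatsService(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public void recordHit(String pageId) {
        Query query = new Query(Criteria.where("_id").is(pageId));
        Update update = new Update()
                .inc("hits", 1)                // $inc: atomic counter increment
                .set("lastAccess", new Date()) // $set: update (or add) the field
                .unset("inactive");            // $unset: remove the field
        // The query + update run as a single atomic operation on the server
        mongoTemplate.updateFirst(query, update, "pages");
    }
}
```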

When the above update operation runs, the document will have "hits" incremented by 1, "lastAccess" set to the current date and the "inactive" field removed, all of it thread-safe and atomic. The resulting document will look something like below:
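With the illustrative field names used in this post, the result would look something like:

```json
{
    "_id": "home-page",
    "hits": 11,
    "lastAccess": "2016-05-01T10:00:00Z"
}
```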


CAS and distributed Infinispan integration

CAS is a centralised authentication server that makes it very easy to implement single sign-on (SSO) in Java applications, and is a very good OAuth alternative. For distributed/clustered environments CAS also offers integration with a number of caching technologies in order to support single logout (SLO). Recently we had an issue on clustered Wildfly J2EE servers where we had to support SLO in our distributed Spring Boot based applications. Infinispan is the default distributed caching technology used by Wildfly, and unfortunately we didn't find any integration support provided by CAS for Infinispan. So I wrote an integration library for CAS and Infinispan, which is available on GitHub under the Apache License. This library makes it possible to achieve SLO in distributed J2EE applications on Wildfly.

In order to integrate CAS on the client side, add the following dependency to the application's POM file:
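The Maven coordinates below are illustrative placeholders; check the project's GitHub page for the real group, artifact and version:

```xml
<!-- Illustrative coordinates; see the CAS-Infinispan client's GitHub page for the actual ones -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>cas-infinispan-client</artifactId>
    <version>1.0.0</version>
</dependency>
```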

We then look up the distributed Infinispan cache using JNDI and use it as storage for CAS tickets. Please see my previous post for more information on Infinispan and Spring Boot integration.
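A sketch of the lookup; the JNDI name and cache name are assumptions and must match the cache-container configured on your Wildfly servers:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.infinispan.Cache;
import org.infinispan.manager.CacheContainer;

public class TicketCacheLookup {

    public Cache<String, Object> lookupTicketCache() throws NamingException {
        // JNDI name is illustrative; match it to your standalone-full-ha.xml setup
        CacheContainer container = (CacheContainer)
                new InitialContext().lookup("java:jboss/infinispan/container/cas");
        return container.getCache("tickets"); // cache name is illustrative
    }
}
```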

With this, CAS will use the distributed Infinispan cache on Wildfly to store the authentication tickets. The authentication tickets are deleted on user logout, so when the user logs out from one of the applications deployed on Wildfly, he/she is logged out from all the clustered applications; hence achieving single logout (SLO).

For more information please see the CAS-Infinispan integration client on Github.

Spring and Infinispan Integration on Wildfly Cluster

The Spring framework provides a caching abstraction API that makes it very easy and consistent to use many embeddable caching frameworks like EHCache, Infinispan, Hazelcast etc. But sometimes it is also necessary to integrate with the underlying application server's caching framework, especially in a clustered, distributed environment. In this post I will show how a Spring Boot application, deployed on clustered Wildfly application servers (version 10.1.0.Final), can integrate with the distributed Infinispan cache. Below is an illustration:


Wildfly already packs the Infinispan caching framework as a subsystem, which applications can leverage in order to cache shared/private objects. So let's first create a custom replicated-cache container which we can use to share cached objects across a cluster of Wildfly servers. In order to use a replicated cache, each instance of Wildfly needs to be started with the full-ha profile via the standalone-full-ha.xml configuration file. Below is what we need to add in this configuration file in order to create a replicated cache container:
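A sketch of the cache-container, added under the Infinispan subsystem in standalone-full-ha.xml; the container and cache names are illustrative, and the exact schema varies slightly between Wildfly versions:

```xml
<cache-container name="mycache" default-cache="default"
                 jndi-name="java:jboss/infinispan/container/mycache">
    <transport lock-timeout="60000"/>
    <replicated-cache name="default" mode="SYNC">
        <transaction mode="BATCH"/>
    </replicated-cache>
</cache-container>
```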

We can look up this cache container using the following JNDI name:
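Assuming the container was registered with an explicit jndi-name (mycache is illustrative):

```
java:jboss/infinispan/container/mycache
```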

In the Spring Boot application we need the following dependency added in order to look up and use the Infinispan cache.
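Wildfly 10.1.0.Final bundles Infinispan 8.x, so a matching provided-scope dependency looks like this (pin the version to what your Wildfly actually ships):

```xml
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>8.2.4.Final</version>
    <scope>provided</scope>
</dependency>
```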

Also, because the dependency is "provided" by the application server, we need to add manifest entries so that we have access to all the required classes at runtime. In order to achieve this we'll add the following plugin to the POM file.
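For a WAR deployment this can be sketched with the maven-war-plugin, adding a JBoss-style Dependencies manifest entry that pulls in the server's Infinispan module:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <archive>
            <manifestEntries>
                <!-- Makes Wildfly's Infinispan module visible to the application -->
                <Dependencies>org.infinispan export</Dependencies>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>
```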

Now the application is all set up to look up and use the distributed Infinispan cache; below is how we can do this:
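A sketch of the lookup in a Spring configuration class, reusing the illustrative JNDI name from this post:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.infinispan.Cache;
import org.infinispan.manager.CacheContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CacheConfig {

    @Bean
    public Cache<String, String> distributedCache() throws NamingException {
        // JNDI name is illustrative; match it to your cache-container configuration
        CacheContainer container = (CacheContainer)
                new InitialContext().lookup("java:jboss/infinispan/container/mycache");
        return container.getCache(); // the container's default replicated cache
    }
}
```

Once injected, the cache behaves like a ConcurrentMap, e.g. cache.put("key", "value"), and the entry is replicated across the cluster.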

The complete Spring Boot application is available on Github.

This is just a simple use case that shows how we can use Wildfly's distributed cache in applications. In the next blog entry I will demonstrate how this distributed Infinispan cache can be used to integrate CAS Single Sign On in order to effectively achieve SSO/SLO in a clustered/distributed Wildfly environment.

Spring Boot and WordPress CMS Integration

There are a lot of excellent open source content management systems available today, WordPress being one of the most widely used. WordPress is a PHP-based CMS, so it offers little cohesion with other technologies like Java, and is often ignored as a viable content management option simply for being non-Java.

In this post I will demonstrate how WordPress can be integrated with Spring Boot applications, made possible with the WordPress REST plugin and a Spring client. Both of these are open source and available for free. Below is a simple design diagram that explains how this integration works. This post assumes that the reader has prior knowledge of WordPress and Spring Boot.

First install and activate the WP-API v2 plugin on the WordPress site. After activation, you can verify that the plugin is working by accessing http://yoursite/wp-json/wp/v2/posts, which should display all the posts in JSON format.

In the Spring Boot application, add the following dependency in order to include the Spring WP-API client.
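The coordinates below are illustrative placeholders; check the Spring WP-API client's GitHub page for the real ones:

```xml
<!-- Illustrative coordinates; see the Spring WP-API client's GitHub page for the actual ones -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>spring-wpapi-client</artifactId>
    <version>1.0.0</version>
</dependency>
```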

Below is a sample configuration we need in the configuration file, so that we can connect our Spring Boot application to our WordPress site.
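For example, in application.properties; the property names here are hypothetical, so consult the client's documentation for the real keys:

```properties
# Hypothetical property names -- check the Spring WP-API client's documentation
wpapi.baseUrl=http://yoursite/wp-json/wp/v2
wpapi.username=admin
wpapi.password=secret
```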

The Spring WP-API client needs a RestTemplate, so let's create a bean in our Application class and autowire the Spring client. We also need to add a component scan annotation so that Spring finds and autowires all the required dependencies.
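A sketch of the application class; the component-scan packages are illustrative:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@ComponentScan({"com.example.myapp", "com.example.wpapi"}) // illustrative packages
public class Application {

    // RestTemplate used by the WP-API client for its REST calls
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```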

Now our Spring Boot application is all set up to connect to our WordPress site in order to retrieve and create content. So let's try to look up all the posts containing the keyword "Spring"; below is how we can do this.
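A sketch of the lookup; the client API shown here (wpApiClient, searchPosts, Post) is hypothetical, so check the client's documentation for the real method and type names:

```java
// Hypothetical client API -- method and type names are illustrative
List<Post> posts = wpApiClient.searchPosts("Spring");
for (Post post : posts) {
    System.out.println(post.getTitle());
}
```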

The complete test application is available with the Spring WP-API client on Github.

This is just a small example of how we can integrate Spring Boot and WordPress. We can take this further by properly integrating WordPress in the Spring web applications platform, e.g. by enabling Single Sign On using the CAS plugin.

And we can evolve this design even further by having a second-level content storage, like a database or an in-memory caching solution etc. This storage can serve as a cache and can also be used for additional configuration, so that we display relevant content faster in our Spring applications.

For more information please see the documentation of the WordPress REST plugin and Spring WP-API client.

Using Standalone ActiveMQ Broker with Wildfly

Wildfly is an open source J2EE container, which comes with an embedded HornetQ JMS broker enabled by default in its full configuration. The embedded HornetQ works great out of the box, but sometimes it can be handy to use an existing JMS broker, like ActiveMQ. ActiveMQ includes a JCA resource adaptor that can be used to integrate it with J2EE containers. Below we will see how to integrate a standalone ActiveMQ 5.13.2 with a Wildfly 9.0.2 server.

The first step is to download, tweak and deploy the ActiveMQ Resource Adaptor (RA) on Wildfly. The ActiveMQ RA binaries are available on Apache's site for download. So let's download and unpack the adaptor.
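For example (the download URL follows Apache's Maven repository layout and is an assumption; adjust the version as needed):

```shell
wget https://repository.apache.org/content/repositories/releases/org/apache/activemq/activemq-rar/5.13.2/activemq-rar-5.13.2.rar

# The .rar is a zip archive, so unpack it for editing
mkdir activemq-rar
cd activemq-rar
unzip ../activemq-rar-5.13.2.rar
```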

Now, because we don’t need to run ActiveMQ embedded in Wildfly, we’ll comment out the necessary configuration. This can be done by editing the activemq-rar/META-INF/ra.xml, and removing the config-property with the following comment:
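The property in question is BrokerXmlConfig, which tells the adaptor to start an embedded broker; a sketch of how it looks once commented out (the property value shown is illustrative):

```xml
<!-- Commented out so the adaptor connects to the standalone broker
     instead of starting an embedded one
<config-property>
    <config-property-name>BrokerXmlConfig</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>xbean:broker-config.xml</config-property-value>
</config-property>
-->
```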

After this we’ll pack the adaptor and deploy it on Wildfly.

We can now configure this Resource Adaptor on Wildfly in order to create and bind an ActiveMQ connection factory in JNDI. Below is a sample configuration that we need in Wildfly's standalone.xml configuration file (or the server's current domain configuration file, depending on the setup). The resource adaptor configuration reference is available on Apache's site.
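A sketch of the resource-adapter entry; the broker URL and the subsystem namespace version are assumptions that should be matched to your broker and Wildfly release:

```xml
<subsystem xmlns="urn:jboss:domain:resource-adapters:2.0">
    <resource-adapters>
        <resource-adapter id="activemq-rar-5.13.2.rar">
            <archive>activemq-rar-5.13.2.rar</archive>
            <transaction-support>XATransaction</transaction-support>
            <config-property name="ServerUrl">tcp://localhost:61616</config-property>
            <connection-definitions>
                <connection-definition
                        class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory"
                        jndi-name="java:/ConnectionFactory"
                        enabled="true" pool-name="ConnectionFactory"/>
            </connection-definitions>
        </resource-adapter>
    </resource-adapters>
</subsystem>
```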

After this, in our applications deployed on Wildfly, we can simply lookup the ActiveMQConnectionFactory from JNDI (java:/ConnectionFactory) and start sending and receiving messages.

Protect your identity from SSH servers

Recently I came across a tweet that highlighted a serious privacy issue involving SSH servers and online services that publish user information in the public domain without explicitly asking the users' permission.

In this tweet the author talks about running an SSH server that matches the incoming connection's public key against the public keys of GitHub users, which for some reason GitHub keeps in the public domain (totally not cool!).

So I decided to give it a go and below is what I found.


My public keys had already been captured by this server from GitHub, and so it knew who I was. Which kind of creeped me out. But the feeling was only momentary, because I knew what I had to do to protect myself.

The answer lies in the way the public key is shared. A public key is like a signature of the user. My mistake was that I was using my GitHub public/private key pair as my default SSH key. So now it was time to change this.

SSH allows you to configure keys per host, thereby limiting the use of the SSH keys to only the specified hosts. This SSH key to host mapping can be configured in the ~/.ssh/config file.

So the solution is to back up your existing SSH key and configure the SSH client to use it only for the right service.
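For example, assuming the key pair lives at the default ~/.ssh/id_rsa path and is used for GitHub:

```shell
# Keep the existing key pair, but under a service-specific name
mv ~/.ssh/id_rsa ~/.ssh/github_rsa
mv ~/.ssh/id_rsa.pub ~/.ssh/github_rsa.pub

# Tell SSH to present this key only to github.com
cat >> ~/.ssh/config <<'EOF'
Host github.com
    IdentityFile ~/.ssh/github_rsa
    IdentitiesOnly yes
EOF
```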

Then generate a new default public/private key pair.
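This is done with ssh-keygen, accepting the default ~/.ssh/id_rsa location when prompted:

```shell
ssh-keygen -t rsa -b 4096
```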

Now when I connect to Filippo’s dodgy SSH server, it doesn’t know who I am 🙂


Note: Another thing to be aware of: if you are using ssh-agent to manage your keys, it will offer all of your public keys to any SSH server you connect to. It is better to avoid using ssh-agent, and instead configure host-specific keys in the SSH config file for identity protection.

Real-time Web Cam Surveillance with Raspberry-PI

Raspberry Pi is a great little tool for building small projects, from personal web servers to robot butlers. With its Linux core you can potentially use the RPI like a regular Linux box. There are loads of example projects available on the internet. Here I will present how to use the RPI to build a poor man's surveillance camera using an ordinary web cam. I found an old web cam in a drawer I seldom open, and it came in handy for this project.

The idea is pretty simple, stream the captured web-cam video over the internet in almost real-time. So here is what we need for this project:

  1. Raspberry PI
  2. Wifi adaptor (Network connectivity)
  3. USB Web cam

I will assume that the RPI is connected to the internet, so I will skip the setup of point #2 in the list above.

The first step is to connect the web cam to the RPI. After that, check whether the RPI recognises it by running the following command:
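The command in question:

```shell
lsusb
```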

This will list all the usb devices and you should also see the web cam in the list.

Now what we need is an RTMP (Real-Time Messaging Protocol) server; for this purpose we'll use the Nginx web server, built from source with the RTMP module, to stream the video. So let's download, build and install it.
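A sketch of the build, using the open source nginx-rtmp-module; the Nginx version is illustrative:

```shell
# Build dependencies
sudo apt-get install build-essential libpcre3-dev libssl-dev zlib1g-dev

# Nginx sources plus the RTMP module
wget http://nginx.org/download/nginx-1.9.15.tar.gz
tar xzf nginx-1.9.15.tar.gz
git clone https://github.com/arut/nginx-rtmp-module.git

cd nginx-1.9.15
./configure --add-module=../nginx-rtmp-module
make
sudo make install
```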

After this we’ll configure RTP on Nginx in the /etc/nginx/nginx.conf file.

And then restart the server
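Assuming Nginx was built from source with the default prefix (the binary path depends on your configure options):

```shell
sudo /usr/local/nginx/sbin/nginx -s stop
sudo /usr/local/nginx/sbin/nginx
```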

The server is now all set up to stream the video. What we need to do next is capture the web-cam video and send it to Nginx for streaming. For this we can use either ffmpeg or avconv; I will use avconv in this example.

First install avconv
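On Debian-based systems such as Raspbian, avconv ships in the libav-tools package:

```shell
sudo apt-get install libav-tools
```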

After this we’ll start the video capture and send it to nginx. We can do this with the following command.

You can play with settings to increase the video size and buffer etc.

Once the streaming starts you will see something like below on the shell terminal:


To view the video I used VLC player, which is the best open video player available and can literally play whatever you throw at it. So point the web cam at what you want to capture and open the following URL in VLC (replace <rpi-ip> with the IP of your Raspberry Pi).
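With the illustrative application and stream names used in this post, the URL has this shape:

```
rtmp://<rpi-ip>/live/webcam
```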

Here I am keeping an eye on my guitar which has been suspiciously going out of tune lately.


XMPP web chat with ejabberd, Converse.js and Spring MVC

Converse.js is a fine little XMPP web chat client that can easily be used in web applications. It can integrate with various online XMPP services, like Google Talk, MSN etc. In this post I will demonstrate how to use Converse.js with ejabberd in a Spring based application. I used an Ubuntu Linux box for this demo.

ejabberd is available in the main Ubuntu repositories, so it can be installed using the following command.
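On Ubuntu:

```shell
sudo apt-get install ejabberd
```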

The first thing to do is to add an admin user, by adding the following to the /etc/ejabberd/ejabberd.cfg configuration file.
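With the Erlang-style configuration used by ejabberd 2.x, the acl entry looks like this (the user name "admin" and vhost "localhost" are illustrative):

```erlang
%% Give the user "admin" on the "localhost" vhost admin rights
{acl, admin, {user, "admin", "localhost"}}.
```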

Then restart ejabberd
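On Ubuntu:

```shell
sudo service ejabberd restart
```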

Now register the admin user by entering the following command on the shell prompt.
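Using ejabberdctl (the user, host and password are illustrative):

```shell
sudo ejabberdctl register admin localhost mypassword
```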

For more details on ejabberd setup and additional options, follow this tutorial.


Next we enable http binding, which is required because converse.js will communicate with ejabberd using the BOSH protocol. In order to do this look for the 5280 (ejabberd_http) port setup in the configuration file and update it to look like below:
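With the Erlang-style configuration of ejabberd 2.x, the listener entry could look like this:

```erlang
%% Inside the {listen, [...]} section of /etc/ejabberd/ejabberd.cfg
{5280, ejabberd_http, [
    http_bind,
    web_admin
]}
```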

Then add the http_bind module under the modules section, and restart ejabberd.
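An entry like this, inside the modules section (Erlang-style configuration):

```erlang
%% Inside the {modules, [...]} section of /etc/ejabberd/ejabberd.cfg
{mod_http_bind, []},
```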

Now the XMPP server is set up for this demo. The http binding can be verified by opening the following link in the web browser.
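By default ejabberd serves BOSH at the /http-bind/ path:

```
http://localhost:5280/http-bind/
```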

Optionally, we can also set up an Apache/Nginx web server to proxy the http-bind requests to ejabberd. If required, add something like the following to the web server configuration. This is useful if ejabberd port 5280 is blocked by the firewall.
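For Apache, a sketch using mod_proxy (paths and hosts are illustrative):

```apache
# Requires mod_proxy and mod_proxy_http to be enabled
ProxyPass        /http-bind http://localhost:5280/http-bind
ProxyPassReverse /http-bind http://localhost:5280/http-bind
```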

Converse.js can now talk to the ejabberd server. The JavaScript snippet below shows how to use it in the web application.
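A sketch of the initialization; the BOSH URL is illustrative, and option names can differ between Converse.js versions:

```javascript
require(['converse'], function (converse) {
    converse.initialize({
        // Illustrative URL; point at ejabberd's http-bind or the web server proxy
        bosh_service_url: 'http://yoursite/http-bind',
        authentication: 'login',
        allow_registration: true  // enables in-browser account registration
    });
});
```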

This requires login, and also allows registration of new users from the frontend.


ejabberd HTTP Pre-binding

Sometimes it is desirable not to ask users to log in again if they are already logged on to the parent application. This means the parent application should log in to ejabberd from the backend, and then attach the ejabberd session to converse.js. This is achieved by something known as "http pre-binding", which is supported by converse.js. In order to use an existing ejabberd session, created by the parent application, converse.js needs three pieces of information:

  1. JID (ejabberd user id)
  2. SID (ejabberd session id)
  3. RID (request id)

These IDs are obtained by the parent application on the backend, by logging on to ejabberd and initiating the session. In the demo, this is done from a custom Spring Security AuthenticationProvider.

The converse.js javascript snippet below shows how pre-binding is configured on the frontend.
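A sketch of the pre-bind configuration; the prebind endpoint path is an illustrative backend URL that must return the three IDs:

```javascript
require(['converse'], function (converse) {
    converse.initialize({
        bosh_service_url: '/http-bind',
        authentication: 'prebind',
        // Illustrative endpoint; must return {"jid": ..., "sid": ..., "rid": ...} as JSON
        prebind_url: '/prebind',
        keepalive: true
    });
});
```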

The prebind URL must return the three IDs, mentioned above, in JSON format. More information on prebind_url is available on converse.js homepage.

The complete Spring Boot application is available on Github, and covers ejabberd authentication and pre-binding.