Spring IO 2019!

Spring I/O has become a yearly tradition for our JWorks consultants. With 21 colleagues we went to the conference in beautiful Barcelona on the 16th and 17th of May.

The conference was held at the same location as last year, the Palau de Congressos de Barcelona. Like last year, the conference overlapped at the venue with the Barcelona International Motor Show, which gave us the opportunity to admire some beautiful cars during the breaks.

Venue Spring I/O 2019

JWorks at Spring I/O 2019

In this blog post we’ll talk about some of this year’s presentations, though it is by no means a complete list. There were many interesting talks and all of them are available on YouTube.

Let us know if we missed anything by filing an issue or contacting us at our general JWorks email.

Day 1: Talks & Workshops

Moving from Imperative to Reactive by Paul Harris

Paul Harris

When development started on the Cloud Foundry Java client, Spring Reactor was also being rebooted, which means the Java client became its very first customer.

Paul Harris says that he has made all the mistakes you can make with reactive programming, and now he teaches us how to avoid many of them.

It all started with the Reactive Manifesto in 2013, which came up with four ideas for reactive applications:

  • Responsive: the application should feel as if it is progressing, for example by giving some feedback.
  • Resilient: if a particular part of your application fails, the remainder should be able to cope with that.
  • Elastic: make the most out of the resources available to the application.
  • Message-Driven: more message-driven than event-driven.

A manifesto is nice, but it does not compile.

The next step was Reactive Streams which defined a set of interfaces for how we might deal with reactive streaming situations. You can distinguish four interfaces:

  • Publisher: emits ‘things’, or signals in other words.
  • Subscriber: listens to those signals.
  • Subscription: obtained after a subscriber subscribes to a publisher.
  • Processor: is a combination of a publisher and a subscriber that allows you to process data.
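
Since Java 9, these four interfaces also live in the JDK as java.util.concurrent.Flow, so the contract can be demonstrated without any framework. A minimal sketch (the class and method names below are ours, not from the talk):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Publishes the given items and collects everything the subscriber receives.
    static List<String> publishAndCollect(String... items) {
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        publisher.subscribe(new Flow.Subscriber<>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;        // the Subscription is obtained here
                subscription.request(1); // back pressure: ask for one element
            }
            @Override public void onNext(String item) {
                received.add(item);
                subscription.request(1); // ask for the next one
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });

        for (String item : items) publisher.submit(item);
        publisher.close();               // signals onComplete
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(publishAndCollect("hello", "reactive"));
    }
}
```

SubmissionPublisher is the JDK’s built-in Publisher implementation; the anonymous subscriber shows where each of the four interfaces comes into play.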

The intention of Reactive Streams was that more useful real world implementations would follow. One of these is Spring Reactor. For a good introduction to Spring Reactor you can read our blog post about it.

Spring Reactor contains various reactive frameworks.

Before Paul dove into the code, he first explained Mono and Flux.

a Mono

A Reactive Streams Publisher that emits zero or one element.

a Flux

A Publisher that emits zero to N elements.

Paul showed us a demo of how to make a legacy Maven Spring application reactive. In order to do so the following steps were taken:

  • Add a dependency on Spring Boot Starter WebFlux. This is the reactive variant of Web MVC. You shouldn’t need to change anything to keep it running, unless you used specific server features.
  • Convert the return of a List to a Flux.
  • Convert return types for repository methods to Mono or Flux.
    • Use the static method Mono.justOrEmpty to deal with an Optional.
    • Use .switchIfEmpty to return a proper error response.
    • To return a Flux, use Flux.fromIterable.

The conclusion is that reactive starts off complicated, but it becomes easier the more you use it. There aren’t that many different methods, so all in all it is quite easy to wrap your head around.

You can rewatch his talk here:

Configuration Management with Kubernetes, a Spring Boot use case by Nicolas Frankel

Nicolas Frankel

Nicolas Frankel is a Developer Advocate who works for Exoscale, a European cloud hosting provider.

In this session, he explained how to correctly configure each environment with its own parameters and settings.

There are traditional configuration management tools such as Chef, Ansible and Puppet. But what is the point of using those with containers?

Docker images are and should always be immutable.
They should be configurable depending on the environment we want to run our image in. A Docker image should be able to run in different environments without problems, and this is where Kubernetes comes in: Kubernetes can easily configure and parameterize each Docker image to run in different environments.

One thing to remember is that you should make sure that you are working in the correct environment. Nicolas likes to add banners to the page to know in which environment you are currently working. For example, if you are working in the development environment, then you might want to show a big blue ‘Development’ banner, while in production you would prefer using a big, red, blinking one.

This can all be done with the power of Kubernetes and immutable images. You simply declare your environment variables in Kubernetes and then inject them into your Spring Boot application.

There are three ways to access your environment variables in Spring Boot: profiles, @Value or @ConfigurationProperties.

To get started in Kubernetes, you have to create a few Kubernetes objects:

  1. (Optional) A Namespace
  2. A Service
  3. A Pod / Deployment

In the Deployment, you can give the arguments based on the environment that you want to spin up. With a ConfigMap, you can group the environment variables that belong together (e.g. database settings, AWS keys, …). Once you are done defining the ConfigMap, you can import it in your Deployment declaration.
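
The ConfigMap flow described above might look like the following YAML; all names and values here are made up for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # hypothetical name
  namespace: development
data:
  SPRING_PROFILES_ACTIVE: dev
  BANNER_COLOR: blue         # e.g. drives the environment banner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:1.0   # the immutable image, identical in every environment
          envFrom:
            - configMapRef:
                name: app-config   # inject all keys as environment variables
```

The same image can then be deployed to production with a different ConfigMap, without rebuilding anything.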

What’s also very interesting is that you can declare your environment variables in a separate Git repository with the use of an initContainer. Of course, you can already do this with Spring Cloud Config; this is just an alternative on the Kubernetes side of configuration management.

If you want to read more about Nicolas Frankel’s work, you can read his blog here.

You can watch his talk here:

Building better monoliths - Modulithic Applications with Spring Boot by Oliver Drotbohm

Oliver Drotbohm

This talk caught our attention because we’re currently working at a client where we see some of the limitations of a microservice architecture. We are considering merging several of the microservices into a coarser-grained architecture containing a few more ‘monolithic’ applications.

Microservice architectures also have disadvantages!

The talk starts off by explaining some of the key differences between microservices and classic monoliths from an architectural point of view.

The key takeaway here is that microservices have an advantage in terms of architecture degradation: it is harder to accidentally call another microservice than to call a method on another bounded context in a monolith. This advantage comes at a cost, though: you lose compile-time safety, which makes refactoring harder than in a monolith, and testing the whole system becomes more difficult. These drawbacks hurt especially in the early stages of a project, when it is not yet clear what the correct bounded contexts are.

The Modulith

It might thus be useful to consider starting off with a well-structured monolith before evolving towards microservices. But how do we avoid having our architecture degrade quickly? Enter the modulith: basically a monolith with multiple modules with well-defined dependencies between them. To achieve this, Oliver demonstrated a ‘Moduliths’ tool that he is in the process of developing for the Spring framework.

The idea is to use a package structure convention and enforce it with tests using Java reflection. In this convention only (public members of) the root package of each module are accessible to other modules: it is considered the API package of that module, while subpackages are considered internal. There’s more to this tool, however; another problem of modularizing your application is that you typically want to do integration testing on the boundaries of your modules. The ‘Moduliths’ tool allows you to bootstrap your module alone or in various configurations with specific module dependencies for integration testing. To top it off, there’s support to generate PlantUML diagrams for documentation purposes!

For more details take a look at https://github.com/odrotbohm/moduliths.

Alternative approaches

There are of course other ways to divide your application into modules and maintain the architecture:

Multiple artifacts (Gradle/Maven modules)

You might get an explosion of artifacts and it can become rather verbose with all the configuration files (pom.xml or build.gradle). Additionally, the artifacts are redundant since we’re planning on deploying everything together anyway. There’s also no support to dynamically compose modules for tests, since the dependencies are typically statically defined.

This was actually the way we were considering handling it at our client. The big advantage here in our eyes is that reflection can be avoided and the architecture can be verified at compile-time. The good news is that it is possible to combine it with the moduliths approach, which might be useful for integration testing.

Java (9+) Module System

It could be used, but it’s certainly not designed for this. It definitely doesn’t have any support for dynamically composing your modules for testing.

External tools

jQAssistant, Sonargraph, JDepend, … These are powerful tools, but they usually run during the build, which lengthens the feedback loop.

Wrapping it up

The moduliths approach explained in this talk gives us a nice intermediate step towards a better architecture. It alleviates some of the biggest problems of monoliths without introducing the new ones that come with a more complex architecture like microservices!

You can rewatch his talk here:

Cutting-Edge Continuous Delivery: Automated Canary Analysis through Spring based Spinnaker by Andreas Evers

The ultimate goal of continuous delivery is to deploy software quickly and automatically. This can only be achieved if we are able to push new code without fear.

Andreas Evers

Throughout the years, Andreas saw two opposing forces battling it out: on the one hand the need for speed, on the other the need for confidence. For example, updating production without testing gives you great speed, but not much confidence.

With microservices, integration tests on an acceptance environment might mean that you are testing an already obsolete topology, because microservice landscapes change that quickly. Contract testing does not cover all the aspects needed to provide confidence, as it does not test behaviour.

A good alternative that Andreas talked about is canary analysis. To introduce it, let’s first look at Spinnaker: Spinnaker is an open source, multi-cloud continuous delivery platform created at Netflix. It supports a lot of cloud environments, like OpenStack, AWS, Google Cloud, Microsoft Azure and Cloud Foundry. Major contributors are Netflix, Google, Microsoft and Pivotal.

Under the hood, Spinnaker is composed of a bunch of Spring Boot microservices. Another important component is Halyard: a bill of materials for the different microservices of Spinnaker, which helps you deploy it. Spinnaker also integrates well with your CI environment.

Cloud deployments are often complex:

  • Different regions
  • Different accounts for your environments (production, acceptance, … )

Teams want an easy road into the cloud with no deployment complexity; on the other hand, easy rollbacks are important. Spinnaker can help with both!

Various deployment strategies exist.

You can define a pipeline to deploy into production:

pipeline tasks

For every stage you will have a series of steps. Within every step you can define multiple tasks, and every task has some operations which get executed. A lot of these steps are very cloud-specific, which Spinnaker tends to abstract away.

Spinnaker makes it possible to go fast but still do it safely:

  • Automated rollbacks
  • Deployment windows
  • Cluster locking
  • Traffic guards, extra safeguards which can be configured
  • Manual judgements, which make use of the human “gut” feeling that a computer does not have

Andreas had a Rick & Morty demo application of which he had an old, already deployed version and a new version. When doing canary analysis, it is wise to start up a baseline (the old version) so that you have something solid to measure against. Spinnaker will look at JVM metrics like memory, CPU, etc., but you can also define business metrics like the startup time of the app, response times, …

Spinnaker decides whether the canary fails by looking at the statistics it gathered; when it fails, it will simply roll back and restore the previous version.

Canary testing allows you to test with real users and real production data. At the same time it reduces the possible impact of your new version on end users.

You can rewatch his talk here:

Using Java Modules in Practice with Spring Boot by Jaap Coomans

Jaap Coomans

Current State

Jaap first addressed the current state of the module system:

  • Most tooling support is good (Maven, IDE, …)
  • In frameworks the adoption is very low
  • For developers it is even lower

It can be summarized as: it’s like eating vegetables; we know it is healthy, we know it’s beneficial, but we don’t do it.

Using modules?!

What challenges will you face when you want to migrate to modules?

  • Split packages: packages with the same name exposed by more than one module.
  • Automatic modules: a plain JAR on your module path is interpreted as a module. It exports and opens all packages, reads all other modules and derives its module name from the filename. The problem with that is that you can only have one module with a certain name on your module path.
    • In Maven Central, 3,500 name collisions are possible.
    • You can circumvent this with the Automatic-Module-Name in your manifest file.
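
Such a manifest entry can be added from the build, for example with the Maven JAR plugin (the module name below is made up):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
        <archive>
            <manifestEntries>
                <!-- Reserves a stable module name before the library is fully modularized -->
                <Automatic-Module-Name>com.example.mylibrary</Automatic-Module-Name>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>
```

This way the module name no longer depends on the JAR filename.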

For the demo application, Jaap used MongoDB. There is a split package issue in the legacy Mongo client (not in the new one), but Spring Data MongoDB relies heavily on the legacy client.

In order to get started with modules, the first steps are just to minimize the problems you might encounter.

Step 1 + 2 + 3:

  • Upgrade your dependencies as this will minimize conflicts.
  • Use JDK11+.
  • Compile to JDK11+.

Step 4:

Prepare the module structure within your code, so you can migrate module by module. Don’t start with one big module.

Step 5:

Add module descriptors bottom-up.

Create a new module-info.java.

This first module has no external dependencies whatsoever, making it very easy to define. You only need to indicate what you are going to expose.

module nl.jaapcoomans.boardgame.domain {
    exports nl.jaapcoomans.boardgame.domain;
    exports nl.jaapcoomans.boardgame.domain.command;
}

Note: you might need a newer version of the Maven Surefire Plugin; Jaap used version 3.0.0-M3.

For a module which needs other modules, you will need to define a little bit more within the module-info.

module nl.jaapcoomans.boardgame.bgg {
    requires nl.jaapcoomans.boardgame.domain;
    //requires com.sun.xml.bind;
    requires java.xml.bind;
    requires feign.core;
    requires feign.jaxb;
    exports nl.jaapcoomans.boardgame.bgg.factory;
    opens nl.jaapcoomans.boardgame.bgg.xmlapi;
}

  • requires: Defines the modules that you need.
  • exports: The packages that you expose.
  • opens: makes a package available for reflection at runtime, which you might need for JAXB in this case.

Spring is not yet modular, but they did define automatic module names in all of their JARs.

Step 6:

Add a module descriptor to the main JAR. Only then do you get all the benefits of the module system. At this point you will also encounter all the hurdles, including runtime errors. If you do not execute this step, your main application will still be using the classpath and not the module path.

Export the main class and the application module.

module nl.jaapcoomans.boardgame.application {
    requires nl.jaapcoomans.boardgame.bgg;
    requires spring.context;
    exports nl.jaapcoomans.boardgame;
    exports nl.jaapcoomans.boardgame.application;
}

You can also define requires transitive.

module nl.jaapcoomans.boardgame.persistence {
    requires transitive nl.jaapcoomans.boardgame.domain;
}

requires transitive means that when you depend on this module, you also implicitly depend on the module it requires transitively.

Runtime Errors

When you encounter runtime errors about opening modules, you can pretty much copy-paste the needed directives from the error messages. ClassNotFoundExceptions hint at missing modules, for which you should add a requires definition.

When you stop getting errors, you have reached the next phase.

Spring does use some of the internals of the JDK, which can be fixed by requires jdk.unsupported. This helps you out for now, but the module name alone screams that you should not use it.

Lessons Learned

As a summary here are the lessons learned by Jaap:

  • Move bottom up.
  • Test all paths on every step, because you will encounter runtime errors.
  • The logs have the answer: the JVM gives you a good indication of errors raised by the module system.
  • It still involves pioneering.

You can rewatch his talk here:

Stream Processing with the Spring Framework by Josh Long and Viktor Gamov

All the source code of the live demo can be found on GitHub.

Josh Long

Viktor Gamov

Statement: it is dangerous to think of Kafka as a message queue, as it tends to become the vein through which data moves around your organization, thus effectively becoming a database.

In the demo they made use of Apache Avro. The Avro format is used as a contract for the messages, and it also gives you the option to generate Java classes based on the schema; you can use the Avro Maven plugin for that.

Kafka does not care what you put in it, but passing along a schema gives your consumers the option to verify whether they can process the message.

Spring Kafka gives you KafkaTemplates that you can use. The KafkaTemplate wraps a producer and provides you with some extra handy methods to send data to Kafka topics. For more information you can check out the reference guide to use KafkaTemplates.

It is important to think about the types of your key and value, and about their serializer and deserializer.

For this you will need to define a DefaultKafkaProducerFactory, which you provide with config options like:

  • Bootstrap servers: where to find your Kafka.
  • Schema registry URL: where to find your Avro schema registry.
  • Key serializer: the class to be used to serialize your key when writing the message to Kafka.
  • Value serializer: the class to be used to serialize your value.

Without those serializers, Kafka will not be able to transform your message.
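
With Spring for Apache Kafka and Spring Boot, these options typically end up in the application configuration; a sketch assuming local endpoints and the Confluent Avro serializer:

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092    # where to find your Kafka
    producer:
      # the classes used to serialize key and value when writing to a topic
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      properties:
        schema.registry.url: http://localhost:8081   # where to find your Avro schema registry
```

Spring Boot uses these properties to build the producer factory backing the KafkaTemplate.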

Various other frameworks worth mentioning:

  • Spring Cloud Stream: allows you to abstract the use of message brokers. It manages a lot of the bindings for you and maps much of the Kafka Streams configuration automatically.
  • Kafka Streams: a stream processing library that you can use to build processing pipelines. Similar to Spark, but less of a hassle to set up. A KTable is the representation of state.

Some final notes about Kafka Streams:

  • Kafka Streams allows you to visualize your topology as a Directed Acyclic Graph using TopologyDescription. For more info see this link.

  • Kafka Streams allows you to do stateful stream processing in an easy way. Its state store is replicated within Kafka so it can restore it in case of failure.

  • Do not forget your SerDes when writing Kafka Streams code. Spring Cloud Stream automatically converts to JSON, but your Kafka Streams code deals with binary data, so it needs to know how to serialize and deserialize. Some SerDes are predefined: StringSerde and LongSerde come with Kafka, and Spring Kafka provides a JsonSerde.

It was a very entertaining live coding session which you can rewatch here:

How Fast is Spring by Dave Syer

Dave Syer

In this talk Dave goes over the performance improvements carried out by the Spring team.

Cold startup of the JVM takes some time, but once started it is an awesome place to be.

We went through some measurements. An application started up in 1,300 milliseconds. This went down to 1,200 milliseconds by tuning Spring a bit. By then using Spring functional bean definitions it went down to 600 milliseconds.

The overhead of Spring Boot versus no Spring Boot is currently around 15 milliseconds. Thus, a lot of the overhead has already been dealt with.

If the classloader has been warmed up, the difference in startup time is much smaller. With Spring DevTools you have a warm classloader, which reduces your startup time.

Lots of optimizations have happened: for example, heap memory went down from 10MB to 6MB with the move to Spring Boot 2.

Async-profiler is a tool you can attach to a running Java process. It has little to no impact on runtime performance and shows the calls being executed. The width of a flame is the time it took to run; red and yellow frames indicate code running outside Java user code.

Async profiler

Spring Boot 2.2 has boosted performance.

  • Classpath exclusion from Spring Boot web starters.
  • spring-context-indexer: this is marginal but with a lot of beans it will have a bigger impact.
  • Spring Actuator used to be costly for startup time, but no longer has a big impact since the optimizations in Spring Boot 2.0.
  • Use explicit spring.config.location.
  • Switch off JMX with spring.jmx.enabled=false (in 2.2 this is the default setting).
  • Make bean definitions lazy by default. In production you might not want this, because a lazily loaded bean might not fail at startup. It can make sense during development in order to improve turnaround time.
  • Unpack the fat JAR and run it with an explicit classpath, as java -jar is a little bit slower compared to java -cp.
  • Run the JVM with -noverify and consider -XX:TieredStopAtLevel=1.
    • All JVM experts will tell you not to do this.
    • -noverify will gain you around 40% with any app, but it does not verify bytecode, which is risky in production: on invalid bytecode the JVM will just crash and show you no exception whatsoever.
    • -XX:TieredStopAtLevel=1 limits the JIT compiler and will gain you around 10% with any app.
  • Import auto-configurations manually so unneeded ones are skipped; this might give you a small speed gain.

He also shared a nice list of tools you can use.

Other Remarks:

  • The Hibernate team is pretty aware of the GC issues and have done serious optimizations around it.
  • Lazy beans: pay attention to custom beans with an expensive @PostConstruct. It tends to be misused for opening files or accessing a database, which blocks startup.
  • You can try using @ImportAutoConfiguration but then you need to know which AutoConfigurations you need to include. Discovering that is the hard part.
  • Functional bean definitions: if you use @Configuration, you make use of reflection. You can implement an ApplicationContextInitializer instead, which makes you reflection-free, but it is a bit harder to implement.
  • CPU constrained environments benefit from native images built with GraalVM.

You can rewatch his talk here:

Kubernetes and/or Cloud Foundry - How to run your Spring Boot Microservices on state-of-the-art cloud platforms by Matthias Haeussler

Matthias Haeussler

Matthias Haeussler is a Cloud Advocate at NovaTec Consulting. He gave a presentation about the differences between Kubernetes and Cloud Foundry, and showed them live with a Spring Boot application deployed on both platforms.

Cloud Foundry

To deploy your application on Cloud Foundry, you simply have to run one command: cf push (under the assumption that you have the CLI installed and configured). This sends your whole codebase to Cloud Foundry, which then builds a container for your application and runs it. Cloud Foundry does use containers behind the scenes, but not Docker images, and as a CF user you don’t really notice it.


Kubernetes

With Kubernetes, it’s a whole different story. You can’t just ‘run’ your application on Kubernetes. You need a Docker image to run your application, which means your application must have a Dockerfile. This Docker image must be pushed to a Docker registry, from which Kubernetes pulls it and runs it with the specified configuration.


In Kubernetes you can configure way more, which is a huge benefit; on the other hand, you also need to know more about the platform to do so. With Cloud Foundry it is just one command: your codebase is pushed, wrapped in a container and run on the platform. Much simpler, but with fewer configuration options than Kubernetes.

Kubernetes also requires more dependencies if you want to get more out of it (e.g. Helm, Prometheus, Istio, …), and those dependencies require additional maintenance.

The ideal platform would combine the simplicity of Cloud Foundry with the functional features of Kubernetes.

You can rewatch his talk here:

Migrating a modern spring web application to serverless by Jeroen Sterken and Wim Creuwels

Jeroen Sterken Wim Creuwels

Is serverless the holy grail? These guys explored the possibilities while migrating an existing monolith to serverless at one of their clients.

Serverless helps your developers focus on the code instead of server management and database setup. Wim and Jeroen also mentioned the flip side of the coin: it’s a new technology and, as with every new technology, there is a learning curve. Developers have to get used to the services that the cloud provider supports. They need to “think serverless” and model applications as functions in well-defined steps. Infrastructure has to be modelled using Infrastructure as Code, a topic on which you can find a great resource on our blog here.

No, serverless is not the holy grail. It is however a great solution for some typical use cases:

  • Event-driven architectures
  • Internet of things
  • Applications with varying load
  • Data analysis

Step functions

Step Function Diagram

Jeroen and Wim glued their app together using AWS Step Functions. Step Functions is a serverless orchestration service that lets you model your workflow as a series of steps. It keeps your Lambda functions free of logic that triggers other Lambda functions; instead, it uses the output of one Lambda function to trigger the next one, thus progressing towards the next step.
These steps are visualized in a clear step diagram that shows your workflow. The diagram lets you monitor your flow: steps change color when something goes wrong, and in case of an error Step Functions will automatically retry.
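
A workflow like this is described in the Amazon States Language; a hypothetical two-step example with a retry (the ARNs and state names are made up):

```json
{
  "Comment": "Hypothetical two-step workflow with a retry",
  "StartAt": "ProcessOrder",
  "States": {
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:ProcessOrder",
      "Retry": [
        { "ErrorEquals": ["States.ALL"], "IntervalSeconds": 5, "MaxAttempts": 2 }
      ],
      "Next": "NotifyCustomer"
    },
    "NotifyCustomer": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:NotifyCustomer",
      "End": true
    }
  }
}
```

The output of ProcessOrder becomes the input of NotifyCustomer, so neither Lambda needs to know about the other.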

Spring Cloud Functions

We are at Spring I/O and we’re talking about serverless cloud technology, so Spring Cloud Function cannot be left unmentioned. Spring Cloud Function is a project by the Spring team that allows you to write cloud-platform-independent code. In the process you can keep using familiar Spring constructs like beans, autowiring and dependency injection. You can find great guides on baeldung.com and spring.io. Using Spring Cloud Function lowers the barrier towards serverless, because most Java developers are already familiar with the Spring Framework.

Serverless was already a hot topic; the fact that Spring has now also jumped on the bandwagon only makes it hotter. Definitely keep your eyes open for serverless in the near future.

You can rewatch their talk here:

Day 2: Talks & Workshops

Testing Spring Boot Applications by Andy Wilkinson

Andy Wilkinson

Andy Wilkinson of Pivotal explained to us the importance and essence of writing tests to ensure the quality of your services. Of course, zero-risk functionality is practically impossible, but testing helps you reduce your risk to a minimum.

But how do you know if a test is ‘good’? Almost everyone bases this on the amount of code coverage in their project, but coverage does not determine the quality of your tests. When you write tests, you want to think about the mistakes that you can make or that can be made by the end user.

Unit testing

When you rewrite your application logic, there’s a high chance that you have to rewrite your unit tests as well. Make sure that you do not have to spend a lot of time rewriting your tests when you refactor your application or add features.

It’s also very important to use descriptive names for your tests and make sure they are readable. No one wants to read a test that is unclear or creates more confusion (JUnit 5 comes with the @DisplayName annotation to make a test name more readable).

When you are familiar with writing unit tests, you’ve probably also heard of mocking. Unit testing is all about mocking external services and dependencies: after all, in a unit test we assume that all our external dependencies are working as they should.

Integration Testing

Andy gave us a detailed explanation of how the various testing annotations work: @SpringBootTest, which gives us a more Spring Boot way of testing our application (which means less configuration, hooray!); @SpyBean and @MockBean to create a spy or mock of a Spring component; @ActiveProfiles to run your test class with a specific profile; etc.

Testing Against Databases

One of the more common problems in integration testing is working with a database. Typically, when you want to test against data in a database, you are going to want to use an in-memory database, often HSQLDB or H2. This is where it gets interesting. You can tell your H2 instance to run in a specific compatibility mode, such as PostgreSQL. However, it’s not exactly the same as working with a real PostgreSQL server: H2 only interprets the queries written in the PostgreSQL dialect and tries to convert them to its own syntax. This can cause lots of problems, because you are not working with a real Postgres server. Even when H2 runs in PostgreSQL compatibility mode, it can still fail on queries that run perfectly on a real PostgreSQL server.
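
The compatibility mode described here is switched on via the H2 JDBC URL, e.g. in a test profile (the database name is arbitrary):

```properties
# H2 accepts the PostgreSQL dialect here, but still executes with its own engine
spring.datasource.url=jdbc:h2:mem:testdb;MODE=PostgreSQL
```

The mode only changes how H2 parses queries; the execution engine underneath is still H2.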

Andy recommended using Testcontainers instead, which can spin up a Docker container with a database of your choice, so you have the full functionality of a real database server.

What’s next?

We are really excited to see what the new Spring Boot versions have in store to help us write better and clearer tests. Spring Boot 2.2 will come with full JUnit 5 functionality and thus leave JUnit 4 behind.

You can rewatch his talk here:

How to live in a post-Spring-Cloud-Netflix world by Olga Maciaszek and Marcin Grzejszczak

Olga Maciaszek

Marcin Grzejszczak

Discovering the new Spring Cloud stack: that’s what this talk was all about. Olga Maciaszek and Marcin Grzejszczak showed us the new solutions for gateway proxying, circuit breaking and more.

The world is changing

Spring Cloud Load Balancer

Client-side load balancing via the @LoadBalancerClient annotation. Use the @LoadBalanced marker annotation to indicate that a load-balanced client should be used to interact with a service; in the new stack this is Spring Cloud LoadBalancer rather than Ribbon.

Spring Cloud Gateway

Via routes, your requests are forwarded to downstream services. Spring Cloud Gateway is a simple way to achieve this routing to your APIs. You can configure this as code:

return builder.routes()
        .route(route -> route.path("/user-service/**")
                .filters(filter -> filter.stripPrefix(1))
                .uri("lb://user-service"))
        .build();

or in your properties file:

    spring:
      application:
        name: proxy
      cloud:
        gateway:
          routes:
            - id: fraud
              uri: lb://fraud-verifier
              predicates:
                - Path=/fraud-verifier/**
              filters:
                - StripPrefix=1
                - name: Retry
                  args:
                    retries: 3

Circuit Breaking and Resilience4J

You need a design that is resilient and fault tolerant.

After a number of failed attempts, we can consider a service unavailable. We then back off and stop flooding it with requests: we save system resources by not making calls that are likely to fail, and we give the other service some time to get back on its feet.
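The mechanics behind this can be sketched in a few lines of plain Java (a deliberately simplified illustration of the circuit breaker pattern, not Resilience4J's actual implementation):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Open the circuit after a number of consecutive failures, fail fast while it
// is open, and allow a trial call again once a cool-down period has elapsed.
class SimpleCircuitBreaker {

    private final int failureThreshold;
    private final Duration coolDown;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    SimpleCircuitBreaker(int failureThreshold, Duration coolDown) {
        this.failureThreshold = failureThreshold;
        this.coolDown = coolDown;
    }

    <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(coolDown))) {
                return fallback.get();      // circuit open: fail fast
            }
            openedAt = null;                // half-open: allow a trial call
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;        // a success closes the circuit
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now();   // too many failures: open up
            }
            return fallback.get();
        }
    }
}
```

Resilience4J layers sliding windows, half-open trial counts and metrics on top of this basic idea.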

Micrometer and Prometheus

Micrometer exposes metrics from your services, and Prometheus periodically scrapes them to monitor their health.

Spring Cloud Config Server

Externalize your configuration. You don't have to restart your application to reload its configuration; just fetch it from the remote service again.
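For example, a client service can point at the Config Server in its bootstrap.yml (the service name and URL below are illustrative):

```yaml
# bootstrap.yml of a config client
spring:
  application:
    name: fraud-verifier       # config is looked up by this name
  cloud:
    config:
      uri: http://localhost:8888
```

Beans annotated with @RefreshScope then pick up changed values when a refresh is triggered, without restarting the application.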


Check out a fully working Spring Cloud microservices demo here: https://github.com/OlgaMaciaszek/spring-cloud-netflix-demo. Many thanks to Olga and Marcin for providing a working example that we can play around with to get acquainted with the new stack.

You can rewatch their talk here:

Event-Driven Microservices with Axon and Spring Boot: excitingly boring by Allard Buijze

Allard Buijze

In this presentation, Allard Buijze, Founder and CTO of AxonIQ, talks about the advantages of event-driven architectures and shows how easy it is to set up your own event-driven microservices using Axon and Spring Boot.

What is Axon?

The Axon framework is used for building event-driven microservices using Domain-Driven Design, CQRS and Event Sourcing. It is there to prevent developers from getting lost inside a complex microservice architecture.


Command and Query Responsibility Segregation is a design pattern where you split the reading and writing of data into separate models. You use queries for reading the data and commands for updating the data. For basic CRUD operations, having these models combined might be fine, but once the amount of business logic and the number of queries increase, a combined model becomes increasingly difficult to manage.
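The split can be sketched in a few lines of plain Java (all names are illustrative, not Axon's API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal CQRS sketch: the write side handles commands and applies business
// rules; the read side serves queries from a prepared projection.
class UserCqrsSketch {

    private final Map<String, String> projection = new HashMap<>(); // read model

    // Write side: validate and apply the command, then update the projection.
    void handleRegisterUser(String userId, String name) {
        if (name.isEmpty()) throw new IllegalArgumentException("name required");
        projection.put(userId, name);
    }

    // Read side: no business logic, just serve the prepared view.
    String queryNameById(String userId) {
        return projection.get(userId);
    }
}
```

In a real system the two sides would live in separate components, possibly with separate data stores, connected by events.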

State Storage vs Event Sourcing

A big part of this presentation is about the advantages of using event sourcing rather than state storage. Events describe the history of an object rather than just its current state. It is easy to go through the history and generate the current state, while also retaining a lot of extra information about the object that you would otherwise miss out on when only storing its current state. An event is an explicit record that something happened, rather than an implicit record inferred from the changes that occurred. This also makes testing your application easier, because you do not rely on state but on a series of events taking place, or an exception occurring.



One of the biggest advantages of events is that they remain valuable over time. They need to be the source of everything in the application and show a true representation of your entities. Once again: you don't save the state of an aggregate, you generate the state by replaying the history.
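Replaying can be sketched in plain Java (all names are illustrative, not Axon's API):

```java
import java.util.List;

// Minimal event sourcing sketch: record what happened to an account as
// events, and derive the current balance by replaying them.
class AccountEvent {
    final String type;   // "DEPOSITED" or "WITHDRAWN"
    final long amount;

    AccountEvent(String type, long amount) {
        this.type = type;
        this.amount = amount;
    }
}

class BankAccount {
    // The current state is never stored; it is a function of the history.
    static long replay(List<AccountEvent> history) {
        long balance = 0;
        for (AccountEvent event : history) {
            balance += event.type.equals("DEPOSITED") ? event.amount : -event.amount;
        }
        return balance;
    }
}
```

Because the full history is kept, the same events can later feed projections that nobody had thought of when they were recorded.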

The power of not now

The power of not now basically means that because you save all the events, you can generate reports whenever you want based on the captured data. You don't have to know in advance which data will be important later on; everything is stored.

Axon Server

Axon Server is a service that distributes your components, manages routing, stores events and provides high availability and observability. By combining these otherwise separate services into one single, easy-to-configure service, it makes your entire architecture a lot easier to manage than if you were to use, for example, Netflix Eureka for service discovery, a MySQL event store for storing events and RabbitMQ to handle messaging. By simply adding the Axon Server dependency and a few annotations, you can use all of these capabilities while keeping the complexity low.


Axon can also manage tracing for you by just adding the Axon tracing extension and Jaeger to your dependencies. Where setting up tracing yourself would otherwise be a lot of work, dealing with all kinds of headers, passing them along and interpreting them, the Axon tracing extension takes care of all of this for you.

You can rewatch his talk here:

How to secure your Spring apps with Keycloak by Thomas Darimont

Thomas Darimont

In this presentation Thomas Darimont talks about what Keycloak is, what you can do with it and gives a demo of how it works and how you can set it up for your own applications.

What is Keycloak?

Keycloak is a Java-based authentication and authorization server. It is developed by Red Hat, who use it as the base for their enterprise RH-SSO offering, for which they provide additional support and documentation. It is also backed by a large open-source community providing additional features, documentation and bugfixes.

Keycloak Features

Keycloak comes with an extensive feature set:

  • Single Sign-On and, with it, Single Logout: sign in to Keycloak once to gain access to multiple applications, and sign out once to be signed out of all of them. Do note that individual applications can disable single logout, so you might not get logged out of every application within a realm.
  • Multi-factor authentication using one of the well-known authenticator apps, like Google Authenticator.
  • Authentication through social media platforms such as Facebook, Twitter, Google or even GitHub.
  • Complete customizability and extensibility: Keycloak comes with a preferred stack, which we will dive into in more detail later on, but you can move away from it and use your own preferred services, albeit with some additional configuration.
  • An easy-to-use management console for administrators, and an account management interface where users can update their own details.
  • Realms: sets of applications, users and registered OAuth clients to which the Keycloak settings apply. Within a realm you can give users specific roles or just authenticate them across multiple applications using the Single Sign-On feature.

The Keycloak preferred stack

By default, Keycloak is a WildFly-based server running a plain JAX-RS application. It uses JPA for storing data and Infinispan for the horizontal scaling of multiple Keycloak nodes, which distribute information like user sessions among each other. It also uses the Freemarker template engine to render, for example, the login pages, and Jackson 2.x for everything JSON-related, like the tokens.

Securing your application

To add Keycloak to your applications, you have to add a dependency and register your application within a Keycloak realm. After some configuration in your application and in the Keycloak management console, users will have to authenticate through Keycloak to access your application.
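With the Keycloak Spring Boot adapter, that configuration boils down to a few properties (the realm, client and URL below are illustrative):

```yaml
# application.yml, assuming the Keycloak Spring Boot adapter dependency
keycloak:
  auth-server-url: http://localhost:8080/auth
  realm: demo
  resource: my-spring-app   # the client ID registered in the Keycloak realm
  public-client: true
```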

The authentication process

The following steps describe the Keycloak authentication process:

  • An unauthenticated user accesses the application
  • The application redirects to Keycloak for login
  • When the login is successful, Keycloak creates an SSO session and emits cookies
  • Keycloak generates a random code and redirects the user back to the application
  • The application receives the code associated with the sign-on session and sends it back to Keycloak via a separate channel
  • If the code matches the sign-on session, Keycloak replies with an access token, a refresh token and an ID token
  • The application verifies the tokens and associates them with a session
  • The user is now logged in to the application

For more information you can refer to Keycloak’s official documentation and you can also watch the original talk itself in the following video:

Zero Downtime Migrations with Spring Boot by Alex Soto

Alex Soto

In this talk, Alex Soto, Software Engineer at Red Hat, covers the subject of zero downtime migrations of microservices in Spring Boot. He covers several deployment techniques, from easy-to-understand ones to more advanced techniques for when you are dealing with the persistent state of your applications.

Dealing with downtime when using microservices

While it is easier and faster to take down a single service than an entire monolithic application, deployment or redeployment of services happens a lot more often in a microservices architecture. Another thing to consider is that when a service is down, all the services that depend on it will no longer work either. For this reason it is important to minimise the downtime of your applications, or even have no downtime at all, which is why you want to deploy and release services at different times. Here are some techniques for doing this.

Blue/Green deployment

Blue/green deployment is where you deploy an updated version of the service you want to replace and release it by switching the routing from the old version to the new one. It is important to keep the old service deployed and to monitor the new one, so that in case of errors you can easily revert the routing and keep everything up and running. The downside of blue/green deployment is that if something goes wrong before you revert the routing, all users are affected. But of course there is a solution to this problem: canary releases.

Canary releases

A canary release is where you route a small percentage of your traffic through the updated service while the rest keeps using the original one. This limits the number of users that might be affected by unwanted changes while you monitor the new version. As long as everything goes well, you increase the percentage until eventually your entire user base uses the new service, all while still having the blue/green setup to fall back on when things go wrong.
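One way to implement such a weighted split is in a service mesh; for example, an Istio VirtualService could send 10% of the traffic to the new version (using Istio here is our own assumption, and all names are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
    - recommendation
  http:
    - route:
        - destination:
            host: recommendation
            subset: v1    # the original version
          weight: 90
        - destination:
            host: recommendation
            subset: v2    # the canary
          weight: 10
```

Promoting the canary is then just a matter of shifting the weights until v2 receives all traffic.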

Mirroring traffic

Mirroring traffic is another deployment technique where you deploy an updated version of a service next to the original one and send your requests to both services. Only the original service actually handles the requests; the requests to the updated service are fire-and-forget, and you monitor whether everything goes according to plan before eventually switching the routing to the updated version.

But what about sticky sessions?

When dealing with sticky sessions, which you often see with, for example, shopping carts on webshops, a session is linked to a specific service instance by IP. This means that when you get rerouted to an updated service, you lose your session. To counter this problem you can use an in-memory data grid, for example backed by Redis, and replicate the session data across all the services in your cluster that use these sticky sessions. When you do this, your shopping cart survives even when the service instance you are accessing changes.
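With Spring Session, for example, sharing the session in Redis is mostly a matter of configuration (assuming the spring-session-data-redis dependency is on the classpath; host and port are illustrative):

```yaml
spring:
  session:
    store-type: redis   # store HTTP sessions in Redis instead of in memory
  redis:
    host: localhost
    port: 6379
```

Every instance of the service then reads and writes the same session store, so a re-route no longer empties the cart.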

Dealing with persistent data

While problems with in-memory data are often fixed quite easily, zero downtime deployment becomes a little trickier when dealing with persistent data. Take, for example, renaming a column in a database: if different services were to access the same database using different column names, this would cause issues. This exact problem, and how to tackle it, is shown in the demo, together with a more in-depth explanation of the covered topics, in the video of the talk below.

The end

Spring is as trendy as ever: solid fundamentals and ready for the future of software development. It was great to further extend our Spring expertise. And let's not forget the amazing time we had amongst colleagues. We'll be back next year for more! Will you be there too?!

Nick is passionate about cloud technology. He has major expertise in AWS and AWS serverless but he appreciates other clouds just as well. He wants to be ahead of change and thus he’s also working with IoT and AI.

Tom is a Senior Developer at Ordina Belgium, passionate about all software related to data. As competence leader Big & Fast Data he guides his fellow developers through dark data swamps by giving workshops and presentations. Tom is passionate about learning new technologies and frameworks.

Yolan Vloeberghs is a Java and Cloud engineer with a sharpened focus on all things related to cloud, specifically AWS. He loves to play around with various technologies and frameworks and is very passionate and eager to learn about everything related to cloud-native development.

Jago Staes is a Java Consultant with a strong interest in Spring Boot projects, who enjoys learning about new technologies and wants to learn more about frontend technologies and microservices.

Gina is a Java consultant at Ordina Belgium. Her main focus is to build quality applications, staying informed with new technologies helps her in doing this.

Tim is a Java Developer at Ordina Belgium. His main focus is on back-end development. He is passionate about Microservices, Domain driven design and refactoring.