Table of Contents

  1. Looking back at the first version
  2. Lagom 1.2
  3. Message Broker API & Kafka implementation
  4. Generic read-side support, sharding support and automatic offset handling
  5. JDBC support
  6. Migrating from 1.0 to 1.2
  7. Looking at the rest of our initial feedback
  8. Lagom 1.3 preview
  9. Conclusion
  10. Extra resources

Looking back at the first version

Shortly after the first MVP version of Lagom was released, we wrote a blogpost in which we shared our first impressions and made an initial comparison to Spring Cloud and Netflix OSS. We also gave an introductory presentation on Lagom which is available on YouTube. Lightbend let us know that they really appreciated our feedback; they understood some of our remarks and suggestions and were willing to work on them. One of our major remarks was that Maven support should be added to Lagom in order to properly target Java developers.

We weren’t the only ones with this feedback and Lightbend took note of it. In September 2016, Lightbend released Lagom 1.1. This first new minor version introduced Maven support, including support for running the Lagom development environment from Maven. More information about setting up a Lagom project with Maven is available in the documentation.

A few months later they released Lagom 1.2 with a couple of new features, which is what this blogpost will be about. We will also revisit our initial comparison against Spring Cloud and Netflix OSS.

Lagom 1.2

In this new minor version of Lagom, the read-side has been overhauled. Other notable additions are the following:

  • a message broker API with a Kafka implementation
  • generic read-side support, sharding support and automatic offset handling
  • JDBC support for both persistent entities and the read-side

Naturally, a couple of existing issues and bugs were also resolved at the same time. A full list is available on Github.

Message Broker API & Kafka implementation

The introduction of message broker support is the most notable feature of Lagom 1.2. Lagom now allows messages to be streamed between services either directly or through a broker. With version 1.2, Lagom comes with out-of-the-box support for Apache Kafka, a very popular scalable message broker for building real-time data pipelines and streaming applications. The message broker API has been designed to be independent of any particular backend, meaning that support for other brokers may be added in the future.

Compared to the existing Publish-Subscribe mechanism, where messaging is only available intra-service, message broker communication happens from one service to many other services. Another key difference is delivery: with Publish-Subscribe, messages might get lost, for example due to network issues or a restart of a service, whereas message broker based communication can provide at-least-once or at-most-once delivery semantics even if the subscriber is down. In Publish-Subscribe messaging, a subscriber will only receive messages after its subscription has been accepted by the Publish-Subscribe infrastructure. The message broker, on the other hand, allows a subscriber to consume all the messages since the last message it consumed, even if the subscriber was offline or down, thanks to its decoupling of services producing events from those consuming them. The Publish-Subscribe mechanism in Lagom is provided by the Akka Cluster underneath a Lagom service, whereas message broker support is provided by a third-party product such as Kafka.
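For contrast, this is roughly what the intra-service Publish-Subscribe mechanism looks like (a minimal sketch using the injected PubSubRegistry; the topic qualifier is made up):

PubSubRef<ItemEvent> ref = pubSub.refFor(TopicId.of(ItemEvent.class, "created-items"));

// Publishing reaches only the subscribers that are connected right now
ref.publish(itemEvent);

// A subscriber receives messages published after its subscription was accepted
Source<ItemEvent, NotUsed> events = ref.subscriber();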

Lagom takes care of publishing, partitioning, consuming and failure handling of messages, and when executing the runAll command, Lagom automatically runs a Kafka server along with ZooKeeper for you. Next to ServiceCall, used for communicating with other services and mapping down onto HTTP, there now exists a Topic abstraction that represents a topic to which one service publishes and which other services can consume after subscribing.

In order to make use of it you need to add the Lagom Kafka Broker module to the dependencies of both the service publishing to a topic and the service subscribing to a topic. Note that the Lagom Kafka Broker module requires an implementation of Lagom Persistence, so you also need to add either Lagom Persistence Cassandra or Lagom Persistence JDBC to the dependencies.
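For an sbt build this might look as follows (a minimal sketch; the module keys are the ones provided by the Lagom sbt plugin, paired here with Cassandra persistence):

// build.sbt: add to both the publishing and the subscribing impl project
libraryDependencies ++= Seq(
  lagomJavadslKafkaBroker,
  lagomJavadslPersistenceCassandra
)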

In our simple demo project Lagom Shop we define a new method in the item-api project’s ItemService that will return a Topic of item creation events to subscribe on and we also add it to the descriptor:

Topic<ItemEvent> createdItemsTopic();

@Override
default Descriptor descriptor() {
    return Service.named("itemservice").withCalls(
            Service.restCall(Method.GET,  "/api/items/:id", this::getItem),
            Service.restCall(Method.GET,  "/api/items", this::getAllItems),
            Service.restCall(Method.POST, "/api/items", this::createItem)
    ).publishing(
            Service.topic("createdItems", this::createdItemsTopic)
    ).withAutoAcl(true);
}

In the item-api project we define a new ItemEvent interface for external use by other services. An ItemEvent already exists in the item-impl project, but that one should be used solely within the implementation project. It is considered a best practice to have a separate definition for external use, as exposing the internal one would mean breaking clients whenever you apply internal changes.

@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", defaultImpl = Void.class)
@JsonSubTypes({
        @JsonSubTypes.Type(ItemEvent.ItemCreated.class)
})
public interface ItemEvent {
    UUID getId();

    @JsonTypeName("item-created")
    final class ItemCreated implements ItemEvent {
        private final UUID id;
        private final String name;
        private final BigDecimal price;
    
        @JsonCreator
        public ItemCreated(UUID id, String name, BigDecimal price) {
            this.id = id;
            this.name = name;
            this.price = price;
        }
    
        @Override
        public UUID getId() {...}
    
        public String getName() {...}
    
        public BigDecimal getPrice() {...}
    
        @Override
        public boolean equals(Object o) {...}
    
        @Override
        public int hashCode() {...}
    
        @Override
        public String toString() {...}
    }
}

In the item-impl project we add the implementation to ItemServiceImpl that will publish all ItemCreated events to the topic:

@Override
public Topic<be.yannickdeturck.lagomshop.item.api.ItemEvent> createdItemsTopic() {
    return TopicProducer.singleStreamWithOffset(offset -> {
        return persistentEntities
                .eventStream(ItemEventTag.INSTANCE, offset)
                .filter(eventOffSet -> eventOffSet.first() instanceof ItemCreated)
                .map(this::convertItem);
    });
}

private Pair<be.yannickdeturck.lagomshop.item.api.ItemEvent, Offset> convertItem(Pair<ItemEvent, Offset> pair) {
    Item item = ((ItemCreated)pair.first()).getItem();
    logger.info("Converting ItemEvent" + item);
    return new Pair<>(new be.yannickdeturck.lagomshop.item.api.ItemEvent.ItemCreated(item.getId(), item.getName(),
            item.getPrice()), pair.second());
}

We now want to make use of this in our order-impl project. Using the injected ItemService instance we can subscribe to the topic and act on each message. In this case we log something whenever we receive a new message.

@Inject
public OrderServiceImpl(PersistentEntityRegistry persistentEntities, ReadSide readSide,
                        ItemService itemService, PubSubRegistry topics, CassandraSession db) {
    ...
    itemService.createdItemsTopic()
            .subscribe()
            .atLeastOnce(Flow.fromFunction((be.yannickdeturck.lagomshop.item.api.ItemEvent item) -> {
                logger.info("Subscriber: doing something with the created item " + item);
                return Done.getInstance();
            }));
}

As mentioned earlier, you have the option of either at-least-once or at-most-once delivery semantics, via the atLeastOnce and atMostOnceSource methods on the Subscriber instance.
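For comparison, the at-most-once variant hands you a plain Akka Streams source that you run yourself (a minimal sketch, assuming an injected akka.stream.Materializer):

itemService.createdItemsTopic()
        .subscribe()
        .atMostOnceSource()
        .runWith(Sink.foreach(item ->
                logger.info("Subscriber: doing something with the created item " + item)),
                materializer);

With at-most-once semantics a message that fails to be processed is not redelivered, so this only fits use cases where losing a message is acceptable.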

If we run the application, create an item and afterwards check the logs, we see that the order service is receiving messages:

2016-12-23 22:04:22,337 INFO  b.y.l.i.i.ItemServiceImpl - Creating item: CreateItemRequest{name=newItem, price=15}
2016-12-23 22:04:22,349 INFO  b.y.l.i.i.ItemEntity - Setting up initialBehaviour with snapshotState = Optional.empty
2016-12-23 22:04:22,357 INFO  b.y.l.i.i.ItemEntity - Processed CreateItem command into ItemCreated event ItemCreated{item=Item{id=d370993c-dd75-4a88-bf8b-d9dba1820feb, name=newItem, price=15}, timestamp=2016-12-23T21:04:22.357Z}
2016-12-23 22:04:22,359 INFO  b.y.l.i.i.ItemEntity - Processed ItemCreated event, updated item state
2016-12-23 22:04:22,412 INFO  b.y.l.i.i.ItemEntity - Processed GetItem command, returned item
2016-12-23 22:04:22,413 INFO  b.y.l.i.i.ItemServiceImpl - Looking up item d370993c-dd75-4a88-bf8b-d9dba1820feb
2016-12-23 22:04:25,472 INFO  b.y.l.i.i.ItemEventProcessor - Persisted Item Item{id=d370993c-dd75-4a88-bf8b-d9dba1820feb, name=newItem, price=15}
2016-12-23 22:04:25,776 INFO  b.y.l.i.i.ItemServiceImpl - Converting ItemEventItem{id=d370993c-dd75-4a88-bf8b-d9dba1820feb, name=newItem, price=15}
2016-12-23 22:04:25,883 INFO  b.y.l.o.i.OrderServiceImpl - Subscriber: doing something with the created item ItemCreated{id=d370993c-dd75-4a88-bf8b-d9dba1820feb, name='newItem', price=15}

Generic read-side support, sharding support and automatic offset handling

Up until now, the read-side processor API was specific to Cassandra and required you to do the necessary offset tracking yourself. This existing API, while still usable, has been deprecated; instead, a new Cassandra-specific implementation for constructing read-sides and a more generic one for JDBC read-sides have been added.

The read-side can now also be sharded by tagging persistent entity events with sharded tags. Instead of declaring only one tag, read-side processors now declare a list of them. The processing of these tags across the cluster is handled for you by Lagom.

To compare the two of them, here is a sample of using a single tag:

public interface ItemEvent extends Jsonable, AggregateEvent<ItemEvent> {
    
    AggregateEventTag<ItemEvent> TAG = AggregateEventTag.of(ItemEvent.class);
    
    @Override
    default AggregateEventTag<ItemEvent> aggregateTag() {
        return TAG;
    }
}

And an example of using sharded tags:

public interface ItemEvent extends Jsonable, AggregateEvent<ItemEvent> {

    int NUM_SHARDS = 20;

    AggregateEventShards<ItemEvent> TAG = AggregateEventTag.sharded(ItemEvent.class, NUM_SHARDS);

    @Override
    default AggregateEventShards<ItemEvent> aggregateTag() {
        return TAG;
    }
}

In your ReadSideProcessor you will have to implement the aggregateTags() abstract method differently depending on which variant you use.

When using a single tag:

@Override
public PSequence<AggregateEventTag<ItemEvent>> aggregateTags() {
    return TreePVector.singleton(ItemEvent.TAG);
}

And when using sharded tags:

@Override
public PSequence<AggregateEventTag<ItemEvent>> aggregateTags() {
    return ItemEvent.TAG.allTags();
}

Lagom now also provides automatic offset tracking, which, until now, you had to do yourself in your read-side processors by explicitly loading and persisting offsets. This allows us to get rid of quite a bit of code, and as less code means fewer bugs, this is definitely a good thing. The section dealing with migrating to Lagom 1.2 shows the code that we got rid of.

Note that the existing Cassandra read-side API still exists but has been deprecated and might be removed in an upcoming version, so it is a good idea to migrate as soon as possible. Migrating to the new API isn’t that much work; the exact steps are described in the Migrating from 1.0 to 1.2 section.

JDBC support

JDBC support has been added to ease the introduction of Lagom into organisations of potential users by supporting their existing relational database infrastructure. In Lagom 1.3.0, support for JPA will also be added, which should become the preferred choice. A relational database is less of a natural fit for a non-blocking and reactive architecture, but supporting it allows potential users to start making use of the Persistent Entity API without having to switch over to Cassandra first.

To make use of JDBC support you need to add the Persistence JDBC module to your project, along with the JAR for your JDBC driver; a dependency sketch follows the list below. Lagom makes use of akka-persistence-jdbc to persist entities to the database. At the time of writing only four relational databases are supported:

  • PostgreSQL
  • MySQL
  • Oracle
  • H2
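Adding the module in an sbt build might look like this (a minimal sketch; the PostgreSQL driver artifact and version are just an example):

libraryDependencies ++= Seq(
  lagomJavadslPersistenceJdbc,
  "org.postgresql" % "postgresql" % "9.4.1212"
)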

akka-persistence-jdbc uses Slick for mapping tables and managing asynchronous execution of JDBC calls. You need to configure Slick with the right Slick driver for your database. An example of such a configuration in application.conf:

db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://database.example.com/playdb"
}

jdbc-defaults.slick.driver = "slick.driver.PostgresDriver$"

A journal table and a snapshot table are required by the akka-persistence-jdbc plugin, and by default Lagom is able to create these automatically for you. This automatic generation can be disabled, which you will probably want for any environment higher than development:

lagom.persistence.jdbc.create-tables.auto = false

The definition of the tables differs for each database; here is an example of the table definitions for PostgreSQL:

DROP TABLE IF EXISTS public.journal;

CREATE TABLE IF NOT EXISTS public.journal (
  ordering BIGSERIAL,
  persistence_id VARCHAR(255) NOT NULL,
  sequence_number BIGINT NOT NULL,
  deleted BOOLEAN DEFAULT FALSE,
  tags VARCHAR(255) DEFAULT NULL,
  message BYTEA NOT NULL,
  PRIMARY KEY(persistence_id, sequence_number)
);

DROP TABLE IF EXISTS public.snapshot;

CREATE TABLE IF NOT EXISTS public.snapshot (
  persistence_id VARCHAR(255) NOT NULL,
  sequence_number BIGINT NOT NULL,
  created BIGINT NOT NULL,
  snapshot BYTEA NOT NULL,
  PRIMARY KEY(persistence_id, sequence_number)
);

The definition scripts for each database are available here.

If you are unfamiliar with CQRS, Event Sourcing and the Persistent Read-Side in Lagom, you could take a look at the CQRS and Event Sourcing section in our previous blogpost on Lagom. It might be a bit out of date regarding the current Lagom API, but it should give you an idea.

The new API for Cassandra read-side support, while very similar to the existing API, still differs in a few places. The upcoming section describes the migration process of upgrading to Lagom 1.2 and the changes we had to make to our read-side, so the required code changes can be read there.

The API for the JDBC read-side support is rather similar to the Cassandra read-side support. Instead of using a CassandraSession for querying, you use a JdbcSession to retrieve a connection, which in turn you use for executing queries. The JDBC read-side API, however, unlike the Cassandra read-side API, is not fully non-blocking, which will lead to a performance difference. Switching to Cassandra later on in the project remains an option if you need better performance, but having an API for these four databases right now makes it easier to integrate a new Lagom application into an existing architecture built around such databases.
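A minimal sketch of what querying through a JdbcSession might look like (the jdbcSession field and the item table with its columns are hypothetical):

public CompletionStage<Optional<String>> getItemName(UUID id) {
    // withConnection runs the blocking JDBC work on a separate thread pool
    // and returns a CompletionStage, keeping the service call itself asynchronous
    return jdbcSession.withConnection(connection -> {
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT name FROM item WHERE id = ?")) {
            ps.setObject(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    return Optional.of(rs.getString("name"));
                }
                return Optional.empty();
            }
        }
    });
}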

Migrating from 1.0 to 1.2

A migration guide is available with the steps necessary to upgrade your project to Lagom 1.2. At Ordina Belgium we streamed and recorded an introduction video on Lagom 1.0 in which we demoed a shop application. For the purpose of this blogpost, let us see how much work it is to upgrade to 1.2.

First of all, we have to upgrade the Lagom version itself. Lagom 1.0 only supported sbt, so our demo is still using that. We change the version to be used in project/plugins.sbt:

addSbtPlugin("com.lightbend.lagom" % "lagom-sbt-plugin" % "1.2.2")

For the Lagom Persistence module, Cassandra support has been pulled into its own module, so you need to update the lagomJavadslPersistence dependency to lagomJavadslPersistenceCassandra in the build.sbt file.

We also have to update the Scala version in build.sbt:

scalaVersion in ThisBuild := "2.11.8"

And the ConductR version in project/plugins.sbt:

addSbtPlugin("com.lightbend.conductr" % "sbt-conductr" % "2.1.16")

The descriptor() in the service interfaces needs to be updated since the .with(...) method has been replaced by .withCalls(...).

We replace the existing code:

@Override
default Descriptor descriptor() {
    return Service.named("itemservice").with(
            Service.restCall(Method.GET,  "/api/items/:id", this::getItem),
            Service.restCall(Method.GET,  "/api/items", this::getAllItems),
            Service.restCall(Method.POST, "/api/items", this::createItem)
    ).withAutoAcl(true);
}

With the following:

@Override
default Descriptor descriptor() {
    return Service.named("itemservice").withCalls(
            Service.restCall(Method.GET,  "/api/items/:id", this::getItem),
            Service.restCall(Method.GET,  "/api/items", this::getAllItems),
            Service.restCall(Method.POST, "/api/items", this::createItem)
    ).withAutoAcl(true);
}

These are the necessary changes for us to successfully compile our project. Next, we need to update our read-sides to make use of the new API, starting with replacing the deprecated CassandraReadSideProcessor:

public class ItemEventProcessor extends CassandraReadSideProcessor<ItemEvent> {

With ReadSideProcessor:

public class ItemEventProcessor extends ReadSideProcessor<ItemEvent> {

The next step is to inject an instance of CassandraSession and CassandraReadSide via the constructor:

private final CassandraSession session;
private final CassandraReadSide readSide;

@Inject
public ItemEventProcessor(CassandraSession session, CassandraReadSide readSide) {
    this.session = session;
    this.readSide = readSide;
}

All code related to handling offsets can be deleted since Lagom now handles this for us. We delete the following:

private PreparedStatement writeOffset = null; // initialized in prepare

private void setWriteOffset(PreparedStatement writeOffset) {
    this.writeOffset = writeOffset;
}

private CompletionStage<Done> prepareWriteOffset(CassandraSession session) {
    logger.info("Inserting into read-side table item_offset...");
    return session.prepare("INSERT INTO item_offset (partition, offset) VALUES (1, ?)").thenApply(ps -> {
        setWriteOffset(ps);
        return Done.getInstance();
    });
}

private CompletionStage<Optional<UUID>> selectOffset(CassandraSession session) {
    logger.info("Looking up item_offset");
    return session.selectOne("SELECT offset FROM item_offset")
            .thenApply(
                    optionalRow -> optionalRow.map(r -> r.getUUID("offset")));
}

After having our ItemEventProcessor class extend the ReadSideProcessor abstract class, we are prompted to implement two methods: buildHandler() and aggregateTags(). aggregateTags() simply replaces aggregateTag(), whereas buildHandler() will contain the setup needed for our ReadSideHandler. The prepare(CassandraSession session) and defineEventHandlers(EventHandlersBuilder builder) methods that used to be overridden are now both covered by buildHandler(). The logic from the existing prepare() is split up into a setGlobalPrepare(), for creating the Cassandra tables (note that these tasks should be idempotent), and a setPrepare(), for preparing statements, both wired up in buildHandler().

We start by deleting the old prepare(CassandraSession session) and the defineEventHandlers(EventHandlersBuilder builder):

@Override
public CompletionStage<Optional<UUID>> prepare(CassandraSession session) {
    return
            prepareCreateTables(session).thenCompose(a ->
                    prepareWriteOrder(session).thenCompose(b ->
                            prepareWriteOffset(session).thenCompose(c ->
                                    selectOffset(session))));
}

@Override
public EventHandlers defineEventHandlers(EventHandlersBuilder builder) {
    logger.info("Setting up read-side event handlers...");
    builder.setEventHandler(ItemCreated.class, this::processItemCreated);
    return builder.build();
}

Finally we perform the necessary refactoring to implement both buildHandler() and aggregateTags(), and we clean up the existing processing logic for the read-side:

private CompletionStage<List<BoundStatement>> processItemCreated(ItemCreated event) {
    BoundStatement bindWriteItem = writeItem.bind();
    bindWriteItem.setUUID("itemId", event.getItem().getId());
    bindWriteItem.setString("name", event.getItem().getName());
    bindWriteItem.setDecimal("price", event.getItem().getPrice());
    logger.info("Persisted Item {}", event.getItem());
    return CassandraReadSide.completedStatements(Arrays.asList(bindWriteItem));
}

@Override
public ReadSideHandler<ItemEvent> buildHandler() {
    CassandraReadSide.ReadSideHandlerBuilder<ItemEvent> builder = readSide.builder("item_offset");
    builder.setGlobalPrepare(() -> prepareCreateTables(session));
    builder.setPrepare(tag -> prepareWriteItem(session));
    logger.info("Setting up read-side event handlers...");
    builder.setEventHandler(ItemCreated.class, this::processItemCreated);
    return builder.build();
}

@Override
public PSequence<AggregateEventTag<ItemEvent>> aggregateTags() {
    return TreePVector.singleton(ItemEventTag.INSTANCE);
}
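For completeness, the prepareCreateTables and prepareWriteItem helpers wired up above might look something like this (a sketch based on our demo project; the table and column names are illustrative):

private CompletionStage<Done> prepareCreateTables(CassandraSession session) {
    // Global prepare tasks run once across the cluster and should be idempotent
    return session.executeCreateTable(
            "CREATE TABLE IF NOT EXISTS item ("
                    + "itemId uuid PRIMARY KEY, name text, price decimal)");
}

private CompletionStage<Done> prepareWriteItem(CassandraSession session) {
    // Prepare the insert statement once and keep it around for the event handler
    return session.prepare("INSERT INTO item (itemId, name, price) VALUES (?, ?, ?)")
            .thenApply(ps -> {
                writeItem = ps;
                return Done.getInstance();
            });
}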

This leaves us with some refactoring to be done in our service implementations for registering the read-side. The sole thing that needs to be done is replacing the injected CassandraReadSide, which is now deprecated, with ReadSide.

So going from:

@Inject
public ItemServiceImpl(PersistentEntityRegistry persistentEntities, CassandraReadSide readSide,
                        CassandraSession db) {
    this.persistentEntities = persistentEntities;
    this.db = db;

    persistentEntities.register(ItemEntity.class);
    readSide.register(ItemEventProcessor.class);
}

To:

@Inject
public ItemServiceImpl(PersistentEntityRegistry persistentEntities, ReadSide readSide,
                       CassandraSession db) {
    this.persistentEntities = persistentEntities;
    this.db = db;

    persistentEntities.register(ItemEntity.class);
    readSide.register(ItemEventProcessor.class);
}

This concludes migrating our read-side logic to the new API.

Since Lagom now supports multiple persistence backends, and no longer just Cassandra, the TestKit no longer starts Cassandra by default. This requires us to add a single line, .withCassandra(true), to our server setup:

@BeforeClass
public static void setUp() {
    server = ServiceTest.startServer(ServiceTest.defaultSetup()
            .withCassandra(true));
}

All in all, migrating the code base to Lagom 1.2 took about fifteen minutes.

Looking at the rest of our initial feedback

In the previous blogpost on Lagom 1.0 we, like many other Java developers, made the point that offering only sbt as the build tool would repel many potential Java developers. We are glad that they addressed this quickly in the first new minor version (1.1) they released.

Regarding running Lagom in production without ConductR, the bare minimum for this is to write your own service locator. Jonas Bonér started a service locator project for ZooKeeper and Consul for this purpose. Lightbend also plans to offer a free limited use evaluation license which will allow developers to start working with the Reactive Platform’s commercial features from the beginning, including not only ConductR but also other goodies such as monitoring for Lagom circuit breakers and Akka actors.

In the blogpost we also wrote down several impressions compared to Pivotal’s Spring Cloud and Netflix OSS, currently the most popular choice for building microservice architectures in Java. A strong point for Lightbend is that they want to distinguish themselves from Pivotal by providing extensive commercial support if you get the Reactive Platform license. Pivotal also offers commercial support, but only if you buy the commercial Pivotal Cloud Foundry.

A key advantage of Lagom remains that it is non-blocking down to the core, from the persistence layer up to the endpoints, while Spring isn’t just yet. In the upcoming Spring Framework 5, of which a milestone version (M3) is already available, Pivotal is integrating its Spring Reactive initiative, providing core reactive functionality and reactive web endpoint support. On the topic of Spring Reactor, a colleague of ours, Tom Van den Bulck, recently wrote a blogpost on Reactive Programming with Spring Reactor. In it, Tom writes about the presentation on reactive programming that Stephane Maldini gave at Ordina’s JOIN 2016 event, a small one-day conference hosted by Ordina, which has also been recorded and is available on YouTube. It will be interesting to see whether Spring Framework 5 allows them to catch up on being fully non-blocking and reactive, which, up until now, remains a stronger point of Lagom.

Lagom’s CQRS and Event Sourcing integration remains another advantage: the out-of-the-box integration is easy to work with and they continue to improve on it, now with JDBC support and soon JPA support. In a Spring Cloud application a common solution for this is the Axon Framework, although it requires you to do the necessary gluing yourself. There is also the Eventuate framework written by Chris Richardson.

Polyglot support is something that is still a bit lacking on the side of Lagom. Spring has Sidecar for this purpose. Lagom services map down to ordinary, standard HTTP and WebSockets; as for other frameworks calling Lagom services, an integration client exists for JVM frameworks, while others will have to invoke the Lagom services directly via REST. As for Lagom consuming other REST services, you define an API service and implement the descriptor() to describe the external API. Documentation for this is currently lacking, but an example with Httpbin and Slack’s Messages API exists. To further address polyglot support, Lightbend is planning to implement a solution probably based upon Akka Streams and Alpakka. The idea is that you should be able to generate a Lagom service interface from specifications, allowing transparent integration.
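To give an idea, such a client-only descriptor might look like this (a hypothetical sketch; the service name and path are made up):

public interface WeatherService extends Service {

    ServiceCall<NotUsed, String> currentWeather();

    @Override
    default Descriptor descriptor() {
        // No implementation is registered for this interface; Lagom only uses
        // the descriptor to build a client for the external REST endpoint
        return Service.named("weatherservice").withCalls(
                Service.restCall(Method.GET, "/api/weather", this::currentWeather)
        );
    }
}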

Binary coupling was another remark of ours; to address this, support for Swagger will be added in version 1.4.

Conclusion

While there are still a couple of important things that need to be addressed, we believe we can conclude that Lightbend is carefully listening to the feedback given by the community. They continue to improve Lagom with new features and to offer a better developer experience.

At the same time, Pivotal is working on providing better reactive support with their upcoming Spring Framework 5.

Lightbend and Pivotal make for healthy rivals, which should have a positive impact on both frameworks. Pivotal clearly still has an advantage over Lightbend due to how mature and well-known Spring is, although Lagom seems to be developing nicely and continues to improve on its strongest points while also addressing its weak points.

We think that Lagom is worth adopting if you plan on getting a Reactive Platform license. Now that Maven support is available, a big hurdle for Java developers has disappeared. Without the Reactive Platform, development should be fine, but you will probably miss the useful goodies it has to offer for running your system in production, such as monitoring and service orchestration, which you get for free if you go with Spring Cloud and Netflix OSS. We are sure that using Lagom without the Reactive Platform will become more interesting given enough time for the community to come up with solutions for this.

Scala developers will also be eagerly awaiting the release of 1.3, after which they can finally sink their teeth into Lagom with the Scala API that it introduces.

Lagom 1.3 preview

The first issue created after Lagom was released was the need for a separate Scala API, something many users of Lightbend’s technologies were craving. Until now it was already possible to use Scala with Lagom through the Java API, see the seed project created by Mirco Dotta.

Lightbend got to work on it and, at the time of writing, a release candidate for 1.3.0 has been made available which finally includes the Scala API for Lagom! Other features to be expected in 1.3.0 include JPA support and new test APIs for testing message broker integration.

Extra resources
