Devoxx Poland 2017
This year's edition took place in Krakow, in the ICE Krakow Congress Centre. We started off day 1 with the keynote in the absolutely phenomenal main room:
Table Of Contents
- Keynote: Speed without Discipline: a Recipe for Disaster
- Feature Branches And Toggles In A Post-GitHub World
- A reasonable overview of Java 9 and how you could think of it
- The Language of Actors
Keynote: Speed without Discipline: a Recipe for Disaster (Venkat Subramaniam)
Venkat kicked off the keynote, talking about a paradigm shift that is happening right now in software development: in the nineties, everybody was doing imperative programming, using objects to implement functionality. Nowadays, this style of software development is shifting towards a more declarative approach. In imperative programming, developers focus on both what they want to do and how they want to do it. In declarative programming, on the other hand, developers focus on what they want to do and use tools and libraries to facilitate their goal. Venkat went on to state that programming in a functional style is declarative, but that not all declarative code is functional.
Functional style = declarative style + higher order functions
Declarative vs Imperative
Venkat told the audience that he doesn’t like driving cars. He compared driving a stick shift to imperative programming. His goal is going from point A to point B and he does not want to be involved in changing the gears (Manipulating the DOM). A car with an automatic drive train, is a step in the right direction, but still requires too much focus on how he wants to reach his destination (Using a library like JQuery). Using the auto pilot functionality in certain modern cars is another step in the correct direction, but what he really wants is a car with a dedicated driver, like Uber or Lyft offer (Abstracting the DOM and using frameworks like Angular). In this comparison the ride-sharing service is the declarative approach.
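Translated to Java, the same contrast shows up between a hand-written loop and the Stream API's higher-order functions. The snippet below is my own illustration, not code from the talk:

```java
import java.util.List;

public class DeclarativeDemo {

    // Imperative: we spell out *how* to iterate, filter and accumulate.
    static int totalOfEvensImperative(List<Integer> numbers) {
        int total = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                total += n;
            }
        }
        return total;
    }

    // Declarative: we state *what* we want; the Stream library handles the how.
    static int totalOfEvensDeclarative(List<Integer> numbers) {
        return numbers.stream()
                      .filter(n -> n % 2 == 0)
                      .mapToInt(Integer::intValue)
                      .sum();
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6, 7, 8);
        System.out.println(totalOfEvensImperative(numbers));  // 20
        System.out.println(totalOfEvensDeclarative(numbers)); // 20
    }
}
```

Both methods produce the same result; the declarative version simply hides the gear-changing.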
I automate my tests, not because I have a lot of time, but because I don’t.
After an introduction to declarative programming, Venkat switched to the topic of testing. To really be agile, we need to be confident that implementing new features won’t cause failure. We can achieve this confidence by automating our tests and making sure they are repeatable. If we are really confident, we might even be able to ship software, without running the application.
Writing software without writing tests is described as JDD: Jesus Driven Development. Pray that it works. Obviously, TDD (Test Driven Development) makes a lot more sense.
Software development: a profession where people get paid to write poor quality code and get paid more later to clean up the mess. — Venkat Subramaniam (@venkat_s), 27 September 2015
Testing vs verification
Testing and verification are two different things. Verification is the process that checks whether the code (still) works. This is not something anyone should do manually; verification is exactly what should be automated. Testing is the process that checks whether a feature is correctly implemented. Code represents what you have typed, not what you might have wanted the system to do. Testing is the act of gaining insight into the application and the business, and this may well be a manual task. Unfortunately, most of our industry has neglected this important difference.
The maturity of software verification can be categorized in three stages. Projects without verification automation are in denial; they are building up ever-increasing technical debt. The second stage describes projects that have some automated verification, but mainly on the UI level. Venkat describes UI-level automation with tools using WebDriver as a pathway to hell; this test method can be represented by the ice-cream cone anti-pattern. The last maturity stage contains the projects with the right measure of automated verification, for which the test pyramid is a good representation.
Venkat drew a comparison with 1820, when patients regularly died within three weeks of being operated on. Doctors (Joseph Lister, Louis Pasteur) started cleaning their tools and noticed a positive trend in survival rates.
Analogous to the doctors back then, we need to discipline ourselves in software engineering. This discipline is needed to keep up to speed and to stay agile, so that teams can react rapidly to customer requests. To build up this discipline, automated verification can be seen as the software equivalent of exercising.
We’re practicing a beautiful craft, let’s go turn it into a wonderful profession. Focusing on quality and creative things.
Feature Branches And Toggles In A Post-GitHub World (Sam Newman)
Sam told us about his experience on a project where the team was having trouble merging branches. The release branch for the next release was called R3, but for a large refactoring, branch R4 was created. Afterwards, he described merging the branches as a car crash; they even needed to introduce a dedicated R3-R4 merge bug fix team. Later on, they set up Continuous Integration in order to prevent the merging issues: the code pushed by the developers would get automatically validated by the CI setup. The problem with the R3-R4 release was that validation was done per branch only, not on the integration of the branches.
The integration should be validated every day and when the build breaks, fix it!
For unfinished work, we could wait until it is ready before checking in, but this exposes us to the risk of losing work while it exists only on the developer's computer.
An alternative would be to create a feature branch, which brings us back to the problem of merging branches.
Pain of merge = fn(size_of_merge, duration_since_last_merge)
Merging branches can be a difficult task and might lead to a commit race, offloading the effort to a colleague.
A third option would be to ‘check in anyway’, called trunk-based development. Every commit integrates to the trunk and developers should integrate their local changes daily. Small changes and integrating often makes it easier to merge new code.
New, half-finished features can be hidden behind feature toggles. These toggles can be managed using flags or configuration (e.g. in Zookeeper, Consul, …).
A flag should be set and evaluated in as few places as possible, preferably only once each. Flags should be removed when the new implementation is done.
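A minimal sketch of such a toggle in Java (flag names and values are invented for illustration; in a real system they would come from configuration or a store like Zookeeper or Consul):

```java
import java.util.Map;

public class FeatureToggles {

    // Backed by configuration / Zookeeper / Consul in a real system;
    // hard-coded here purely for illustration.
    private static final Map<String, Boolean> FLAGS =
            Map.of("new-checkout", false);

    static boolean isEnabled(String feature) {
        return FLAGS.getOrDefault(feature, false);
    }

    // Half-finished work lives on trunk but is hidden behind the toggle,
    // which is evaluated in a single place, as recommended above.
    static String checkout() {
        return isEnabled("new-checkout") ? newCheckout() : oldCheckout();
    }

    static String oldCheckout() { return "old checkout"; }
    static String newCheckout() { return "new checkout (in progress)"; }

    public static void main(String[] args) {
        System.out.println(checkout()); // old checkout
    }
}
```

Once the new checkout is done, the flag flips to true and is then removed along with the old code path.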
More info: Trunk-based development
Changes to an existing functionality can be done by providing an abstraction above the existing functionality. The new functionality can then be developed for the abstraction and when it is done, changed to the new implementation. Branch by abstraction has the side-benefit that it can be used for A/B and canary releasing.
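The branch-by-abstraction steps can be sketched as follows (class and method names are my own illustration, not from the talk):

```java
public class BranchByAbstraction {

    // Step 1: put an abstraction over the existing functionality.
    interface PriceCalculator {
        int priceInCents(int quantity);
    }

    // The existing implementation keeps serving traffic unchanged.
    static class LegacyCalculator implements PriceCalculator {
        public int priceInCents(int quantity) { return quantity * 100; }
    }

    // Step 2: develop the replacement against the same abstraction.
    static class NewCalculator implements PriceCalculator {
        public int priceInCents(int quantity) { return quantity * 100 - 5; }
    }

    // Step 3: when the new implementation is done, switch over. The same
    // seam can route a fraction of users for A/B or canary releasing.
    static PriceCalculator calculator = new LegacyCalculator();

    public static void main(String[] args) {
        System.out.println(calculator.priceInCents(3)); // 300
        calculator = new NewCalculator();
        System.out.println(calculator.priceInCents(3)); // 295
    }
}
```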
The Continuous Delivery book tells us to treat every check-in as a possible release candidate. Developers start with the assumption that it is worthy, the CI tool decides whether it truly is. Deploy frequently with small changes, making it easier to rollback and lowering the risk of running into problems.
And then there was Git, developed by Linus Torvalds with the goal of being able to merge a patch in less than three seconds. In Git, branches are much more lightweight and every local repository contains the full source history.
In 2008, GitHub was founded and introduced pull requests. If you wanted to contribute to open source projects before pull requests you had to:
- Develop it locally
- Generate a patch file
- Mail it over to the project owners
This feature contributed to GitHub’s success as three years later in 2011, they passed SourceForge and Google Code in popularity.
Sam remarked that pull requests use branches, which might bring back these merging problems. On top of that, GitFlow was introduced. Because GitFlow introduces even more branches, it is at odds with fast deployments and small change cycles. With tools like Split and LaunchDarkly, GitFlow is not needed, provided you merge frequently.
The conclusion was that experimental and release branches, which might even never get merged, still have their uses, and that the pull request mechanism works well in open source projects. Apart from experiments, releases and pull requests, Sam recommends avoiding branches, keeping batch sizes small, integrating often and shipping often.
A reasonable overview of Java 9 and how you could think of it (Oleg Šelajev - Slides)
Since Java 9 does not seem to have a codename and Java 10 is called Project Valhalla, Oleg proposed the codename Java 9: The Fury Road, a Mad Max reference.
Java 9 Release date: September 21st 2017
JShell is the new REPL (Read-Eval-Print Loop) for Java. It can be used to run commands and get results immediately. For user-friendliness, the semicolons can be omitted after the instructions in JShell.
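A quick session gives the flavour (a hypothetical transcript; the $-prefixed names are scratch variables that JShell generates per snippet):

```
$ jshell
jshell> int x = 21
x ==> 21

jshell> x * 2
$2 ==> 42

jshell> List.of("a", "b").size()
$3 ==> 2
```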
Several improvements will be added to the Optional class. Optionals can be turned into streams with the new stream() method. For eager evaluation, functional methods such as map and filter can be applied directly to the Optional; with stream() in front of the functional methods a ReferencePipeline (a Stream) is returned, which can be used for lazy evaluation. An or() method will be added to chain a supplier to empty Optionals.
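A short illustration of the two new methods, assuming a hypothetical lookup function:

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OptionalDemo {

    // Hypothetical lookup used only for this demo.
    static Optional<String> lookup(String key) {
        return "name".equals(key) ? Optional.of("Devoxx") : Optional.empty();
    }

    public static void main(String[] args) {
        // or(): the supplier is only consulted when the Optional is empty.
        String value = lookup("missing")
                .or(() -> Optional.of("fallback"))
                .get();
        System.out.println(value); // fallback

        // stream(): an empty Optional becomes an empty Stream, a present one
        // a one-element Stream, so empty Optionals flatten away in a pipeline.
        List<String> found = Stream.of("name", "missing", "name")
                .flatMap(key -> lookup(key).stream())
                .collect(Collectors.toList());
        System.out.println(found); // [Devoxx, Devoxx]
    }
}
```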
Two new methods will be added to the Stream interface: takeWhile and dropWhile. For ordered streams, these methods take or drop elements from the start of the stream while the predicate is true. In unordered streams the behaviour is nondeterministic: dropWhile returns the elements that remain after dropping a subset of matching elements, while takeWhile returns a subset of elements matching the predicate.
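On an ordered stream the difference looks like this (my own example values):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TakeDropDemo {
    public static void main(String[] args) {
        // takeWhile keeps the leading elements matching the predicate
        // and stops at the first mismatch (10 here)...
        List<Integer> taken = Stream.of(1, 2, 3, 10, 4, 5)
                .takeWhile(n -> n < 5)
                .collect(Collectors.toList());
        System.out.println(taken); // [1, 2, 3]

        // ...while dropWhile discards that same prefix and keeps the rest,
        // including later elements that match the predicate again (4 and 5).
        List<Integer> dropped = Stream.of(1, 2, 3, 10, 4, 5)
                .dropWhile(n -> n < 5)
                .collect(Collectors.toList());
        System.out.println(dropped); // [10, 4, 5]
    }
}
```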
The CompletableFuture class will be extended with a copy() method. The copied CompletableFuture is a defensive copy: completing it doesn't complete the original.
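A minimal demonstration of the defensive copy:

```java
import java.util.concurrent.CompletableFuture;

public class CopyDemo {
    public static void main(String[] args) {
        CompletableFuture<String> original = new CompletableFuture<>();
        CompletableFuture<String> copy = original.copy();

        // Completing the defensive copy has no effect on the original.
        copy.complete("done in copy");
        System.out.println(original.isDone()); // false

        // Completing the original does propagate to freshly made copies,
        // but never the other way around.
        original.complete("done in original");
        System.out.println(original.join()); // done in original
    }
}
```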
A ProcessHandle interface will be added, which can be used to get information about and control native processes.
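A small sketch of what the interface offers:

```java
public class ProcessDemo {
    public static void main(String[] args) {
        // Handle to the current JVM process.
        ProcessHandle self = ProcessHandle.current();
        System.out.println("pid is positive: " + (self.pid() > 0));

        // ProcessHandle.Info exposes optional metadata such as the command.
        self.info().command()
            .ifPresent(cmd -> System.out.println("command: " + cmd));

        // All processes visible to the current process can be enumerated.
        System.out.println("can see processes: "
                + (ProcessHandle.allProcesses().count() >= 1));
    }
}
```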
Bits and pieces
The underscore will become a keyword, so assigning a value to _ does not work. This is probably a feature for the future, where _ will be used for matching arguments of any type. Assigning a value to a double underscore (__) will keep on working.
In Java 8, default methods were added to interfaces; in Java 9, interface methods can also be private.
Property files will support UTF-8 and there is already Java 9 support in several IDEs.
There will be several changes to improve String performance, for example a more space-efficient internal representation for Strings.
Javadoc will get an improved search, HTML5 compliance and more info on the module where the class or interface comes from.
The use of agents will be more flexible, a process can attach an agent to itself and a JAR can contain multiple agents.
The Java Platform Module System (JPMS) allows modularization of Java applications. A module declares the modules it depends on with the requires keyword; to provide an API, the exports keyword is used. To open a package up for deep reflective access, the opens keyword can be used.
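A hypothetical module descriptor tying the three keywords together (module and package names are invented for illustration):

```java
// module-info.java at the root of the module's source tree
module com.example.shop {
    // modules this module depends on
    requires java.sql;

    // packages offered as the module's public API
    exports com.example.shop.api;

    // package opened for deep reflective access (e.g. by frameworks)
    opens com.example.shop.model;
}
```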
There is a method isAccessible(), but you should be wary of using it to determine whether something is actually accessible: it just returns the value of setAccessible(), a toggle that you have to set yourself.
To make the transition to the JPMS smoother, any JAR without a module descriptor that is placed on the module path becomes an automatic module. --illegal-access=permit is the default mode for JDK 9, still allowing code on the classpath illegal reflective access to platform internals, with a warning on first use.
As a migration strategy, Oleg proposes to wait for dependencies to modularize before modularizing yourself. Otherwise you might need to modularize twice to align with the dependencies.
Java 9 with Maven is complicated, many plugins need to be upgraded and a lot of functionality is not yet fully integrated with JPMS. Gradle releases fixes more often and currently supports more features.
A multi-release JAR containing multiple versions for the same file, in the same JAR, is a new feature that should be used with caution.
The G1 Garbage Collector (G1GC) will become the default in Java 9. Previous Garbage Collectors were not as scalable nor predictable. The G1GC promises a more scalable and more predictable system with few modification options.
By default, a quarter of the physical RAM will be allocated to the heap, unless the size is specified with the -Xmx option.
Due to its new region-based heap layout, it might run into problems with large chunks of data (so-called humongous objects).
It is recommended to feed streams of data directly to parsers without first capturing them in a byte array; this also applies to JSON parsing and database operations.
Another improvement is using immutable objects wherever possible. Using a StringBuilder instead of concatenating Strings will also reduce heap usage.
For more info, Oleg referred to a talk on Moving to G1GC by Kirk Pepperdine
The JDK 9 will contain an incubator package with a HTTP/2 client with a fluent API. The modules in the incubator package are non-final APIs that can be finalized or removed in future releases.
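A rough sketch of the fluent API; the snippet below uses the package and class names as they were later standardized in java.net.http (Java 11), while the JDK 9 incubator version lived in jdk.incubator.http. Only a request is built here; nothing is sent over the network:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class HttpDemo {
    public static void main(String[] args) {
        // Fluent builder for the client, opting in to HTTP/2...
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // ...and for the request. client.send(request, ...) would perform the
        // call synchronously; sendAsync returns a CompletableFuture.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))
                .GET()
                .build();

        System.out.println(request.method() + " " + request.uri());
    }
}
```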
Oleg concluded by recommending the audience not to touch multi-release JARs, jlink and Unsafe, unless you are 100% sure of what you are doing.
For now, he recommends to upgrade your IDE and tools and upgrade Spring to version 5.0.
Then add the --illegal-access=warn startup option, fix the easily fixable warnings, and then wait a year or more until the classpath and the libraries you depend on are upgraded.
The Language of Actors (Vaughn Vernon - Slides)
Vaughn started his talk by introducing Rear Admiral Grace Hopper to the audience. She was a computer scientist in the American Navy and wrote software for a long time. She was really into not wasting cycles and emphasised not wasting nanoseconds.
Then Vaughn introduced Donald Knuth, another legend in Computer Science. Knuth is known for the quote Premature optimization is the root of all evil. But that is not exactly what he said; the full quote is:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
Another quote was shared of Donald Knuth:
People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird.
To further build his point, Vaughn told about a project that was written in Cobol. The code was spread across 5 diskettes, so user interaction was needed to run the application. To improve its usability, the application was rewritten in C, allowing the software to fit on just one diskette. With this introduction, Vaughn wanted to emphasize how hard it is to optimize software for resource usage.
Threading is hard
In 1973, academics discovered the Actor Model. Thirteen years later, in 1986, Joe Armstrong rediscovered the approach and designed and implemented a programming language on this model: Erlang. In 2008, Jonas Bonér came up with Akka for the Java Virtual Machine, and in 2011 José Valim created another Actor-based language called Elixir.
Because the Actor Model is Message Driven, it inherently is Reactive.
Now is the time for the Actor Model, with the decreasing cost of memory, network and chips. Processors have a lot of cores these days: Intel Xeon units go up to 88 cores and the Intel Xeon Phi coprocessor can have more than 200. The actor model allows us to embrace latency: if we design for latency, it will not have a blocking impact on the design.
We are not at Google scale, why use actors?
With the Actor Model you can do more with less. The total number of nodes can be reduced to just a few, several million actors per machine is not a problem.
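Akka itself was not demoed in detail, but the core mechanics of an actor, private state plus a mailbox processed one message at a time, can be sketched in plain Java. This hand-rolled miniature is my own illustration, not Akka's API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniActor {

    // A counter actor: its state is touched only by the actor's own thread,
    // so no locks are needed; all communication happens via messages.
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count = 0;
    final CountDownLatch stopped = new CountDownLatch(1);

    void tell(String message) {
        mailbox.add(message); // asynchronous: the sender never blocks
    }

    int count() { return count; }

    void run() {
        new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take(); // one message at a time
                    if (msg.equals("stop")) break;
                    if (msg.equals("increment")) count++;
                }
            } catch (InterruptedException ignored) {
            } finally {
                stopped.countDown();
            }
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        MiniActor actor = new MiniActor();
        actor.run();
        actor.tell("increment");
        actor.tell("increment");
        actor.tell("stop");
        actor.stopped.await(); // wait until the mailbox is drained
        System.out.println(actor.count()); // 2
    }
}
```

Frameworks like Akka schedule millions of such mailboxes over a small thread pool instead of one thread per actor, which is what makes "several million actors per machine" feasible.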
The actor model uses the essence of Domain Driven Design (DDD): the bounded context and the ubiquitous language. DDD is an excellent way to make complexity surrender through knowledge crunching.
Actors help us reason better by having fewer moving parts. This allows us to focus on business aspects instead of the architecture around them.
How to do DDD in projects:
- Talk with customer (iterate)
- Write some scenarios (iterate)
- Strategic Event Storming (iterate)
- Tactical Event Storming (iterate)
- Implement acceptance tests and model (iterate)
This concludes our recap of this amazing edition of Devoxx Poland.