David Tanzer

When Tests get in Your Way


“When you have a lot of tests: What do you do when you want to change something, and the tests get in your way?”

I hear this question - in one form or another - every now and then. I hear it during the trainings I teach, when coaching, or when I am working on some code with a team and trying to convince them to write more automated tests.

It is one of the objections I often hear against doing test driven development (TDD): “When we have more tests, changing things later will become harder”.

This Should not Happen

In an ideal world, you would:

  • only add new tests when adding functionality - and not change any existing tests
  • only change related tests when changing functionality
  • not change any tests when refactoring - they will always stay green

At least this is the world we strive for when automating tests: A world where

  • Tests will provide a safety net
  • Tests will protect us against regressions -and-
  • Tests will not get in our way otherwise

This Will Happen

But most of us, most of the time, do not live in that world.

Sometimes you have tests that always get in your way. Like a test that is unstable: It is green most of the time, but sometimes, only on the build server, it breaks. Such tests are unconditionally bad.
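As a hypothetical illustration (the names and the 500 ms threshold are made up, not taken from any real project), a test that asserts on wall-clock time is a classic source of this kind of instability:

```java
// Hypothetical example of an unstable test: asserting on wall-clock
// time couples the test to machine load, so it tends to fail only on
// a busy build server. Names and threshold are invented for this sketch.
public class UnstableTestExample {

    static String computeReport() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(i).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        computeReport();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // This is the fragile part: "fast enough" depends on the machine.
        System.out.println(elapsedMillis < 500 ? "green" : "red");
    }
}
```

On the developer's laptop this is always "green"; on a loaded CI agent it occasionally is not - and that non-determinism is what makes the test unconditionally bad.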

Sometimes, a test will become “Red” even though the code change did not change relevant functionality. Sometimes you will have to change tests when adding functionality. Sometimes changing functionality becomes hard because of the tests involved.

When that happens, first look at the type of test.

Business-Facing Tests

There is one kind of test - business-facing tests that verify that the application works correctly from a user’s point of view - that should never break during refactorings. We only expect to change them when we change functionality. But even there, they might get in our way, especially if you have a lot of them. They might get in your way because the changes to the tests are too hard.

If such tests get in your way when changing code, try to find the root cause. Do you have many overlapping tests? Do you have tests that are not focused, that are testing too much at once? Is there another reason?

When you find the root cause, see if you can refactor your tests or replace them with a suite of better ones.

When such a test breaks during a refactoring, you have found a bad test. Try to find the root cause why the test was bad and remember to avoid it in the future. For now, try to make the test better, or delete it if you cannot do that.

Technology-Facing Tests

The other kind of test - Technology-facing tests that verify that our code works correctly from an internal point of view - might require some changes during a refactoring.

If you have contract tests that verify that a component uses an interface correctly, they might break when you inline some of the functionality that was in the interface before.
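As a sketch of why such tests break (all names - TimerDisplay, Countdown - are hypothetical stand-ins, not from any concrete code base), a hand-rolled contract test might use a recording fake and fail as soon as the interface call is inlined:

```java
// Sketch of a hand-rolled contract test: it verifies that the component
// calls the interface as agreed. All names are hypothetical.
interface TimerDisplay {
    void show(String remainingTime);
}

class Countdown {
    private final TimerDisplay display;

    Countdown(TimerDisplay display) {
        this.display = display;
    }

    void tick() {
        // The contract under test: every tick must notify the display.
        display.show("00:59");
    }
}

public class CountdownContractTest {
    public static void main(String[] args) {
        final String[] shown = new String[1];

        // A recording fake instead of a mocking library.
        Countdown countdown = new Countdown(time -> shown[0] = time);
        countdown.tick();

        // If someone inlines the display logic into Countdown, the
        // interface is no longer called and this check fails.
        System.out.println("00:59".equals(shown[0]) ? "contract ok" : "contract broken");
    }
}
```

When a refactoring moves the displayed functionality into Countdown itself, the fake never records a call - the test breaks even though the observable behavior is unchanged.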

If you split a large class into two smaller ones, it might also be necessary to move the tests around or to delete the bigger tests and write new ones for the smaller classes.

On the other hand, those tests might survive changes to the functionality - But only in rare cases.

About Deleting Tests

“Did he really just write delete tests?” – Are you thinking this right now?

Yes. Delete the test. Especially if you have a bad test. Even if you do not have a replacement right now (but do strive to find or write a replacement).

“But then this piece of functionality is not tested anymore” - OK, but a bad test is often worse than having no test at all.

“But we already invested so much time in writing and maintaining this test” - That’s the Sunk Cost Fallacy - Don’t fall for it!

Also, some tests were not always bad, not always worth deleting. Some tests have just outlived their usefulness. When a test was useful in the past, it was already worth the investment. When it is not useful anymore, delete it without looking back.

Test Driven Development vs. Test First vs. Test After

So, here’s the problem with usefulness: When you write the tests after the fact, and even sometimes when you write the “tests first”, you are writing tests to protect you against regressions. You want your tests to notify you when your code has changed.

The usefulness of your tests comes from protecting you against invalid code changes. Having to delete such a test because of a valid code change almost proves that the test was not useful and not worth the investment.

When you do TDD, things are different. You write the tests to drive the implementation and design of your code. You write the tests to help you think and to know when to stop.

In the very moment when the test becomes green, you already have proof that the test was useful. Protection against regressions comes as a side effect. When you have to delete such a test later, it’s no big deal: The test was useful once, but has now outlived its purpose.

To Recap...

When a test gets in your way, try to find out why. Write the root cause down in your engineering notebook or wherever, and discuss it with your team. Consider the type of test to know how big of a “problem” you have.

Try to find or write a replacement for the test, and try to make it better. Try to learn something and write better tests in the future.

And when you find a bad test or a test that has outlived its usefulness, delete it. But do try to write or find a replacement.

Practice - Longer than a Code Kata


Recently I was teaching a course about “Software Crafting” to a group of developers, all from the same company. We agreed that I would prepare three days of training (or workshops), and then we would spend two days on whatever they wanted. The original idea was to mob program on their production code in those last two days, so we could work on problems they have right now.

But after the three workshop days, they asked me whether we could practice more.

“We want to do a longer example and practice everything we learned so far again!”

So we did that, and it went really well. And here I want to share with you what I have learned in those two days.

Prepared for the Workshop

But first, let’s talk about the…

Artificial Examples Problem

In my workshops and trainings, I usually do very small examples. We write some tests and some code for an hour or two, and then we move on to the next thing.

I think this approach is necessary for what I want to teach. You can learn a lot from doing those small examples. Even if you do not produce some “done” software. Even if you delete the code afterwards.

You still keep the learnings.

Most people like it. But sometimes I hear comments like…

This is so annoying. We never finish something.


These small examples are unrealistic. How do you do that in the real world?

And I do understand those concerns. But workshops and trainings are almost always too short to do longer, more realistic examples. So, I was really glad that this group decided they wanted to spend their two extra days practicing!

Starting the Two Days

We did not prepare much up-front. The people at the company prepared an empty git repository on their GitLab server, and they set up GitLab CI to run their tests on every commit.

I thought about what we could do in those two days and decided that we would implement “Four in a Row”. I also came up with a rough plan for the two days: Do a longer planning session at the beginning of each day, then have three to four short iterations (2 hours max) where the team should produce working software.

In the beginning, we did some release planning. We talked about the rules of “Four in a Row”, how to do the UI (text-based), some other business requirements and nice-to-have features (like playing against bots or a graphical user interface). After that, we had a single flip chart page full of text, describing what we wanted to implement.

Then I told them to create work packages from that. I tried to avoid Scrum terminology because they do not use it in their day-to-day work, so we called those cards “Features”.

First, they wanted to split the work along technical layers (We need a console input. And the rules of the game. And…). I explained that I wanted features that we can implement end-to-end and provide value to the user, yet are small and independent.

A rough plan for the day

Creating features like that was hard for them. But that is OK, it is also still hard for me. We came up with a plan, after some discussion, and it was good. At least good enough to start.

Day One

There were 7 attendees, so I asked them to work in three groups - Two pairs and one group of three. All of them would commit to the main line.

We took the first two cards to implement in the first iteration, but I asked them to first finish one feature together, then work on the next. To be able to split the work, we had a short design discussion. They came up with a design that had three modules, and a fourth one coordinating the other three (Conway’s law? Coincidence? I do not know…).

Roughly 1.5 hours after starting the workshop, we were ready to start coding. That was quick, I thought.

Towards the end of the first iteration, all three groups declared that they were “basically done, just have to fix a little thing and push”. They didn’t consider that in software development, the first 90% takes just as long as the second 90% and the third 90% of a feature ;) (nobody ever does). We ran out of time and went for lunch, without delivering working code.

After lunch. Two iterations left. We did a short retrospective of “What went well? / What is missing? / Do more of? / Do less of?”. There were some minor concerns that we could address quickly. But one attendee wrote down “We have not integrated anything. The game loop is still missing.”

This was my main concern too, so we decided to concentrate on integrating and shipping the first feature in this iteration.

And they did deliver that, and even the second feature.

But every one of the groups was writing more code than strictly necessary. They wrote code that they knew they would need in later features. This is something very natural for programmers: We write more code than we need to. For me, too, that is still a very hard thing - to only write the code that is required, not what I think is required.

Because they wrote more code than necessary, we still did not have a release at the end of the second iteration. The two features worked, but there was code for some of the next features that was only partly working.

In the third iteration, progress took off. They implemented almost all of the small-but-important features and they had a working program that they could try. Even the tests were green most of the time. But near the end, someone accidentally pushed a test that asked for user input. So, still no green bar after the third iteration.


Two other interesting things happened on day one.

At first, the team could not agree on whether to merge or rebase when pulling from git, so some merged, some rebased. In the afternoon, they saw that this was not working (without me saying anything; that was hard for me, keeping quiet for so long). They had a whole-team discussion and decided to rebase.

Something similar happened about software design. I saw that two pairs were working on related functionality but had completely different approaches (both were valid). I told them, and they quickly decided that they should use the same approach. But then they had a long discussion about which approach to take.

That time was not wasted, though: Now they have a software design that is better than before (IMHO) - and even more important, one that everyone agrees on.

The Half-Way Retrospective / Planning

On the first day we focused a lot on using git as a team and on progress. The attendees learned a lot about that, and they wrote a lot of production code and test code, so now it was time to focus on different things.

Results of the Half-way Retro

To focus more on the process, I wrote pink sticky notes with extra tasks for the team.

They wanted to try code reviews with pull requests, for example. So, I wrote a sticky note saying “Merge stable code to master” and I told them to do that after every iteration (of which we had multiple per day).

I also wanted to make them solve the “we don’t have a stable version” problem by doing that. They had this problem because they implemented too much at once, so I wanted to nudge them to stabilize the code towards the end of the iteration instead of starting new things.

I also tried to make clear that I wanted them to deliver working software after every iteration. That it does not matter how much or how little code they delivered - The code had to work. Later, they said that this had put pressure on them.

But I think this pressure was good - It made them try harder, they made some mistakes, they learned from those mistakes.

Day Two

The team also modified the board on their own. They wanted to know who is currently working on what, so they created the three orange stickies near the bottom.

Updated Task Board

Every pair would put a sticky note on one of the three large, orange ones when they started something new. When others saw that a pair was moving the stickies there, they could ask questions or synchronize.

We had a long(ish) code review with lots of remarks. I made clear that the code review is there to criticize code, not people. So, no personal comments, but also: Do not take comments of others personally.

And they did just that - they criticized the code harshly but were very fair and helpful. I never had to intervene; they just sometimes asked me questions about code quality or “which of those two ways is better?”.

We came up with a list of improvement ideas and the team decided that one pair should fix them and merge to master while the others would continue. This caused problems later because the team needed much longer than they thought for fixing and merging. But the others swarmed and helped them.

And then we had a stable, demoable version that already did something interesting!

It was time to work on the more difficult tasks, since one could already play the game. I wanted them to implement the feature “Game recognizes when a player has won”.

They immediately wanted to start, but I told them that this feature was much too large for a 1.5-hour iteration. They had to split it.

But not along technical lines / layers. That would be wrong, because then a single feature would again not deliver any value. They agreed, but did not know where to start the splitting.

I asked them: “Is there any situation where determining the winner is easy?” - Yes, if you place a stone in the right-most column when there are three stones in a row left to that column, it is easy to determine the winner. So, this was one of the split features, where the game could only determine the winner in this case. We ended up with three features that would provide value on their own, and together would make the game able to determine all winning scenarios correctly.
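The “easy case” we identified can be sketched like this (the board representation and all names are hypothetical, not the workshop team’s actual code):

```java
// Sketch of the "easy" split feature: after a stone is placed in the
// right-most column, that player wins if the three cells directly to
// the left of it hold the same stone. Representation and names are
// hypothetical, not the workshop team's code.
public class FourInARow {

    static boolean winsInRightmostColumn(char[] row, char player) {
        int lastCol = row.length - 1;
        if (row[lastCol] != player) {
            return false;
        }
        // Check the three cells directly left of the right-most column.
        for (int col = lastCol - 3; col < lastCol; col++) {
            if (col < 0 || row[col] != player) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        char[] row = {'.', '.', '.', 'X', 'X', 'X', 'X'};
        System.out.println(winsInRightmostColumn(row, 'X')); // expect: true
    }
}
```

The point of splitting this way is that even this narrow slice is a complete, user-visible behavior: the game can now announce a winner, just not in every situation yet.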

One pair messed up the git repository during a rebase. They did not run the tests after every step, and committed and pushed something broken. The others committed on top of that. The mess was hard to untangle, but another pair stopped their own work and helped the first pair. I helped a little too, and we soon had a working build again.

We implemented winning and some other, smaller features. In the end, we had a game that one could play, but there were some quirks because some minor features were missing.


What did they learn? Enough to fill a flip-chart page:

Learnings attendees do not want to forget

There were many interesting things that happened in those two days that I could not teach in the “normal” training, but that the attendees experienced here:

  • How hard it is to create working software early and often
  • How important it is to only write code that is strictly necessary to achieve that
  • That working together on the same mainline branch and on the same classes can work
  • That merge conflicts happen and are generally easy to solve - Especially when you work in small steps and communicate with the others
  • That messing up the code in the repository can happen, but that git is a great tool to fix it
  • To watch the automated build
  • To run tests locally after every step
  • To split requirements and features in a way so that the smaller feature still provides value
  • To distribute tasks for a feature to pairs in a way so that the team can swarm

I think I told them most of those things during the training before, but you have to experience them to really learn them.

Those two days were a great experience for me. And the attendees loved them too. They practiced many of the techniques they had learned during the training again - but in a more “realistic” setting. And this was the first time they tried swarming - i.e. to work on one feature, as a team, before moving to the next.

I thoroughly enjoyed facilitating this exercise / experiment and would love to do it again!

Do you want to try something like that, or do a different workshop? Find out how hiring me as a trainer could work out!

TDD: Why do You Want Me to Write Bad Code


Last week I was teaching TDD to two groups of programmers. And in the first group, someone asked me “Why do you always want me to write wrong code?”

I was thinking to myself: “You are wrong, I do not want you to write wrong code, I want you to write the right code!” Fortunately, I did not say that out loud. Instead I asked: “I think I don’t understand what you mean. Can you explain it to me?”

Then the whole question turned into a long discussion with the whole group. And it turned out that this is a really good question and describes a problem that I had too, when I started with TDD. And the group was able to find the answer on their own, with very little input from myself.

It turned out, I had asked them to write “wrong” code…

One Hour Earlier

In the second lab, I asked the group to implement the rules for the game hangman - the children’s game where a player has to guess a secret word by guessing its letters. They would do so together (“mob programming”), and I would intervene every time I thought they could improve something.

When the game starts up, it displays a hint before even asking the user to do anything. And at first, this hint only shows blank characters, one for each letter in the word. So, for the word “driven” it would show “_ _ _ _ _ _”.

The group already had the first test for this functionality, which looked like:

public void hintIsSingleUnderscoreForSingleLetterWord() {
    String secretWord = "a";
    Hangman hangman = new Hangman(secretWord);

    String hint = hangman.getHint();

    assertThat(hint).isEqualTo("_");
}


And now they were arguing what was the simplest solution to implement this. When they were about to implement their solution, I said “Stop. That’s too complicated. Why don’t you just return a single underscore?”

The Wrong Code

So, the production code now looked like this:

public String getHint() {
    return "_";
}

To me, this looked perfectly fine. But to one attendee, it was “wrong code”.

It was wrong to him because he had to change it 5 minutes later. It was arguably not code that could stay like it was under any circumstances.

Yes, when you work like that, you do a lot of extra work in the production code. And in the end, most of it will be gone - it will have been deleted or changed. After you write this second test, you will change the return in the production code to something completely different:

public void hintIsThreeUnderscoresForThreeLetterWord() {
    String secretWord = "the";
    Hangman hangman = new Hangman(secretWord);

    String hint = hangman.getHint();

    assertThat(hint).isEqualTo("_ _ _");
}
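For example, a minimal implementation that makes both tests pass might look like this (my sketch, with an assumed field name - not necessarily what the group wrote):

```java
// Sketch of production code that satisfies both tests above: one
// underscore per letter, separated by spaces. The secretWord field
// is an assumption; the group's actual code may differ.
public class Hangman {
    private final String secretWord;

    public Hangman(String secretWord) {
        this.secretWord = secretWord;
    }

    public String getHint() {
        StringBuilder hint = new StringBuilder();
        for (int i = 0; i < secretWord.length(); i++) {
            if (i > 0) {
                hint.append(" ");
            }
            hint.append("_");
        }
        return hint.toString();
    }
}
```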

Why do We Write Code Like That?

Just because you will have to change the code again does not mean that it is useless. Even though I understand that it might feel like that. It is not useless because…

You made progress on the API. Before this test, you had no production code at all. Now, you have decided that you want a method getHint that returns a string.

You are designing the API of the Hangman class as you go. And you use your tests to drive that design. Some tests, like the first, drive only higher-level design - You can implement the method with return "_";. Other tests, like the second one, drive the implementation.

You made progress in your tests. Before this return "_";, you had zero green tests. Now you have one. One green test that will be there until you are “finished” (if you ever are). A test where you have to be careful that it stays green.

After the second test, you have two tests for the same functionality - rendering the initial hint - but for slightly different aspects of it.

By re-doing your production code over and over again, you are making progress in your tests. Your test suite gets better and better. And this test suite is your protection against regressions when you are refactoring and your executable documentation. So, you want it to be good.

You made smaller steps than in test-after. Having those fine-grained tests is important for us. This way of working forces us to take smaller steps. That helps us to write only the code that is necessary, and to write all the tests that are necessary. When testing after the fact, you would maybe only write the second test, and that might miss a subtle but important detail that is in the code.

The Difficulty: Taking Smaller Steps

I think that the question “Why do you want me to write the wrong code?” is part of a larger problem that I always see in my TDD workshops. TDD, when done right, forces you to take really small steps. And that is very hard when you have never done it before.

And that is why we practice so much in my workshops. I can explain the basics of TDD in 10 minutes (Yes, there are some subtleties. I will explain them on day two). But you have to experience it and practice it - Otherwise you will not even know that you want to ask questions like this.

If you liked this blog, you might be interested in some of my other posts in the category “TDD”:

Do you have any questions? Just send me an email!

Legacy Code: The Mikado Method


At the We Are Developers World Congress 2018, I gave a talk about how to deal with legacy code using the Mikado Method. So, here is the concrete example I gave in my talk.

The Mikado Method

The Mikado Method gives us a way to change legacy code in small, safe steps. Read the linked post for a detailed description. To summarize:

  1. Write down your current goal
  2. Try to reach that goal directly
  3. If you fail
    1. Write down everything that prevents you from reaching your current goal as a sub-goal
    2. Revert your changes
    3. For each sub-goal, repeat 1
  4. If you succeed
    1. Commit your changes
    2. Repeat with the next goal

This gives you a tree where all the leaves are immediately solvable, and when you solved them, you can try and solve the goals higher up in the tree. You might discover more problems that prevent you from reaching your goal - Then you just add another branch to the tree.

In the end, you will - hopefully - have solved your original goal. And you will have done so in small, safe-to-fail steps.

The Legacy Code

I used the Baby Steps Timer as the starting point for this exercise. It’s a pretty nasty piece of code where every part of the code is coupled to something else in the code. So, whenever you try to change something, something else breaks.

Before I started with the mikado method, I wrote some characterization tests (maybe I’ll write about them in a future blog post). They are still a little bit flaky - they sometimes fail without a reason - because of all the threading involved.

But it is absolutely crucial to have them: With the mikado method, we must have a way to quickly check whether our change broke something. And the compiler is not enough for that.
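A characterization test pins down what the code does today, not what it should do. As a hypothetical sketch (LegacyFormatter is a made-up stand-in, not code from the Baby Steps Timer):

```java
// Hypothetical characterization test: the expected string was captured
// by running the existing code once, not derived from a specification.
public class LegacyFormatter {

    static String formatRemainingTime(long millis) {
        long seconds = millis / 1000;
        return String.format("%02d:%02d", seconds / 60, seconds % 60);
    }

    public static void main(String[] args) {
        // Captured output of the current implementation for 90 seconds.
        String expected = "01:30";
        String actual = formatRemainingTime(90_000);

        // If a refactoring changes this, we changed observable behavior.
        System.out.println(expected.equals(actual) ? "unchanged" : "CHANGED: " + actual);
    }
}
```

A small suite of such tests gives you the quick “did I break something?” check that each mikado step depends on.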

Move Timer Thread to Own File

This commit on the branch characterization-tests is where I started with the actual refactoring. I have at least some tests in place now. So, I created a new branch called mikado-method and I also wrote down my main goal:

Note with main mikado goal

…and tried to move the TimerThread out of the main class, using the refactoring tools. This resulted in code that does not compile:

Source code with errors: TimerThread

Now, let me repeat that this - the code not compiling - is totally expected: With the mikado method, you just try to reach the goal in the most straight-forward way. You expect things to break. But you do this to identify your sub-goals. And then, you revert all your changes, back to the last working version.

Now I was supposed to write down everything that prevents me from achieving my goal. But in such a situation, I do not write down every compiler error. I write down some high-level design changes that will help me reach my goal. In this case, the compiler errors belong to two categories: stuff the timer needs for its own logic to work and stuff the timer needs for rendering the result.

Note with main mikado goal and two sub-goals

Don’t forget to write those sub-goals down. It is not enough to just identify them and hope you will remember. I prefer paper for writing things like that down, but a simple text file or a mind mapping tool will do too.

Timer Logic

This was a quick one… I moved the three variables that are related to the business logic to the timer thread inner class:

private static final class TimerThread extends Thread {
    private static boolean timerRunning;
    private static long currentCycleStartTime;
    private static String lastRemainingTime;
    // ...
}

And in the rest of the code, I access them through the TimerThread class:

} else if("command://stop".equals(e.getDescription())) {
    TimerThread.timerRunning = false;
}

Rendering Logic

I moved all the code related to rendering to a TimerRenderer class.

public static class TimerRenderer {
    static JTextPane timerPane;
    private static String bodyBackgroundColor = BACKGROUND_COLOR_NEUTRAL;
    private static JFrame timerFrame;

    public static String getRemainingTimeCaption(final long elapsedTime) {
        // ...
    }

    private static String createTimerHtml(final String timerText, final String bodyColor, final boolean running) {
        // ...
    }
}

I also had to create an interface that the timer thread can use:

public static class TimerRenderer {
    public static String getBodyBackgroundColor() {
        return bodyBackgroundColor;
    }

    public static void update(String remainingTime, String bodyBackgroundColor, boolean timerRunning) {
        TimerRenderer.bodyBackgroundColor = bodyBackgroundColor;
        timerPane.setText(createTimerHtml(remainingTime, bodyBackgroundColor, true));
    }
}

The timer thread now uses only this interface when updating the timer:

while(timerRunning) {
    long elapsedTime = wallclock.currentTimeMillis() - currentCycleStartTime;

    String bodyBackgroundColor = timerRenderer.getBodyBackgroundColor();
    String remainingTime = timerRenderer.getRemainingTimeCaption(elapsedTime);
    if(!remainingTime.equals(lastRemainingTime)) {
        timerRenderer.update(remainingTime, bodyBackgroundColor, true);
    }
    // ...
}

Note: This change introduces a subtle defect. Apparently, I forgot one characterization test that would have caught it. It is very hard to know which characterization tests to write, and whether you have enough. But you still should write them!

Can you spot the defect?

More Problems

Now my two original sub-goals were complete. So, I tried to move the TimerThread to its own class again. But I noticed two more problems:

Note with main mikado goal and now with four sub-goals

I must pass the timer renderer to the timer thread as a constructor parameter - otherwise, the renderer cannot access it anymore, once it lives in a different file.

public TimerThread(BabystepsTimer.TimerRenderer timerRenderer) {
    this.timerRenderer = timerRenderer;
}

And I also created an interface to stop and to reset the timer, so that the main class does not have to access private variables of the timer thread anymore:

public static void stopTimer() {
    timerRunning = false;
}

public static void resetTimer(long newTime) {
    currentCycleStartTime = newTime;
}

I also had to change the access of one constant from private to public.


And now it was finally possible: I was able to move the TimerThread out of the main class, into its own file.

Note with main mikado goal, all sub-goals done

You can see the end result in the mikado-method branch of the babysteps-timer repository.

Maybe I could also have done this by doing a “refactor by compiler error”: Just do something, and then fix all the errors, do more, fix more compiler errors.

But this would definitely have been harder. The nice thing about the mikado method is that it is very safe. The code always compiles. You can always run the tests.

And when you do something that does not work right out of the box, you undo and solve the problems first. This allows you to work in small, safe steps, and to identify the steps as you go.

If you have a lot of legacy code, you might also experience some “Agile Anti-Patterns”. So, go get my book “Quick Glance At: Agile Anti-Patterns” ;) And if you liked this post, you might be interested in my services: I help teams to get better at developing high-quality software. Just contact me!

Related posts in the category “Legacy Code”:

Overcoming my Shyness - Intro


I enter the room. I’m about to give a talk at a conference.
They gave me the biggest room of the venue.
Oh my god, I did not realize how big it is.
I hope it won’t be full.
Oh my god, I hope it won’t look empty!
I go to the audio engineer. Getting my microphone, testing my laptop.
I wait somewhere near the back. My heart starts racing.
Slowly, the room is filling. So many people.
Why did I submit my talk to this conference again?
5 minutes to go. I know that, because I am checking my watch every 20 seconds.
I ask the audio engineer the time. 4:40, then it starts.
I check if my hands are sweating. They are dry. Why are they dry?
I really don’t like to be in the same room with so many people.
40 seconds to go. I can’t remember what I wanted to talk about.
20 seconds. Do my slides even make sense? I should have re-arranged them.
0 seconds. OH MY GOD, I have to give my talk NOW.

I put on a smile. I walk up the stage.
“Good afternoon, everyone! So many came here - awesome! I hope your lunch was good. Today I will talk about…”

Two weeks ago, I gave a talk at a conference. This is roughly how I felt.

Had you asked me ~20 years ago, when I started university, if I could give a talk in front of 300 or more people, I would have said: “No. Never. Ever. I just cannot do that. I am too shy, I would die on the stage.”

Some of the things I am doing, like talking and networking at conferences, are still not easy for me. Even though I do them all the time. Even after many, many years. So, what has changed? Why am I doing them?

When WeAreDevelopers interviewed me, one of their questions (and my answer) was:

WeAreDevs: What is one of those things you wish you knew when you started out developing?

David: That becoming a good software developer is all about being good at communicating with other developers – and especially with “non-technical” people.

I think that communicating and networking are key to having a great career in software development. We must learn to talk to strangers. Talk to people from completely different backgrounds - Understand their background, adjust the communication. Be able to present ideas in front of groups. Quickly teach people a single thing about a topic.

Those are important, even if you never go to a conference. You also need those skills when joining a new team or when you want to become a technical lead, software architect or some other role that involves a lot of communication.

And so I am trying to get better at them. I am now at a point where, most of the time, many of those things are not scary anymore. But they are still exhausting. And I am not there yet - There is still much more to learn.

I am writing this down to clear my thoughts about it. And to get your feedback. And maybe what I wrote can help some others by showing them that this stuff is not easy for me, and that I (and others), too, are constantly working on that.

I want to write more about that, later. So, here are all the blog posts from this series:

If you have any questions, if you need help or advice or just somebody to talk to, feel free to ping me. I am sometimes quite busy, but I will help you if I can. DM me on Twitter or send me an email to business@davidtanzer.net.

Also, I want to teach a short (2-hour) workshop about “speaking at conferences” at SoCraTes Austria. Participants will practice

  • Finding conferences to speak at
  • Finding and refining a topic
  • Writing and refining abstracts
  • Preparing to create slides
  • Communicating with the conference
  • Courage and Self-Confidence

If this sounds interesting to you, come to SoCraTes Austria. And if you cannot make it, maybe I could teach it at a meetup or user group near you? Email me - business@davidtanzer.net - so we can talk about the details.

Immutable.js and Redux / React


In the last installment of this series, I wrote about the basics of immutable.js. Today, I want to write about how to use it together with Redux.

I wrote a very small application with React, Redux and immutable.js: It just displays two red squares and some lines to the corners of each square. And you can drag and drop the corners with your mouse.

Simple React / Redux App

This functionality may sound trivial. But the app has some React components that operate on common data that can change (both Link and Square need the location of a Square), and so it can potentially benefit from Redux.

You can find the full source code here. For now I am showing you the code from the tag simple-redux-immutable.

Reducers and Immutable

There are two things you have to do to make your reducer work with an immutable data structure:

Make the initial state an immutable object

import { fromJS } from 'immutable';

const initialState = fromJS({
    squares: [
        { x: 316, y: 281, },
    ],
    edges: [
        { x: 0,    y: 0,   squareX: 0,  squareY: 0,  },
    ],
});

export function reducer(state = initialState, action) {
    return state;
}


This application draws squares and lines between them. The data required to do so is stored in an immutable Map with the keys squares and edges. Each of those keys contains an immutable List of Maps that in turn hold the data.

Instead of creating a new JavaScript object, perform an immutable update

export function reducer(state = initialState, action) {
    switch(action.type) {
        case 'SQUARE_MOVED':
            return state
                .updateIn(['squares', action.id, 'x'], x => x+action.dx)
                .updateIn(['squares', action.id, 'y'], y => y+action.dy);
    }
    return state;
}


When the user drags a square with their mouse (action SQUARE_MOVED), the reducer updates the x and y coordinates of that square.


In mapStateToProps, I get the data required for rendering the components from the redux store. Either as simple JavaScript numbers:

function mapStateToProps(state, ownProps) {
    return {
        x: state.getIn(['squares', ownProps.id, 'x']),
        y: state.getIn(['squares', ownProps.id, 'y']),
    };
}


Where id is provided by the parent component (src/DrawingArea.js):

<SquareContainer id={0} />

Or as an immutable data structure that the component itself will pick apart further:

function mapStateToProps(state, ownProps) {
    return {
        square: state.getIn(['squares', ownProps.toSquare]),
        edge: state.getIn(['edges', ownProps.fromEdge]),
    };
}


Where toSquare and fromEdge are again provided by the parent component (src/DrawingArea.js):

<LinkContainer fromEdge={0} toSquare={0} />


With Redux, all the components that actually render stuff - your presentational components - can be pure components (at least when you stick to the rules of Redux):

export class Square extends React.PureComponent {


This may or may not be faster than having “normal” react components. What is more important to me is: It serves as a reminder to keep this component side-effect-free.

And to not implement shouldComponentUpdate: A presentational component that gets its data from redux should:

  • Not have any internal state
  • Only get the data it absolutely requires in its props, and thus:
  • Always re-render when the props change


When the component only gets plain values in its props (like the numbers x and y for Square), then render looks as if there would be no immutable.js at all. But when the component gets immutable data structures through its props, like Link, it can use them in its render function (and elsewhere):

render() {
    const fromX = this.props.edge.get('x');
    const fromY = this.props.edge.get('y');

    const toX = this.props.square.get('x') + this.props.edge.get('squareX');
    const toY = this.props.square.get('y') + this.props.edge.get('squareY');

    return (
        <path d={'M '+fromX+' '+fromY+' L '+toX+' '+toY+' z'} className="link" />
    );
}


Updates / Actions

On a DOM event, the component creates an action. Nothing here is different from a version without immutable.js. In this case, to handle mouse events correctly, the code first gets a ref to the DOM node and then installs the listener in componentDidMount:

<rect x={this.props.x} y={this.props.y} 
      width={50} height={50} className="square" 
      ref={e => this.rect=e} />


componentDidMount() {
    this.rect.addEventListener('mousedown', this._mousedown_bound);
}


Most of the code in the mouse listeners is there to handle dragging with the mouse so that it does not jitter. But on mouse move, the component also calls an action creator:

_mousemove(e) {
    this.props.squareMoved(this.props.id, e.movementX, e.movementY);
}


The action creator only packs its three arguments into an action object of type SQUARE_MOVED. When the reducer handles this action, it must use immutable updates. Note how it calls the second updateIn on the result of the first, and how it returns the result of that:

    return state
        .updateIn(['squares', action.id, 'x'], x => x+action.dx)
        .updateIn(['squares', action.id, 'y'], y => y+action.dy);
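The action creator itself is not shown in this post. Based on the description above ("packs its three arguments into an action object of type SQUARE_MOVED"), it might look like this sketch:

```javascript
// Hypothetical action creator, matching the description in the text:
// it just packs its three arguments into a SQUARE_MOVED action object.
function squareMoved(id, dx, dy) {
    return { type: 'SQUARE_MOVED', id, dx, dy };
}

console.log(squareMoved(0, 3, -2));
// { type: 'SQUARE_MOVED', id: 0, dx: 3, dy: -2 }
```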



Redux only knows one reducer. But being forced to write a single, huge reducer would lead to unmanageable code. Hence you should split your reducer code and combine the smaller reducers with combineReducers.

And so, now I want to show you how you can structure your reducers when using immutable.js. Switch to the tag immutable-combine-reducers to see the source code of this version.

The only difference here is that you have to use combineReducers from redux-immutable instead of the default one:

import { combineReducers } from 'redux-immutable';

export const reducer = combineReducers({
    squares: squaresReducer,
    edges: edgesReducer,
});

const edgesInitialState = fromJS([
    { x: 0,    y: 0,   squareX: 0,  squareY: 0,  },
]);

function edgesReducer(state = edgesInitialState, action) {
    return state;
}

const squaresInitialState = fromJS([
    { x: 316, y: 281, },
]);

function squaresReducer(state = squaresInitialState, action) {
    switch(action.type) {
        case 'SQUARE_MOVED':
            return state
                .updateIn([action.id, 'x'], x => x+action.dx)
                .updateIn([action.id, 'y'], y => y+action.dy);
    }
    return state;
}


To Recap…

And that’s it: To use immutable.js within your React / Redux app, you have to:

  • Update your reducer’s initial state to immutable data structures
  • Use the immutable data structures in your mapStateToProps and possibly within your presentational components
  • Combine your reducers with a different function

You should also make all your presentational components PureComponents.

And then you will automatically get the advantages of immutable.js: It ensures by default that you have immutable state in your redux store and it does updates in a very efficient way.

Immutable.js - Basics


This post is part of a series about React, Redux, Immutable.js and Flow

Two of the three Redux principles are “State is read-only” and “Changes are made with pure functions”. When changing the read-only state, you are not allowed to mutate the existing object: You are supposed to create a new object.

But this new object can share the parts that have not changed with the previous state.

Writing that code to update the read-only state is definitely possible with just plain JavaScript objects and arrays. But it is easier when you have immutable data structures. In this article, I want to write about the basics of immutable data structures, so that, in the next post, I can show you how to use them with Redux.
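To make that concrete, here is what such a hand-written immutable update looks like with plain objects and spread syntax (a sketch; the state shape is just an example):

```javascript
// Updating read-only state by hand: create a new object, copy only
// what changes, and share references to everything else.
const previousState = {
    squares: [{ x: 316, y: 281 }],
    edges:   [{ x: 0, y: 0, squareX: 0, squareY: 0 }],
};

const nextState = {
    ...previousState,
    squares: [{ ...previousState.squares[0], x: 320 }],
};

console.log(nextState.edges === previousState.edges); // true - shared, not copied
console.log(previousState.squares[0].x);              // 316 - original unchanged
```

This works, but it is easy to get wrong - forget one spread and you mutate shared state - which is exactly where immutable data structures help.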


immutable.js gives you several immutable data structures: Map, List, Set, Record and more. Those data structures cannot be changed once created.

Convert from / to JavaScript

You can create those data structures with their constructors, which also accept JavaScript objects and arrays to initialize the data structure from.

import { Map, List } from 'immutable'

const emptyMap =  Map();
const emptyList = List();
const initializedMap =  Map({ x: 1, y: 2});
const initializedList = List([1, 2, 3]);

If you want to deeply convert some JavaScript data structure to nested immutable data structures, you can use fromJS. You can convert back to plain JavaScript data using toJS.

import { fromJS } from 'immutable'
const originalJS = {
    x: 'a',
    y: 'b',
    z: [1, 2, 3]
};
const immutable = fromJS(originalJS);
const convertedBack = immutable.toJS();

But when working with immutable.js, you will rarely ever convert from or to JavaScript. Most of the time, you should work with the immutable data structures themselves (getting values and performing immutable updates). Only when data crosses the boundary of your own system should you convert from plain JavaScript and back.

Query Functions

You can use get to get some value from the immutable data structure, and getIn to get a deeply nested value. Using the immutable object from above, this is how it works:

const a = immutable.get('x');        //returns 'a'
const b = immutable.getIn(['z', 1]); //returns 2

You can also run other functions that you would expect to work on collections, like forEach, filter, map, reduce and others.

Immutable Updates

OK, and now comes the interesting part. You can update those immutable data structures, but that will not modify them. Whenever you “update” one of them, you get back a new object, and you can think of it as a complete copy of the old one with just that one value changed (this is not how it works internally, see below).

const im = fromJS({ a: 1 });
console.log(im.get('a')); //prints 1

im.set('a', 2);
console.log(im.get('a')); //prints 1! Original data structure unchanged!

const im2 = im.set('a', 2);
console.log(im2.get('a')); //prints 2

When you need the previous value to calculate the new value of a field, you can use update, which takes a function that calculates the new value from the old one.

const im = fromJS({ a: 1 });
const im2 = im.update('a', oldA => oldA+2);
console.log(im2.get('a')); //prints 3

And if you want to set or update deeply nested values, you can use setIn and updateIn.

const im = fromJS({ nested: { x: 1 }});
const im2 = im.updateIn(['nested', 'x'], oldX => oldX+2);
console.log(im2.getIn(['nested', 'x'])); //prints 3

And this Performs Well?

Yes. The immutable data structures try to copy as little as possible when performing immutable updates. They are implemented as trees that can share everything that has not changed.

But immutable data structures are slightly slower than plain, mutable JavaScript data structures. Just a few weeks ago, I was debugging a performance problem for a client: Updating their React/Redux app was too slow in some use cases. They were accessing immutable data structures a lot in their mapStateToProps.

As an experiment, I changed all those immutable data structures to JavaScript objects and arrays. Updates were slightly faster (~20%), but still not fast enough. Immutable.js was not our main performance bottleneck (and I think not even the second biggest), so I put it back in. (I will write more about performance in later posts.)

In exchange for that slight slowdown, you get a lot of safety. When you have immutable data, you can be sure it is unchanged, even when time has passed. And there are also some potential speed-ups: You never have to deeply compare data structures. If it is still the same object (pointer), the data is unchanged.

To Recap…

Immutable data structures can help you add safety to your programs. And when you use Redux, writing your reducers correctly will be easier and less error-prone when you use them.

But they are slightly slower than normal JavaScript data structures in some cases.

Also, this blog post was only a very quick overview of immutable.js and what you can do with it. If you want to learn more, check out the official site.

From Here Onward - 2018 Edition


The first few months of this year were an awesome ride. I accomplished some things that were hard for me, but I also had to make some hard decisions. Here, I want to write about a few things that happened so far, and how I plan to move on…

This is a mostly personal post. I hope you still find it interesting. Ping me on Twitter to give me feedback (link at the bottom)…

Accomplishment: A Book

I finished and self-published my second book this year. The first one happened by accident. Ok, not really by accident, but I did not really plan to write it.

The second book, Quick Glance At: Agile Anti-Patterns was a planned project.

I kind-of knew that I did not want to do it with a publisher, after a very stressful experience in the past. But after I recognized that self-publishing is not that hard, I decided to write a book about agile software development.

I started and wrote some chapters. And I did not really like what was emerging. Then I had the idea with the anti-patterns. So, I threw everything away and started over.

I poured a lot of effort and money into it (e.g. I hired a very talented illustrator). And in early 2018, I finished it. Have a look at it here (ebook and paperback available): Quick Glance At: Agile Anti-Patterns

Decision: Employee or Not?

One of the most interesting companies I know asked me to come to an interview. After that interview, I faced the toughest decision of this year (at least so far).

I never really considered full-time employment. I have been a freelance consultant and coach for over ten years now. And, had you asked me last year if I wanted to become an employee, I would probably have answered: “Not really, except maybe at [names of three or four interesting companies]”.

Don’t get me wrong: I think that many more companies do great work and work on interesting problems. But… I think, in many cases I can provide more value (and have a better chance of doing what I am good at) as an external consultant.

Anyway, when one of those very few companies invited me for an interview, I had to go. A short time later, I told them: For now, I want to stay independent.

This was a hard decision: They work on really interesting problems, have a great engineering culture and there would have been many people I could learn from.

But in the last few years I have tried to learn and improve a few things my clients usually need help with, and for now, I want to build on them.

I have tried to get good at coaching teams: teaching them technical practices like Test-Driven Development or refactoring, practicing pair programming or mob programming, dealing with legacy code, and facilitating meetings and discussions. And, at least for now, I want to learn even more in those areas and help my clients get better at them.

(Does agile coaching or technical coaching sound like something your team or organization might need? Let’s talk…)

Conferences, Meetups, …

I am speaking at a lot of conferences and meetups this year - At least compared to past years. While I really enjoy it, it’s also exhausting. And hard to coordinate with my family.

My topics this year are software quality and its relationship to speed, cost and agility, agile anti-patterns, and also technical topics like React and Redux.

Do you want me to speak at your event? I would love to do that. Let’s talk… - Bonus points if you have an enforceable Code of Conduct in place, don’t make me #paytospeak, and at least try to create a diverse event.

SoCraTes Austria

I am, for the third time, co-organizing the Software Crafting and Testing (SoCraTes) Austria conference, together with Elisabeth Rosemann and Rene Pirringer. Ever since visiting the “original” SoCraTes conference in Germany, I wanted to organize such an event in Austria.

And I am very glad that Elisabeth and Rene (and some others who advise us) joined me, because without them, I could not do it.

And I am also glad that they share my passion for creating a diverse and safe event. We put considerable energy into making this conference interesting and accessible for everyone. We serve vegan food, have an enforceable code of conduct and an accessible venue, and we offer student discounts (ask me) and a diversity tickets campaign. And we are working on getting better every year.

This year, things are looking good so far: We have already sold as many tickets as in the first year (2 years ago), and we hope the conference will be sold out for the first time (100 attendees). And we already have enough sponsors so we can make the conference happen, but there are still a few sponsor packages left.

Are you interested in coming? Buy your ticket now ;).

And Now?

This year, I want to do more blogging again.

I have migrated my two blogs (this one and devteams.at) to a new technology, and I have completely re-designed them. And now I want to get back to at least a weekly blogging schedule.

I will also revive my newsletter. So, if you want to stay in touch, subscribe here or follow me on Twitter.

And, perhaps most importantly: A really big contract will end this summer (because of a customer policy). So, I had to think about how to proceed professionally. But first, I will probably take a lot of time off in August and September, to spend with my family, go to conferences and play with new technology.

Then, in mid-September or so, I want to start doing client work again. But this time, I would prefer more, smaller assignments instead of a single big one.

I think I can provide a lot of value for teams as an agile coach or a technical coach, even if I am there only for a few days per week - or even per month. We can pair-program or mob-program. By doing that, we will practice pair programming, mob programming, object oriented design, continuous refactoring, test driven development and more.

Or, I can help teams communicate better, find and solve problems and anti-patterns, facilitate meetings, …

Is this something that might be interesting? Autumn will be here sooner than you think, so let’s start talking now.

React / Redux / Immutable.js / Flow


I have decided to collect some tips, tricks and things I have learned about working with React, Redux, immutable.js and Flow.

Those are very interesting technologies, and they work together really well. But there are some caveats, some things that we learned the hard way.

Here is what I have learned in the last 2+ years of working with those technologies:

Book: Software Testing Standard Requirements


I just found this interesting book I wanted to share with you…

The Art of Service’s Software Testing Standard Requirements Excel Dashboard and accompanying eBook is for managers, advisors, consultants, specialists, professionals and anyone interested in Software Testing assessment.

Software quality is a topic that is very important to me right now, and it should be important for every team. This book contains a self-assessment that can help your organization find out where you currently are. If you struggle to ask the right questions about your testing efforts, this book will give you many of them, and a way to score your answers.