Archive for the ‘XP Day’ Category
These were some of the things that I learned from doing the presentation:
- The various graphs I presented in the talk have a resolution of 1680 x 1050, which is much higher than the projector could display.
As a result I had to scroll up and down/side to side when demonstrating each visualisation so that people could actually see them.
Either I need to work out how to run the projector at a higher resolution or shrink the images to the right size so they’d fit more naturally. I imagine the latter would be easier to achieve.
- My machine refused to switch to PowerPoint when I was presenting so I had to wing it a bit from my memory of how the talk was meant to go.
As a result of not having the slides to show I ended up just showing the code that we’d written to create the graphs.
I didn’t think this would work very well but the feedback I got suggested that people enjoyed seeing the code behind the visualisations.
I had a discussion with people during the talk and with others at XP Day about how I could change the visualisations so that they were more useful. These were some of the ideas that other people had:
- Matt Jackson suggested that it would be interesting to graph how often the last ten builds were broken so you could see how it was trending.
- Actionable metrics – we had a discussion about what somebody is supposed to do as a result of seeing a visualisation of something i.e. what action do we want them to take.
We achieve this in some cases e.g. with the pair stair it’s clear who you haven’t paired with recently and the onus is therefore on you to address that if you want to.
- Phil Parker suggested that metrics that you have an emotional response to are the most effective ones in his experience.
I think this links to the idea of them being actionable in that if you have an emotional response to something then it often makes you want to go and do something about it.
- I had an interesting discussion with Benjamin Mitchell in which he suggested that an interesting question to ask is ‘what would better look like?’ and another one would be ‘what rule would we have to have in place for people to behave like this?’.
We realised that in some cases it’s not really clear what better would look like, since you can end up with two potentially competing ‘good’ practices, e.g. checking in frequently is good, but if everyone does it at the same time it can lead to the build breaking, which isn’t as good.
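Matt Jackson’s trending idea can be sketched quite simply. This is purely illustrative – the `BuildResult` type and the data are invented, not what we actually built – but it shows one way to compute, for each build, the proportion of the previous ten that were broken:

```scala
// Invented names for illustration: compute a rolling 'broken build' ratio
// over a window of the last ten builds, which could then be plotted to
// show whether breakages are trending up or down.
case class BuildResult(number: Int, broken: Boolean)

def brokenBuildTrend(builds: Seq[BuildResult], window: Int = 10): Seq[Double] =
  builds.sliding(window).map { w =>
    w.count(_.broken).toDouble / w.size  // fraction of broken builds in window
  }.toSeq

// e.g. every third build broken:
val builds = (1 to 12).map(n => BuildResult(n, n % 3 == 0))
val trend = brokenBuildTrend(builds)
```

Plotting `trend` against build number would give exactly the kind of trend line Matt described.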
We haven’t tried to tidy up any of the code that we used but it is available on the following GitHub accounts if anyone’s interested:
At XP Day my colleague Liz Douglass and I presented the following experience report on our last 6 months working together on our project.
We wanted to focus on answering the following questions with our talk:
- Should the project have been done in Java?
- Does it really speed up development as was hoped?
- What features of the language and patterns of usage have been successes?
- Is it easier to maintain and extend than an equivalent Java code base?
We covered the testing approach we’ve taken, our transition from using Mustache as our templating language to using Jade and the different features of the language and how we’ve been using/abusing them.
The approach we used while presenting was to cover each topic in chronological order such that we showed how the code had evolved from June until November and the things we’d learned over that time.
It was actually an interesting exercise to go through while we were preparing the talk and I think it works reasonably well as it makes it possible to take people on the same journey that you’ve been on.
These were a few of the points that we focused on in the talk:
- In our code base at the moment we have 449 unit tests, 280 integration tests and 353 functional tests, which is a very different ratio from the other code bases I’ve worked on.
Normally we’d have way more unit tests and very few functional tests, but a lot of the early functionality was transformations from XML to HTML, and since it was really easy to make a functional test pass, all the tests ended up there.
Unfortunately the build time has grown in line with the approach as you might expect!
- We originally started off using Mustache as our templating language but eventually switched to Jade because we were unable to call functions from Mustache templates. This meant that we ended up pushing view logic into our models.
The disadvantage of switching to Jade is that it becomes possible to put whatever logic you want into the Jade files so we have to remain more disciplined so we don’t create an untestable nightmare.
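The view-logic tension can be illustrated on the Scala side. With a logic-less template language like Mustache, formatting has to live in the model anyway; keeping it in a small, plain view model at least keeps it unit-testable even once a more permissive template language like Jade makes it tempting to inline. This is a sketch with invented names, not our actual code:

```scala
// Invented example: formatting logic kept in a testable view model
// rather than scattered through templates.
case class Price(amountInCents: Int, currency: String)

class PriceViewModel(price: Price) {
  // the template only ever calls 'formatted'; the logic stays here
  def formatted: String =
    f"${price.amountInCents / 100.0}%.2f ${price.currency}"
}
```

The discipline then becomes a code-review rule: templates may call view-model methods but never contain conditionals or arithmetic themselves.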
- On our team the most controversial language feature that we used was an implicit value which we created to pass the user’s language through the code base so we could display things in English or German.
Half the team liked it and the other half found it very confusing so we’ve been trying to refactor to a solution where we don’t have it anymore.
Our general approach to writing code in Scala has been to keep it as Java-like as possible so that we don’t shoot ourselves in the foot before we know what we’re doing, and it’s arguable that this is one time when we tried to be too clever.
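For anyone who hasn’t seen the technique, this is roughly what the controversial feature looks like. The names here are invented, not our actual code: the user’s language travels through the call chain as an implicit parameter, so most call sites never mention it – which is precisely why half the team found it confusing.

```scala
// Invented sketch of passing the user's language implicitly.
sealed trait Language
case object English extends Language
case object German extends Language

object Messages {
  // callers don't pass the language explicitly; the compiler threads it through
  def greeting(implicit lang: Language): String = lang match {
    case English => "Hello"
    case German  => "Hallo"
  }
}

// decide once at the edge of the system...
implicit val userLanguage: Language = German
// ...and calls further down pick it up without an explicit argument
val text = Messages.greeting  // "Hallo"
```

The refactoring we’ve been moving towards is simply making the parameter explicit again, trading a little verbosity for call sites that say what they do.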
- In the closing part of our talk we went through a code retrospective which we did with our whole team a couple of months ago.
In that retrospective we wrote down the things we liked about working with Scala and the things that we didn’t and then compared them with a similar list which had been created during the project inception.
Those are covered on the last few slides of the deck but it was interesting to note that most of the expected gains were being achieved and some of the doubts hadn’t necessarily materialised.
Our conclusion was that we probably would use Scala if we were to redo the project again, mainly because the data we’re working with is all XML and the support for that is much better in Scala than in Java.
There is much less code than there would be in an equivalent Java code base, but I think maintaining it requires a bit of time working with Scala; it wouldn’t necessarily be something a Java developer would pick up immediately.
We’re still learning how to use traits and options but they’ve worked out reasonably well for us. We haven’t moved onto any of the complicated stuff such as what’s in scalaz and I’m not sure we really need to for a line of business application.
In terms of writing less code, using Scala has sped up development, but I’m not sure whether the whole team finds Scala code as easy to read as they would the equivalent Java, so it’s debatable whether we’ve succeeded on that front.
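As an example of the kind of trait and Option usage that has worked out reasonably well for us – with invented domain names, since I can’t share the real code – a trait mixes shared behaviour into our model classes, and Option forces callers to handle missing values rather than discovering a null at runtime:

```scala
// Invented sketch: a trait for shared lookup behaviour, Option for absence.
trait Localised {
  def translations: Map[String, String]
  // returns None rather than null when a translation is missing
  def titleIn(language: String): Option[String] = translations.get(language)
}

case class Programme(translations: Map[String, String]) extends Localised

val p = Programme(Map("en" -> "News", "de" -> "Nachrichten"))
val german  = p.titleIn("de")                    // Some("Nachrichten")
val french  = p.titleIn("fr").getOrElse("News")  // explicit fallback
```

Nothing clever going on, which is rather the point – this is the Java-like subset of Scala that the whole team can read.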
Another session that I attended at XP Day was one facilitated by Steve Freeman and Joseph Pelrine where we discussed the Cynefin model, something that I first came across earlier in the year at XP 2011.
We spent the first part of the session drawing out the model and coming up with some software examples which might fit into each domain.
- Simple – when you’re going to checkin run the build
- Complicated – certain types of architectural decisions
- Complex – task estimation
- Chaos – startup explosion
Steve pointed out that with simple/complicated the important thing to remember is that things on the right hand side are repeatable whereas on the other side we could do the same thing again and get a completely different result.
The most interesting part of the discussion for me was when Chris Matts joined in and suggested that in his experience people generally preferred to be in one of the quadrants more than the others.
He used Dan North as his example, suggesting that Dan prefers to be in chaotic situations.
I think I like being in the complex domain when you don’t really know what’s going to happen. I find it quite boring when things are predictable.
Traditional project managers would probably prefer to be in the simple/complicated domains because things are a bit more certain on that side of the model.
Liz and I were discussing afterwards whether that tendency is what tends to lead to people becoming generalists rather than specialists.
If you were to become a specialist in a subject then it would suggest to me that a lot of your time would be spent in the complicated domain honing your skills.
Another discussion was around the desire, when building systems, to move the building of that system, which starts off being complex, first into the complicated and eventually into the simple domain.
Nat Pryce pointed out that we can often end up pushing a system back into chaos if we try and force it into the simple domain.
Pushing something into simple would suggest that anyone would be able to make changes to it without having any specialist/expert skills.
Someone else in the group pointed out that it’s often been thought that we can make the programming of systems so simple that anyone can do it, but so far that theory has been proved false.
Overall this was an interesting session for me and it makes it a bit easier to understand some of the things that I see in the projects that I work on.
Recommended reading from the session
- Sense and Respond by Stephen Parry
- Joseph Pelrine on Social Complexity &amp; Coaching Self Organising Teams
I’ve worked on a Scala project for the last 6 months and previously given a couple of talks about adopting a functional style of programming in C# so this is a subject area that I find quite interesting.
The talk focused on 5 refactorings that the presenters have identified to help move imperative code to a more functional style:
- Isolate mutation – keeping mutation in one place rather than leaking it everywhere
- Isolate predicate – making it possible to filter collections
- Separate loops – iterating over collections more than once if we’re doing more than one thing with the collection
- Decide on branches once – putting conditional logic into a map as functions
- Separate sequence of operations from execution of operations – composing functions and executing them at the end
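The talk’s examples were in Java with Guava, but ‘decide on branches once’ translates neatly into Scala (names invented for illustration): instead of re-testing a condition inside a loop, the behaviour is looked up once in a map of functions and then applied to every element:

```scala
// Invented sketch of 'decide on branches once': conditional logic lives
// in a map of functions, chosen once, then applied across the collection.
val formatters: Map[String, Double => String] = Map(
  "percent"  -> (v => f"${v * 100}%.1f%%"),
  "currency" -> (v => f"£$v%.2f")
)

def format(kind: String, values: Seq[Double]): Seq[String] =
  formatters.get(kind) match {
    case Some(f) => values.map(f)         // branch decided once
    case None    => values.map(_.toString) // sensible default
  }
```

The same shape also demonstrates ‘isolate predicate’ in miniature: the decision (which formatter) is separated from the iteration that uses it.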
Since they were coding in Java they made use of the Google Guava collections library to make it easier to work with collections in a functional way.
As you might imagine some of the code ends up being quite verbose due to the inability to pass functions around in Java.
I was reminded of a coding dojo we did a couple of years ago where we compared how code written using lambdaj would compare to Scala code.
Despite the verbosity it was interesting to see that it’s actually possible to achieve a similar style of programming to what you would expect in languages like Scala, F# and Clojure.
One interesting thing the speakers suggested is that chaining together the functions they want to apply to some data makes it easier to see the data dependencies in their code.
I hadn’t really thought about the data dependencies before but I generally find code written using function composition to be easier to read than any other approach I’ve seen so far.
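A small Scala sketch (my own, not from the talk) of what that visibility looks like in practice: each step’s input is explicitly the previous step’s output, so the data dependency reads left to right.

```scala
// Invented pipeline: function composition makes each data dependency explicit.
val trim: String => String          = _.trim
val words: String => Seq[String]    = _.split("\\s+").toSeq
val longWords: Seq[String] => Seq[String] = _.filter(_.length > 3)

val pipeline: String => Seq[String] = trim andThen words andThen longWords

val result = pipeline("  the quick brown fox  ")  // Seq("quick", "brown")
```

Compare that with an imperative loop doing all three things at once, where the dependencies between steps are implicit in the ordering of statements.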
The main reason I picked up for why the authors thought we would want to adopt a functional approach in the first place is that it limits the number of things we have to reason about.
Interestingly Jon Tirsen recently tweeted the following:
“In my experience large purely functional codebases are very painful. Shared immutable, local mutable is the way to go.”
We’ve mostly kept our Scala code base immutable but it’s not large by any measure (5,000 lines of production code so far) and probably not as complex as the domains Jon has worked with.
It’s an interesting observation though…immutability is no silver bullet!
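One reading of ‘shared immutable, local mutable’ – my own illustration, not Jon’s – is that anything which escapes a function should be immutable, while a mutable accumulator inside the function is fine:

```scala
// Invented example: local mutation for efficiency/clarity, but only an
// immutable value is ever shared with callers.
def runningTotals(values: Seq[Int]): Seq[Int] = {
  val buffer = scala.collection.mutable.ArrayBuffer.empty[Int]  // local only
  var total = 0
  for (v <- values) {
    total += v
    buffer += total
  }
  buffer.toSeq  // what escapes is immutable
}
```

The mutation never leaks: callers can’t observe `buffer` or `total`, so the function is still referentially transparent from the outside.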