Archive for the ‘Book Club’ Category
In this presentation Feathers presents quite a number of different ideas that he has learned from his experiences in software development over the years.
These are some of my thoughts and our discussion:
- The first part of the presentation talks about method size and Feathers observes that there seems to be a power law in relation to the size of methods in code bases – i.e. there are a lot of small methods and fewer large methods, but there will always be a very small number of massive methods.
I wonder whether this observation is somehow linked to the broken window theory, whereby if a method is large then it is likely to increase in size, since it probably already has some problems so it doesn’t seem so bad to throw some more code into the method. With small methods this temptation might not exist.
From what I’ve noticed the messiest methods tend to be around the edges of the code base where we are integrating with other systems – in these cases there is usually a lot of mapping logic going on so perhaps the benefit of extracting small methods here is not as great as in other parts of the code base.
- I really like the observation that protection mechanisms in languages are used to solve a social problem rather than a technical problem.
For example if we don’t want a certain class to be used by another team then we might ensure that it isn’t accessible to them by ensuring it’s not public, or if we don’t want certain methods on a class to be called outside that class then we’d make those methods private.
I think this is a reasonable approach to take, although it was pointed out that in some languages, like Python, methods are publicly accessible by default and the idea when using private methods is to make it difficult to access them from outside the class but not impossible. I guess this is the same with most languages, as you could use some sort of metaprogramming to gain access to whatever you want if need be.
There’s an interesting post on the Python mailing list which talks through some of the benefits of using languages which don’t impose too much control over what you’re trying to do.
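The Python convention mentioned above can be seen in a small example – a leading underscore marks an attribute as internal by convention, while a double underscore triggers name mangling, which makes outside access awkward but not impossible (the class and attribute names here are made up for illustration):

```python
class Account:
    def __init__(self, balance):
        self._balance = balance     # single underscore: "internal" by convention only
        self.__audit_log = []       # double underscore: name-mangled to _Account__audit_log

    def deposit(self, amount):
        self._balance += amount
        self.__audit_log.append(amount)

account = Account(100)
account.deposit(50)

# Nothing stops an outside caller from reaching in - access is
# discouraged by convention, not prevented by the language:
print(account._balance)             # 150
print(account._Account__audit_log)  # [50] - mangling only obscures the name
```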
- The observation that names are provisional is something that I’ve noticed quite a lot recently.
Feathers points out that we are often reluctant to change the names of classes even if the responsibility of that class has completely changed since it was originally created.
I’ve noticed this a lot on projects I’ve worked on and I wonder if this happens because we become used to certain types being in the code and there would be a period of adjustment for everyone in the team while getting used to the new names – it might also ruin the mental models that people have of the system.
Having said that I think it’s better to have names which describe what that class is actually doing now rather than keeping an old name which is no longer relevant.
- I quite like the idea that the physical architecture of a system can shape the logical architecture and that often we end up with a technological solution looking for a problem.
I’m not sure if it’s completely related but one way that this might happen is that in an environment where we structure a team in layers it’s possible that certain functionality will end up being implemented by the team that can turn it around the quickest rather than being implemented in the layer where it might best logically belong.
He also mentions Conway’s law which suggests “…organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” – that seems to link in quite closely with the above ideas.
- Amongst the other ideas suggested I quite liked the idea that requirements are just design decisions but that they are product design decisions rather than technical design decisions.
I like this as it opens up the idea that requirements aren’t set in stone and that we can work with them to come up with solutions that actually solve business problems instead of ‘design by laundry list’ as Feathers coins it.
I think his recent post about the boutique software shop explains these ideas in more detail.
There’s a lot of other ideas in this talk as well but these are some of the ones that made me think the most.
In our latest technical book club we discussed a series of posts written by K. Scott Allen about getting your database under version control.
- Three rules for database work
- The baseline
- Change scripts
- Views, Stored Procedures and the like
- Branching and Merging
These are some of my thoughts and our discussion:
- We had an interesting discussion around when it’s ok to go and change checked in change scripts – on previous projects I’ve worked on we’ve actually had the rule that once you’ve checked in a change script to source control you can no longer change it; instead you need to add another change script that does what you want.
Several colleagues pointed out that this approach can lead to us having quite a lot of fine grained change scripts – each maybe adding or altering one column – and that an approach they had used with some success was to allow changes to be made up until a change script had been applied in an environment outside of the team’s control, typically UAT.
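A minimal sketch of the change script approach, assuming numbered scripts and a version table recording what has already been applied – the schema and script contents here are invented for illustration, using SQLite:

```python
import sqlite3

# Hypothetical ordered change scripts - in a real project each would live
# in its own numbered file under source control.
CHANGE_SCRIPTS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def apply_change_scripts(connection):
    connection.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = connection.execute(
        "SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, script in CHANGE_SCRIPTS:
        if version > current:        # only run scripts we haven't applied yet
            connection.execute(script)
            connection.execute(
                "INSERT INTO schema_version VALUES (?)", (version,))

connection = sqlite3.connect(":memory:")
apply_change_scripts(connection)
apply_change_scripts(connection)     # re-running is a no-op
```

Because every environment runs the same ordered scripts and records its version, the database setup stays reproducible wherever it’s built.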
- On a previous project I worked on where we didn’t script the database, we made use of Hibernate annotations to specify the mappings between domain objects and the database so the SQL code for database creation was being auto generated by a Hibernate tool.
This doesn’t seem to fit in with the idea of allowing us to incrementally change our database but Tom pointed out that it might be quite a useful way to get an initial baseline for a database. After that we probably want to make use of change scripts.
- We discussed the coupling between the model of a system that we have in a database and the one that exists in our code when using ORM tools and how we often end up with these two different models being quite tightly coupled meaning that changing our domain model can become quite difficult.
Tom pointed out that the database is just another data source but that it often seems to be treated differently to other data sources in this respect.
He also suggested the idea of creating an anti corruption layer in our code between the database and our domain model if they start to become too different rather than trying to keep them as similar as possible by some imaginative use of ORM mappings.
- Another interesting area of discussion was around how to deal with test data and existing data in the system with respect to our change scripts.
On projects I’ve worked on if we had reference data that was required across all environments if we wanted to make changes to this data then we would just make use of another change script to do that.
Ahrum suggested that if we had environment specific test data then we’d probably have other environment specific scripts that we could run after the change scripts had been executed.
One of the more interesting problems when making changes to tables which already have data in is working out what we want to do with the existing data if we change the type of a column.
We can typically either create a temporary column and copy all the values there first before creating an update script that converts each of the values to the data type of the new column, or we can just forget about the data, which is a less acceptable option from what I’ve seen.
The effort involved in doing this often seems to mean that we are more reluctant to make changes to column types after their initial creation which means that we need to choose the type quite accurately early on.
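The temporary-column approach described above might look something like this in SQLite – the table and figures are made up, and since SQLite can’t change a column’s type in place, the final step rebuilds the table:

```python
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, price TEXT)")
connection.executemany("INSERT INTO product (price) VALUES (?)",
                       [("10.50",), ("3.99",)])

# Change price from TEXT to a numeric column without losing the data:
# add a temporary column, copy the converted values across, then
# rebuild the table with the new column type.
connection.executescript("""
    ALTER TABLE product ADD COLUMN price_numeric REAL;
    UPDATE product SET price_numeric = CAST(price AS REAL);

    CREATE TABLE product_new (id INTEGER PRIMARY KEY, price REAL);
    INSERT INTO product_new SELECT id, price_numeric FROM product;
    DROP TABLE product;
    ALTER TABLE product_new RENAME TO product;
""")
```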
- A couple of colleagues pointed out that one benefit they had found with this approach to working with databases was that it helped to expose any problems in the process – there can’t be any under the table changes made to databases if they are being manipulated by change scripts, otherwise we end up with different setups in different environments.
Since we will be recreating and updating databases quite frequently these problems will become obvious quite quickly.
- Dean pointed out that the change script approach is actually really useful for performance testing – a benefit of this approach that I hadn’t previously considered.
When doing this type of testing we know which version of the database we were using at the time, so if a performance problem suddenly appears it should be much easier to track down which change caused it.
I read most of the book a couple of years ago but I don’t always remember all of the principles when I’m coding so it was good to revisit them again.
These are some of my thoughts and our discussion:
- Something that we’ve noticed on the project I’m working on at the moment with respect to the single responsibility principle is that often classes start off adhering to this principle but as soon as changes come along that single responsibility is pretty much ruined.
It often seems like a new piece of functionality fits quite nicely onto an existing object, but when we look back on it in retrospect it didn’t really fit that well; then another bit of code that fits even less well is added, and before we know it an object is carrying out 3 responsibilities.
One way to potentially get around this is to write the responsibility of a class at the top of the file so that hopefully people will read that before adding anything new.
In a recent coding dojo we were reading through some of the Jetlang code and that approach is followed on a lot of the classes which has the added benefit of making the code easier to follow if you’re new to the code base.
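A small sketch of that idea – the class’s single responsibility written at the top, where the class name and pricing rules are purely illustrative:

```python
class OrderPricer:
    """Responsibility: calculate the total price of an order, including
    any discount.  If you find yourself adding behaviour here that isn't
    about pricing, it probably belongs on another object."""

    DISCOUNT_THRESHOLD = 100
    DISCOUNT_RATE = 0.1

    def total(self, line_item_prices):
        # Apply a flat discount once the subtotal crosses the threshold
        subtotal = sum(line_item_prices)
        if subtotal > self.DISCOUNT_THRESHOLD:
            return subtotal * (1 - self.DISCOUNT_RATE)
        return subtotal
```

The docstring gives the next person a cheap way to check whether their change really belongs here before they add it.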
- Uncle Bob talks quite a lot about ensuring that we design flexibility into the areas of the code that are likely to change so that if we do have to change the code then we can do so easily.
I’ve read this type of advice before but I’m never sure how exactly you know where those areas of change are likely to be unless you’ve worked on a similar system before.
From what I’ve seen any code which relies on another system’s data/representations needs to be designed in this way as we don’t know when it is going to change and it might even change without much warning so we need to adapt to that situation fairly quickly.
- He didn’t mention the Interface Segregation Principle in this presentation but I find this one the most interesting, probably because I haven’t seen it followed on any projects I’ve worked on yet and I’m intrigued as to what a code base that followed it would be like to work on.
I like the idea of having role based interfaces although I imagine it would probably be quite easy to abuse this principle and end up with interfaces that are so finely grained that we lose the domain’s meaning.
- While I have no doubt that if we followed these principles all the time when coding our code base would probably be much cleaner, it feels to me that these principles are quite difficult to remember when you’re coding.
From what I’ve noticed we find it much easier to follow ideas like Don’t Repeat Yourself, perhaps because it’s easier to see when we are breaking principles like this.
In addition, most people tend to agree about what makes repetition of code but when it comes to something like the Single Responsibility Principle, for example, people seem to have different opinions of what a responsibility is which makes it difficult to get consensus.
I quite like the newspaper metaphor for writing code which Uncle Bob describes in Clean Code and he elaborates on this further in a recent post about extracting method until you drop. I find ideas like that easier to follow when coding.
My current thinking is that the principles are really good for when we’re analysing code having taken a step back but when we’re actually coding they’re not necessarily always the first thing to come to mind, at least for me!
Perhaps that’s normal but I’m intrigued as to whether more experienced developers than me are able to keep these ideas in mind all the time?
In this week’s book club we discussed Arlo Belshee’s paper ‘Promiscuous Pairing and Beginner’s Mind‘ where he presents the idea of rotating pairs more frequently than we might usually, suggesting that the optimal rotation time is 90 minutes.
I remember coming across the idea of promiscuous pairing a couple of years ago but I hadn’t read the paper all the way through and so far haven’t worked on a team where we’ve really tried out his ideas.
These are some of my thoughts and our discussion of the paper:
- I found the section of the paper where he talks about skills and competencies quite interesting – the suggestion seems to be that for any given task the least skilled person for that task should be the one to do it but that the person should still have the necessary competency to execute it.
I’m not entirely sure of the distinction between skills and competencies – Belshee suggests:
The difference is simple. People can learn skills in a matter of months. People can’t learn competencies in less than several years. There aren’t many things that fall between — qualifications are almost always skills or competencies.
Software development wise this would suggest that a skill such as object orientation would be more likely to be a competency but what about a specific programming language?
It is possible to learn your way around a language to the point where you can do some productive things with it relatively quickly but for me at least there are still things I don’t know about in the languages I work with and I’ve used some of them for a few years now.
- There is a nice quote from the paper when discussing the idea of giving tasks to the least qualified person, that ‘the least qualified teams produced the code that had the fewest surprises‘ – I imagine this is probably down to the fact that the least qualified person probably doesn’t yet know how to do clever things with a language so they just do the most obvious implementation. I think this is certainly what we want to happen when a team is working on code together.
- I liked the discussion of beginner’s mind where he talks about it being a transitory state that we move into when we are in a situation that is unfamiliar but near the limits of our comfort zone, and that we will move out of once we are comfortable with the current situation.
It seems like this state of mind links quite closely with the idea of deliberate practice that Geoff Colvin talks about in ‘Talent is Overrated‘ – the idea being that in order to improve most effectively we need to be doing activities which are just beyond our current competence.
- I’ve frequently noticed that people are reluctant to swap pairs until they have finished the story that they’re working on – Matt Dunn pointed out that this is probably linked to the natural human desire to finish what we’ve started!
Belshee seems to get around this problem by ensuring that the tasks being worked on are sufficiently small in size that they can be completed within one or two pairing sessions.
A lot of the work that we do is integrating different systems where there is quite a bit of exploratory work to find out what we actually need to do first – it would be interesting to see if quicker rotations would be appropriate for this type of work or whether a lot of time would be spent bringing the new person on the pair up to speed with what’s going on.
- We had some discussion on pair programming in general – Raphael Speyer described the idea of ‘promiscuous keyboarding‘ as an approach to try within a single pair. The idea is that the keyboard switches between each person every minute which hopefully leads to both people being more engaged.
I find that quite often on teams people will roll in and out of pairs when their help is needed on something – Nic Snoek described this as being ‘guerilla pairing‘ and I think it is something that a technical lead of a team is most likely to engage in as they move around the room helping people out.
I often feel that pair programming is a skill that we take for granted and we assume that we can just put two people together and they’ll be able to work together effectively.
From what I’ve found this doesn’t always work out so I think it’s important to keep innovating and trying out different things in this area so that we can find approaches that allow us to work better together.
As Dave pointed out, it’s ‘guerilla’ and not ‘gorilla’ pairing that Nic suggested as being a useful idea.
In this week’s book club we continued with the idea of discussing videos, this week’s selection being Greg Young’s ‘Unshackle your Domain‘ presentation from QCon San Francisco in November 2008. He also did a version of this talk in the February European Alt.NET meeting.
In this presentation Greg talks about Command Query Separation at the architecture level and explicit state transitions amongst other things.
Jonathan Oliver has created a useful resource page of the material that’s been written about some of these ideas as well.
These are some of my thoughts from our discussion:
- I think the idea of eventual consistency is quite interesting and I can see how taking this approach, instead of trying to create the impression that everything the user does happens in real time, can make life much easier for ourselves.
I’m not sure how this type of idea works when users have the expectation that when they change information on a page that it is updated straight away though.
For example, on a form I might decide to change my address, and I would expect that if I reloaded the page it would now display my new address instead of the old one. If that address only became consistent after 5 minutes, for example, the user might become quite confused and send in another update to try and change their address again.
Liz pointed out that with bank transactions it is often explicitly described to users that money transfers are ‘pending’ so perhaps the expectation that things aren’t done in real time has already been set in some domains.
Werner Vogels has a nice article about eventual consistency in distributed systems in which he references a paper by Seth Gilbert and Nancy Lynch which talks about the idea that “of three properties of shared-data systems – data consistency, system availability and tolerance to network partition – one can only achieve two at any given time.”
- Dave pointed out that the idea of ‘POST redirect GET‘ often used when processing web form submissions seems to adhere quite nicely to the idea of Command Query Separation as described in the video.
I find it quite interesting that CQS at the method level in our code is usually quite closely adhered to, but so often we’ll just bolt getters onto domain objects so that we can access some data to display on the view.
The idea of not doing this and having a write only domain seems very interesting and seemed to make sense in the system that Greg described.
It would be interesting to know whether one would follow such an extreme approach at the architecture level if there weren’t such high performance requirements or the need to have all the operations performed on the system available for an audit.
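A minimal sketch of what command/query separation at the architecture level might look like – a write-only domain object exposing only commands, with a separate read model answering queries. The names are illustrative, not taken from the system Greg described:

```python
class BalanceReadModel:
    """Read side: a denormalised view kept up to date by the write side."""
    def __init__(self):
        self._balances = {}

    def update(self, account_id, balance):
        self._balances[account_id] = balance

    def balance_of(self, account_id):   # query: returns data, changes nothing
        return self._balances.get(account_id, 0)

class Account:
    """Write side: exposes commands only - no getters."""
    def __init__(self, account_id, read_model):
        self._account_id = account_id
        self._balance = 0
        self._read_model = read_model

    def deposit(self, amount):          # command: changes state, returns nothing
        self._balance += amount
        self._read_model.update(self._account_id, self._balance)

read_model = BalanceReadModel()
account = Account("acc-1", read_model)
account.deposit(25)
print(read_model.balance_of("acc-1"))   # 25
```

The view only ever talks to the read model, so the domain object never needs to grow getters just to feed the UI.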
- Greg’s idea of state transitions sounds quite similar to, although perhaps not exactly the same as, Eric Evans’ ‘domain events’ which he discussed in last week’s book club.
It would be interesting to see what the code to process form submissions by the user would look like with this approach.
As Silvio pointed out, the complexity of this code would probably be much higher than in a more typical approach where we might just build our domain objects straight from the data the user entered.
Using Greg’s approach we would need to work out which state transitions had actually happened based on the user input which would presumably involve keeping a copy of the previous state of the domain objects in order to work out what had changed.
I like the idea of making concepts more explicit though and the idea of keeping all state transitions is something that is built into databases by way of their log by default so it’s not a completely new concept.
Pat Helland has a cool post titled ‘Accountants don’t use erasers‘ where he describes it in more detail.
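The idea of keeping state transitions rather than just current state can be sketched as an append-only event log that we replay to derive the current state – much as a database rebuilds state from its log. The event names here are made up:

```python
events = []                              # the append-only "log"

def record(event):
    events.append(event)

def current_address(event_log):
    # Replay each transition in order; the last AddressChanged wins
    address = None
    for event_type, payload in event_log:
        if event_type == "AddressChanged":
            address = payload
    return address

record(("CustomerRegistered", "cust-1"))
record(("AddressChanged", "1 Old Street"))
record(("AddressChanged", "2 New Road"))

print(current_address(events))           # 2 New Road
print(len(events))                       # 3 - no history has been erased
```

Nothing is ever updated in place, so the full history remains available for auditing or for building new read models later.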
This week’s book club became video club as we discussed Eric Evans’ QCon London presentation ‘What I’ve learned about DDD since the book‘.
I was lucky enough to be able to attend this presentation live and we previously ran a book club where I briefly summarised what I’d learnt but this gave everyone else an opportunity to see it first hand.
These are some of my thoughts and our discussion of the presentation:
- We spent a lot of time discussing domain events – one of the new ideas which Evans had observed since publishing the book – and Liz pointed out that a domain event often exists where there is some sort of interaction between objects which doesn’t seem to belong on either object.
The example Liz gave is where there is an accident between two cars and we realise that there should be a Collision object to describe what’s happened. Dave pointed out that commits in Git could be another example of a domain event.
We discussed some domain events which we had noticed in code bases we had worked on.
One system I worked on had the concept of transactions to describe interactions between different accounts. They were also used to calculate the balance of an account at any time in preference to keeping the balance of each account updated in real time.
The balance on the account would still be updated at the end of each day such that we would only need to make use of transactions since the last update to calculate a balance.
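That balance calculation can be sketched as follows, with made-up figures – the current balance is the end-of-day snapshot plus the transactions recorded since it:

```python
def balance(snapshot_balance, transactions_since_snapshot):
    # Derive the current balance rather than keeping it updated in real time
    return snapshot_balance + sum(transactions_since_snapshot)

end_of_day_balance = 1000
todays_transactions = [-150, 75, -20]   # withdrawals and deposits since the snapshot

print(balance(end_of_day_balance, todays_transactions))   # 905
```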
- What really stood out for me in our discussion is how a lot of the concepts that Eric Evans talks about are designed to make us explicitly state what we are doing in our code.
We have domain events for describing what happens in the domain, we try to define bounded contexts so we know what the object model means in specific areas of the code, we identify a core domain so we can spend our time on the most important code and so on.
Dave described this as conscious coding in that it makes us think very carefully about what we’re coding rather than just coding without real analysis and probably writing code which isn’t as useful as it could be.
- I’m still not sure that I really get what the core domain means, but Dave described it as ‘If we stripped out all the other code in our system except for the core domain we would still be able to build something around that’, which has the side effect of pointing out that we need to ensure this code is completely decoupled from any implementation details of the current way we are using it.
For example if we are currently writing a website and it is decided that we need to be able to display our content on a mobile phone, we should still be able to make use of the same core domain code to do that fairly easily.
- The section of the talk about the big ball of mud and the green house was also very interesting and Raphael Speyer came up with the analogy of only letting code from outside the green house inside if it had been hosed down and all the mud removed, so that we can ensure that our important code is of comparable quality.
If I understand correctly the core domain is the code that should reside in that metaphorical green house and we will spend a lot of time on its modeling so that it will be easy to change in the future.
By contrast the modeling of the code outside the green house is not as important – a lot of the time the code out there is ‘just data’ and we want to display that data to users with minimal effort in modeling it in our code since we don’t get a great deal of value from doing so.
I’ve been trying to read Object Design for about a year since coming across the book while reading through the slides from JAOO Sydney 2008 but I’ve often found the reading to be quite abstract and have struggled to work out how to apply the ideas to the coding I do day to day.
This therefore seemed like a good opportunity to get some more opinions and discussion on at least part of the book.
These are some of my thoughts and our discussion of the article:
- Liz pointed out that at ThoughtWorks University (which we both attended) we were shown the idea of writing the ‘job’ of an object just above the class definition, the point of this being that we would describe the responsibility of the object and ensure that objects only had one responsibility.
Neither Liz nor I have ever seen this done on any of the projects that we’ve worked on but it seems quite related to the idea of responsibility driven development which is encouraged in the book. The different role stereotypes would form part of the responsibilities that an object might have.
Matt Dunn suggested that perhaps the tests we write fulfill this role instead in a more indirect way, although I think we would probably need to be writing tests with the quality Jimmy Bogard describes in his post on getting value out of your unit tests, instead of the pale imitations we often end up writing, to achieve this goal.
- I find each of the individual stereotypes quite difficult to remember on their own but Jeremy pointed out that they fit into 3 categories:
- Knowing (Information Holder, Structurer)
- Doing (Service Provider, Interfacer)
- Deciding (Controller, Coordinator)
I think the object role stereotypes mix quite nicely with some of the ideas from Domain Driven Design – for example a value object would probably be an information holder; an entity might be a structurer and an information holder; a factory could be a service provider; a repository is possibly an interfacer although I think that may be more the case if we are using a repository as a DAO instead of a true DDD repository.
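Some of those pairings might look like this in code – these classes are purely illustrative of the stereotype/DDD mapping suggested above:

```python
class Money:
    """Value object - an information holder: it knows things."""
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

class MoneyFactory:
    """Factory - a service provider: it performs work on request."""
    @staticmethod
    def gbp(amount):
        return Money(amount, "GBP")

class CustomerRepository:
    """Repository - an interfacer: it mediates access to a data source
    (here just an in-memory dict standing in for a database)."""
    def __init__(self):
        self._store = {}

    def save(self, customer_id, customer):
        self._store[customer_id] = customer

    def find(self, customer_id):
        return self._store.get(customer_id)
```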
- I think it might actually be easier when looking at existing code to question whether a particular object is actually only doing one thing before analysing which of the stereotypes it is fulfilling. We discussed, although didn’t come to a conclusion on, whether there are certain stereotypes that should not be mixed together in one object. For example maybe an object which acts as an interfacer wouldn’t store state and therefore might not be an information holder as well.
- We briefly discussed some other articles which cover similar ideas including Isaiah Perumalla’s article on role based objects and Udi Dahan’s idea of the roles being described more specifically in the interface name. Both of these articles have some good ideas and I find the latter particularly intriguing although I haven’t tried it out on any code I’ve written as yet.
- The article also has some great ideas around coding in general which I think make a lot of sense:
Don’t be afraid to create small objects instead of wallowing in the mud of primitive variables
Designing software is often an exercise in managing complexity…you can take steps to limit the complexity of any given class by only assigning it a discrete set of responsibilities
As I understand it, the article describes an architecture for our systems where the domain sits in the centre and other parts of the system depend on the domain while the domain doesn’t depend on anything concrete but is interacted with by various adapters.
These are some of my thoughts and our discussion of the article:
- It seems like the collection of adapters that Cockburn describes as interacting with the ‘application’ form lots of different anti corruption layers in Domain Driven Design language.
I think tools like Automapper and JSON.NET might be useful when writing some of these adaptors although Dave pointed out that we need to be careful that we’re not just copying every bit of data between different representations of our model otherwise we are indirectly creating the coupling that we intended to avoid.
This seems to lead towards an understanding of code as consisting of lots of different hexagons which interact with each other through pipes and filters.
- Dave described how designing our code according to the hexagonal architecture can help us avoid the zone of pain whereby we have lots of concrete classes inside a package and a lot of other packages depending on us. Scott Hanselman discusses this concept as part of a post on the different graphs & metrics NDepend provides.
From my understanding the idea seems to be to try not to have our application depending on unstable packages such as the data layer which we might decide to change and will have great difficulty in doing so if it is heavily coupled to our business code. Instead we should look to rely on an abstraction which sits inside the domain package and is implemented by one of the adaptors. I haven’t read the whole paper but it sounds quite similar to Uncle Bob’s Stable Dependencies Principle.
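A minimal ports-and-adapters sketch under those assumptions – the domain owns the abstraction and the adapters implement it from the outside, so swapping the data source doesn’t touch the domain code (all names are invented):

```python
from abc import ABC, abstractmethod

class CustomerPort(ABC):
    """Port defined inside the domain package - the abstraction the
    domain depends on."""
    @abstractmethod
    def find_name(self, customer_id): ...

class GreetingService:
    """Domain code - depends on the port, never on a concrete data layer."""
    def __init__(self, customers: CustomerPort):
        self._customers = customers

    def greet(self, customer_id):
        return f"Hello, {self._customers.find_name(customer_id)}"

class InMemoryCustomerAdapter(CustomerPort):
    """An adapter - a database or web-service adapter would slot in the
    same way without the domain changing."""
    def __init__(self, data):
        self._data = data

    def find_name(self, customer_id):
        return self._data[customer_id]

service = GreetingService(InMemoryCustomerAdapter({1: "Alice"}))
print(service.greet(1))                  # Hello, Alice
```

The dependency arrow points from the adapter towards the domain, which is what keeps the business code out of the ‘zone of pain’ described above.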
- I’m not sure whether these applications are following the hexagonal architecture but twitter, Google Maps and WordPress all have APIs which provide us with the ability to drive at least some part of their applications using adaptors that we need to write. This seems to be the way that quite a few applications are going, which I imagine would influence the way that they organise their code in some way. In twitter’s case the external adaptors that drive the application account for most of its use.
In our latest book club we discussed the Dreyfus Model, a paper written in 1980 by Stuart and Hubert Dreyfus.
I’ve become quite intrigued by the Dreyfus Model particularly since reading about its applicability to software development in Andy Hunt’s Pragmatic Learning and Thinking and after looking through Pat Kua’s presentation on ‘Climbing the Dreyfus Ladder of Agile Practices‘ I thought it’d be interesting to study the original paper.
These are some of my thoughts and our discussion of the paper:
- We discussed what actually counts as experience at a skill with regards to the idea that it takes 10,000 hours of practice to become an expert at something, and Cam made an interesting observation that ‘if you don’t have a change of thinking from doing something then you haven’t had an experience‘. I really like this as a barometer to tell whether you’re actually learning something or just reapplying what you already know. I find pair programming is an excellent practice for encouraging this.
From reading Talent is Overrated I learnt that the important thing is that the tasks you are doing are slightly more difficult than your current ability so that you are always stretching yourself.
- I’ve learnt that it’s not actually possible to skip the levels on the Dreyfus model when learning a new skill – previously when I’ve looked at it I always thought that it should be possible to do that, but from experience I think you always need to spend some time at each stage while you are finding your feet, and this time can be unbelievably frustrating. I think it’s useful to recognise that this frustration is because you are a novice and don’t have a good way to problem solve yet, and that it will get easier as you keep practicing.
It’s interesting how this can be applied to some of the agile practices because all of them come from a higher level principle that we are trying to achieve but you won’t actually understand that principle until you have done the practice for a certain amount of time. At that stage you have a better understanding of why the practice is useful which allows you to choose when it is useful and when a different approach might be more effective.
- We discussed whether you need to be a master to be able to teach a skill to people. I don’t think these are correlated and that actually to be able to teach someone the more important thing is your skill at teaching on the Dreyfus model. At school for example I often found that the better teachers were the ones who had the ability to explain things in a way that students could understand rather than being the absolute best at the skill themselves.
- Dave mentioned that he is frequently asked how he came to know a certain skill, and has therefore started becoming more aware of how he acquires skills so that he’s able to do it more effectively. I’ve often found that understanding the way more experienced practitioners think about problems is far more useful than just asking them to solve a problem you’re having trouble with.
- Before the session a few of us arranged to try to come up with the behaviours needed for different skills and where they fitted on the Dreyfus model. There was a unanimous feeling that this was actually really difficult, which suggests that using and understanding the Dreyfus model is itself a skill you can have a level at! I think it would be quite useful to identify the behaviours you want to acquire so that you have some sort of roadmap for developing your ability at a certain skill. We also discussed the fact that the Dreyfus model is very useful as a reflection tool when working out where your ability with a certain skill lies and how you can change that. I think tracking your improvement against the Dreyfus model would be far more effective than a typical performance review.
- We spoke about beginner’s luck and how we often do things much better when we first start doing them, because we act almost reflexively, without the over-analysis that can ruin our performance. At this stage we are unconsciously incompetent at the skill so we just do it. I think having some early success at a skill this way actually gives us the motivation to keep on improving and to actually raise our skill level.
I think the Dreyfus model in general is a really cool thing to learn about, although I found that the way it is presented in Pragmatic Thinking and Learning is more accessible than the original paper. It’s interesting that it was written about nearly 30 years ago and we don’t make use of it as much as we could.
Our latest book club session was a discussion of a paper written by my colleague Chris Stevenson and Andy Pols titled ‘An Agile Approach to a Legacy System’, which I think was written in 2004. This paper was suggested by Dave Cameron.
These are some of my thoughts and our discussion of the paper:
- The first thing that was quite interesting was the authors’ observation that if you just try to rewrite part of a legacy system, you are actually just writing legacy code yourself. We weren’t sure exactly what was meant by this: for me at least the definition of legacy code is ‘code which we are scared to change [because it has no tests]’, and presumably the new code did have tests, so it wasn’t legacy in that sense. As the authors point out, though, rewriting the code doesn’t really add any value to the business – some of that code might not even be used, in which case the rewrite is just wasted effort. Not rewriting is something that Uncle Bob advocates, and Eric Evans also mentions the dangers of trying to replace legacy systems in his latest presentations.
- I thought it was interesting that the team didn’t make use of automated continuous integration, since they were frequently integrating on their own machines. I’m not sure how well this would work on a big team, but certainly with a team fairly experienced with CI I can imagine it working really well. Dean has written a post about serverless CI which covers the idea in more detail.
- I liked the idea of putting politics up as user requirements on the story wall and then prioritising them just like any other requirement. More often the approach is to try to address these problems as soon as they arise, not really solve them, and then get burnt later on. This approach sounds much better.
- Another idea that I like is that the team didn’t get hung up on process. The teams I’ve been on which worked the best weren’t slaves to process, and I’ve often heard it suggested that having a heavy process is just a way of dealing with a lack of trust in the team. Jay Fields recently wrote about the idea of having more trust and less card wall, and Ron Jeffries has a recent post where he talks about the lightweight way that we should be making use of stories.
- Another really cool idea, which I don’t remember seeing before, is having the whole team involved in major refactorings until the whole refactoring has been completed. Quite often with refactorings like this, one pair will go off and do it, and when they check in later there are a lot of changes – the other pairs then have trouble merging and are left with a lot of code they are unfamiliar with.
- The idea of having a self-selected team sounds interesting, as you then only have people on the team who actually want to be on it and want to make things happen. I’m not sure how often this would actually happen in practice, but it’s a nice idea.
- The paper emphasises the importance of testing the system against a live database before putting it into production, which goes beyond just using production data in a test environment. The team also made use of data verification testing to ensure that the new system and the current ones were behaving identically.
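The data verification testing mentioned above can be sketched as a simple reconciliation: pull the same records from both systems and diff them key by key. This is a minimal illustration, not the paper’s actual approach – the record shape, the `verify` helper and the example data are all hypothetical.

```python
def verify(records_old, records_new, key):
    """Compare two lists of record dicts keyed by `key`.

    Returns a list of (key, old_record, new_record) tuples for every
    record that is missing from one side or differs between the two.
    """
    old = {r[key]: r for r in records_old}
    new = {r[key]: r for r in records_new}
    mismatches = []
    # Union of keys catches records present in only one system
    for k in old.keys() | new.keys():
        if old.get(k) != new.get(k):
            mismatches.append((k, old.get(k), new.get(k)))
    return mismatches

# Hypothetical extracts from the legacy system and its replacement
legacy = [{"id": 1, "balance": 100}, {"id": 2, "balance": 50}]
rewrite = [{"id": 1, "balance": 100}, {"id": 2, "balance": 55}]
print(verify(legacy, rewrite, "id"))
```

Run regularly against live data, a check like this gives the team concrete evidence that the rewrite really does match the behaviour of the system it replaces.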
Although this paper is relatively short it’s probably been the most interesting one that we’ve looked at so far. I think a lot of the ideas outlined can be used in any project and not just when dealing with legacy systems – definitely worth reading.