Archive for the ‘Books’ Category
The book starts by describing our two styles of thinking…
- System 1 – operates automatically and quickly, with little or no effort and no sense of voluntary control.
- System 2 – allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
…and the next 30+ chapters describe situations where System 1 can get us into trouble.
The book focuses on the different cognitive biases that fool humans, and there was a spell of about 100 pages, starting around chapter 4, where it started to feel repetitive and I felt like putting the book down.
That said, I kept going and enjoyed the remaining 2/3 of the book much more.
These were some of my favourite parts:
- Outcome bias – the tendency to build a story around events once we already know the outcome and to conclude that the outcome was inevitable. For example, many people claim that they ‘knew well before it happened that the 2008 financial crisis was inevitable’.
Kahneman points out that we only remember the intuitions which turned out to be true; no one talks about those which turned out to be false. He also cautions us to consider the role that luck plays in any success.
- Intuitions vs Formulas – Kahneman quotes Paul Meehl who suggests that most of the time human decision makers are inferior to a prediction formula even when they are given the score suggested by the formula.
In the world of software this reliance on intuition often shows up when interviewing people for jobs and choosing who to hire based on an intuitive judgement. Kahneman suggests making a list of about six traits that are prerequisites for the position and then scoring each candidate against those. The candidate who scores the highest is the best choice.
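Kahneman's scoring approach can be sketched in a few lines; the traits, candidates and ratings below are all invented for illustration:

```python
# Rank candidates by summing independent ratings (1-5) on a fixed list of
# pre-chosen traits, rather than relying on an overall gut feel.
# The traits and candidates below are made up for illustration.
TRAITS = ["technical skill", "communication", "reliability",
          "curiosity", "teamwork", "domain knowledge"]

def total_score(scores):
    """scores: dict mapping trait -> 1..5 rating."""
    return sum(scores[trait] for trait in TRAITS)

candidates = {
    "A": {"technical skill": 4, "communication": 3, "reliability": 5,
          "curiosity": 3, "teamwork": 4, "domain knowledge": 2},
    "B": {"technical skill": 3, "communication": 5, "reliability": 4,
          "curiosity": 4, "teamwork": 4, "domain knowledge": 3},
}

best = max(candidates, key=lambda name: total_score(candidates[name]))
print(best, total_score(candidates[best]))  # B 23
```

The point is that each trait is rated independently before any totals are compared, which stops a strong first impression from colouring every judgement.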
- Over reliance on representativeness – often we judge the probability of something based on stereotypes rather than taking the base rate of the population into account.
e.g. if we saw a person reading The New York Times on the subway which of the following is a better bet about this stranger?
- She has a PhD.
- She does not have a college degree.
If we judge based on representativeness we’ll bet on the PhD because our stereotypes tell us that a PhD holder is more likely to be reading The New York Times than a person without a college degree. However, there are many more non-graduates on the subway than PhDs, so the likelihood is that the person doesn’t have a college degree.
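The arithmetic behind this is easy to check; every number below is invented, but the shape of the result holds for any plausible figures:

```python
# Base rates dominate: even if a PhD holder is far more likely to read the
# NYT than a non-graduate, there are so many more non-graduates that the
# non-graduate is still the better bet. All numbers are invented.
riders = 100_000
phds = int(riders * 0.005)        # assume 0.5% of riders hold a PhD
non_grads = int(riders * 0.40)    # assume 40% have no college degree

p_nyt_given_phd = 0.30            # assumed reading rates
p_nyt_given_non_grad = 0.05

nyt_phds = phds * p_nyt_given_phd                  # 150.0 readers
nyt_non_grads = non_grads * p_nyt_given_non_grad   # 2000.0 readers
print(nyt_non_grads > nyt_phds)  # True
```

Even with a PhD holder six times more likely to be reading the paper, the non-graduate readers outnumber the PhD readers by more than ten to one.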
Kahneman encourages us to take the base rates of the population into account even if we have evidence about the case at hand. He uses the example of predicting how long a project will take and suggests using the data from previous similar projects as a baseline and then adjusting that based on our specific case.
- Framing – Kahneman talks about the impact that the way information is presented to us can have on the way we react to it. For example, he describes a study in which the merits of surgery for a condition are considered and the descriptions of the surgery are like so:
- The one month survival rate is 90%.
- There is 10% mortality in the first month.
People who were presented with the first option were much more likely to favour surgery than those shown the second option.
- The default option – people feel much more regret if they deviate from the normal choice and something goes wrong than if they stick with the default and something equally bad happens. This effect is well known to form designers, who make the option they want people to select the default.
There are many more insights and these are just a few that stood out for me.
I’d encourage you to read the book if you find human decision making interesting although I’d try and forget everything I’ve said as you’ve probably been inadvertently primed by me.
I first came across William Zinsser’s ‘On Writing Well‘ about a year ago, but put it down having flicked through a couple of the chapters that I felt were relevant.
It came back onto my radar a month ago and this time I decided to read it cover to cover as I was sure there were some insights that I’d missed due to my haphazard approach the first time around.
What stood out in my memory from my first reading of the book was the emphasis on how words like ‘a bit’, ‘a little’, ‘kind of’, ‘quite’, ‘pretty much’ and ‘too’ dilute our sentences, and I’ve been trying to avoid them in my writing ever since.
Other things that stood out for me this time were:
- Avoid unnecessary words e.g. ‘blared loudly’ or ‘grinned widely’. It’s difficult to blare in any other way, and grinning already implies that your mouth is open wide.
- Delete troublesome phrases. If you’re having trouble working out the structure for a sentence, it might be beneficial to get rid of it and see if the rest of the paragraph still makes sense. Often it will.
- Rewriting. The author emphasises over and over again the importance of rewriting a piece of text until you’re happy with it. This consists mostly of reshaping and tightening previous drafts. The way it’s described sounds very similar to code refactoring. My colleague Ian Robinson recommended ‘Revising Prose‘ as a useful book to read next in this area.
- Assume the reader knows nothing. The advice in the chapter on science and technology was probably the most applicable for the type of writing I do and I thought this section was particularly apt:
Describing how a process works is valuable for two reasons. It forces you to make sure that you know how it works. Then it forces you to take the reader through the same sequence of ideas and deductions that made the process clear to you.
I’ve found this to be the case multiple times although you can achieve the same benefits by presenting a talk on a topic; the benefits aren’t unique to writing.
- Explain your judgements. Don’t say something is interesting. Instead explain what makes it interesting and let the reader decide whether or not it deserves that label.
- Use simple words, be specific, be human and use active verbs. I often fall into the trap of using passive verbs which makes it difficult for the reader to know which part of the sentence they apply to. We want to minimise the amount of translation the reader has to do to understand our writing.
- Use a paragraph as a logical unit. I remember learning this at secondary school but I’ve drifted towards a style of writing that treats the sentence as a logical block. My intuition tells me that people find it easier to read text when it’s in smaller chunks but I will experiment with grouping my ideas together in paragraphs where that seems sensible.
I gleaned these insights mostly from the first half of the book.
The second half focused on different forms of writing and showed how to apply the lessons from earlier in the book. Although not all the forms were applicable to me I still found it interesting to read as the author has a nice way with words and you want to keep reading the next sentence.
My main concern having read the book is ensuring that I don’t paralyse my ability to finish blog posts by rewriting ad infinitum.
I came across this book while idly browsing a book store and since I’ve found most introduction to algorithms books very dry I thought it’d be interesting to see what one aimed at the general public would be like.
Overall it was an enjoyable read and I quite like the pattern that the author used for each algorithm, which was:
- Describe the problem that it’s needed for.
- Explain a simplified version of the algorithm or use a metaphor to give the general outline.
- Explain which bits were simplified and how the real version addresses those simplifications.
The first step is often missed out in algorithms books which is a mistake for people like me who become more interested in a subject once a practical use case is explained.
Although the title claims 9 algorithms I counted the following 8 which made the cut:
- Search Engine Indexing – this chapter covers how you’d go about writing the part of a search engine which works out which pages are applicable for certain search terms. It effectively describes Lucene.
I didn’t realise that Sergey Brin and Larry Page had published a paper back in 1998 titled ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine‘ which explains the initial PageRank algorithm in more detail.
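The core data structure behind this is an inverted index, which maps each word to the pages containing it. A minimal sketch, with made-up documents:

```python
from collections import defaultdict

# Build an inverted index: word -> set of page ids containing that word.
# The 'pages' are invented for illustration.
pages = {
    1: "the cat sat on the mat",
    2: "the dog chased the cat",
    3: "a dog on a mat",
}

index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.split():
        index[word].add(page_id)

def search(*terms):
    """Pages matching ALL the search terms (an AND query)."""
    results = [index.get(term, set()) for term in terms]
    return set.intersection(*results) if results else set()

print(sorted(search("cat")))         # [1, 2]
print(sorted(search("dog", "mat")))  # [3]
```

Real engines add word positions (so phrase queries work), stemming and ranking on top, but the lookup-then-intersect structure is the same.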
- Public Key Cryptography – this chapter mostly covers Diffie-Hellman key exchange, which I realise is quite well explained on Wikipedia as well.
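The exchange itself is only a few lines of modular arithmetic. This toy version uses a tiny prime; real implementations use primes thousands of bits long:

```python
import random

# Toy Diffie-Hellman key exchange. p and g are public; only A and B
# are ever sent over the wire, yet both sides derive the same secret.
p, g = 23, 5                       # public prime modulus and generator

a = random.randrange(1, p - 1)     # Alice's private secret
b = random.randrange(1, p - 1)     # Bob's private secret

A = pow(g, a, p)                   # Alice sends A publicly
B = pow(g, b, p)                   # Bob sends B publicly

shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
print(shared_alice == shared_bob)  # True: both derive the same secret
```

The security rests on the discrete logarithm problem: an eavesdropper who sees p, g, A and B still can't feasibly recover a or b when p is large.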
- Error-Correcting Codes – I had somewhat taken checksums for granted, but in this chapter MacCormick goes through the problem of data getting lost or corrupted in transfer and iterates through potential solutions. The further reading from this chapter is ‘A Mathematical Theory of Communication‘, Hamming codes and Reed-Solomon error correction.
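The simplest of those solutions, error *detection* via a checksum, fits in a few lines; the checksum function here is deliberately naive:

```python
# A naive checksum: sum of the bytes mod 256. If a byte is corrupted in
# transit, the recomputed checksum disagrees with the one that was sent.
# Real protocols use stronger codes (CRCs, Hamming, Reed-Solomon) that
# can also *correct* errors rather than just detect them.
def checksum(data: bytes) -> int:
    return sum(data) % 256

message = b"hello world"
sent = (message, checksum(message))

# Simulate a single bit flipping in transit.
corrupted = bytearray(message)
corrupted[0] ^= 0x01
received = (bytes(corrupted), sent[1])

print(checksum(received[0]) == received[1])  # False: corruption detected
```

Detection tells the receiver to ask for a retransmit; the Hamming and Reed-Solomon codes the chapter builds up to add enough redundancy to repair the damage without one.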
- Pattern Recognition – this chapter covered a variety of machine learning algorithms initially focusing on digit recognition – something that Jen Smith and I spent a chunk of time working on last year for the Kaggle problem. The three algorithms covered are nearest neighbours, decision trees and neural networks, all of which we attempted! I recently came across the concept of convolutional neural networks and deep learning which I’ve yet to try out but are apparently even more accurate than plain neural networks.
- Data Compression – I imagine data compression would be one of the more familiar algorithms on this list since everybody knows how to ‘zip up a file’ and send it around. The author covers lossy algorithms such as ‘JPEG Leave it Out‘ which reduces image quality as well as size, as well as lossless algorithms such as LZ77, Shannon-Fano coding and Huffman coding. The latter is covered in Stanford’s Algorithms II and I think the explanation there is actually easier to understand than the book’s.
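Huffman coding itself is short enough to sketch: the two least-frequent symbols are repeatedly merged, so frequent symbols end up with short codes. This is a standard heap-based construction, not the book's presentation:

```python
import heapq
from collections import Counter

# Build Huffman codes: repeatedly merge the two least-frequent nodes,
# prefixing '0' to one side's codes and '1' to the other's.
def huffman_codes(text):
    # heap items: [frequency, tie-break id, [symbol, code], ...]
    heap = [[freq, i, [sym, ""]] for i, (sym, freq) in
            enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], next_id, *lo[2:], *hi[2:]])
        next_id += 1
    return {sym: code for sym, code in heap[0][2:]}

codes = huffman_codes("aaaabbc")
encoded = "".join(codes[ch] for ch in "aaaabbc")
print(len(codes["a"]) < len(codes["c"]))  # True: 'a' is more frequent
print(len(encoded))  # 10 bits, versus 7 * 8 = 56 uncompressed
```

No code is a prefix of another, so the bit stream decodes unambiguously without separators.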
- Databases – this section mostly focused on how ACID-compliant relational databases work and covered things like the write ahead log and index lookups. The recommended reading from this chapter is Transaction Processing: Concepts and Techniques. Given that many popular websites tend to use NoSQL stores these days I thought there might be some mention of that, but it was left out.
- Digital Signatures – this chapter ties in quite closely with the one on public key cryptography. It focused on signing of software with a digital signature rather than the signing of emails which is what I expected the chapter to be about. The RSA algorithm is described and the link between the difficulty of factoring large numbers and the security of the algorithm is explained.
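A toy RSA sign/verify round trip shows the factoring link directly: anyone who could factor n back into p and q could recompute the private key. The primes here are the tiny ones often used in textbook examples; real keys use primes hundreds of digits long:

```python
# Toy RSA signing. In practice you sign a hash of the message, pad it,
# and use far larger primes; this only illustrates the arithmetic.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (requires Python 3.8+)

message = 65
signature = pow(message, d, n)    # sign with the private key
recovered = pow(signature, e, n)  # anyone can verify with (e, n)
print(recovered == message)  # True: the signature checks out
```

Verification needs only the public pair (e, n), so anyone can check the signature, but producing one requires d, which in turn requires knowing the factors of n.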
I enjoyed the book and I’ve got some interesting articles/papers to add to my reading list. Even if you already know all the algorithms I think it’s interesting to hear them described from a completely different angle to see if you learn something new.
Nate Silver is famous for having correctly predicted the winner of all 50 states in the 2012 United States elections and Sid recommended his book so I could learn more about statistics for the A/B tests that we were running.
I thought the book was a really good introduction to applied statistics; by using real life examples which most people can relate to, it makes a potentially dull subject interesting.
Reasonably early on the author points out that there’s a difference between making a prediction and making a forecast:
- Prediction – a definitive and specific statement about when and where something will happen e.g. a major earthquake will hit Kyoto, Japan, on June 28.
- Forecast – a probabilistic statement over a longer time scale e.g. there is a 60% chance of an earthquake in Southern California over the next 30 years.
The book mainly focuses on the latter.
We then move onto quite an interesting section about over fitting, which is where we mistake noise for signal in our data.
Over fitting is less of a problem when we combine lots of decision trees and use a majority-wins vote to make our prediction, but if we rely on just one tree its predictions on new data are likely to be badly wrong.
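The majority-wins idea can be illustrated with a small simulation (the numbers are assumed): if each of many *independent* predictors is right only 60% of the time, the majority vote is right far more often than any single one.

```python
import random

random.seed(42)

# Each simulated predictor is independently correct with probability 0.6.
# A majority vote over 101 of them is correct far more often. This only
# works because the errors are independent; ensembles such as random
# forests go to some effort to engineer that independence.
def majority_correct(n_predictors=101, p_correct=0.6):
    votes = sum(random.random() < p_correct for _ in range(n_predictors))
    return votes > n_predictors // 2

trials = 10_000
ensemble_accuracy = sum(majority_correct() for _ in range(trials)) / trials
print(ensemble_accuracy)  # ~0.98, versus 0.6 for a single predictor
```

If the predictors all shared the same blind spots the vote would gain nothing, which is why a single over-fitted tree is so much worse than the ensemble.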
Later on in the book he points out that a lot of conspiracy theories arise when we look at data retrospectively: in hindsight it is easy to separate signal from noise, when at the time it was much more difficult.
He also points out that sometimes there isn’t actually any signal, it’s all noise, and we can fall into the trap of looking for something that isn’t there. I think this ‘noise’ is what we’d refer to as random variation in the context of an A/B test.
Silver also encourages us to make sure that we understand the theory behind any inference we make:
Statistical inferences are much stronger when backed up by theory or at least some deeper thinking about their root causes.
When we were running A/B tests Sid encouraged people to think about whether a theory about why conversion had changed made logical sense before assuming it was true, which I think covers similar ground.
A big chunk of the book covers Bayes’ theorem and how often when we’re making forecasts we have prior beliefs which it forces us to make explicit.
For example there is a section which talks about the probability a lady is being cheated on given that she’s found some underwear that she doesn’t recognise in her house.
In order to work out the probability she’s being cheated on we need to know the probability that she was being cheated on before she found the underwear. Silver suggests that since 4% of married partners cheat on their spouses that would be a good number to use.
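The calculation is a direct application of Bayes' theorem. The 4% prior is from the book; the two likelihoods below match my memory of Silver's example, so treat them as illustrative:

```python
# Bayes' theorem: P(cheating | underwear) =
#   P(underwear | cheating) * P(cheating) / P(underwear)
prior = 0.04                       # book's base rate of cheating spouses
p_evidence_given_cheating = 0.50   # assumed: underwear appears if cheating
p_evidence_given_faithful = 0.05   # assumed: innocent explanations exist

numerator = prior * p_evidence_given_cheating
denominator = numerator + (1 - prior) * p_evidence_given_faithful
posterior = numerator / denominator
print(round(posterior, 2))  # 0.29: far lower than most people's gut guess
```

The striking thing is how much the small prior drags down the answer: damning-looking evidence still leaves the probability under a third.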
He then goes on to show multiple other problems throughout the book that we can apply Bayes’ theorem to.
Some other interesting things I picked up are that if we’re good at forecasting then being given more information should make our forecast better and that when we don’t have any special information we’re better off following the opinion of the crowd.
Silver also showed a clever trick for inferring data points on a data set which follows a power law i.e. the long tail distribution where there are very few massive events but lots of really small ones.
The number of terrorist attacks vs the number of fatalities follows a power law distribution, and if we plot both scales logarithmically we can come up with a probability of how likely more deadly attacks are.
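The trick works because a power law is a straight line on log-log axes, so a line fitted to the small, frequent events can be extrapolated out to rare, deadly ones. A sketch with invented (fatalities, annual frequency) data:

```python
import math

# Fit a straight line in log-log space by least squares, then read off
# the expected frequency of events larger than anything observed.
# The data points below are invented and lie on an exact power law.
data = [(10, 100.0), (100, 10.0), (1000, 1.0)]

xs = [math.log10(fatalities) for fatalities, _ in data]
ys = [math.log10(frequency) for _, frequency in data]
n = len(data)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

def expected_frequency(fatalities):
    return 10 ** (intercept + slope * math.log10(fatalities))

print(round(expected_frequency(10_000), 3))  # 0.1: once per decade
```

An event ten times bigger than anything in the data still gets a meaningful probability, which is exactly the inference Silver makes about very deadly attacks.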
There is then some discussion of how we can change the way we treat terrorism to try and alter the shape of the chart e.g. Silver suggests that Israel really wants to avoid a very deadly attack, even at the expense of there being more smaller attacks.
A lot of the book is spent discussing weather/earthquake forecasting which is very interesting to read about but I couldn’t quite see a link back to the software context.
Overall though I found it an interesting read although there are probably a few places that you can skim over the detail and still get the gist of what he’s saying.
My colleague Pat Kua recently published a book he’s been working on for the first half of the year titled ‘The Retrospective Handbook‘ – a book in which Pat shares his experiences with retrospectives and gives advice to budding facilitators.
I was intrigued what the book would be like because the skill gap between Pat and me with respect to facilitating retrospectives is huge and I’ve often found that experts in a subject can have a tendency to be a bit preachy when writing about their subject!
In actual fact Pat has done a great job making the topic accessible to all skill levels and several times covers typical problems with retrospectives before describing possible solutions.
These were some of the things that I took away:
- One of the most interesting parts of the book was a section titled ‘Be Aware of Cultural Dimensions’ where Pat covers some of the different challenges we have when people from different cultures work together.
I found the power distance index (PDI) especially interesting:
The extent to which the less powerful members of organisations and institutions accept and expect that power is distributed unequally
If you come from a culture with a low PDI you’re more likely to challenge something someone else said regardless of their role but if you’re from a culture with a high PDI you probably won’t say anything.
The US/UK tend to have low PDI whereas India has a high PDI – something I found fascinating when participating in retrospectives in India in 2010/2011. I think the facilitator needs to be aware of this otherwise they might make someone very uncomfortable by pushing them too hard to share their opinion.
- A theme across the book is that retrospectives aren’t about the facilitator – the facilitator’s role is to help guide the team through the process and keep things moving, they shouldn’t be the focal point. In my opinion if a facilitator is doing that well then they’d be almost invisible much like a football referee when they’re having a good game!
- The ‘First-Time Facilitation Tips’ chapter is particularly worth reading and reminded me that part of the facilitator’s role is to encourage equal participation from the group:
A common, shared picture is only possible if all participants give their input freely and share their view of the story. This is difficult if one or two people are allowed to dominate discussions. Part of your role as a facilitator is to use whatever techniques you can to ensure a balanced conversation occurs.
I think this is much easier for an external facilitator to do as they won’t have the burden of inter team politics/hierarchy to deal with.
Pat goes on to suggest splitting the group into smaller groups as one technique to get people involved; in my experience this works really well and gets around the problem that many people aren’t comfortable discussing things in big groups.
- There’s nothing more boring than doing the same retrospective week after week, nor is there a quicker way to completely put people off them, so I was pleased to see that Pat dedicated a chapter to keeping retrospectives fresh.
He suggests a variety of different techniques including bringing food or running the retrospective in a different location to normal to keep it interesting. I’ve heard of colleagues in Brazil doing their retrospectives outside which is another angle on this theme!
- Another good tip is that when creating actions we don’t need to spend time getting someone to sign up for them right there and then – an alternative is to encourage people to walk the wall and pick ones they feel they can take care of.
I think this book complements Esther Derby/Diana Larsen’s ‘Agile Retrospectives‘ really well.
I find their book really useful for finding exercises to use in retrospectives to keep it interesting whereas Pat’s book is more about the challenges you’re likely to face during the retrospective itself.
There are lots of other useful tips and tricks in the book – these are just a few of the ones that stood out for me – it’s well worth a read if you’re a participant/facilitator in retrospectives on your team.
I’d heard about The Lean Startup for a long time before I actually read it, mainly from following the ‘Startup Lessons Learned‘ blog, but I didn’t get the book until a colleague suggested a meetup to discuss how we might apply the ideas on our projects.
My general learning from the book is that we need to take the idea of creating tight feedback loops, which we’ve learnt in the agile/lean worlds, and apply it to product development.
Eric Ries talks about the idea of the Minimum Viable Product (MVP), something I’ve heard mentioned a lot on the last few projects I’ve worked on, so I thought I knew what it meant.
I’d always considered the MVP to effectively be the first release of any product you were building, but Ries frames it as the minimum product you can release to get feedback on whether your idea is viable or not. For example, Dropbox’s MVP was a video demonstrating how it would work, made before the team had written the code to sync files on all operating systems.
On a lot of the projects that I’ve worked on we start after the point at which the business has decided what their product vision is and we’re responsible for implementing it. They haven’t necessarily then gone on to make a big return from building the product which I always found strange.
The most frequent argument I’ve heard against releasing an ‘incomplete’ product early in the organisations that I’ve worked for is that it could ruin their brand if they took this approach. One suggestion the book makes is to release the product under a spin off subsidiary if we’re worried about that.
The book also discussed the ways that we need to treat early adopters of a product and mainstream customers differently.
For example early adopters won’t mind/may actually prefer to play with an unfinished product if they can help influence its future direction.
By the time we have proved that we have a viable product and are looking to aim it at the mainstream market it will need to be more feature complete and polished in order to please that crowd.
There is a big focus on making data driven decisions such that we gather metrics showing how our product is actually being used by customers rather than just guessing/going on intuition as to what we should be doing next.
Facebook released an interesting video where towards the end they describe the metrics which they have around their application such that they can tell whether a deployment is losing them money and therefore needs to be rolled back.
One particular thing that the book talks about is cohort analysis:
A cohort analysis is a tool that helps measure user engagement over time. It helps UX designers know whether user engagement is actually getting better over time or is only appearing to improve because of growth.
We tend to use metrics to help us see the quality of code and which things we might want to work on there but I think the idea of using it to measure user engagement is really cool and should help us to build a more useful product.
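The cohort idea is simple enough to sketch: group users by signup month and compare what fraction of each cohort is still active later, rather than looking at one overall activity number. All the data below is invented:

```python
from collections import defaultdict

# Toy cohort analysis. Overall activity can grow purely because of new
# signups even while per-cohort engagement stays flat, which a single
# aggregate metric would hide.
users = [
    # (signup_month, still_active_a_month_later)
    ("2012-01", True), ("2012-01", False), ("2012-01", False),
    ("2012-02", True), ("2012-02", True), ("2012-02", False),
    ("2012-02", False), ("2012-02", False), ("2012-02", False),
]

cohorts = defaultdict(list)
for month, active in users:
    cohorts[month].append(active)

retention = {month: sum(flags) / len(flags)
             for month, flags in cohorts.items()}
for month in sorted(retention):
    print(month, round(retention[month], 2))
# 2012-01 0.33
# 2012-02 0.33
```

Here the February cohort is twice the size of January's, so total active users doubled, yet retention is flat: growth was masking an engagement problem.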
I especially enjoyed the parts of the book where Ries talks about ways that some of the ideas have been applied with startups which are doing well at the moment although I think it’d be fair to say that the lean startup framework has been retrospectively fitted to explain these stories.
I think the danger of thinking that they were following lean startup principles is that it can lead to us not thinking through problems ourselves which I guess is the same problem with any framework/methodology.
I’m intrigued as to whether it will make a difference to the overall success rate of startups or not if they follow the ideas from the book.
I imagine we’ll see some ideas failing much more quickly than they might have otherwise, and the suggestion is that when this happens we need to pivot and try to find another approach that will make money. Despite that, there will come a point when a startup runs out of money without finding a way to monetise its product and it’ll be game over.
Overall the book is quite easy reading and worth a flick through as it has some cool ideas which can help us to spend less time building products which don’t actually get used.
I came across the work of Chris Argyris at the start of the year and in a twitter conversation with Benjamin Mitchell he suggested that Bill Noonan’s ‘Discussing the Undiscussable‘ was the most accessible text for someone new to the subject.
In the book Noonan runs through a series of different tools that Chris Argyris originally came up with for helping people to handle difficult conversational situations more effectively.
I really like the way the book is written.
A lot of books of this ilk come across to me as being very idealistic but Noonan avoids that by describing his own mistakes in trying to implement Argyris’ ideas. This makes the book much more accessible to me.
He also repeatedly points out that even though you might understand the tools, you won’t be an expert in using them unless you spend a significant amount of time practising.
These were some of the ideas that stood out for me from my reading over the last few months:
- Advocacy/Inquiry – Noonan suggests that when we’re discussing a topic it’s important to advocate our opinion but also be open to people challenging it so that we can learn if there are any gaps in our understanding or anything that we’re missing.
This seems quite similar to Bob Sutton’s ‘Strong Opinions, Weakly Held‘ which I’ve come across several times in the past.
One anti-pattern which comes from not doing this is known as ‘easing in‘, where we try to get the other person to advocate our opinion through the use of various leading questions.
The problem is that they tend to know exactly what we’re doing and it can come across as being quite manipulative.
- The Ladder of Inference – I’ve written about this previously and it describes the way that humans tend to very quickly draw conclusions about other people based on fairly minimal data and without even talking to the other person first!
When Jim and I worked together at ThoughtWorks University we were constantly pointing out when the other was climbing the inference ladder and it was quite surprising to me how often you end up doing so even when you don’t realise it!
What I find most interesting is that even when I was absolutely sure that my inference about a situation was correct it was still frequently wrong when I discussed it with the other person. They nearly always had a different perception of what was going on than I did.
I think it’s a step too far to believe that I won’t ever climb the inference ladder again but it’s useful to know how frequently I do it so at least I’m aware that I might need to climb down from time to time.
- Recovery time – There is constant reference throughout the book to our recovery time i.e. how quickly do we realise that we’ve made a mistake by participating in a defensive routine.
Argyris’ tools are quite useful for helping us to reduce our recovery time because they are reflective in nature and when we reflect on a situation we tend to see where we’ve gone wrong!
Noonan suggests that it’s inevitable we’ll make mistakes but the key is to try and detect our mistakes sooner and then hopefully reduce the number that we make.
Of course there are several other tools that Noonan describes, such as the left-hand/right-hand column case study approach, double loop learning, espoused theory vs theory-in-use and the mutual learning model.
I still make loads of the mistakes that the book points out, and I’ve noticed that I only really reflect on how my conversations are going when I’ve flicked through the book relatively recently.
It’s also useful to be hanging around other people who are studying Argyris’ work as you can then help each other out.
One of the initial books that Chris Argyris published describing these tools was ‘Action Science‘ (available as a free PDF).
I initially tried reading that before this book but found it a bit hard to follow; I’ll probably try it again at some stage.
Something which I frequently forget while reading books is that it’s actually quite useful to know exactly why you’re reading them i.e. what knowledge you’re trying to gain by doing so.
Implicitly I knew that I just wanted to get a rough idea of what sort of things it’s telling people but I somewhat foolishly just started reading it cover to cover.
I only realised that I’d been doing this when I’d got a third of the way through and noticed that I hadn’t really learnt that much, since the book effectively describes the way that ThoughtWorks delivers projects. A more deliberate approach is the SQ3R reading technique:
- Survey – scan the table of contents and chapter summaries
- Question – note any questions you have
- Read – read the text in its entirety
- Recite – summarise and take notes in your own words
- Review – re-read, expand notes, discuss with colleagues
I don’t think it always needs to be quite as organised as this but I’ve certainly found it useful to scan the chapter headings and see which ones interest me and then skip the ones which don’t seem worth reading.
When reading The Art of Unix Programming I felt that I was learning a lot of different things for the first ten chapters or so but then it started to get quite boring for me so I skimmed the rest of the book and ended up reading just half of the chapters completely.
The amusing thing for me is that I knew about this technique a couple of years ago but I still don’t use it, which I think comes down to having a bit of a psychological need to finish books.
At the moment I have around 15 books which I’ve partially read, and at the back of my mind I know that I want to go and read the rest of them even though there will undoubtedly be varying returns from doing so!
I need to just let them go…
I read Dan Pink’s A Whole New Mind earlier in the year but I hadn’t heard of The Adventures of Johnny Bunko until my colleague Sumeet Moghe mentioned it in a conversation during ThoughtWorks India’s XConf, an internal conference run here.
The book is written in the Manga format so it’s incredibly quick to read and it gives 6 ideas around building a career.
I’m generally not a fan of the idea of ‘building a career’ – when I hear that phrase it usually involves having a ‘five year plan’ and other such concepts which I consider pointless.
I much prefer to just focus on what I’m most interested in at the moment and then have the flexibility to be able to focus on something else if that interests me more.
Luckily this book is not at all like that and these are the ideas Dan Pink suggests:
There is no plan
The most interesting part of this piece of advice is Pink’s suggestion that when we make career decisions we make them for two different types of reasons:
- Instrumental reasons – we do something because we think it’s going to lead to something else regardless of whether we enjoy it or think it’s worthwhile.
- Fundamental reasons – we do something because it’s inherently valuable, regardless of what it may or may not lead to.
I think this pretty much makes sense – it’s very difficult to predict what’s going to happen so you might as well make sure you’re doing what you want at any given time.
Think strengths not weaknesses
Marcus Buckingham has several books on this subject but the general idea is that we should look to focus on things that we’re good at or are really motivated to become good at.
By inference this means that we want to avoid spending time on things that we’re not good at and/or not interested in becoming good at.
I recently came across a blog post written by Robbie Maciver where he suggests that we can use a retrospective to help identify team members’ strengths and work out how best to utilise them.
I think there’s sometimes a reluctance for people to volunteer their strengths so we end up with people working on tasks that they hate that other people in the same team would enjoy.
It’s not about you
Pink then goes on to point out that even if we do work out how to play to our strengths we still need to ensure that we’re focusing on our customer/client.
Pink calls this focusing outward not inward.
In terms of working in technology consulting that would therefore be along the lines of ensuring that we’re focused on solving our clients’ problems in the best way for them rather than the most interesting way for us.
Persistence trumps talent
The underlying idea is that people aren’t born brilliant at any skill; brilliance only comes through practice, and if we’re intrinsically motivated to do something then we’re much more likely to put in the time required to become really good.
To add to that I think intrinsic motivation is highest when we’re focusing on our strengths so this piece of advice is linked with the earlier one.
Make excellent mistakes
Pink talks about making excellent mistakes which he describes as ‘mistakes that come from having high aspirations, from trying to do something nobody else has done’.
I think this is somewhat linked to the idea of failing fast although it seems to be perhaps one level beyond that.
Sumeet gave a talk at XConf where he described the new approach to ThoughtWorks University and pointed out that one of the most important goals was to allow attendees to make ‘excellent mistakes’.
One observation I have here is that smaller companies seem more willing to make excellent mistakes. As organisations grow, their aversion to risk increases because of worries about loss of reputation, which makes excellent mistakes less likely.
I find Eric Ries’ lean startup ideas particularly interesting with respect to failing fast and my former colleague Antonio Terreno recently linked to a video where the General Manager of Forward 3D explains how they do this.
Leave an imprint
Pink talks about the importance of doing work which actually has some meaning or leaves the world in a better place than it was before.
This is probably the one that I can relate to least. I’m writing software which is what I’m passionate about but I wouldn’t say the problems I work on are having much impact on the world.
This one therefore leaves me with the most to think about.
This is the first book that I’ve read on the iPad’s Kindle application and it was a reasonably good reading experience – I particularly like the fact that you can make notes and highlight certain parts of the text.
It’s also possible to see which areas of the book other Kindle users have highlighted and you can see your highlights/notes via the Kindle website.
The sub-title of the book: ‘Are you indispensable? How to drive your career and create a remarkable future’ didn’t really appeal to me all that much but I’ve found that these types of things are often just for provocation and don’t necessarily detract from the message.
As with the other Godin books I’ve read, he spends a lot of time making his original point and in this one it felt like I’d already agreed with what he was saying for at least 50 pages before we moved on to something else.
Despite that I think there were some good points made in the book. These were some of the more interesting ones for me:
- One of my favourite quotes from the book is the following about fire:
Fire is hot. That’s what it does. If you get burned by fire, you can be annoyed at yourself, but being angry at the fire doesn’t do you much good. And trying to teach the fire a lesson so it won’t be hot next time is certainly not time well spent.
Our inclination is to give fire a pass, because it’s not human. But human beings are similar, in that they’re not going to change any time soon either.
One of the things I’ve noticed over time is that the core way a person behaves is very unlikely to change regardless of the feedback they get from other people.
Pat touches on this in a post where he points out that we need to be prepared for feedback to not be accepted.
I don’t think this means we should stop giving feedback to people if we think it will help them be more effective, but it does seem useful to keep in mind so we can avoid frustration when we can’t get a person to behave in the way we’d like them to.
- Godin makes an interesting point about the value of the work that we do:
A day’s work for a day’s pay (work <=> pay). I hate this approach to life. It cheapens us.
This is exactly the consulting model – billing by the hour or by the day – and although I’ve come across a couple of alternative approaches I’m not sure that they work better than this model.
Value based pricing is something which I’ve read about but it seems to me that there needs to be a certain degree of trust between the two parties for that to work out. The other is risk/reward pricing and I’ve heard good/bad stories about this approach.
It seems to me that we’d need a situation where both parties are really invested in the long-term outcome of the project/system; if one party is only involved for a short portion of that time then it’s going to be difficult to make it work.
- Shipping is another area he touches on in a way that makes sense to me:
I think the discipline of shipping is essential in the long-term path to becoming indispensable.
The only purpose of starting is to finish, and while the projects we do are never really finished, they must ship. Shipping means hitting the publish button on your blog, showing a presentation to the sales team, answering the phone, selling the muffins, sending out your references. Shipping is the collision between your work and the outside world.
While this is just generally applicable, in software terms this would be about the sort of thing that my colleagues Rolf Russell & Andy Duncan cover in their talk ‘Rapid and Reliable Releases’.
I also had an interesting discussion with Jon Spalding about the importance of just getting something out there with respect to the ‘Invisible Hand’ browser plugin that he’s currently working on. Jon pointed out that it’s often best to ship and then rollback if there are problems rather than spending a lot of time trying to make sure something is perfect before shipping.
- Godin spends a lot of time pointing out how important human interactions and relationships are and I think this is something that is very important for software delivery teams. One of the most revealing quotes is this:
You might be parroting the words from that negotiation book or the public-speaking training you went to, but every smart person you encounter knows that you’re winging it or putting us on.
Virtually all of us make our living engaging directly with other people. When the interactions are genuine and transparent, they usually work. When they are artificial or manipulative, they fail.
I attended a consulting training course when I first started working at ThoughtWorks 4 years ago and I’ve always found it impossible to actually use any of the ideas because it doesn’t feel natural. It’s interesting that Godin points out why this is!
Overall this book feels to me like a more general version of Chad Fowler’s ‘The Passionate Programmer’, which is probably a more applicable book for software developers.
This one is still an interesting read, although it primarily points out things you probably already knew, albeit in a very clear and memorable way.