Mark Needham

Thoughts on Software Development



I recently read the Graphite chapter of The Architecture of Open Source Applications book, which tells the story of how Chris Davis incrementally built out Graphite – a pretty cool tool for real-time graphing of metrics.

The whole chapter is a very good read but I found the design reflections especially interesting:

One of Graphite’s greatest strengths and greatest weaknesses is the fact that very little of it was actually “designed” in the traditional sense. [It] evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural. However it can be useful to avoid solving problems you do not actually have yet, even if it seems likely that you soon will.

One of the main success criteria of the application I’m currently working on is its performance – it doesn’t have millisecond-level latency requirements, but it does need to do a lot of calculations and return within a reasonable amount of time.

Several times we’ve written a bit of code which is easy to read but runs in quadratic time because of a nested iteration over a collection.
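For example, a sketch of that trade-off in Python – the names here are made up for illustration, not our actual domain code:

```python
# Easy to read, but quadratic: the `in` test scans the whole list
# for every trade, so n trades * m ids = O(n * m) comparisons.
def match_quadratic(trades, account_ids):
    return [t for t in trades if t["account"] in account_ids]

# Same result in linear time: build a set once, then each
# membership test is O(1) on average.
def match_linear(trades, account_ids):
    ids = set(account_ids)
    return [t for t in trades if t["account"] in ids]
```

The quadratic version is arguably the more readable one, which is exactly the tension described above.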

Having this performance requirement in mind has made us think twice about whether we should look for a better way of writing that code, but doing so would seem to be a premature optimisation since we don’t actually know that this bit of code is going to be the bottleneck.

Instead we’ve started to put a performance testing infrastructure in place so that we can actually gather some data that tells us where the problems in the code are.
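Our actual infrastructure is more involved than this, but the core idea is just to measure rather than guess – something along these lines, where profile_request and calculate are illustrative names:

```python
import timeit

def profile_request(fn, *args, repeat=5):
    """Time fn(*args) a few times and return the best run in milliseconds."""
    timings = timeit.repeat(lambda: fn(*args), number=1, repeat=repeat)
    return min(timings) * 1000

def calculate(data):
    # stand-in for one of our real calculations
    return sum(x * x for x in data)

best_ms = profile_request(calculate, list(range(100_000)))
```

Taking the minimum of several runs gives a reasonably stable number to compare before and after a change.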

Davis goes on to state the following in his article:

The reason is that you can learn much more from closely studying actual failures than from theorizing about superior strategies. Problem solving is driven by both the empirical data we have at hand and our own knowledge and intuition. I’ve found that doubting your own wisdom sufficiently can force you to look at your empirical data more thoroughly.

As I mentioned in my last post, the main logic in our application involves loading a bunch of data from Oracle and then running various calculations on it.

At the moment we’re getting sub-second responses from most requests, and where that isn’t the case the bottleneck has been in one of our database queries rather than in the code.

Maybe eventually we’ll want to optimise the bits of code which have quadratic running time; the good thing is that we’ve abstracted them, so we could optimise them with minimal impact on the rest of the code base.

I’ve been working through Algo Class, and one of the things I’ve learnt is that quadratic running time only really causes problems when you’re iterating through very big collections – otherwise it’s barely noticeable.

In our case the collections will probably be small, so we’re unlikely to see that problem.

On the other hand, we are likely to be repeating the same calculations multiple times, so we will probably look to add some caching if that starts to become a problem.
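In Python that could be as simple as memoising the calculation – a sketch, where risk_weight is a made-up stand-in for an expensive calculation that gets repeated:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def risk_weight(rating):
    # stand-in for an expensive, frequently repeated calculation
    return {"AAA": 0.2, "BBB": 0.5}.get(rating, 1.0)

risk_weight("AAA")  # computed on the first call
risk_weight("AAA")  # served from the cache on subsequent calls
```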

Of course, when we do that we’ll need to run our performance tests before and after the change to ensure that we’ve actually addressed a real problem rather than theorised about it!

For now, though, we’re continuing to write our code in the way that is most readable and easy to understand, and waiting for our performance tests to force us to change that approach.


I’ve not worked closely on anything with proprietary-trading-style latency requirements, so if your experience is different please let me know in the comments.


Written by Mark Needham

March 29th, 2012 at 6:45 am

  • David Turner

    Hi Mark,

    I’ve been in the same boat re. performance tuning. One thing that has come up repeatedly for me is that the easy-to-read quadratic-time (or exponential-time in my case) algorithm is worth keeping around as a reference implementation so you can test your optimised algorithms against it for correctness.
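    For example (in Python, with the built-in sort standing in for the optimised implementation under test):

```python
import random

def sorted_reference(xs):
    """Quadratic but obviously-correct reference implementation."""
    out = list(xs)
    for i in range(len(out)):
        for j in range(i + 1, len(out)):
            if out[j] < out[i]:
                out[i], out[j] = out[j], out[i]
    return out

def sorted_optimised(xs):
    # stand-in for the optimised algorithm you actually want to ship
    return sorted(xs)

# cross-check the two implementations on random inputs
for _ in range(100):
    data = [random.randint(0, 50) for _ in range(random.randint(0, 20))]
    assert sorted_optimised(data) == sorted_reference(data)
```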

    A neat trick you can do with that is to make a counterexample generator, which takes in a dataset that exposes a discrepancy between the implementations and automatically cuts bits out of it until no more can be removed without failing to demonstrate the bug, at which point you’ve got a tiny failing test case whose cause is very easy to identify. By repeatedly doing this and fixing the discrepancies one at a time, we have collected a very comprehensive set of test cases for our algorithm – much more comprehensive than anything we would have written by hand – and moreover the test cases are all pretty orthogonal.
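    A minimal sketch of that kind of greedy shrinker in Python – in practice exposes_bug would compare the reference and optimised implementations rather than being a toy predicate:

```python
def shrink(dataset, exposes_bug):
    """Drop one element at a time while the smaller dataset still exposes the bug."""
    current = list(dataset)
    changed = True
    while changed:
        changed = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if exposes_bug(candidate):
                current = candidate
                changed = True
                break
    return current

# toy example: pretend the implementations disagree whenever a 7 is present
minimal = shrink([3, 1, 7, 9, 7, 2], lambda xs: 7 in xs)  # -> [7]
```

    The result is minimal in the sense that removing any single remaining element stops the bug from showing, which is usually enough to make the cause obvious.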

    Another point that is worth remembering is that sometimes an asymptotically worse algorithm is better if you’re nowhere near the asymptote – if you’re sorting a billion 4-element lists, you may well want bubble sort (quadratic) rather than a more exotic O(n log n) sort with a costlier setup for each application.

  • David – yeh that’s actually what some of the guys on the algo-class forums have been suggesting for those problems – seems like a good idea!

    I don’t know enough about algorithm run times to understand exactly what you meant in the last paragraph. Could you elaborate a bit?

  • David Turner

    Ah, yes, I see the ambiguity in what I wrote.

    The O(f(n)) notation is called asymptotic notation because it describes the behaviour of something as n approaches an asymptote (typically infinity in a computing context). If you are sorting a collection of a billion things, you’re ‘close to the asymptote’ in the sense that n is large, so an O(n log n) algorithm is probably faster than bubblesort, an O(n^2) algorithm. However, if n == 4 then an O(n log n) algorithm may be slower than an O(n^2) algorithm, as the O(f(n)) notation doesn’t give any useful information about behaviour for small n.

    Therefore if you are in a situation where you must sort a small list, but you must do so again and again, picking a ‘fast’ sorting algorithm is not as simple as comparing their asymptotic behaviour.

    A real-world example of this is a double-entry accounting system: transactions are made of credits and debits which add up to zero within each transaction. Nearly every transaction has one credit and one equal-and-opposite debit, but a small proportion of transactions have more than two entries. There are certain calculations that need to iterate over the transactions (in any order) and, for each transaction, iterate over its entries in a particular order. With billions of transactions, going for a simpler and asymptotically worse algorithm like bubblesort can be significantly faster than using a more complex O(n log n) algorithm.
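    To make that concrete, here is an insertion sort (quadratic, like bubblesort) next to a timing harness for the many-tiny-lists case. One caveat: in CPython the built-in sorted is implemented in C, so a pure-Python implementation carries an extra interpreter cost – the sketch illustrates the shape of the measurement rather than predicting a winner:

```python
import timeit

def insertion_sort(xs):
    """O(n^2) comparisons, but trivial setup cost per call - at most 6 comparisons for n == 4."""
    out = list(xs)
    for i in range(1, len(out)):
        value, j = out[i], i - 1
        while j >= 0 and out[j] > value:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = value
    return out

small = [3, 1, 4, 2]
t_insertion = timeit.timeit(lambda: insertion_sort(small), number=100_000)
t_builtin = timeit.timeit(lambda: sorted(small), number=100_000)
```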