Mark Needham

Thoughts on Software Development

Archive for the ‘Software Development’ Category

Hadoop: HDFS – java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.&lt;init&gt;(Ljava/util/zip/Checksum;II)V

without comments

I wanted to write a little program to check that one machine could communicate with an HDFS server running on another, and adapted some code from the Hadoop wiki as follows:

package org.playground;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

public class HadoopDFSFileReadWrite {

    static void printAndExit(String str) {
        System.err.println( str );
        System.exit(1);
    }

    public static void main (String[] argv) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/Users/markneedham/Downloads/core-site.xml"));
        FileSystem fs = FileSystem.get(conf);

        Path inFile = new Path("hdfs://");
        Path outFile = new Path("hdfs://" + System.currentTimeMillis());

        // Check if input/output are valid
        if (!fs.exists(inFile))
            printAndExit("Input file not found");
        if (!fs.isFile(inFile))
            printAndExit("Input should be a file");
        if (fs.exists(outFile))
            printAndExit("Output already exists");

        // Read from the input file and write to the new file
        byte[] buffer = new byte[256];
        try ( FSDataInputStream in = fs.open( inFile );
              FSDataOutputStream out = fs.create( outFile ) ) {
            int bytesRead;
            while ( (bytesRead = in.read( buffer )) > 0 ) {
                out.write( buffer, 0, bytesRead );
            }
        } catch ( IOException e ) {
            System.out.println( "Error while copying file" );
        }
    }
}

I initially thought I only had the following in my POM file:
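Presumably something along these lines – a single Hadoop client dependency (this is a reconstruction; the 2.7.1 version number is an assumption):

```xml
<!-- hypothetical reconstruction: the version number is a guess -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.1</version>
</dependency>
```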


But when I ran the program I got the following exception:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.<init>(Ljava/util/zip/Checksum;II)V
	at org.apache.hadoop.hdfs.DFSOutputStream.<init>(
	at org.apache.hadoop.hdfs.DFSOutputStream.<init>(
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(
	at org.apache.hadoop.hdfs.DFSClient.create(
	at org.apache.hadoop.hdfs.DFSClient.create(
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.playground.HadoopDFSFileReadWrite.main(
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at com.intellij.rt.execution.application.AppMain.main(

From following the stack trace I realised I’d made a mistake and had accidentally pulled in a dependency on hadoop-hdfs 2.4.1. If we don’t have the hadoop-hdfs dependency we’d actually see this error instead:

Exception in thread "main" No FileSystem for scheme: hdfs
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
	at org.apache.hadoop.fs.FileSystem.createFileSystem(
	at org.apache.hadoop.fs.FileSystem.access$200(
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
	at org.apache.hadoop.fs.FileSystem$Cache.get(
	at org.apache.hadoop.fs.FileSystem.get(
	at org.apache.hadoop.fs.FileSystem.get(
	at org.playground.HadoopDFSFileReadWrite.main(
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at com.intellij.rt.execution.application.AppMain.main(

Now let’s add the correct version of the dependency and make sure it all works as expected:
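Presumably a hadoop-hdfs entry whose version matches the hadoop-common one – for example (again a reconstruction, versions assumed):

```xml
<!-- hypothetical reconstruction: the key point is that the
     hadoop-hdfs version must match the hadoop-common version -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.1</version>
</dependency>
```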


When we run that a new file is created in HDFS on the other machine with the current timestamp:

$ date +%s000
$ hdfs dfs -ls
-rw-r--r--   3 markneedham supergroup       9249 2015-11-01 00:13 output-1446337098257

Written by Mark Needham

October 31st, 2015 at 11:58 pm

Posted in Software Development


jq: error – Cannot iterate over null (null)

without comments

I’ve been playing around with the jq library again over the past couple of days to convert the JSON from the Stack Overflow API into CSV and found myself needing to deal with an optional field.

I’ve downloaded 100 or so questions and stored them in a JSON array like so:

$ head -n 100 so.json
[
    {
        "has_more": true,
        "items": [
            {
                "is_answered": false,
                "delete_vote_count": 0,
                "body_markdown": "...",
                "tags": [...],
                "question_id": 33023306,
                "title": "How to delete multiple nodes by specific ID using Cypher",
                "down_vote_count": 0,
                "view_count": 8,
                "answers": [

I wrote the following command to try and extract the answer metadata and the corresponding question_id:

$ jq -r \
 '.[] | .items[] |
 { question_id: .question_id, answer: .answers[] } |
 [.question_id, .answer.answer_id, .answer.title] |
 @csv' so.json
33023306,33024189,"How to delete multiple nodes by specific ID using Cypher"
33020796,33021958,"How do a general search across string properties in my nodes?"
33018818,33020068,"Neo4j match nodes related to all nodes in collection"
33018818,33024273,"Neo4j match nodes related to all nodes in collection"
jq: error (at so.json:134903): Cannot iterate over null (null)

Unfortunately this results in an error since some questions haven’t been answered yet and therefore don’t have the ‘answers’ property.

While reading the docs I came across the alternative operator ‘//’ which can be used to provide defaults – in this case I thought I could plug in an empty array of answers if a question hadn’t been answered yet:

$ jq -r \
 '.[] | .items[] |
 { question_id: .question_id, answer: (.answers[] // []) } |
 [.question_id, .answer.answer_id, .answer.title] |
 @csv' so.json
33023306,33024189,"How to delete multiple nodes by specific ID using Cypher"
33020796,33021958,"How do a general search across string properties in my nodes?"
33018818,33020068,"Neo4j match nodes related to all nodes in collection"
33018818,33024273,"Neo4j match nodes related to all nodes in collection"
jq: error (at so.json:134903): Cannot iterate over null (null)

Still the same error! Presumably that’s because ‘.answers[]’ raises the error while trying to iterate over null, before ‘//’ ever gets a chance to supply the default. Reading down the page I noticed the ? operator which provides syntactic sugar for handling/catching errors. I gave it a try:

$ jq -r  '.[] | .items[] |
 { question_id: .question_id, answer: .answers[]? } |
 [.question_id, .answer.answer_id, .answer.title] |
 @csv' so.json | head -n10
33023306,33024189,"How to delete multiple nodes by specific ID using Cypher"
33020796,33021958,"How do a general search across string properties in my nodes?"
33018818,33020068,"Neo4j match nodes related to all nodes in collection"
33018818,33024273,"Neo4j match nodes related to all nodes in collection"
33015714,33021482,"Upgrade of spring data neo4j 3.x to 4.x Relationship Operations"
33011477,33011721,"Why does Neo4j OGM delete method return void?"
33011102,33011565,"Neo4j and algorithms"
33011102,33013260,"Neo4j and algorithms"
33010859,33011505,"Importing data into an existing database in neo4j"
33009673,33010942,"How do I use Spring Data Neo4j to persist a Map (java.util.Map) object inside an NodeEntity?"

As far as I can tell we’re just skipping any records that don’t contain ‘answers’, which is exactly the behaviour I’m after – just what we need!
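The difference is easy to see on a tiny inline document (a made-up two-question array, not the Stack Overflow data): ‘.answers[]’ on a question without an ‘answers’ property raises an error before ‘//’ can step in, whereas ‘?’ swallows the error and simply emits nothing for that question:

```shell
echo '[{"q":1},{"q":2,"answers":[{"id":9}]}]' |
  jq -c '.[] | {q: .q, a: .answers[]?}'
# the unanswered question is skipped, leaving: {"q":2,"a":{"id":9}}
```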

Written by Mark Needham

October 9th, 2015 at 6:34 am

Posted in Software Development


Mac OS X: Installing the PROJ.4 – Cartographic Projections Library

without comments

I’ve been following Scott Barnham’s guide to transforming UK postcodes into (lat, long) coordinates and needed to install the PROJ.4 Cartographic Projections library which I initially struggled with.

The first step is to download a tar.gz version which is linked from the wiki page:

$ wget

Next we’ll unpack the file and then build the binaries:

$ tar -xvf proj-4.9.1.tar.gz
$ cd proj-4.9.1
$ ./configure --prefix ~/projects/land-registry/proj-4.9.1
$ make
$ make install

The files we need are in the bin directory…

$ ls -alh bin/
total 184
drwxr-xr-x   8 markneedham  staff   272B  5 Oct 23:07 .
drwxr-xr-x@ 41 markneedham  staff   1.4K  5 Oct 20:46 ..
-rwxr-xr-x   1 markneedham  staff    20K  5 Oct 23:07 cs2cs
-rwxr-xr-x   1 markneedham  staff    16K  5 Oct 23:07 geod
lrwxr-xr-x   1 markneedham  staff     4B  5 Oct 23:07 invgeod -> geod
lrwxr-xr-x   1 markneedham  staff     4B  5 Oct 23:07 invproj -> proj
-rwxr-xr-x   1 markneedham  staff    13K  5 Oct 23:07 nad2bin
-rwxr-xr-x   1 markneedham  staff    21K  5 Oct 23:07 proj

…now let’s give it a try. We need to feed in OSGB36 grid reference values and then we’ll get back WGS84 Lat/Lng values. We can grab some grid reference values from the Ordnance Survey website.

e.g. the Neo4j London office has the post code SE1 0NZ which translates to coordinates 531950,180195. Let’s try those out with PROJ.4:

$ ./proj-4.9.1/bin/cs2cs -f '%.7f' +proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +towgs84=446.448,-125.157,542.060,0.1502,0.2470,0.8421,-20.4894 +units=m +no_defs +to +proj=latlong +ellps=WGS84 +towgs84=0,0,0 +no_defs
531950 180195
-0.1002020	51.5052917 46.0810195

So that suggests a (lat, long) pairing of (51.5052917, -0.1002020). And if we plug that into Google maps…


…it’s pretty much spot on!

Written by Mark Needham

October 5th, 2015 at 10:41 pm

Record Linkage: Playing around with Duke

without comments

I’ve become quite interested in record linkage recently and came across the Duke project which provides some tools to help solve this problem. I thought I’d give it a try.

The typical problem when doing record linkage is that we have two records from different data sets which represent the same entity but don’t have a common key that we can use to merge them together. We therefore need to come up with a heuristic that will allow us to do so.

Duke has a few examples showing it in action and I decided to go with the linking countries one. Here we have countries from Dbpedia and the Mondial database and we want to link them together.

The first thing we need to do is build the project:

export JAVA_HOME=`/usr/libexec/java_home`
mvn clean package -DskipTests

At the time of writing this will put a zip file containing everything we need at duke-dist/target/. Let’s unpack that:

unzip duke-dist/target/

Next we need to download the data files and Duke configuration file:


Now we’re ready to give it a go:

java -cp "duke-dist-1.3-SNAPSHOT/lib/*" no.priv.garshol.duke.Duke --testfile=countries-test.txt --testdebug --showmatches countries.xml
ID: '7706', NAME: 'guatemala', AREA: '108890', CAPITAL: 'guatemala city',
MATCH 0.9825124555160142
ID: '10052', NAME: 'pitcairn islands', AREA: '47', CAPITAL: 'adamstown',
ID: '', NAME: 'pitcairn islands', AREA: '47', CAPITAL: 'adamstown',
Correct links found: 200 / 218 (91.7%)
Wrong links found: 0 / 24 (0.0%)
Unknown links found: 0
Percent of links correct 100.0%, wrong 0.0%, unknown 0.0%
Records with no link: 18
Precision 100.0%, recall 91.74311926605505%, f-number 0.9569377990430622

We can look in countries.xml to see how the similarity between records is being calculated:


So we’re working out the similarity of the capital city and country name by calculating their Levenshtein distance, i.e. the minimum number of single-character edits required to change one word into the other.
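As a rough sketch of what that metric does – this is the textbook dynamic-programming version, not Duke’s actual implementation:

```java
// Textbook Levenshtein distance: a sketch of the metric the
// comparator is based on, not Duke's actual code.
public class Levenshtein {
    static int distance(String a, String b) {
        // d[i][j] = edit distance between the first i chars of a
        // and the first j chars of b
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i; // all deletions
        for (int j = 0; j <= b.length(); j++) d[0][j] = j; // all insertions
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,    // delete
                                            d[i][j - 1] + 1),   // insert
                                   d[i - 1][j - 1] + cost);     // substitute
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("guatemala", "guatamala")); // 1: a near match
        System.out.println(distance("ivory coast", "cote d'ivoire")); // far apart
    }
}
```

A distance that is small relative to the string lengths gives a high similarity score, which is why typos and spelling variations link fine.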

This works very well if there is a typo or difference in spelling in one of the data sets. However, I was curious what would happen if the country had two completely different names, e.g. Cote d’Ivoire is sometimes known as Ivory Coast. Let’s try changing the country name in one of the files:

"19147","Cote dIvoire","Yamoussoukro","322460"
java -cp "duke-dist-1.3-SNAPSHOT/lib/*" no.priv.garshol.duke.Duke --testfile=countries-test.txt --testdebug --showmatches countries.xml
ID: '19147', NAME: 'ivory coast', AREA: '322460', CAPITAL: 'yamoussoukro',

I also tried it out with the BBC and ESPN match reports of the Man Utd vs Tottenham match – the BBC references players by surname, while ESPN has their full names.

When I compared the full name against surname using the Levenshtein comparator there were no matches as you’d expect. I had to split the ESPN names up into first name and surname to get the linking to work.

Equally, when I varied the team names to be ‘Man Utd’ rather than ‘Manchester United’ and ‘Tottenham’ rather than ‘Tottenham Hotspur’ that didn’t work either.

I think I probably need to write a domain specific comparator but I’m also curious whether I could come up with a bunch of training examples and then train a model to detect what makes two records similar. It’d be less deterministic but perhaps more robust.

Written by Mark Needham

August 8th, 2015 at 10:50 pm

The Willpower Instinct: Reducing time spent mindlessly scrolling for things to read

with one comment

I recently finished reading Kelly McGonigal’s excellent book ‘The Willpower Instinct‘, having previously watched her Google talk of the same title.

My main takeaway from the book is that there are things that we want to do (or not do) but doing them (or not as the case may be) isn’t necessarily instinctive and so we need to develop some strategies to help ourselves out.

In one of the early chapters she suggests picking a habit that you want to do less of and writing down on a piece of paper every time you want to do it and how you’re feeling at that point.

After writing it down you’re free to then follow through and do it but you don’t have to if you change your mind.

I was quite aware of the fact that I spend a lot of time idly scrolling from email to Twitter to Facebook to LinkedIn to news websites and back again so I thought it’d be interesting to track when/why I was doing this. The annoying thing about this habit is that it can easily eat up 20-30 minutes at a time without you even noticing.

I’ve been tracking myself for about three weeks and in the first few days I noticed that the first thing I did as soon as I woke up was grab my phone and get into the cycle.

It was quite frustrating to be lured in so early in the day but one of the suggestions in the book is that feeling guilty about something is actually detrimental to our progress. Instead we should note why it happened and then move on – the day isn’t a write off because of one event!

Kelly suggests that if we can work out the times when we’re most likely to fall into our habits then we can pre-plan a mitigation strategy.

From looking over my notes the following are the reasons why I want to start mindlessly scrolling:

  • I’m stuck on the problem I’m working on
  • I’m bored
  • I’m tired
  • I’m hungry
  • I’m getting distracted by notifications
  • I want to not think for a while

The notifications bullet is easy to address – I turn off notifications on my phone for 4 hours at a time so I don’t even know there’s anything to read.

I was intrigued to note that I got distracted when stuck on a problem – the main take away here is to check whether the urge to scroll mindlessly is being driven by having to think hard. If it is then I can choose to either get back to it or go for a short walk and then come back. But definitely don’t start scrolling!

I often find myself bored on my commute to work so I’ve addressed this by working out a book/paper I’m going to read the night before and then having that ready for the journey. Lunch time is prime time for mindless scrolling as well so I’ve filled that time with various computer science/data science videos.

Since I started tracking my scrolling I’ve found myself sleeping earlier so my assumption is that the extra hours awake were being spent mindlessly scrolling which led to being more tired so a win all around in that respect.

Something I’ve noticed is that I’m sometimes wasting time on other activities which are not ‘forbidden’ but are equally unconstructive e.g. chat applications / watching music videos.

The former are obviously useful for communicating with people so I’ve been trying to use them only when I actually want to chat to someone rather than mindlessly looking for messages to read.

I also find myself not wanting to write down the times I’ve mindlessly scrolled when I’m doing it a lot on a given day. Being aware of this is helpful as I just write it down anyway and get on with the day.

The summary of my experience so far is it seems beneficial – I don’t think I’ve lost anything by not checking those mediums so often and I’ve definitely read a lot more than I usually do and been more focused as well.

Now I need to go and try out some of the other exercises from the book – if you’ve read it / tried out any of the tips I’d love to hear what’s worked well for you.

Written by Mark Needham

June 12th, 2015 at 11:12 pm

Deliberate Practice: Building confidence vs practicing

with 4 comments

A few weeks ago I wrote about the learning to cycle dependency graph which described some of the skills required to become proficient at riding a bike.


While we’ve been practicing various skills/sub skills I’ve often found myself saying the following:

if it’s not hard you’re not practicing

me, April 2015

i.e. you should find the skill you’re currently practicing difficult otherwise you’re not stretching yourself and therefore aren’t getting better.

For example, in cycling you could be very comfortable riding with both hands on the handle bars and find using one hand a struggle. However, if you don’t practice that you won’t be able to indicate and turn corners.

This ties in with all my reading about deliberate practice which suggests that the type of exercises you do while deliberately practicing aren’t intended to be fun and are meant to expose your lack of knowledge.

In an ideal world we would spend all our time practicing these challenging skills but in reality there’s some part of us that wants to feel that we’re actually improving by spending some of the time doing things that we’re good at. Doing things you’re not good at is a bit of a slog as well so we might find that we have less motivation for this type of thing.

We therefore need to find a balance between doing challenging exercises and having fun building something or writing code that we already know how to do. I’ve found the best way to do this is to combine the two types of work into mini projects which contain some tasks that we’re already good at and some that require us to struggle.

For me this might involve cleaning up and importing a data set into Neo4j, which I’m comfortable with, and combining that with something else that I want to learn.

For example in the middle of last year I did some meetup analysis which involved creating a Neo4j graph of London’s NoSQL meetups and learning a bit about R, dplyr and linear regression along the way.

In January I built a How I met your mother graph and then spent a few weeks learning various algorithms for extracting topics from free text to give even more ways to explore the dataset.

Most recently I’ve been practicing exercises from Think Bayes and while it’s good practice I think I’d probably spend more time doing it if I linked it into a mini project with something I’m already comfortable with.

I’ll go off and have a think what that should be!

Written by Mark Needham

April 30th, 2015 at 7:48 am

Deliberate Practice: Watching yourself fail

with 5 comments


I’ve recently been reading the literature written by K. Anders Eriksson and co on Deliberate Practice and one of the suggestions for increasing our competence at a skill is to put ourselves in a situation where we can fail.

I’ve been reading Think Bayes – an introductory text on Bayesian statistics, something I know nothing about – and each chapter concludes with a set of exercises to practice, a potentially perfect exercise in failure!

I’ve been going through the exercises and capturing my screen while I do so, an idea I picked up from one of the papers:

our most important breakthrough was developing a relatively inexpensive and efficient way for students to record their exercises on video and to review and analyze their own performances against well-defined criteria

Ideally I’d get a coach to review the video but that seems too much of an ask of someone. Antonios has taken a look at some of my answers, however, and made suggestions for how he’d solve them which has been really helpful.

After each exercise I watch the video and look for areas where I get stuck or don’t make progress so that I can go and practice more in that area. I also try to find inefficiencies in how I solve a problem as well as the types of approaches I’m taking.

These are some of the observations from watching myself back over the last week or so:

  • I was most successful when I had some idea of what I was going to try first. Most of the time the first code I wrote didn’t end up being correct but it moved me closer to the answer or ruled out an approach.

    It’s much easier to see the error in approach if there is an approach! On one occasion where I hadn’t planned out an approach I ended up staring at the question for 10 minutes and didn’t make any progress at all.

  • I could either solve the problems within 20 minutes or I wasn’t going to solve them and needed to chunk down to a simpler problem and then try the original exercise again.

    e.g. one exercise was to calculate the 5th percentile of a posterior distribution which I flailed around with for 15 minutes before giving up. Watching back on the video it was obvious that I hadn’t completely understood what a probability mass function was. I read the Wikipedia entry and retried the exercise and this time got the answer.

  • Knowing that you’re going to watch the video back stops you from getting distracted by email, twitter, Facebook etc.
  • It’s a painful experience watching yourself struggle – you can see exactly which functions you don’t know or things you need to look up on Google.
  • I deliberately don’t copy/paste any code while doing these exercises. I want to see how well I can do the exercises from scratch so that would defeat the point.

One of the suggestions that Eriksson makes for practice sessions is to focus on ‘technique’ rather than only on outcome, but I haven’t yet worked out exactly what that would involve in a programming context.

If you have any ideas or thoughts on this approach do let me know in the comments.

Written by Mark Needham

April 25th, 2015 at 10:26 pm

InetAddressImpl#lookupAllHostAddr slow/hangs

without comments

Since I upgraded to Yosemite I’ve noticed that attempts to resolve localhost on my home network have been taking ages (sometimes over a minute) so I thought I’d try and work out why.

This is what my initial /etc/hosts file looked like based on the assumption that my machine’s hostname was teetotal:

$ cat /etc/hosts
# Host Database
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##	localhost
255.255.255.255	broadcasthost
::1             localhost
fe80::1%lo0	localhost	wuqour.local       teetotal

I set up a little test which replicated the problem:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalhostResolution {
    public static void main( String[] args ) throws UnknownHostException {
        long start = System.currentTimeMillis();
        InetAddress localHost = InetAddress.getLocalHost();
        System.out.println(System.currentTimeMillis() - start);
    }
}

which has the following output:

Exception in thread "main" teetotal-2: teetotal-2: nodename nor servname provided, or not known
	at LocalhostResolution.main(
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at com.intellij.rt.execution.application.AppMain.main(
Caused by: teetotal-2: nodename nor servname provided, or not known
	at Method)
	... 6 more

Somehow my hostname has changed to teetotal-2 so I added the following entry to /etc/hosts:	teetotal-2

Now if we run the program we see this output instead:


It’s still taking 5 seconds to resolve, which is much longer than I’d expect. After breakpointing through the code it seems like it’s trying to do an IPv6 resolution rather than IPv4, so I added an /etc/hosts entry for that too:

::1             teetotal-2

Now resolution is much quicker:


Happy days!

Written by Mark Needham

March 29th, 2015 at 12:31 am

One month of mini habits

without comments

I recently read a book in the ‘getting things done’ genre written by Stephen Guise titled ‘Mini Habits‘ and although I generally don’t like those types of books I quite enjoyed this one and decided to give his system a try.

The underlying idea is that there are two parts of actually doing stuff:

  • Planning what to do
  • Doing it

We often get stuck in between the first and second steps because what we’ve planned to do is too big and overwhelming.

Guise’s approach for overcoming this inaction is to shrink the amount of work to do until it’s small enough that we don’t feel any resistance to getting started.

It should be something that you can do in 1 or 2 minutes – stupidly small – something that you can do even on your worst day when you have no time/energy.

I’m extremely good at procrastinating so I thought I’d give it a try and see if it helped. Guise suggests starting with one or two habits but I had four things that I want to do so I’ve ignored that advice for now.

My attempted habits are the following:

  • Read one page of a data science related paper/article a day
  • Read one page of a computer science related paper/article a day
  • Write one line of data science related code a day
  • Write 50 words on blog a day

Sooooo….has it helped?

In terms of doing each of the habits I’ve been successful so far – today is the 35th day in a row that I’ve managed to do each of them. Having said that, there have been some times when I’ve got back home at 11pm and realised that I haven’t done 2 of the habits and need to quickly do the minimum to ‘tick them off’.

The habit I’ve enjoyed doing the most is writing one line of data science related code a day.

My initial intention was that this would only involve writing machine learning code but at the moment I’ve made it a bit more generic so it can include things like the Twitter Graph or other bits and pieces that I want to get started on.

The main problem I’ve had with making progress on mini projects like that is that I imagine its end state and it feels too daunting to start on. Committing to just one line of code a day has been liberating in some way.

One tweak I have made to all the habits is to have some rough goal of where all the daily habits are leading as I noticed that the stuff I was doing each day was becoming very random. Michael pointed me at Amy Hoy’s ‘Guide to doing it backwards‘ which describes a neat technique for working back from a goal and determining the small steps required to achieve it.

Writing at least 50 words a day has been beneficial for getting blog posts written. Before the last month I’ve found myself writing most of my posts at the end of month but I have a more regular cadence now which feels better.

Computer science wise I’ve been picking up papers which have some sort of link to databases to try and learn more of the low level detail there. e.g. I’ve read the LRU-K cache paper which Neo4j 2.2’s page cache is based on and have been flicking through the original CRDTs paper over the last few days.

I also recently came across the Papers We Love repository so I’ll probably work through some of the distributed systems papers they’ve collated next.

Other observations

I’ve found that if I do stuff early in the morning it feels better as you know it’s out of the way and doesn’t linger over you for the rest of the day.

I sometimes find myself wanting to just tick off the habits for the day even when it might be interesting to spend more time on one of the habits. I’m not sure what to make of this really – perhaps I should reduce the number of habits to the ones I’m really interested in?

With the writing it does sometimes feel like I’m just writing for the sake of it but it is a good habit to get into as it forces me to explain what I’m working on and get ideas from other people so I’m going to keep doing it.

I’ve enjoyed my experience with ‘mini habits’ so far although I think I’d be better off focusing on fewer habits so that there’s still enough time in the day to read/learn random spontaneous stuff that doesn’t fit into these habits.

Written by Mark Needham

March 17th, 2015 at 1:32 am

Docker/Neo4j: Port forwarding on Mac OS X not working

without comments

Prompted by Ognjen Bubalo’s excellent blog post I thought it was about time I tried running Neo4j in a Docker container on my MacBook Pro to make it easier to play around with different data sets.

I got the container up and running by following Ognjen’s instructions and had the following ports forwarded to my host machine:

$ docker ps
CONTAINER ID        IMAGE                 COMMAND                CREATED             STATUS              PORTS                                              NAMES
c62f8601e557        tpires/neo4j:latest   "/bin/bash -c /launc   About an hour ago   Up About an hour>1337/tcp,>7474/tcp   neo4j

This should allow me to access Neo4j on port 49154 but when I tried to access that host:port pair I got a connection refused message:

$ curl -v http://localhost:49154
* Adding handle: conn: 0x7ff369803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7ff369803a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 49154 (#0)
*   Trying ::1...
*   Trying
*   Trying fe80::1...
* Failed connect to localhost:49154; Connection refused
* Closing connection 0
curl: (7) Failed connect to localhost:49154; Connection refused

My first thought was that maybe Neo4j hadn’t started up correctly inside the container so I checked the logs:

$ docker logs --tail=10 c62f8601e557
10:59:12.994 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@2edfbe28{/webadmin,jar:file:/usr/share/neo4j/system/lib/neo4j-server-2.1.5-static-web.jar!/webadmin-html,AVAILABLE}
10:59:13.449 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@192efb4e{/db/manage,null,AVAILABLE}
10:59:13.699 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@7e94c035{/db/data,null,AVAILABLE}
10:59:13.714 [main] INFO  o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet
10:59:13.715 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@3e84ae71{/browser,jar:file:/usr/share/neo4j/system/lib/neo4j-browser-2.1.5.jar!/browser,AVAILABLE}
10:59:13.807 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@4b6690b1{/,null,AVAILABLE}
10:59:13.819 [main] INFO  o.e.jetty.server.ServerConnector - Started ServerConnector@495350f0{HTTP/1.1}{c62f8601e557:7474}
10:59:13.900 [main] INFO  o.e.jetty.server.ServerConnector - Started ServerConnector@23ad0c5a{SSL-HTTP/1.1}{c62f8601e557:7473}
2014-11-27 10:59:13.901+0000 INFO  [API] Server started on: http://c62f8601e557:7474/
2014-11-27 10:59:13.902+0000 INFO  [API] Remote interface ready and available at [http://c62f8601e557:7474/]

Nope! It’s up and running perfectly fine, which suggested the problem was with port forwarding.

I eventually found my way to Chris Jones’ ‘How to use Docker on OS X: The Missing Guide‘ which explained the problem:

The Problem: Docker forwards ports from the container to the host, which is boot2docker, not OS X.

The Solution: Use the VM’s IP address.

So to access Neo4j on my machine I need to use the VM’s IP address rather than localhost. We can get the VM’s IP address like so:

$ boot2docker ip
The VM's Host only interface IP address is:

Let’s strip out that surrounding text though:

$ boot2docker ip 2> /dev/null

Now if we cURL using that IP instead:

$ curl -v
* About to connect() to port 49154 (#0)
*   Trying
* Adding handle: conn: 0x7fd794003a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fd794003a00) send_pipe: 1, recv_pipe: 0
* Connected to ( port 49154 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.30.0
> Host:
> Accept: */*
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Access-Control-Allow-Origin: *
< Content-Length: 112
* Server Jetty(9.0.5.v20130815) is not blacklisted
< Server: Jetty(9.0.5.v20130815)
  "management" : "",
  "data" : ""
* Connection #0 to host left intact

Happy days!

Chris has solutions to lots of other common problems people come across when using Docker with Mac OS X so it’s worth having a flick through his post.

Written by Mark Needham

November 27th, 2014 at 12:28 pm

Posted in Software Development
