Mark Needham

Thoughts on Software Development

Archive for the ‘R’ Category

R: ggplot – Plotting back to back charts using facet_wrap


Earlier in the week I showed a way to plot back to back charts using R’s ggplot library but, looking back on the code, gluing the two charts together with gridExtra felt a bit hacky.

I wanted to find a better way.

To recap, I’d come up with the following charts showing the RSVPs to Neo4j London meetup events:

[Image: back to back ‘yes’/‘no’ RSVP histograms from the previous post]

The first thing we need to do to simplify chart generation is to return ‘yes’ and ‘no’ responses in the same cypher query, like so:

library(Rneo4j)
# connect to the local Neo4j server
graph = startGraph("http://localhost:7474/db/data/")
 
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
query = "MATCH (e:Event)<-[:TO]-(response {response: 'yes'})
         WITH e, COLLECT(response) AS yeses
         MATCH (e)<-[:TO]-(response {response: 'no'})<-[:NEXT]-()
         WITH e, COLLECT(response) + yeses AS responses
         UNWIND responses AS response
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime, response.response AS response"
allRSVPs = cypher(graph, query)
allRSVPs$time = timestampToDate(allRSVPs$time)
allRSVPs$eventTime = timestampToDate(allRSVPs$eventTime)
# days between the response and the event itself
allRSVPs$difference = as.numeric(allRSVPs$eventTime - allRSVPs$time, units="days")

The query is a bit more involved than before because we only want to capture the ‘no’ responses from people who initially said ‘yes’, which is why we check for a ‘NEXT’ relationship when looking for the negative responses.

Let’s inspect allRSVPs:

> allRSVPs[1:10,]
                  time           eventTime response difference
1  2014-06-13 21:49:20 2014-07-22 18:30:00       no   38.86157
2  2014-07-02 22:24:06 2014-07-22 18:30:00      yes   19.83743
3  2014-05-23 23:46:02 2014-07-22 18:30:00      yes   59.78053
4  2014-06-23 21:07:11 2014-07-22 18:30:00      yes   28.89084
5  2014-06-06 15:09:29 2014-07-22 18:30:00      yes   46.13925
6  2014-05-31 13:03:09 2014-07-22 18:30:00      yes   52.22698
7  2014-05-23 23:46:02 2014-07-22 18:30:00      yes   59.78053
8  2014-07-02 12:28:22 2014-07-22 18:30:00      yes   20.25113
9  2014-06-30 23:44:39 2014-07-22 18:30:00      yes   21.78149
10 2014-06-06 15:35:53 2014-07-22 18:30:00      yes   46.12091

We’ve returned the actual response with each row so that we can distinguish between responses. It will also come in useful for pivoting our single chart later on.

The next step is to get ggplot to generate our side by side charts. I started off by plotting both types of response on the same chart:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  geom_bar(binwidth=1)

[Image: single chart with ‘yes’ and ‘no’ responses stacked]

This one stacks the ‘yes’ and ‘no’ responses on top of each other, which isn’t what we want as it makes the two difficult to compare.
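 
We could interleave the bars instead by dodging them – a quick sketch (not from the original post) using ggplot’s position parameter – but the thin side by side bars are still fiddly to compare:
 
ggplot(allRSVPs, aes(x = difference, fill = response)) + 
  geom_histogram(binwidth = 1, position = "dodge")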

What we need is the facet_wrap function, which allows us to generate multiple charts grouped by a key. We’ll group by ‘response’:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  geom_bar(binwidth=1) + 
  facet_wrap(~ response, nrow=2, ncol=1)

[Image: separate facetted charts, one per response type]

The only thing we’re missing now is the red and green colours which is where the scale_fill_manual function comes in handy:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response, nrow=2, ncol=1)

[Image: facetted charts with red/green fill colours]

If we want to show the ‘yes’ chart on top we can pass an extra parameter to facet_wrap – as.table = FALSE lays the panels out from the bottom-left like a plot rather than the top-left like a table, which puts the last level (‘yes’) on top:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response, nrow=2, ncol=1, as.table = FALSE)

[Image: facetted charts with the ‘yes’ chart on top]

We could go one step further and group by response and day. First let’s add a ‘day’ column to our data frame:

allRSVPs$dayOfWeek = format(allRSVPs$eventTime, "%A")

And now let’s plot the charts using both columns:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response + dayOfWeek, as.table = FALSE)

[Image: charts facetted by response and day of week]

The distribution of dropouts looks fairly similar for all the days – Thursday is just an order of magnitude below the other days because we haven’t run many events on Thursdays so far.

At a glance it doesn’t appear that as many people sign up for Thursday events on the day itself or the day before.

One potential hypothesis is that people have things planned for Thursday whereas they decide more last minute what to do on the other days.

We’ll have to run some more events on Thursdays to see whether that trend holds out.

The code is on github if you want to play with it.

Written by Mark Needham

July 25th, 2014 at 9:57 pm

Posted in R


R: ggplot – Plotting back to back bar charts


I’ve been playing around with R’s ggplot library to explore the Neo4j London meetup data and the next thing I wanted to do was plot back to back bar charts showing ‘yes’ and ‘no’ RSVPs.

I’d already done the ‘yes’ bar chart using the following code:

query = "MATCH (e:Event)<-[:TO]-(response {response: 'yes'})
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime"
allYesRSVPs = cypher(graph, query)
allYesRSVPs$time = timestampToDate(allYesRSVPs$time)
allYesRSVPs$eventTime = timestampToDate(allYesRSVPs$eventTime)
allYesRSVPs$difference = as.numeric(allYesRSVPs$eventTime - allYesRSVPs$time, units="days")
 
ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")
[Image: green histogram of ‘yes’ RSVPs by days before the event]

The next step was to create a similar thing for people who’d RSVP’d ‘no’ having originally RSVP’d ‘yes’ i.e. people who dropped out:

query = "MATCH (e:Event)<-[:TO]-(response {response: 'no'})<-[:NEXT]-()
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime"
allNoRSVPs = cypher(graph, query)
allNoRSVPs$time = timestampToDate(allNoRSVPs$time)
allNoRSVPs$eventTime = timestampToDate(allNoRSVPs$eventTime)
allNoRSVPs$difference = as.numeric(allNoRSVPs$eventTime - allNoRSVPs$time, units="days")
 
ggplot(allNoRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="red")
[Image: red histogram of ‘no’ RSVPs by days before the event]

As expected, if people are going to drop out they do so a day or two before the event happens. By including the need for a ‘NEXT’ relationship we only capture the people who replied ‘yes’ and later changed it to ‘no’; we don’t capture the people who said ‘no’ straight away.

I thought it’d be cool to have the two charts back to back on the same scale so I could compare them against each other, which led to my first attempt:

library(gridExtra)
 
yes = ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")
no = ggplot(allNoRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="red") + scale_y_reverse()
# a single column forces the two charts to stack vertically
grid.arrange(yes, no, ncol=1)

scale_y_reverse() flips the y axis so the ‘no’ chart appears upside down. The last line plots the two charts in a grid with one column, which stacks them vertically.

[Image: ‘yes’ and ‘no’ histograms plotted back to back]

When we compare them next to each other we can see that the ‘yes’ replies are much more spread out, whereas if people are going to drop out it nearly always happens in the week or so before the event. This is what we thought was happening but it’s cool to have it confirmed by the data.

One annoying thing about that visualisation is that the two charts aren’t on the same scale. The ‘no’ chart only goes up to 100 days whereas the ‘yes’ one goes up to 120 days. In addition, the top end of the ‘yes’ chart is around 200 whereas the ‘no’ is around 400.

Luckily we can solve that problem by fixing the axes for both plots:

yes = ggplot(allYesRSVPs, aes(x=difference)) + 
  geom_histogram(binwidth=1, fill="green") +
  xlim(0,120) + 
  ylim(0, 400)
 
no = ggplot(allNoRSVPs, aes(x=difference)) +
  geom_histogram(binwidth=1, fill="red") +
  xlim(0,120) + 
  # ylim() and scale_y_reverse() each add a y scale and the second would
  # replace the first, so the limits go on the reversed scale directly
  scale_y_reverse(limits=c(400, 0))

Now if we re-render it looks much better:

[Image: back to back histograms with matching x and y axes]

With comparable axes we can see that many more people drop out of an event (500) as it approaches than sign up for it (300). This is quite helpful for working out how many people are likely to show up.

We’ve found that the number of ‘yes’ RSVPs to an event drops by 15-20% overall from 2 days before the event up until the evening it takes place, and the data seems to confirm this.

The only annoying thing about this approach is that the axes are repeated because they belong to two completely separate charts.

I expect it would look better if I could work out how to combine the two data frames and then pull out back to back charts based on a variable in the combined data frame.
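 
As a first stab at the combining step, something like this might work – a sketch (the back to back charting is still the open question):
 
# label each data frame with its response type and stack them, giving one
# frame with a 'response' column that a chart could pivot on
allYesRSVPs$response = "yes"
allNoRSVPs$response = "no"
allRSVPs = rbind(allYesRSVPs, allNoRSVPs)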

I’m still working on that so suggestions are most welcome. The code is on github if you want to play with it.

Written by Mark Needham

July 20th, 2014 at 4:50 pm

Posted in R


R: ggplot – Don’t know how to automatically pick scale for object of type difftime – Discrete value supplied to continuous scale


While reading ‘Why The R Programming Language Is Good For Business’ I came across Udacity’s ‘Data Analysis with R’ courses – part of which focuses on exploring data sets using visualisations, something I haven’t done much of yet.

I thought it’d be interesting to create some visualisations around the times that people RSVP ‘yes’ to the various Neo4j events that we run in London.

I started off with the following query which returns the date time that people replied ‘Yes’ to an event and the date time of the event:

library(Rneo4j)
# helper from an earlier post to convert epoch milliseconds into an R date time
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")
 
query = "MATCH (e:Event)<-[:TO]-(response {response: 'yes'})
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime"
allYesRSVPs = cypher(graph, query)
allYesRSVPs$time = timestampToDate(allYesRSVPs$time)
allYesRSVPs$eventTime = timestampToDate(allYesRSVPs$eventTime)
 
> allYesRSVPs[1:10,]
                  time           eventTime
1  2011-06-05 12:12:27 2011-06-29 18:30:00
2  2011-06-05 14:49:04 2011-06-29 18:30:00
3  2011-06-10 11:22:47 2011-06-29 18:30:00
4  2011-06-07 15:27:07 2011-06-29 18:30:00
5  2011-06-06 20:21:45 2011-06-29 18:30:00
6  2011-07-04 19:49:04 2011-07-27 19:00:00
7  2011-07-05 16:40:10 2011-07-27 19:00:00
8  2011-08-19 07:41:10 2011-08-31 18:30:00
9  2011-08-24 12:47:40 2011-08-31 18:30:00
10 2011-08-18 09:56:53 2011-08-31 18:30:00

I wanted to create a bar chart showing the amount of time in advance of a meetup that people RSVP’d ‘yes’ so I added the following column to my data frame:

allYesRSVPs$difference = allYesRSVPs$eventTime - allYesRSVPs$time
 
> allYesRSVPs[1:10,]
                  time           eventTime    difference
1  2011-06-05 12:12:27 2011-06-29 18:30:00 34937.55 mins
2  2011-06-05 14:49:04 2011-06-29 18:30:00 34780.93 mins
3  2011-06-10 11:22:47 2011-06-29 18:30:00 27787.22 mins
4  2011-06-07 15:27:07 2011-06-29 18:30:00 31862.88 mins
5  2011-06-06 20:21:45 2011-06-29 18:30:00 33008.25 mins
6  2011-07-04 19:49:04 2011-07-27 19:00:00 33070.93 mins
7  2011-07-05 16:40:10 2011-07-27 19:00:00 31819.83 mins
8  2011-08-19 07:41:10 2011-08-31 18:30:00 17928.83 mins
9  2011-08-24 12:47:40 2011-08-31 18:30:00 10422.33 mins
10 2011-08-18 09:56:53 2011-08-31 18:30:00 19233.12 mins

I then tried to use ggplot to create a bar chart of that data:

> ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")

Unfortunately that resulted in this error:

Don't know how to automatically pick scale for object of type difftime. Defaulting to continuous
Error: Discrete value supplied to continuous scale

I couldn’t find anyone who had come across this problem before in my search, but I did find the as.numeric function, which seemed like it would put the difference into a format ggplot could handle:

allYesRSVPs$difference = as.numeric(allYesRSVPs$eventTime - allYesRSVPs$time, units="days")
> ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")

That resulted in the following chart:

[Image: green histogram of days between RSVP and event]

We can see quite a heavy concentration of people RSVPing ‘yes’ in the few days before the event, with the rest scattered across the preceding 30 days.
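 
As an aside, as.numeric on a difftime accepts other units too, so the same trick works at different granularities – e.g. to get a hypothetical hours-based column:
 
# units can be "secs", "mins", "hours", "days" or "weeks"
allYesRSVPs$differenceInHours = as.numeric(allYesRSVPs$eventTime - allYesRSVPs$time, units="hours")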

We usually announce events 3-4 weeks in advance so I don’t know that it tells us anything interesting other than that people seem to sign up for events when an email is sent out about them.

The date the meetup was announced (by email) isn’t currently exposed by the API but hopefully one day it will be.

The code is on github if you want to have a play – any suggestions welcome.

Written by Mark Needham

July 20th, 2014 at 12:21 am

Posted in R


R: Apply a custom function across multiple lists


In my continued playing around with R I wanted to map a custom function over two lists, comparing each item with its corresponding item.

If we just want to use a built in function such as subtraction between two lists it’s quite easy to do:

> c(10,9,8,7,6,5,4,3,2,1) - c(5,4,3,4,3,2,2,1,2,1)
 [1] 5 5 5 3 3 3 2 2 0 0

I wanted to do a slight variation on that where instead of returning the difference I wanted to return a text value representing the difference e.g. ‘5 or more’, ‘3 to 5’ etc.

I spent a long time trying to figure out how to do that before finding an excellent blog post which describes all the different ‘apply’ functions available in R.

As far as I understand ‘apply’ is the equivalent of ‘map’ in Clojure or other functional languages.
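 
To make the comparison concrete, here’s a one-list map using sapply (an aside, not from the original post):
 
> sapply(c(1, 2, 3), function(x) x * 2)
[1] 2 4 6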

In this case we want the mapply variant which we can use like so:

> mapply(function(x, y) { 
    if((x-y) >= 5) {
        "5 or more"
    } else if((x-y) >= 3) {
        "3 to 5"
    } else {
        "less than 5"
    }    
  }, c(10,9,8,7,6,5,4,3,2,1),c(5,4,3,4,3,2,2,1,2,1))
 [1] "5 or more"   "5 or more"   "5 or more"   "3 to 5"      "3 to 5"      "3 to 5"      "less than 5"
 [8] "less than 5" "less than 5" "less than 5"

We could then pull that out into a function if we wanted:

summarisedDifference <- function(one, two) {
  mapply(function(x, y) { 
    if((x-y) >= 5) {
      "5 or more"
    } else if((x-y) >= 3) {
      "3 to 5"
    } else {
      "less than 5"
    }    
  }, one, two)
}

which we could call like so:

> summarisedDifference(c(10,9,8,7,6,5,4,3,2,1),c(5,4,3,4,3,2,2,1,2,1))
 [1] "5 or more"   "5 or more"   "5 or more"   "3 to 5"      "3 to 5"      "3 to 5"      "less than 5"
 [8] "less than 5" "less than 5" "less than 5"

I also wanted to be able to compare a list of items to a single item, which was much easier than I expected – mapply recycles the shorter argument to match the length of the longer one:

> summarisedDifference(c(10,9,8,7,6,5,4,3,2,1), 1)
 [1] "5 or more"   "5 or more"   "5 or more"   "5 or more"   "5 or more"   "3 to 5"      "3 to 5"     
 [8] "less than 5" "less than 5" "less than 5"

If we wanted to get a summary of the differences between the lists we could plug them into ddply like so:

> library(plyr)
> df = data.frame(x=c(10,9,8,7,6,5,4,3,2,1), y=c(5,4,3,4,3,2,2,1,2,1))
> ddply(df, .(difference=summarisedDifference(x,y)), summarise, count=length(x))
   difference count
1      3 to 5     3
2   5 or more     3
3 less than 5     4

Written by Mark Needham

July 16th, 2014 at 5:04 am

Posted in R


R/plyr: ddply – Error in vector(type, length) : vector: cannot make a vector of mode ‘closure’.


In my continued playing around with plyr’s ddply function I was trying to group a data frame by one of its columns and return a count of the rows with specific values when I ran into a strange (to me) error message.

I had a data frame:

n = c(2, 3, 5) 
s = c("aa", "bb", "cc") 
b = c(TRUE, FALSE, TRUE) 
df = data.frame(n, s, b)

And wanted to group and count on column ‘b’ so I’d get back a count of 2 for TRUE and 1 for FALSE. I wrote this code:

ddply(df, "b", function(x) { 
  countr <- length(x$n) 
  data.frame(count = count) 
})

which when evaluated gave the following error:

Error in vector(type, length) : 
  vector: cannot make a vector of mode 'closure'.

It took me quite a while to realise that I’d made a typo, assigning the count to a variable called ‘countr’ instead of ‘count’.

As a result of that typo I think R went looking for a variable called ‘count’ elsewhere in the lexical scope and found a function instead (plyr exports one called count) – a closure, which data.frame couldn’t turn into a column, hence the error message. If I’d defined the variable ‘count’ outside the call to the ddply function then my typo wouldn’t have resulted in an error but rather an unexpected result e.g.

> count = 10
> ddply(df, "b", function(x) { 
+   countr <- length(x$n) 
+   data.frame(count = count) 
+ })
      b count
1 FALSE     4
2  TRUE     4

Once I spotted the typo and fixed it things worked as expected:

> ddply(df, "b", function(x) { 
+   count <- length(x$n) 
+   data.frame(count = count) 
+ })
      b count
1 FALSE     1
2  TRUE     2

Written by Mark Needham

July 7th, 2014 at 6:07 am

Posted in R


R/plyr: ddply – Renaming the grouping/generated column when grouping by date


On Nicole’s recommendation I’ve been having a look at R’s plyr package to see if I could simplify my meetup analysis, and I started by translating my code that groups meetup join dates by day of the week.

To refresh, the code without plyr looked like this:

library(Rneo4j)
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")
 
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinDate"
meetupMembers = cypher(graph, query)
meetupMembers$joined <- timestampToDate(meetupMembers$joinDate)
 
dd = aggregate(meetupMembers$joined, by=list(format(meetupMembers$joined, "%A")), function(x) length(x))
colnames(dd) = c("dayOfWeek", "count")

which returns the following:

> dd
  dayOfWeek count
1    Friday   135
2    Monday   287
3  Saturday    80
4    Sunday   102
5  Thursday   187
6   Tuesday   286
7 Wednesday   211

We need to use plyr’s ddply function which takes a data frame and transforms it into another one.

To refresh, this is what the initial data frame looks like:

> meetupMembers[1:10,]
       joinDate              joined
1  1.376572e+12 2013-08-15 14:13:40
2  1.379491e+12 2013-09-18 08:55:11
3  1.349454e+12 2012-10-05 17:28:04
4  1.383127e+12 2013-10-30 09:59:03
5  1.372239e+12 2013-06-26 10:27:40
6  1.330295e+12 2012-02-26 22:27:00
7  1.379676e+12 2013-09-20 12:22:39
8  1.398462e+12 2014-04-25 22:41:19
9  1.331734e+12 2012-03-14 14:11:43
10 1.396874e+12 2014-04-07 13:32:26

Most of the examples of using ddply show how to group by a specific ‘column’ e.g. joined, but I wanted to group by part of the value in that column and eventually came across an example which showed how to do it:

> ddply(meetupMembers, .(format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  })
  format(joined, "%A") count
1               Friday   135
2               Monday   287
3             Saturday    80
4               Sunday   102
5             Thursday   187
6              Tuesday   286
7            Wednesday   211

Unfortunately the generated column heading for the group by key isn’t very readable and it took me way longer than it should have to work out how to name it as I wanted! This is how you do it:

> ddply(meetupMembers, .(dayOfWeek=format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  })
  dayOfWeek count
1    Friday   135
2    Monday   287
3  Saturday    80
4    Sunday   102
5  Thursday   187
6   Tuesday   286
7 Wednesday   211

If we want to sort that in descending order by ‘count’ we can wrap that ddply in another one:

> ddply(ddply(meetupMembers, .(dayOfWeek=format(joined, "%A")), function(x) {
    count <- length(x$joined)
    data.frame(count = count)
  }), .(count = count* -1))
  dayOfWeek count
1    Monday   287
2   Tuesday   286
3 Wednesday   211
4  Thursday   187
5    Friday   135
6    Sunday   102
7  Saturday    80

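An alternative sketch: since ddply just returns a data frame, we could sort with base R’s order function instead of nesting a second ddply:
 
byDay = ddply(meetupMembers, .(dayOfWeek=format(joined, "%A")), function(x) {
  data.frame(count = length(x$joined))
})
# the minus sign gives a descending sort on count
byDay[order(-byDay$count),]
 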
From reading a bit about ddply I gather that it’s slower than some other approaches e.g. data.table, but I’m not dealing with much data so it’s not an issue yet.

Once I got the hang of how it worked ddply was quite nice to work with so I think I’ll have a go at translating some of my other code to use it now.

Written by Mark Needham

July 2nd, 2014 at 6:30 am

Posted in R


R: Aggregate by different functions and join results into one data frame


In continuing my analysis of the London Neo4j meetup group using R I wanted to see which days of the week we organise meetups and how many people RSVP affirmatively by the day.

I started out with this query which returns each event and the number of ‘yes’ RSVPs:

library(Rneo4j)
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")
 
query = "MATCH (g:Group {name: \"Neo4j - London User Group\"})-[:HOSTED_EVENT]->(event)<-[:TO]-({response: 'yes'})<-[:RSVPD]-()
         WHERE (event.time + event.utc_offset) < timestamp()
         RETURN event.time + event.utc_offset AS eventTime, COUNT(*) AS rsvps"
events = cypher(graph, query)
events$datetime <- timestampToDate(events$eventTime)
 
> events[1:10,]
      eventTime rsvps            datetime
1  1.314815e+12     3 2011-08-31 19:30:00
2  1.337798e+12    13 2012-05-23 19:30:00
3  1.383070e+12    29 2013-10-29 18:00:00
4  1.362474e+12     5 2013-03-05 09:00:00
5  1.369852e+12    66 2013-05-29 19:30:00
6  1.385572e+12    67 2013-11-27 17:00:00
7  1.392142e+12    35 2014-02-11 18:00:00
8  1.364321e+12    23 2013-03-26 18:00:00
9  1.372183e+12    22 2013-06-25 19:00:00
10 1.401300e+12    60 2014-05-28 19:00:00

I wanted to get a data frame which had these columns:

Day of Week | RSVPs | Number of Events

Getting the number of events for a given day was quite easy as I could use the groupBy function I wrote last time:

groupBy = function(dates, format) {
  dd = aggregate(dates, by=list(format(dates, format)), function(x) length(x))
  colnames(dd) = c("key", "count")
  dd
}
 
> groupBy(events$datetime, "%A")
        key count
1  Thursday     9
2   Tuesday    24
3 Wednesday    35

The next step is to get the sum of RSVPs by the day which we can get with the following code:

dd = aggregate(events$rsvps, by=list(format(events$datetime, "%A")), FUN=sum)
colnames(dd) = c("key", "count")

The difference between this and our previous use of the aggregate function is that this time we pass in the number of RSVPs for each event and sum those values by day, rather than counting how many events fall on each day.

If we evaluate ‘dd’ we get the following:

> dd
        key count
1  Thursday   194
2   Tuesday   740
3 Wednesday  1467

We now have two data frames with a very similar shape and it turns out there’s a function called merge which makes it very easy to combine them into a single one:

x = merge(groupBy(events$datetime, "%A"), dd, by = "key")
colnames(x) = c("day", "events", "rsvps")
> x
        day events rsvps
1  Thursday      9   194
2   Tuesday     24   740
3 Wednesday     35  1467

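As an aside, we could also build the same table in one pass by aggregating a column of ones (one per event) alongside the RSVP counts – a sketch:
 
# sums each column by day: 'events' sums a 1 per event, 'rsvps' the RSVP counts
aggregate(data.frame(rsvps = events$rsvps, events = 1),
          by = list(day = format(events$datetime, "%A")), FUN = sum)
 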
We could now choose to order our new data frame by number of events descending:

> x[order(-x$events),]
        day events rsvps
3 Wednesday     35  1467
2   Tuesday     24   740
1  Thursday      9   194

We might also add an extra column to calculate the average number of RSVPs per day:

> x$rsvpsPerEvent = x$rsvps / x$events
> x
        day events rsvps rsvpsPerEvent
1  Thursday      9   194      21.55556
2   Tuesday     24   740      30.83333
3 Wednesday     35  1467      41.91429

I’m still getting the hang of it but already it seems like the combination of R and Neo4j allows us to quickly get insights into our data and I’ve barely scratched the surface!

Written by Mark Needham

June 30th, 2014 at 10:47 pm

Posted in R


R: Order by data frame column and take top 10 rows

without comments

I’ve been doing some ad-hoc analysis of the Neo4j London meetup group using R and Neo4j, and having worked out how to group by certain keys the next step was to order the rows of the data frame.

I wanted to drill into the days on which people join the group and see whether they join it at a specific time of day. My feeling was that most people would join on a Monday morning.

The first step was to run the query using RNeo4j and then group by day and hour:

library(Rneo4j)
 
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinDate"
 
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")
 
meetupMembers = cypher(graph, query)
meetupMembers$joined <- timestampToDate(meetupMembers$joinDate)
 
groupBy = function(dates, format) {
  dd = aggregate(dates, by= list(format(dates, format)), function(x) length(x))
  colnames(dd) = c("key", "count")
  dd
}
 
byDayTime = groupBy(meetupMembers$joined, "%A %H:00")

This returned quite a few rows so we’ll just display a subset of them:

> byDayTime[12:25,]
            key count
12 Friday 14:00    12
13 Friday 15:00     8
14 Friday 16:00    11
15 Friday 17:00    10
16 Friday 18:00     3
17 Friday 19:00     1
18 Friday 20:00     3
19 Friday 21:00     4
20 Friday 22:00     7
21 Friday 23:00     2
22 Monday 00:00     3
23 Monday 01:00     1
24 Monday 03:00     1
25 Monday 05:00     3

The next step was to order by the ‘count’ column which wasn’t too difficult:

> byDayTime[order(byDayTime$count),][1:10,]
              key count
2    Friday 03:00     1
3    Friday 04:00     1
4    Friday 05:00     1
5    Friday 07:00     1
17   Friday 19:00     1
23   Monday 01:00     1
24   Monday 03:00     1
46 Saturday 03:00     1
66   Sunday 06:00     1
67   Sunday 07:00     1

If we run the order function on its own we’ll see that it returns the indices of the rows in the order they should appear in the sorted data frame:

> order(byDayTime$count)
  [1]   2   3   4   5  17  23  24  46  66  67 109 128 129   1  21  44  47  48  81  86  87  88 108 130  16  18  22  25  45  53  64  71  75 107  19  26  49  51  55  56  58  59  61
 [44]  65  68  77  79  85 106 110 143  50  52  54  82  84 101 127 146  27  57  60  62  63  69  70  73  99 103 126 145   6  20  76  83  89 105 122 131 144   7  13  40  43  72  80
 [87] 102  39  78 100 132 147  15  94 121 123 142  14  42  74 104 137 140  12  38  92  93 111 124   8   9  11  90  96 125 139  10  32  34  36  95  97  98  28 135 136  33  35 112
[130] 113 116 134  91 141  41 115 120 133  37 119 138  31 117 118  30 114  29

The first 4 rows in our sorted data frame will be rows 2-5 from the initial data frame, which are:

           key count
2 Friday 03:00     1
3 Friday 04:00     1
4 Friday 05:00     1
5 Friday 07:00     1

So that makes sense! In our case we want to sort in descending order which we can do by prefixing the sorting variable with a minus sign:

> byDayTime[order(-byDayTime$count),][1:10,]
                key count
29     Monday 09:00    34
30     Monday 10:00    28
114   Tuesday 11:00    28
31     Monday 11:00    27
117   Tuesday 14:00    27
118   Tuesday 15:00    27
138 Wednesday 14:00    23
119   Tuesday 16:00    22
37     Monday 17:00    21
115   Tuesday 12:00    20

As expected, Monday morning makes a strong showing although Tuesday afternoon is also popular, which is unexpected. We’ll need to do some more investigation to figure out what’s going on there.

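As a side note, order also has a decreasing argument, which reads a little more clearly than negating the column:
 
# equivalent to order(-byDayTime$count)
byDayTime[order(byDayTime$count, decreasing = TRUE),][1:10,]
 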
Written by Mark Needham

June 30th, 2014 at 9:44 pm

Posted in R


Neo4j/R: Grouping meetup members by join timestamp


I wanted to do some ad-hoc analysis on the join dates of members of the Neo4j London meetup group and since cypher doesn’t yet have functions for dealing with dates I thought I’d give R a try.

I started off by executing a cypher query which returned the join timestamp of all the group members using Nicole White’s RNeo4j package:

> library(Rneo4j)
 
> query = "match (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
RETURN membership.joined AS joinDate"
 
> meetupMembers = cypher(graph, query)
 
> meetupMembers[1:5,]
[1] 1.389107e+12 1.376572e+12 1.379491e+12 1.349454e+12 1.383127e+12

I realised that if I was going to do any date manipulation I’d need to translate the timestamp into an R-friendly format, so I wrote the following function to help me do that:

> timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01")

I added another column to the data frame with this date representation:

> meetupMembers$joined <- timestampToDate(meetupMembers$joinDate)
 
> meetupMembers[1:5,]
      joinDate              joined
1 1.389107e+12 2014-01-07 15:08:40
2 1.376572e+12 2013-08-15 14:13:40
3 1.379491e+12 2013-09-18 08:55:11
4 1.349454e+12 2012-10-05 17:28:04
5 1.383127e+12 2013-10-30 09:59:03

Next I wanted to group those timestamps by the combination of month + year for which the aggregate and format functions came in handy:

> dd = aggregate(meetupMembers$joined, by=list(format(meetupMembers$joined, "%m-%Y")), function(x) length(x))
> colnames(dd) = c("month", "count")
> dd
     month count
1  01-2012     4
2  01-2013    52
3  01-2014    88
4  02-2012     7
5  02-2013    52
6  02-2014    91
7  03-2012    12
8  03-2013    23
9  03-2014    93
10 04-2012     3
11 04-2013    34
12 04-2014   119
13 05-2012     9
14 05-2013    69
15 05-2014   102
16 06-2011    14
17 06-2012     5
18 06-2013    39
19 06-2014   114
20 07-2011     4
21 07-2012    16
22 07-2013    20
23 08-2011     2
24 08-2012    34
25 08-2013    50
26 09-2012    14
27 09-2013    52
28 10-2011     2
29 10-2012    29
30 10-2013    42
31 11-2011     2
32 11-2012    31
33 11-2013    34
34 12-2012     7
35 12-2013    19

I wanted to be able to group by different date formats so I created the following function to make life easier:

groupBy = function(dates, format) {
  dd = aggregate(dates, by= list(format(dates, format)), function(x) length(x))
  colnames(dd) = c("key", "count")
  dd
}

Now we can find the join dates grouped by year:

> groupBy(meetupMembers$joined, "%Y")
   key count
1 2011    24
2 2012   171
3 2013   486
4 2014   607

or by day:

> groupBy(meetupMembers$joined, "%A")
        key count
1    Friday   135
2    Monday   287
3  Saturday    80
4    Sunday   102
5  Thursday   187
6   Tuesday   286
7 Wednesday   211

or by month:

> groupBy(meetupMembers$joined, "%m")
   key count
1   01   144
2   02   150
3   03   128
4   04   156
5   05   180
6   06   172
7   07    40
8   08    86
9   09    66
10  10    73
11  11    67
12  12    26

I found the ‘by day’ grouping interesting as I had the impression that the huge majority of people joined meetup groups on a Monday, but the difference between Monday and Tuesday isn’t significant. Around 60% of the joins happen between Monday and Wednesday.
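 
We can sanity-check that figure from the counts above – a quick sketch using the groupBy helper:
 
byDay = groupBy(meetupMembers$joined, "%A")
sum(byDay$count[byDay$key %in% c("Monday", "Tuesday", "Wednesday")]) / sum(byDay$count)
# (287 + 286 + 211) / 1288 ~= 0.61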

The ‘by month’ grouping is a bit skewed by the fact we’re only half way into 2014 and there have been a lot more people joining this year than in previous years.

If we exclude this year then the spread is more uniform with a slight dip in December:

> groupBy(meetupMembers$joined[format(meetupMembers$joined, "%Y") != 2014], "%m")
   key count
1   01    56
2   02    59
3   03    35
4   04    37
5   05    78
6   06    58
7   07    40
8   08    86
9   09    66
10  10    73
11  11    67
12  12    26

Next up I think I need to get some charts going, and perhaps compare the distributions of join dates of various London meetup groups against each other.
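 
A minimal sketch of a starting point for those charts, plotting joins per year with ggplot2 (my choice of chart rather than anything from the original analysis):
 
library(ggplot2)
byYear = groupBy(meetupMembers$joined, "%Y")
ggplot(byYear, aes(x = key, y = count)) +
  geom_bar(stat = "identity") +
  labs(x = "Year", y = "Members joined")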

I’m an absolute R newbie so if anything I’ve done is stupid and can be done better please let me know.

Written by Mark Needham

June 30th, 2014 at 12:06 am

Posted in neo4j,R


Neo4j/R: Analysing London NoSQL meetup membership


In my spare time I’ve been working on a Neo4j application that runs on top of meetup.com’s API, and Nicole recently showed me how I could wire up some of the queries to use her Rneo4j library.

The query used in that visualisation shows the number of members that overlap between each pair of groups, but a more interesting query is one which shows the % overlap between groups based on the unique members across both groups – in effect the Jaccard similarity of the two membership sets.

The query is a bit more complicated than the original:

MATCH (group1:Group), (group2:Group)
OPTIONAL MATCH (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
 
WITH group1, group2, COUNT(*) as commonMembers
MATCH (group1)<-[:MEMBER_OF]-(group1Member)
 
WITH group1, group2, commonMembers, COLLECT(id(group1Member)) AS group1Members
MATCH (group2)<-[:MEMBER_OF]-(group2Member)
 
WITH group1, group2, commonMembers, group1Members, COLLECT(id(group2Member)) AS group2Members
WITH group1, group2, commonMembers, group1Members, group2Members
 
UNWIND(group1Members + group2Members) AS combinedMember
WITH DISTINCT group1, group2, commonMembers, combinedMember
 
WITH group1, group2, commonMembers, COUNT(combinedMember) AS combinedMembers
 
RETURN group1.name, group2.name, toInt(round(100.0 * commonMembers / combinedMembers)) AS percentage
ORDER BY group1.name, group2.name

The next step is to wire that up to use Rneo4j and ggplot2. First we’ll get the libraries installed and loaded:

install.packages("devtools")
devtools::install_github("nicolewhite/Rneo4j")
install.packages("ggplot2")
 
library(Rneo4j)
library(ggplot2)

And now we’ll execute the query and create a chart from the results:

graph = startGraph("http://localhost:7474/db/data/")
 
query = "MATCH (group1:Group), (group2:Group)
         WHERE group1 <> group2
         OPTIONAL MATCH p = (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
         WITH group1, group2, COLLECT(p) AS paths
         RETURN group1.name, group2.name, LENGTH(paths) as commonMembers
         ORDER BY group1.name, group2.name"
 
group_overlap = cypher(graph, query)
 
ggplot(group_overlap, aes(x=group1.name, y=group2.name, fill=commonMembers)) + 
geom_bin2d() +
geom_text(aes(label = commonMembers)) +
labs(x= "Group", y="Group", title="Member Group Member Overlap") +
scale_fill_gradient(low="white", high="red") +
theme(axis.text = element_text(size = 12, color = "black"),
      axis.title = element_text(size = 14, color = "black"),
      plot.title = element_text(size = 16, color = "black"),
      axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1))
 
# as percentage
 
query = "MATCH (group1:Group), (group2:Group)
         WHERE group1 <> group2
         OPTIONAL MATCH path = (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
 
         WITH group1, group2, COLLECT(path) AS paths
 
         WITH group1, group2, LENGTH(paths) as commonMembers
         MATCH (group1)<-[:MEMBER_OF]-(group1Member)
 
         WITH group1, group2, commonMembers, COLLECT(id(group1Member)) AS group1Members
         MATCH (group2)<-[:MEMBER_OF]-(group2Member)
 
         WITH group1, group2, commonMembers, group1Members, COLLECT(id(group2Member)) AS group2Members
         WITH group1, group2, commonMembers, group1Members, group2Members
 
         UNWIND(group1Members + group2Members) AS combinedMember
         WITH DISTINCT group1, group2, commonMembers, combinedMember
 
         WITH group1, group2, commonMembers, COUNT(combinedMember) AS combinedMembers
 
         RETURN group1.name, group2.name, toInt(round(100.0 * commonMembers / combinedMembers)) AS percentage
 
         ORDER BY group1.name, group2.name"
 
group_overlap = cypher(graph, query)
 
ggplot(group_overlap, aes(x=group1.name, y=group2.name, fill=percentage)) + 
  geom_bin2d() +
  geom_text(aes(label = percentage)) +
  labs(x= "Group", y="Group", title="Member Group Member Overlap") +
  scale_fill_gradient(low="white", high="red") +
  theme(axis.text = element_text(size = 12, color = "black"),
        axis.title = element_text(size = 14, color = "black"),
        plot.title = element_text(size = 16, color = "black"),
        axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1))
 
[Image: heat map of % member overlap between the London NoSQL meetup groups]

A first glance at the visualisation suggests that the Hadoop, Data Science and Big Data groups have the most overlap, which seems to make sense as they cover quite similar topics.

Thanks to Nicole for the library and the idea of the visualisation. Now we need to do some more analysis on the data to see if there are any more interesting insights.

Written by Mark Needham

May 31st, 2014 at 9:32 pm

Posted in R
