Mark Needham

Thoughts on Software Development

Archive for the ‘aws’ tag

AWS: Spinning up a Neo4j instance with APOC installed

without comments

One of the first things I do after installing Neo4j is install the APOC library, but I find it’s a bit of a manual process when spinning up a server on AWS so I wanted to simplify it a bit.

There’s already a Neo4j AMI which installs Neo4j 3.2.0 and my colleague Michael pointed out that we could download APOC into the correct folder by writing a script and sending it as UserData.

I’ve been doing some work in JavaScript over the last two weeks, so I thought I’d automate all the steps using the AWS JavaScript SDK. You can find the full script on GitHub.

The script creates a key pair and a security group, opens ports 22 (SSH), 7474 (HTTP), 7473 (HTTPS), and 7687 (Bolt) on that security group, and then launches an m3.medium instance, though you can change that to another instance type if you prefer.

The UserData part of the script is actually very simple:

#!/bin/bash
curl -L https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/3.2.0.3/apoc-3.2.0.3-all.jar -O
sudo cp apoc-3.2.0.3-all.jar /var/lib/neo4j/plugins/
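The full script drives the EC2 API from JavaScript, but as a rough sketch of the same provisioning steps, here’s what they’d look like in Python with boto3 — note that the AMI id, key pair name, and security group name below are placeholders rather than the values the real script uses:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a key pair and print the private key so it can be saved to a file
key_pair = ec2.create_key_pair(KeyName="neo4j-key-pair")
print(key_pair["KeyMaterial"])

# Create a security group and open the SSH, HTTP, HTTPS, and Bolt ports
group = ec2.create_security_group(
    GroupName="neo4j-group", Description="Neo4j security group")
for port in [22, 7474, 7473, 7687]:
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"], IpProtocol="tcp",
        FromPort=port, ToPort=port, CidrIp="0.0.0.0/0")

# Launch the instance, passing the bash script above as UserData -
# boto3 base64 encodes the string for us
user_data = open("user-data.sh").read()
ec2.run_instances(
    ImageId="ami-xxxxxxxx",  # placeholder: the Neo4j 3.2.0 AMI for your region
    InstanceType="m3.medium",
    MinCount=1, MaxCount=1,
    KeyName="neo4j-key-pair",
    SecurityGroupIds=[group["GroupId"]],
    UserData=user_data)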

We can run the JavaScript script like this:

$ node neo4j-with-apoc.js 
Creating a Neo4j server
Key pair created. Save this to a file - you'll need to use it if you want to ssh into the Neo4j server
-----BEGIN RSA PRIVATE KEY-----
<Private key details>
-----END RSA PRIVATE KEY-----
Created Group Id:<Group Id>
Opened Neo4j ports
Instance Id: <Instance Id>
Your Neo4j server is now ready!
You'll need to login to the server and change the default password:
https://ec2-ip-address.compute-1.amazonaws.com:7473 or http://ec2-ip-address.compute-1.amazonaws.com:7474
User:neo4j, Password:<Instance Id>

We’ll need to wait a few seconds for Neo4j to spin up, but it’ll be accessible at the URI specified.

Once it’s accessible we can login with the username neo4j and the instance id as the password. We’ll then be instructed to choose a new password.
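If you’d rather script that step, Neo4j 3.x also exposes an HTTP endpoint for changing a user’s password. A minimal sketch with the Python requests library, assuming the instance id is still the current password and my-new-password is whatever you want to change it to:

import requests

# Authenticate with the default credentials (password = instance id)
# and POST the new password to Neo4j's user endpoint
response = requests.post(
    "http://ec2-ip-address.compute-1.amazonaws.com:7474/user/neo4j/password",
    auth=("neo4j", "<Instance Id>"),
    json={"password": "my-new-password"})
response.raise_for_status()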

We can then run the following query to check that APOC has been installed:

call dbms.procedures() YIELD name
WHERE name starts with "apoc"
RETURN count(*)
 
╒══════════╕
│"count(*)"│
╞══════════╡
│214       │
└──────────┘
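We could run the same check over Bolt as well. A sketch using the official Python driver (the neo4j package), assuming the password has already been changed:

from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "bolt://ec2-ip-address.compute-1.amazonaws.com:7687",
    auth=("neo4j", "my-new-password"))

# Count the procedures whose name starts with "apoc"
with driver.session() as session:
    result = session.run(
        "CALL dbms.procedures() YIELD name "
        "WHERE name STARTS WITH 'apoc' "
        "RETURN count(*) AS count")
    print(result.single()["count"])

driver.close()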

Cool, it worked and we can now use Neo4j and APOC to our heart’s content! If we want to SSH into the server we can do that as well, by first saving the private key printed on the command line to a file and then executing the following command:

$ cat aws-private-key.pem
-----BEGIN RSA PRIVATE KEY-----
<Private key details>
-----END RSA PRIVATE KEY-----
 
$ chmod 600 aws-private-key.pem
 
$ ssh -i aws-private-key.pem ubuntu@ec2-ip-address.compute-1.amazonaws.com
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-1013-aws x86_64)
 
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
 
  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud
 
106 packages can be updated.
1 update is a security update.
 
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

You can start/stop Neo4j by running the following command:

$ /etc/init.d/neo4j 
Usage: /etc/init.d/neo4j {start|stop|status|restart|force-reload}

The other commands you may be used to finding in the bin folder can be found here:

$ ls -lh /usr/share/neo4j/bin/
total 48K
-rwxr-xr-x 1 neo4j adm   15K May  9 09:22 neo4j
-rwxr-xr-x 1 neo4j adm  5.6K May  9 09:22 neo4j-admin
-rwxr-xr-x 1 root  root  612 May 12 00:03 neo4j-awspasswd
-rwxr-xr-x 1 neo4j adm  5.6K May  9 09:22 neo4j-import
-rwxr-xr-x 1 neo4j adm  5.6K May  9 09:22 neo4j-shell
drwxr-xr-x 2 neo4j adm  4.0K May 11 22:13 tools

Let me know if this is helpful and if you have any suggestions/improvements.

Written by Mark Needham

September 30th, 2017 at 9:23 pm

Posted in neo4j


Serverless: Building a mini producer/consumer data pipeline with AWS SNS

with one comment

I wanted to create a little data pipeline with Serverless whose main use would be to run once a day, call an API, and load that data into a database.

It’s mostly used to pull in recent data from that API, but I also wanted to be able to invoke it manually and specify a date range.

I created the following pair of lambdas that communicate with each other via an SNS topic.

The code

serverless.yml

service: marks-blog

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: python3.6
  timeout: 180
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - "sns:Publish"
      Resource:
        - ${self:custom.BlogTopic}

custom:
  BlogTopic:
    Fn::Join:
      - ":"
      - - arn
        - aws
        - sns
        - Ref: AWS::Region
        - Ref: AWS::AccountId
        - marks-blog-topic

functions:
  message-consumer:
    name: MessageConsumer
    handler: handler.consumer
    events:
      - sns:
          topicName: marks-blog-topic
          displayName: Topic to process events
  message-producer:
    name: MessageProducer
    handler: handler.producer
    events:
      - schedule: rate(1 day)

handler.py

import boto3
import json
import datetime
from datetime import timezone
 
def producer(event, context):
    sns = boto3.client('sns')
 
    context_parts = context.invoked_function_arn.split(':')
    topic_name = "marks-blog-topic"
    topic_arn = "arn:aws:sns:{region}:{account_id}:{topic}".format(
        region=context_parts[3], account_id=context_parts[4], topic=topic_name)
 
    now = datetime.datetime.now(timezone.utc)
    start_date = (now - datetime.timedelta(days=1)).strftime("%Y-%m-%d")
    end_date = now.strftime("%Y-%m-%d")
 
    params = {"startDate": start_date, "endDate": end_date, "tags": ["neo4j"]}
 
    sns.publish(TopicArn=topic_arn, Message=json.dumps(params))
 
 
def consumer(event, context):
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
 
        start_date = message["startDate"]
        end_date = message["endDate"]
        tags = message["tags"]
 
        print("start_date: " + start_date)
        print("end_date: " + end_date)
        print("tags: " + str(tags))

Trying it out

We can simulate a message being received locally by executing the following command:

$ serverless invoke local \
    --function message-consumer \
    --data '{"Records":[{"Sns": {"Message":"{\"tags\": [\"neo4j\"], \"startDate\": \"2017-09-25\", \"endDate\": \"2017-09-29\"  }"}}]}'
 
start_date: 2017-09-25
end_date: 2017-09-29
tags: ['neo4j']
null

That seems to work fine. What happens if we invoke the message-producer on AWS?

$ serverless invoke --function message-producer
 
null

Did the consumer receive the message?

$ serverless logs --function message-consumer
 
START RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f Version: $LATEST
start_date: 2017-09-29
end_date: 2017-09-30
tags: ['neo4j']
END RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f
REPORT RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f	Duration: 0.46 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 32 MB

Looks like it! We can also invoke the consumer directly on AWS:

$ serverless invoke \
    --function message-consumer \
    --data '{"Records":[{"Sns": {"Message":"{\"tags\": [\"neo4j\"], \"startDate\": \"2017-09-25\", \"endDate\": \"2017-09-26\"  }"}}]}'
 
null

And now if we check the consumer’s logs we’ll see both messages:

$ serverless logs --function message-consumer
 
START RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f Version: $LATEST
start_date: 2017-09-29
end_date: 2017-09-30
tags: ['neo4j']
END RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f
REPORT RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f	Duration: 0.46 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 32 MB	
 
START RequestId: 4cb42bc9-a5b1-11e7-affb-99fa6b4dc3ed Version: $LATEST
start_date: 2017-09-25
end_date: 2017-09-26
tags: ['neo4j']
END RequestId: 4cb42bc9-a5b1-11e7-affb-99fa6b4dc3ed
REPORT RequestId: 4cb42bc9-a5b1-11e7-affb-99fa6b4dc3ed	Duration: 16.46 ms	Billed Duration: 100 ms 	Memory Size: 1024 MB	Max Memory Used: 32 MB

Success!

Written by Mark Needham

September 30th, 2017 at 7:51 am

Serverless: S3 – S3BucketPermissions – Action does not apply to any resource(s) in statement

without comments

I’ve been playing around with S3 buckets with Serverless, and recently wrote the following code to create an S3 bucket and put a file into that bucket:

const AWS = require("aws-sdk");

let regionParams = { 'region': 'us-east-1' };
let s3 = new AWS.S3(regionParams);

let s3BucketName = "marks-blog-bucket";

console.log("Creating bucket: " + s3BucketName);
let bucketParams = { Bucket: s3BucketName, ACL: "public-read" };

let putObjectParams = {
    Body: "<html><body><h1>Hello blog!</h1></body></html>",
    Bucket: s3BucketName,
    Key: "blog.html"
};

// Chain the promises so the bucket exists before we try to put the object into it
s3.createBucket(bucketParams).promise()
  .then(console.log)
  .then(() => s3.putObject(putObjectParams).promise())
  .then(console.log)
  .catch(console.error);

When I tried to cURL the file I got a 403 Forbidden response:

$ curl --head --silent https://s3.amazonaws.com/marks-blog-bucket/blog.html
HTTP/1.1 403 Forbidden
x-amz-request-id: 512FE36798C0BE4D
x-amz-id-2: O1ELGMJ0jjs11WCrNiVNF2z2ssRgtO4+M4H2QQB5025HjIpC54VId0eKZryYeBYN8Pvb8GsolTQ=
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Fri, 29 Sep 2017 05:42:03 GMT
Server: AmazonS3

I wrote the following code to try and have Serverless create a bucket policy that would make all files in my bucket publicly accessible:

serverless.yml

service: marks-blog

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: python3.6
  timeout: 180

resources:
  Resources:
    S3BucketPermissions:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket: marks-blog-bucket
        PolicyDocument:
          Statement:
            - Principal: "*"
              Action:
                - s3:GetObject
              Effect: Allow
              Sid: "AddPerm"
              Resource: arn:aws:s3:::marks-blog-bucket
 
...

Let’s try to deploy it:

./node_modules/serverless/bin/serverless deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (1.3 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
........
Serverless: Operation failed!
 
  Serverless Error ---------------------------------------
 
  An error occurred: S3BucketPermissions - Action does not apply to any resource(s) in statement.

D’oh! That didn’t do what I expected.

I learnt that this message means:

Some services do not let you specify actions for individual resources; instead, any actions that you list in the Action or NotAction element apply to all resources in that service. In these cases, you use the wildcard * in the Resource element.

To fix it we need to use the wildcard * to indicate that the s3:GetObject permission should apply to all objects in the bucket rather than to the bucket itself.

Take 2!

serverless.yml

service: marks-blog

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: python3.6
  timeout: 180

resources:
  Resources:
    S3BucketPermissions:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket: marks-blog-bucket
        PolicyDocument:
          Statement:
            - Principal: "*"
              Action:
                - s3:GetObject
              Effect: Allow
              Sid: "AddPerm"
              Resource: arn:aws:s3:::marks-blog-bucket/*
 
...

Let’s deploy again and try to access the file:

$ curl --head --silent https://s3.amazonaws.com/marks-blog-bucket/blog.html
HTTP/1.1 200 OK
x-amz-id-2: uGwsLLoFHf+slXADGYkqW0bLfQ7EPG/kqzV3l2k7SMex4NlMEpNsNN/cIC9INLPohDtVFwUAa90=
x-amz-request-id: 7869E21760CD50F1
Date: Fri, 29 Sep 2017 06:05:11 GMT
Last-Modified: Fri, 29 Sep 2017 06:01:33 GMT
ETag: "57bac87219812c2f9a581943da34cfde"
Accept-Ranges: bytes
Content-Type: application/octet-stream
Content-Length: 46
Server: AmazonS3

Success! And if we check in the AWS console we can see that the bucket policy has been applied to our bucket:

[Screenshot: the bucket policy attached to the bucket in the AWS console]

Written by Mark Needham

September 29th, 2017 at 6:09 am

Posted in Software Development


AWS Lambda: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

without comments

I’ve been working on an AWS Lambda job to convert an HTML page to PDF using a Python wrapper around the wkhtmltopdf library, but ended up with the following error when I tried to execute it:

b'/bin/sh: ./binary/wkhtmltopdf: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory\n': Exception
Traceback (most recent call last):
  File "/var/task/handler.py", line 33, in generate_certificate
    wkhtmltopdf(local_html_file_name, local_pdf_file_name)
  File "/var/task/lib/wkhtmltopdf.py", line 64, in wkhtmltopdf
    wkhp.render()
  File "/var/task/lib/wkhtmltopdf.py", line 56, in render
    raise Exception(stderr)
Exception: b'/bin/sh: ./binary/wkhtmltopdf: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory\n'

It turns out this is the error you get if you run a 32-bit binary on a 64-bit operating system, which is what AWS Lambda uses. The Lambda documentation makes this explicit:

If you are using any native binaries in your code, make sure they are compiled in this environment. Note that only 64-bit binaries are supported on AWS Lambda.

I switched to the 64-bit binary and am now happily converting HTML pages to PDF.
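If you want to check a binary before shipping it, the file command reports the target architecture — a 32-bit build shows up something like this (output abbreviated):

$ file binary/wkhtmltopdf
binary/wkhtmltopdf: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked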

Written by Mark Needham

August 3rd, 2017 at 5:24 pm

Posted in Software Development


AWS Lambda: Programmatically scheduling a CloudWatchEvent

with 2 comments

I recently wrote a blog post showing how to create a Python ‘Hello World’ AWS lambda function and manually invoke it, but what I really wanted to do was have it run automatically every hour.

To achieve that in AWS Lambda land we need to create a CloudWatch Event. The documentation describes them as follows:

Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams.


This is actually really easy from the Amazon web console as you just need to click the ‘Triggers’ tab and then ‘Add trigger’. It’s not obvious that there are actually three steps involved, as they’re abstracted away from you.

So what are the steps?

  1. Create rule
  2. Give permission for that rule to execute
  3. Map the rule to the function

I forgot to do step 2 initially and ended up with a rule that never triggered, which isn’t particularly useful.

The following code creates a ‘Hello World’ lambda function and runs it once an hour:

import boto3
 
lambda_client = boto3.client('lambda')
events_client = boto3.client('events')
 
fn_name = "HelloWorld"
fn_role = 'arn:aws:iam::[your-aws-id]:role/lambda_basic_execution'
 
fn_response = lambda_client.create_function(
    FunctionName=fn_name,
    Runtime='python2.7',
    Role=fn_role,
    Handler="{0}.lambda_handler".format(fn_name),
    Code={'ZipFile': open("{0}.zip".format(fn_name), 'rb').read(), },
)
 
fn_arn = fn_response['FunctionArn']
frequency = "rate(1 hour)"
name = "{0}-Trigger".format(fn_name)
 
rule_response = events_client.put_rule(
    Name=name,
    ScheduleExpression=frequency,
    State='ENABLED',
)
 
lambda_client.add_permission(
    FunctionName=fn_name,
    StatementId="{0}-Event".format(name),
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_response['RuleArn'],
)
 
events_client.put_targets(
    Rule=name,
    Targets=[
        {
            'Id': "1",
            'Arn': fn_arn,
        },
    ]
)

We can now check if our trigger has been configured correctly:

$ aws events list-rules --query "Rules[?Name=='HelloWorld-Trigger']"
[
    {
        "State": "ENABLED", 
        "ScheduleExpression": "rate(1 hour)", 
        "Name": "HelloWorld-Trigger", 
        "Arn": "arn:aws:events:us-east-1:[your-aws-id]:rule/HelloWorld-Trigger"
    }
]
 
$ aws events list-targets-by-rule --rule HelloWorld-Trigger
{
    "Targets": [
        {
            "Id": "1", 
            "Arn": "arn:aws:lambda:us-east-1:[your-aws-id]:function:HelloWorld"
        }
    ]
}
 
$ aws lambda get-policy --function-name HelloWorld
{
    "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"HelloWorld-Trigger-Event\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1:[your-aws-id]:function:HelloWorld\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:events:us-east-1:[your-aws-id]:rule/HelloWorld-Trigger\"}}}]}"
}

All looks good so we’re done!

Written by Mark Needham

April 5th, 2017 at 11:49 pm

Posted in Software Development


AWS Lambda: Programmatically create a Python ‘Hello World’ function

with 2 comments

I’ve been playing around with AWS Lambda over the last couple of weeks and I wanted to automate the creation of these functions and all their surrounding config.

Let’s say we have the following Hello World function:

def lambda_handler(event, context):
    print("Hello world")

To upload it to AWS we need to put it inside a zip file so let’s do that:

$ zip HelloWorld.zip HelloWorld.py
$ unzip -l HelloWorld.zip 
Archive:  HelloWorld.zip
  Length     Date   Time    Name
 --------    ----   ----    ----
       61  04-02-17 22:04   HelloWorld.py
 --------                   -------
       61                   1 file

Now we’re ready to write a script to create our AWS lambda function.

import boto3
 
lambda_client = boto3.client('lambda')
 
fn_name = "HelloWorld"
fn_role = 'arn:aws:iam::[your-aws-id]:role/lambda_basic_execution'
 
lambda_client.create_function(
    FunctionName=fn_name,
    Runtime='python2.7',
    Role=fn_role,
    Handler="{0}.lambda_handler".format(fn_name),
    Code={'ZipFile': open("{0}.zip".format(fn_name), 'rb').read(), },
)

[your-aws-id] needs to be replaced with the identifier of our AWS account. We can find that out by running the following command with the AWS CLI:

$ aws ec2 describe-security-groups --query 'SecurityGroups[0].OwnerId' --output text
123456789012
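As an aside, the STS API gives a more direct way to look up the account id:

$ aws sts get-caller-identity --query Account --output text
123456789012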

Now we can create our function:

$ python CreateHelloWorld.py

[Screenshot: the newly created HelloWorld function in the AWS Lambda console]

And if we test the function we’ll get the expected output:

[Screenshot: test invocation output showing "Hello world"]

Written by Mark Needham

April 2nd, 2017 at 10:11 pm

Posted in Software Development


s3cmd: put fails with “Connection reset by peer” for large files

with 2 comments

I recently wanted to copy some large files from an AWS instance into an S3 bucket using s3cmd but ended up with the following error when trying to use the ‘put’ command:

$ s3cmd put /mnt/ebs/myfile.tar s3://mybucket.somewhere.com
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
     1077248 of 12185313280     0% in    1s   937.09 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
     1183744 of 12185313280     0% in    1s  1062.18 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.01)
WARNING: Waiting 6 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
      417792 of 12185313280     0% in    1s   378.75 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.05)
WARNING: Waiting 9 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
       94208 of 12185313280     0% in    1s    81.04 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.25)
WARNING: Waiting 12 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
       28672 of 12185313280     0% in    1s    18.40 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
       12288 of 12185313280     0% in    2s     4.41 kB/s  failed
ERROR: Upload of '/mnt/ebs/myfile.tar' failed too many times. Skipping that file.

I tried with a smaller file just to make sure I wasn’t doing anything stupid syntax-wise, and that transferred without a problem, which led me to believe the problem might be with uploading larger files – the one I was uploading was ~10GB in size.

I eventually came across this StackOverflow thread suggesting that files >5GB in size need to use the ‘multipart method’, released in version 1.1.0 of s3cmd.

The Ubuntu repository comes with version 1.0.0 so I needed to find a way of getting a newer version onto the machine.

I eventually ended up downloading version 1.5.0 from SourceForge, but I couldn’t get a direct URI to download it, so I downloaded it to my machine, uploaded it to the S3 bucket through the web UI, and then pulled it back down again using ‘s3cmd get’. #epic

In retrospect the s3cmd PPA might have been a better option.

Anyway, when I used this version of s3cmd it uploaded using multipart just fine:

...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [part 761 of 775, 15MB]
 15728640 of 15728640   100% in    3s     4.12 MB/s  done
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [part 762 of 775, 15MB]
...
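The 15MB parts are s3cmd’s default chunk size; newer versions also let you set it explicitly — the flag below comes with the 1.1.0+ multipart support:

$ s3cmd put --multipart-chunk-size-mb=50 /mnt/ebs/myfile.tar s3://mybucket.somewhere.com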

Written by Mark Needham

July 30th, 2013 at 4:20 pm

Posted in Software Development


Wiring up an Amazon S3 bucket to a CNAME entry – The specified bucket does not exist

with 2 comments

Jason and I were setting up an internal static website using an S3 bucket a couple of days ago and wanted to point a more friendly domain name at it.

We initially called our bucket ‘static-site’ and then created a CNAME entry using Zerigo to point our subdomain at the bucket.

The mapping was something like this:

our-subdomain.somedomain.com -> static-site.s3-website-eu-west-1.amazonaws.com

When we tried to access the site through our-subdomain.somedomain.com we got the following error:

<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist</Message>
  <BucketName></BucketName>
  <RequestId></RequestId>
  <HostId></HostId>
</Error>

A bit of googling led us to this thread, which suggested that we needed to ensure our bucket was named after the subdomain we wanted to serve the site from.

In this case we needed to rename our bucket to ‘our-subdomain.somedomain.com’ and then change our CNAME entry to:

our-subdomain.somedomain.com -> our-subdomain.somedomain.com.s3-website-eu-west-1.amazonaws.com

And then everything was happy.

Written by Mark Needham

March 21st, 2013 at 10:39 pm