Mark Needham

Thoughts on Software Development


Serverless: S3 – S3BucketPermissions – Action does not apply to any resource(s) in statement


I’ve been playing around with S3 buckets using Serverless, and recently wrote the following code to create an S3 bucket and put a file into it:

const AWS = require("aws-sdk");

const regionParams = { region: "us-east-1" };
const s3 = new AWS.S3(regionParams);

const s3BucketName = "marks-blog-bucket";

console.log("Creating bucket: " + s3BucketName);
const bucketParams = { Bucket: s3BucketName, ACL: "public-read" };

const putObjectParams = {
  Body: "<html><body><h1>Hello blog!</h1></body></html>",
  Bucket: s3BucketName,
  Key: "blog.html"
};

// Chain the upload off the createBucket promise so the putObject
// call only runs once the bucket actually exists
s3.createBucket(bucketParams).promise()
  .then(() => s3.putObject(putObjectParams).promise())
  .then(console.log)
  .catch(console.error);
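
One caveat: createBucket resolving doesn’t always mean the bucket is immediately usable. If the upload ever fails with a NoSuchBucket error, the SDK ships a waiter that polls until the bucket really exists. A minimal sketch of that optional hardening (not something the script above needed):

// Wait for S3 to report the bucket as existing before uploading
s3.createBucket(bucketParams).promise()
  .then(() => s3.waitFor("bucketExists", { Bucket: s3BucketName }).promise())
  .then(() => s3.putObject(putObjectParams).promise())
  .then(console.log)
  .catch(console.error);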

When I tried to cURL the file I got a 403 Forbidden response:

$ curl --head --silent https://s3.amazonaws.com/marks-blog-bucket/blog.html
HTTP/1.1 403 Forbidden
x-amz-request-id: 512FE36798C0BE4D
x-amz-id-2: O1ELGMJ0jjs11WCrNiVNF2z2ssRgtO4+M4H2QQB5025HjIpC54VId0eKZryYeBYN8Pvb8GsolTQ=
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Fri, 29 Sep 2017 05:42:03 GMT
Server: AmazonS3

It turns out a public-read ACL covers the bucket itself but not the objects inside it, so I wrote the following configuration to have Serverless create a bucket policy that would make all files in my bucket publicly accessible:

serverless.yml

service: marks-blog

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: python3.6
  timeout: 180

resources:
  Resources:
    S3BucketPermissions:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket: marks-blog-bucket
        PolicyDocument:
          Statement:
            - Principal: "*"
              Action:
                - s3:GetObject
              Effect: Allow
              Sid: "AddPerm"
              Resource: arn:aws:s3:::marks-blog-bucket
 
...

Let’s try to deploy it:

./node_modules/serverless/bin/serverless deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (1.3 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
........
Serverless: Operation failed!
 
  Serverless Error ---------------------------------------
 
  An error occurred: S3BucketPermissions - Action does not apply to any resource(s) in statement.

D’oh! That didn’t do what I expected.

I learnt that this message means:

Some services do not let you specify actions for individual resources; instead, any actions that you list in the Action or NotAction element apply to all resources in that service. In these cases, you use the wildcard * in the Resource element.

To fix it we need to use the wildcard * to indicate that the s3:GetObject permission should apply to all objects in the bucket rather than to the bucket itself.
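
To make the distinction concrete, these are the two ARN forms side by side (the variable names are just for illustration):

// Refers to the bucket itself - what bucket-level actions such as
// s3:ListBucket expect
const bucketArn = "arn:aws:s3:::marks-blog-bucket";

// Refers to every object in the bucket - what object-level actions
// such as s3:GetObject expect
const objectArn = "arn:aws:s3:::marks-blog-bucket/*";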

Take 2!

serverless.yml

service: marks-blog

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: python3.6
  timeout: 180

resources:
  Resources:
    S3BucketPermissions:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket: marks-blog-bucket
        PolicyDocument:
          Statement:
            - Principal: "*"
              Action:
                - s3:GetObject
              Effect: Allow
              Sid: "AddPerm"
              Resource: arn:aws:s3:::marks-blog-bucket/*
 
...

Let’s deploy again and try to access the file:

$ curl --head --silent https://s3.amazonaws.com/marks-blog-bucket/blog.html
HTTP/1.1 200 OK
x-amz-id-2: uGwsLLoFHf+slXADGYkqW0bLfQ7EPG/kqzV3l2k7SMex4NlMEpNsNN/cIC9INLPohDtVFwUAa90=
x-amz-request-id: 7869E21760CD50F1
Date: Fri, 29 Sep 2017 06:05:11 GMT
Last-Modified: Fri, 29 Sep 2017 06:01:33 GMT
ETag: "57bac87219812c2f9a581943da34cfde"
Accept-Ranges: bytes
Content-Type: application/octet-stream
Content-Length: 46
Server: AmazonS3

Success! And if we check in the AWS console we can see that the bucket policy has been applied to our bucket:

[Screenshot: the bucket policy applied to marks-blog-bucket in the AWS console]
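
You can also do that check without the console. A quick sketch using the same aws-sdk client as before (assuming the same region):

const AWS = require("aws-sdk");
const s3 = new AWS.S3({ region: "us-east-1" });

// Fetch and print the bucket policy to confirm the deploy applied it
s3.getBucketPolicy({ Bucket: "marks-blog-bucket" }).promise()
  .then(data => console.log(JSON.parse(data.Policy)))
  .catch(console.error);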

Written by Mark Needham

September 29th, 2017 at 6:09 am

Posted in Software Development


s3cmd: put fails with “Connection reset by peer” for large files


I recently wanted to copy some large files from an AWS instance into an S3 bucket using s3cmd but ended up with the following error when trying to use the ‘put’ command:

$ s3cmd put /mnt/ebs/myfile.tar s3://mybucket.somewhere.com
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
     1077248 of 12185313280     0% in    1s   937.09 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
     1183744 of 12185313280     0% in    1s  1062.18 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.01)
WARNING: Waiting 6 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
      417792 of 12185313280     0% in    1s   378.75 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.05)
WARNING: Waiting 9 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
       94208 of 12185313280     0% in    1s    81.04 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.25)
WARNING: Waiting 12 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
       28672 of 12185313280     0% in    1s    18.40 kB/s  failed
WARNING: Upload failed: /myfile.tar ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [1 of 1]
       12288 of 12185313280     0% in    2s     4.41 kB/s  failed
ERROR: Upload of '/mnt/ebs/myfile.tar' failed too many times. Skipping that file.

I tried with a smaller file just to make sure I wasn’t doing anything stupid syntax-wise, and that transferred without a problem, which led me to believe the problem was related to file size – the one I was uploading was around 10GB.

I eventually came across this StackOverflow thread, which suggested that files bigger than 5GB need to use the multipart upload method, added in version 1.1.0 of s3cmd.
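
That 5GB figure is S3’s limit on a single PUT request, so any client has to switch to multipart uploads beyond it. For comparison, this is roughly what the same upload would look like from Node.js with the aws-sdk, whose managed upload() does the multipart chunking automatically (a sketch, not what this post actually used):

const fs = require("fs");
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

// upload() transparently switches to multipart for large bodies,
// sending the file in 15MB parts, 4 at a time
s3.upload(
  {
    Bucket: "mybucket.somewhere.com",
    Key: "myfile.tar",
    Body: fs.createReadStream("/mnt/ebs/myfile.tar")
  },
  { partSize: 15 * 1024 * 1024, queueSize: 4 }
).promise()
  .then(data => console.log("Uploaded to " + data.Location))
  .catch(console.error);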

The Ubuntu repository only comes with version 1.0.0, so I needed to find a way of getting a newer version onto the machine.

I ended up downloading version 1.5.0 from SourceForge, but since I couldn’t get a direct URI for it, I downloaded it to my machine, uploaded it to the S3 bucket through the web UI, and then pulled it back down onto the instance with ‘s3cmd get’. #epic

In retrospect the s3cmd PPA might have been a better option.

Anyway, when I used the newer s3cmd it uploaded the file using multipart without any problems:

...
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [part 761 of 775, 15MB]
 15728640 of 15728640   100% in    3s     4.12 MB/s  done
/mnt/ebs/myfile.tar -> s3://mybucket.somewhere.com/myfile.tar  [part 762 of 775, 15MB]
...
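
As a sanity check, the part count lines up with the file size: 12,185,313,280 bytes ÷ 15,728,640 bytes per 15MB part ≈ 774.8, which rounds up to the 775 parts shown in the output (774 full parts plus a final part of around 11MB).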

Written by Mark Needham

July 30th, 2013 at 4:20 pm

Posted in Software Development
