Direct Upload to S3 (using AWS Signature v4 & PHP)


The contents of this article have been replaced by a PHP Composer package; I hope you find it useful.

  View on Github

This article is specifically about directly uploading files to S3 using AWS Signature Version 4, which is mandatory for newer S3 regions, like Frankfurt (EU). It will also become required in other regions at some point as Amazon migrates over, so it’s recommended to use this method wherever possible. For more info, you can read about Signature v4 in their documentation.

Specifically around the AWS Signature V4, the documentation is brief but complete. One page in particular is helpful in explaining the process of building your policy, creating a signature and building a form: this one.

This code uses PHP to generate the policy and signature, but all before the initial request is sent and the upload made, so the file itself won’t be transferred through your server. This has many advantages, especially on cloud platforms where uploading a large file would cause unnecessary performance issues. Once the AWS signature and policy have been built, this method sends the file with the jQuery File Upload plugin, which manages the ajax request for us and reports back on its progress.

Instead of splitting the code up and explaining each bit, like we did in the previous post, below is a complete copy of the code. There’s also a download link and a github repo if you want to see it in action.


Download (zip)     View on Github

The Result:

[Screencast from 03-12-15 20_17_49: files uploading directly to S3 with live progress bars]

Full Code: (single page)

<?php

// TODO Enter your AWS credentials
// Note: these can be set as environment variables (with the same name) or constants.
define('AWS_ACCESS_KEY', '');
define('AWS_SECRET', '');

// TODO Enter your bucket and region details (see details below)
$s3FormDetails = getS3Details('', '');

/**
 * Get all the necessary details to directly upload a private file to S3
 * asynchronously with JavaScript using the Signature V4.
 *
 * @param string $s3Bucket your bucket's name on s3.
 * @param string $region   the bucket's location/region, see here for details: http://amzn.to/1FtPG6r
 * @param string $acl      the visibility/permissions of your file, see details: http://amzn.to/18s9Gv7
 *
 * @return array ['url', 'inputs'] the form's url to S3 and any inputs the form will need.
 */
function getS3Details($s3Bucket, $region, $acl = 'private') {

    // Options and Settings
    $awsKey = (!empty(getenv('AWS_ACCESS_KEY')) ? getenv('AWS_ACCESS_KEY') : AWS_ACCESS_KEY);
    $awsSecret = (!empty(getenv('AWS_SECRET')) ? getenv('AWS_SECRET') : AWS_SECRET);

    $algorithm = "AWS4-HMAC-SHA256";
    $service = "s3";
    $date = gmdate("Ymd\THis\Z");
    $shortDate = gmdate("Ymd");
    $requestType = "aws4_request";
    $expires = "86400"; // 24 Hours
    $successStatus = "201";
    $url = "//{$s3Bucket}.{$service}-{$region}.amazonaws.com";

    // Step 1: Generate the Scope
    $scope = [
        $awsKey,
        $shortDate,
        $region,
        $service,
        $requestType
    ];
    $credentials = implode('/', $scope);

    // Step 2: Making a Base64 Policy
    $policy = [
        'expiration' => gmdate('Y-m-d\TH:i:s\Z', strtotime('+6 hours')),
        'conditions' => [
            ['bucket' => $s3Bucket],
            ['acl' => $acl],
            ['starts-with', '$key', ''],
            ['starts-with', '$Content-Type', ''],
            ['success_action_status' => $successStatus],
            ['x-amz-credential' => $credentials],
            ['x-amz-algorithm' => $algorithm],
            ['x-amz-date' => $date],
            ['x-amz-expires' => $expires],
        ]
    ];
    $base64Policy = base64_encode(json_encode($policy));

    // Step 3: Signing your Request (Making a Signature)
    $dateKey = hash_hmac('sha256', $shortDate, 'AWS4' . $awsSecret, true);
    $dateRegionKey = hash_hmac('sha256', $region, $dateKey, true);
    $dateRegionServiceKey = hash_hmac('sha256', $service, $dateRegionKey, true);
    $signingKey = hash_hmac('sha256', $requestType, $dateRegionServiceKey, true);

    $signature = hash_hmac('sha256', $base64Policy, $signingKey);

    // Step 4: Build form inputs
    // This is the data that will get sent with the form to S3
    $inputs = [
        'Content-Type' => '',
        'acl' => $acl,
        'success_action_status' => $successStatus,
        'policy' => $base64Policy,
        'X-amz-credential' => $credentials,
        'X-amz-algorithm' => $algorithm,
        'X-amz-date' => $date,
        'X-amz-expires' => $expires,
        'X-amz-signature' => $signature
    ];

    return compact('url', 'inputs');
}

?>

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Direct Upload Example</title>
        <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/normalize/3.0.3/normalize.min.css">
        <link rel="stylesheet" href="style.css">
    </head>
    <body>

        <div class="container">
            <h1>Direct Upload</h1>

            <!-- Direct Upload to S3 Form -->
            <form action="<?php echo $s3FormDetails['url']; ?>"
                  method="POST"
                  enctype="multipart/form-data"
                  class="direct-upload">

                <?php foreach ($s3FormDetails['inputs'] as $name => $value) { ?>
                    <input type="hidden" name="<?php echo $name; ?>" value="<?php echo $value; ?>">
                <?php } ?>

                <!-- Key is the file's name on S3 and will be filled in with JS -->
                <input type="hidden" name="key" value="">
                <input type="file" name="file" multiple>

                <!-- Progress Bars to show upload completion percentage -->
                <div class="progress-bar-area"></div>

            </form>

            <!-- This area will be filled with our results (mainly for debugging) -->
            <div>
                <h3>Files</h3>
                <textarea id="uploaded"></textarea>
            </div>

        </div>

        <!-- Start of the JavaScript -->
        <!-- Load jQuery & jQuery UI (Needed for the FileUpload Plugin) -->
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"></script>
        <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js"></script>

        <!-- Load the FileUpload Plugin (more info @ https://github.com/blueimp/jQuery-File-Upload) -->
        <script src="https://cdnjs.cloudflare.com/ajax/libs/blueimp-file-upload/9.5.7/jquery.fileupload.js"></script>

        <script>
            $(document).ready(function () {

                // Assigned to variable for later use.
                var form = $('.direct-upload');
                var filesUploaded = [];

                // Place any uploads within the descending folders
                // so ['test1', 'test2'] would become /test1/test2/filename
                var folders = [];

                form.fileupload({
                    url: form.attr('action'),
                    type: form.attr('method'),
                    dataType: 'xml',
                    add: function (event, data) {

                        // Show a warning message if you're leaving the page during an upload.
                        window.onbeforeunload = function () {
                            return 'You have unsaved changes.';
                        };

                        // Give the file which is being uploaded its current content-type (it doesn't retain it otherwise)
                        // and give it a unique name (so it won't overwrite anything already on S3).
                        var file = data.files[0];
                        var filename = Date.now() + '.' + file.name.split('.').pop();
                        form.find('input[name="Content-Type"]').val(file.type);
                        form.find('input[name="key"]').val((folders.length ? folders.join('/') + '/' : '') + filename);

                        // Actually submit the form to S3.
                        data.submit();

                        // Show the progress bar
                        // Uses the file size as a unique identifier
                        var bar = $('<div class="progress" data-mod="'+file.size+'"><div class="bar"></div></div>');
                        $('.progress-bar-area').append(bar);
                        bar.slideDown('fast');
                    },
                    progress: function (e, data) {
                        // This is what makes everything really cool; thanks to this callback
                        // you can update the progress bar based on the upload progress.
                        var percent = Math.round((data.loaded / data.total) * 100);
                        $('.progress[data-mod="'+data.files[0].size+'"] .bar').css('width', percent + '%').html(percent+'%');
                    },
                    fail: function (e, data) {
                        // Remove the 'unsaved changes' message.
                        window.onbeforeunload = null;
                        $('.progress[data-mod="'+data.files[0].size+'"] .bar').css('width', '100%').addClass('red').html('');
                    },
                    done: function (event, data) {
                        window.onbeforeunload = null;

                        // Upload Complete, show information about the upload in a textarea
                        // from here you can do what you want as the file is on S3
                        // e.g. save reference to your server using another ajax call or log it, etc.
                        var original = data.files[0];
                        var s3Result = data.result.documentElement.children;
                        filesUploaded.push({
                            "original_name": original.name,
                            "s3_name": s3Result[2].innerHTML,
                            "size": original.size,
                            "url": s3Result[0].innerHTML
                        });
                        $('#uploaded').html(JSON.stringify(filesUploaded, null, 2));
                    }
                });
            });
        </script>
    </body>
</html>


Note: If you’re looking for an ASP.NET version then I. Auty has created a port over on GitHub.

99 responses to “Direct Upload to S3 (using AWS Signature v4 & PHP)”

  1. Thanks for updating this post and the code to go with v4. I’m trying to implement this, but wonder if I have the CORS policy wrong for the bucket — assuming I need it the same as your previous post, yes?

      1. Edd, thanks for your reply – I was finally able to get free of other work to check back on this, and it was a simple issue: I’m working in US-East-1, and so needed to drop out the S3_REGION value from your form action. Works like a charm now.

        But I have another small issue I’ll work on this morning — I’m trying to use this code to help admins upload any *massive* supporting files for their website directly to S3, but some of those are images. This code uploads to S3 so that all files are seen as “Content-Type: binary/octet-stream” but it would be good if I can read the actual mime type before upload and then set accordingly — image/gif, image/png, application/pdf, and so on. This may have come up in the comments of the previous article so I’ll search there, but am going to work at that little snippet as I have time today.

        Thanks for a great piece of code here, it’s quick and streamlined. I’m glad not to have to reinvent it.

        1. Neal, I’m able to extract the Content-Type out of the File Upload plugin via “data.originalFiles[0].type” within the “add” event. From there I set the value in a hidden “Content-Type” input to appease AWS. It seems to be working pretty well thus far.
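Ted's approach can be sketched as a small helper (the function name is hypothetical; the binary/octet-stream fallback matches the type S3 reports when no Content-Type is sent, as described above):

```javascript
// Sketch: pick the Content-Type value to put in the hidden input,
// falling back when the browser can't detect a type for the file.
function contentTypeFor(file) {
  return file.type || 'binary/octet-stream';
}
```

In the plugin's `add` callback this would be used as `form.find('input[name="Content-Type"]').val(contentTypeFor(data.files[0]))`.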

          1. Ted, thanks for the comment back. I’d be interested to hammer this out as you describe. Just below the realName var within the add event, I have added var contType = data.originalFiles[0].type; to grab the Content-Type, then down a few lines added form.find('input[name=Content-Type]').val(contType); to throw that value up into a hidden input field named Content-Type. But I’m definitely missing something small — just getting a 403 Forbidden back from S3. Any suggestions?

        2. You’re welcome, Neal!

          Hmm, for the 403 error, did you add a new condition in the policy for $Content-Type? For example, when I added the new Content-Type hidden input to the “direct-upload” form I also added ['starts-with', '$Content-Type', ''] to $policy, per the following from http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html#sigv4-PolicyConditions:

          “Each form field that you specify in a form (except x-amz-signature, file, policy, and field names that have an x-ignore- prefix) must appear in the list of conditions.”

          Hopefully this does the trick for you.

          1. KAPOW! That did the trick, Ted. I was definitely forgetting to make a corresponding element in the policy statement to go with the Content-Type input. Works flawlessly now. I’m particularly glad to bypass all PHP size/post/max limitations with this code, as well as offload massive files off of our Gluster storage nodes.

            Thanks so much, I appreciate you taking the time.

  2. Edd, thanks so much for this post! With your help I was able to unravel a few mysteries and adapt this approach to a Java environment.

    1. Thanks Ted, glad to hear you got it working. I think code snippets like this should be in the AWS documentation, but instead you have to piece it together from like 4 different places.

  3. Thanks so much for an awesome post! Worked like a charm! Could you please tell me how to upload multiple files in a single form?

    Also I need the form to be submitted by a button instead of auto upload. Could you please help me with this?

    Thanks!

    1. Hi Raja, good idea. I’ll have a look at multiple file upload when I can – I’m sure it’s possible. Uploading with a button would be a case of using a click listener instead of the form submit.

  4. In this code you are generating date strings formatted to say they are UTC (GMT) dates but you are using date(), which generates local datetime strings. All date() functions need to be replaced with gmdate() or the policy expiry will not make sense.

  5. Hi Guys
    Firstly, thanks Edd for the script. I am very new to AWS and would like to take advantage of their S3 services for storing files. I am trying to follow their latest technology and requirements when using their SDK, but I am all confused.
    My main concern for now is to upload a few files at a time, and I'm not sure whether I must use this “Signature Version 4” or not when using PHP.
    I hope I made sense.
    Appreciate any reply.

    1. Hi Ben! First of all, is direct upload the best solution to your problem (or would you be happy for the file to hit your server first)? If you’re happy for the file to go to your server first there are some good AWS PHP libraries which do much of the heavy work for you.

      It’s possible to upload multiple files (but I haven’t posted an example yet), just by replicating the form and tinkering with the JS.

      The signature version 4 is AWS’s way of verifying you (it’s the PHP at the top of the script). Version 4 is coming in to replace version 2, and it depends which S3 location (US/Ireland/Germany) you’re using as to whether you have to use v4 or can still use v2.

  6. Hi Edd
    Thanks for the reply. Yes, I’d like to directly upload to AWS S3, as there are other problems such as timeouts and file size issues on virtual servers, for example. I will be using Sydney for the location, but would think they will eventually change to Signature v4 as well.

    I have also used some third-party S3 Classes, but would think if I create something for a client it would be best to stick to the main provider preferences than continually updating their script.

    As for multiple upload (not multipart upload), I will try something along the lines of jQuery, but I'm not really sure how AWS would react to that.

    But before I go, where would you recommend as a good place to get more examples and resources for this product?

    Thank you kindly for looking into this post.

    1. In this example we’ve used the jQuery File Upload plugin, but another one off the top of my head is dropzone (http://www.dropzonejs.com/) which I know supports multiple files quite well (and I’ve used it with direct upload). As for examples of multiple file uploads with direct upload, I don’t know of any, I’ll have to write one 😉 – let me know how it goes.

  7. Thanks for a very useful article. With this one, I can put a file in the bucket. But how can I put a file into a folder in the bucket?
    (bucket > folder_one > text.txt).
    I tried “//s3-eu-west-1.amazonaws.com/bucket_name/folder_one”, but it does not work.
    So please reply to me.
    Regards

    1. Afternoon Kin, instead of changing the URL, I think you’re going to have to change the input[name="key"] – like we do with Content-Type. The key is the filename but adding a folder name and a forward slash to it should put it in the folder. Let us know if that works 🙂
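The key-building step Edd describes can be sketched as a pure function (mirroring the `folders` array used in the main script; the function name is illustrative):

```javascript
// Sketch: build the S3 object key from a list of folder names plus the
// filename; an empty folder list leaves the file at the bucket root.
function buildKey(folders, filename) {
  return (folders.length ? folders.join('/') + '/' : '') + filename;
}
```

S3 has no real directories; the slashes in the key are what the console displays as folders.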

  8. Can you post a demo of this? This isn’t working at all for me. I copied everything as is, and added my correct credentials to the provided constants.

  9. I'm getting the following error while uploading the media file: “a non-empty Access Key (AKID) must be provided in the credential.” Actually this server has an IAM Role attached, so it doesn't have an access key and secret key. Please help me to upload directly to S3 from this server.

  10. How can I directly upload to an S3 bucket from an instance with an IAM role attached? Without credentials, how do we do the direct upload to the S3 bucket?

  11. The EC2 instance was launched with an IAM Role attached, so we didn't get any access key and secret. If it's possible, how can I get them? Please help me to fix this.

  12. Hi , Edd

    Thanks for putting so much effort into this and un-complicating a lot of stuff. I would like to ask one question regarding the flow of the application…

    1. The form uploading script has the url as — form.attr(‘action’) — which in turn is the s3 url,
    right?

    So when that form is submitted, who will fill in the hidden fields? Aren't those fields meant to be replaced by what your php script sends?

    I am a bit confused here…

    1. Hi Rahul, yep, the php should be filling in all the hidden inputs. The form should upload a file even if no JS is loaded (it just won't work async).

      Some, like Content-Type and key, don't get replaced until the form is submitted with JS, however.

  13. Hi Edd,
    This is very detailed tutorial. Thank you for that.
    I just have a small question.
    How can I receive fully qualified URL for the file I uploaded ?
    Currently I see it only returns the file name and not full url to file.
    Thanks,

    1. Hi Khushi, you know the url because you’ve specified it within the form’s action – and you know the filename (or key), so the two together should give you the full url. Thanks for the comment 🙂
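Putting those two pieces together can be sketched like this (a hypothetical helper; it percent-encodes the key while keeping the slashes as path separators):

```javascript
// Sketch: join the form's action URL and the object key to get the full
// file URL. encodeURIComponent also encodes "/", so we restore those.
function fileUrl(formAction, key) {
  return formAction.replace(/\/+$/, '') + '/' +
         encodeURIComponent(key).replace(/%2F/g, '/');
}
```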

      1. Hi Edd,
        Thank you for the response.

        What you mentioned works sometimes.
        However, S3 changes the spaces in the file name to a “+” symbol. I have not tried every other possible symbol in file names.

        Hence, it is always better to read the tag from the response xml.

        Can you please advise on how I can get it?
        Thanks,

        1. Hello again Khushi, I actually updated the script the other week – allowing for multiple files (hopefully). Anyway, long story short, the new version should indeed read the name of the file from the response xml. Let me know if I misunderstood, enjoy 🙂

  14. Hello, I am trying this script and it's not working anymore. I am getting a 403 error. I think the url structure has been changed; mybucket.s3-us-west-2.amazonaws.com is not working anymore.

  15. Hi Edd,

    Thank you for the tutorial, but I get the following error: Uncaught TypeError: $.widget is not a function

    It is situated in the jquery.fileupload.js file. I googled a bit, but looks like the jquery ui has been included, so I don’t know why this doesn’t work…

    1. Hi, are you loading it over https? Also, are you running this through a web server? It sounds like either jQuery UI or jQuery itself hasn’t been loaded (hence the function wouldn’t exist).

      1. Looks like I included the jquery fileupload twice in different places in my project. That solved it!

        Got it all working now. Wonderful!

  16. Perhaps another question.

    I added some custom fields to the form and added them in the policy too like this: ['starts-with', '$x-amz-meta-spot-name', ''], but as soon as I try to upload a file, I get the following error in my console:

    net::ERR_CONNECTION_RESET
    

    Any guesses why this happens? I only added the extra fields in the form…

  17. Hi Edd, thanks for this awesome script and tutorial but for some reason I can’t get it to work.
    I’m getting this error: net::ERR_NAME_NOT_RESOLVED
    Any idea why this happens?

      1. One more question Edd. I want to limit the number of files to be uploaded to 1. Right now, even if I remove ‘multiple’ from the file input, it will upload all selected files; and even if I select a file and then go to choose a file again to replace it with a different one, it will upload both of them.

  18. Was anybody able to run it in Chrome? It works fine in Firefox (after the CORS configuration was edited), but still does not work in Chrome. I'm using Ubuntu so I have no idea what's going on in IE.
    I'm getting x-amz-id-2 and x-amz-request-id headers in the OPTIONS request, while in Firefox there are some additional ones, e.g. Content-Length, Vary, Server etc.

  19. I had the same error within Chrome, after some changes it started working with this policy:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <MaxAgeSeconds>3000</MaxAgeSeconds>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>
    
  20. Hi Edd, thanks! It works great even locally, but I've got a problem since my site works over https. Is it possible to fix this (not sure it is)?
    I'm getting a security error in Firefox:

    img.k.ua.s3.eu-central-1.amazonaws.com uses an invalid security certificate.
    
    The certificate is only valid for the following names:
      *.s3.eu-central-1.amazonaws.com, *.s3-eu-central-1.amazonaws.com, s3-eu-central-1.amazonaws.com, s3.eu-central-1.amazonaws.com  
    
    (Error code: ssl_error_bad_cert_domain) 
    

    in Chrome it's something like net::ERR_INSECURE_RESPONSE

    1. Solved this, may be useful:
      for sites with https the bucket name should not contain “.” (dot), otherwise there'll be the error described above.
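A trivial check for this constraint (illustrative only; the root cause is that the wildcard TLS certificate `*.s3.eu-central-1.amazonaws.com` only covers a single subdomain label):

```javascript
// Sketch: bucket names containing dots add extra subdomain labels to the
// virtual-hosted S3 URL, which a one-level wildcard certificate can't
// cover over https.
function bucketSafeForHttps(bucket) {
  return bucket.indexOf('.') === -1;
}
```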

  21. This is a great tutorial and I’m working through it to adapt it for my form that has other fields. Essentially I’m having the file field upload the file and then put the S3 key in a text field that is submitted with the other data.

    I’m having a problem with the file upload, getting the error: "Invalid Policy: Invalid 'conditions' value: must be a List."

    I’m running a PHP 5.3 server, so I’ve had to change all the array brackets to parentheses like this:

    // Step 2: Making a Base64 Policy
    $policy = array(
    	'expiration' => gmdate('Y-m-d\TG:i:s\Z', strtotime('+6 hours')),
    	'conditions' => array(
    		'bucket' => $s3Bucket,
    		'acl' => $acl,
    		array('starts-with', '$key', ''),
    		array('starts-with', '$Content-Type', ''),
    		'success_action_status' => $successStatus,
    		'x-amz-credential' => $credentials,
    		'x-amz-algorithm' => $algorithm,
    		'x-amz-date' => $date,
    		'x-amz-expires' => $expires,
    	)
    );
    

    (Sorry I don’t know the markup to make that look fancy.) Any ideas what could be throwing this error? From what I can tell I’m sending the exact same array of data.

    1. OK wait, I think I’m onto something. Since my file field is inside my main form (it’s not in its own form), it’s uploading all the other fields along with the file itself. Ted mentioned above that I need a condition for every field in the form. This sounds like a maintenance headache. Is there a way to tell it to ignore “any other fields?”

    2. I tried putting this in a form all by itself and it still returns the “Invalid Policy: Invalid ‘conditions’ value: must be a List.” error. The funny thing is that if you Google this error, you get a SINGLE result, which is a git diff entry for a git project.

    3. In the end I’ve upgraded my server to PHP 5.5, and used the new array syntax (square brackets [ ]) and it seems to be working now.

      I think the problem was that not every “element” in the array I was passing was in its own array. In other words, it should’ve looked like this:

      // Step 2: Making a Base64 Policy
      $policy = array(
       'expiration' => gmdate('Y-m-d\TG:i:s\Z', strtotime('+6 hours')),
       'conditions' => array(
         array('bucket' => $s3Bucket),
         array('acl' => $acl),
         array('starts-with', '$key', ''),
         array('starts-with', '$Content-Type', ''),
         array('success_action_status' => $successStatus),
         array('x-amz-credential' => $credentials),
         array('x-amz-algorithm' => $algorithm),
         array('x-amz-date' => $date),
         array('x-amz-expires' => $expires),
        )
      );
      
    4. Hi Josh, pleased to hear it seems to be working now. I’m not too sure on the min version for this code, but you might have just found that out for me 🙂

      ps. I’ll make the code look ‘fancy’ now 😉

  22. Hi, thanks for the work buddy!

    Do you know how to predefine the filename and don’t allow the user change it?

    Thanks again!

    1. No worries Salvador, if you set the key input to be anything other than $(unknown) and remove all JavaScript attempts to change it, then whatever the key value is will be the filename. Good luck

  23. Thanks Edd, I am so happy to be able to use your example to upload to my s3.

    But before that I had this error: “400 – Bad Request”. This was caused by having two ‘name=”key”‘ inputs in the html.

    It seems your function getFormInputsAsHtml() already generates the ‘name=”key”‘ input.

    To solve the error, I removed the extra ‘name=”key”‘.

    And it works.

    1. No worries, pleased it works. Didn’t know whether to include the key input in that function but decided to, with the option of not having it if you pass in false as a parameter. Thanks for the feedback 🙂

  24. Hi Edd,

    This code has been great, I’m using it in an internal site for uploading a podcast. I can upload the mp3 audio file just fine since the user selects it from a local source, the issue I’m hoping to work through is uploading the XML file that is appended and saved server side. Any ideas on how to go about this, is it even possible with this project since the file is server side? Thanks for any thoughts!

    1. Hi Zack, so if I’ve got this right, your mp3 is on S3 and your xml file is on your server – but you want your xml file also on s3? Sorry if I’m not quite understanding. Feel free to drop me an email edd at designedbyaturtle.co.uk – happy to help.

  25. Hi Edd.

    I read your blog post and have actually written a wrapper around the signature generating wrapper. The result in term of the policy, signature and the hidden inputs are the same.

    The issue I’m having is that I’m always getting the “Access Denied” error. I’ve set my CORS to:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <MaxAgeSeconds>3000</MaxAgeSeconds>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>

    And set the bucket policy to:

    {
    	"Id": "Policy1477655077323",
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    			"Sid": "Stmt1477655076022",
    			"Action": [
    				"s3:PutObject"
    			],
    			"Effect": "Allow",
    			"Resource": "arn:aws:s3:::bucket-name/subfolder/*",
    			"Principal": "*"
    		}
    	]
    }
    

    What am I missing? I can’t seem to figure this out :/

    1. Okay, it seems like using the triple backtick (`) doesn't make it look like code. Sorry for that. Can't find the edit button either.

  26. Hi Edd!

    Just wanted to let you know I’ve used your great post to create a composer package for generating signatures with a little bit more functionality and slightly more modular.
    I’ve also created a bridge for Laravel users. Of course you get a huge thumbs up for this!

    https://github.com/kfirba/Directo

  27. Just wanted to let you know Edd, you saved my day by making this example. I was trying the AWS documentation, which is not really that useful for an example like this one, and I was going crazy adding strange things to the key in the policy etc., when it was just as easy as adding it on the form input and voila!

    1. Hi Nick, as far as I know this isn't quite possible ‘prior’ to upload – the best way would probably be to use an AWS Lambda function to process any uploaded images when they're added to your s3 bucket. Alternatively you can use a server-side call to resize the images.

  28. Great tutorial. Thank you!
    All works great, except in IE/Edge I get “Unable to get property ‘2’ of undefined or null reference” on this line:

    filesUploaded.push({
        "original_name": original.name,
        "s3_name": s3Result[2].innerHTML,
        "size": original.size,
        "url": s3Result[0].innerHTML
    });
    1. Apparently s3Result is undefined in IE. I think it may be related to:
      var single_s3Result = data.result.documentElement.children;

  29. Hi Edd, this is a great tutorial and I'm working through it. I am having a problem: when I try to upload any file it goes to “fail: function (e, data) {}” in the JavaScript and gives me an error, so if you could help me solve this problem it would be great.
    Thanks for your good work.
    Thanks for your good work.

    1. Hi Chirag, is the error you’re having in the javascript, or is that what you’re getting back from aws? And when do you get this error?

        1. Hi Edd, thanks for your reply. Actually, when the uploading process reaches 100%, it then shows me a jquery error. You can check over this link “http://www.goindiemusic.com/newsite/upload_ajax2.php” – please open the console and select multiple images and it shows an error.

        1. Hi again, this page is failing because the html form's action=”https://goindiemusic.s3.amazonaws.com” – which implies the us-east-1 AWS region, but a different region is being specified. The response from S3 is: the region ‘us-west-2’ is wrong; expecting ‘us-east-1’. Does this help?

  30. My Bucket policy editor is:

    {
    	"Id": "Policy1477655077323",
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    			"Sid": "Stmt1477655076022",
    			"Action": "s3:*",
    			"Effect": "Allow",
    			"Resource": "arn:aws:s3:::bucket1/*",
    			"Principal": "*"
    		}
    	]
    }

    CORS Configuration is:

    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <MaxAgeSeconds>3000</MaxAgeSeconds>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>

    1. Hi Michele (thanks for the email prompt) – if I understand right you want to upload multiple files? But you can’t? (there were a few words missing from the above comment)

  31. I want to upload the same file into two “folders”: into a bucket on Amazon S3 and into a directory on the server (server\php\).
    In your form we have , and it is correct for Amazon S3.
    But in jQuery File Upload I have , and so I don't upload to Amazon S3.
    I must change “file” to “files[]”.
    Thank you for your answer.

  32. Hi Edd, great script, thank you for sharing it. I'm using your script to upload files to s3 and it works great, but now we need to upload large files over 1GB, which is impossible in one post, so I want to upload in multiple parts. Does this script support multipart upload? If yes, how? I've been trying for a long time, but with no result. Can you help, or give me some suggestions? Thank you very much.

      1. Oh, surprised you replied so quickly. Thank you for your answer, I'll wait for your news. I will continue to work on this.

  33. Great tutorial! After a couple of hiccups I got it working, but the only problem I am having now is that the uploaded files are not viewable to the public. How do I enable this?

    1. Good work John! You can make files public by using a public ACL within the signature. So when calling this function the acl param needs to be ‘public-read’ I believe.
