
The AmazonS3 module allows the local file system to be replaced with S3. Uploads are saved into the Drupal file table using D7's file/stream wrapper system.

You can also use it with other S3 compatible cloud storage services such as Google Cloud Storage.

You can switch it on as the default file system scheme, or individually for file and image fields.



Configuration

Most module configuration is handled at admin/config/media/amazons3. At a minimum, S3 credentials and a default bucket need to be configured. It's best to set these in $conf variables in settings.php.

To use signed CloudFront URLs, the CloudFront private key and ID are needed. The private key is a .pem file, and should be stored outside of your document root. Set $conf['amazons3_cloudfront_private_key'] to the path of the private key and $conf['amazons3_cloudfront_keypair_id'] to the key ID in settings.php to enable this feature.
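As a sketch, the CloudFront settings above (whose variable names come from this documentation) might sit in settings.php alongside the credentials; note that the credential and bucket variable names below are assumptions and may differ in your module version:

```php
<?php
// settings.php — hedged sketch. The CloudFront variable names are as
// documented above; the credential/bucket names are assumptions, so check
// the module's settings form for the exact keys.
$conf['amazons3_key']    = 'AKIA...';            // AWS access key ID (assumed name)
$conf['amazons3_secret'] = 'your-secret-key';    // AWS secret key (assumed name)
$conf['amazons3_bucket'] = 'my-default-bucket';  // Default S3 bucket (assumed name)

// Signed CloudFront URLs. Keep the .pem file outside the document root.
$conf['amazons3_cloudfront_private_key'] = '/var/www/private/pk-cloudfront.pem';
$conf['amazons3_cloudfront_keypair_id'] = 'APKA...';
```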


Installation

  • Review the patch notes below. Nearly all sites will need the very first patch against Drupal's image module.
  • Download and install Composer Manager and AmazonS3 Drupal modules.
  • Enable the AmazonS3 module. It's easiest to use Drush, which will automatically download the AWS SDK.
  • Configure the AmazonS3 credentials and other settings at admin/config/media/amazons3.
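The install steps above can be sketched with Drush (a hedged recipe, not tested against every Drush version; composer_manager arranges the Composer run that fetches the AWS SDK when the module is enabled):

```shell
# Download the two modules (adjust versions to your site).
drush dl composer_manager amazons3

# Enable them; with composer_manager present, enabling AmazonS3 triggers a
# Composer run that downloads the AWS SDK for PHP.
drush en -y composer_manager amazons3
```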

Required Patches

Because S3 is not a file system, it has some requirements that core and contributed modules have not fully anticipated. These patches (except for the ImageMagick patch) add simple alter hooks, so that the AmazonS3 module can adjust file URIs as needed.

Patches Required to Support Image Styles

Without these patches, image styles cannot be generated for files hosted on S3:

Patches Required to Support Multiple Buckets

This module supports per-field buckets, allowing you to use a single AWS account with several different S3 destination buckets. If you use any of the following modules, patches are needed to support this bucket-to-field mapping. Without these patches, only a single site-wide bucket will work properly:


Usage

  • Change individual fields to upload to S3 in the field settings.
  • Use AmazonS3 instead of the public file system (although there are a few issues because core hardcodes the use of public:// in a few places, e.g. aggregated CSS and JS). Go to /admin/config/media/file-system and set the default download method to Amazon.
  • When using Features to export field definitions, the Upload destination is included. If you want to override this (for example, in a multi-environment workflow), use the 'amazons3_file_uri_scheme_override' variable. See amazons3_field_default_field_bases_alter() for documentation.
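For instance, the per-environment override might look like this in settings.php. The variable name comes from the text above, but the value format (a stream wrapper scheme) is an assumption; see amazons3_field_default_field_bases_alter() for the authoritative documentation:

```php
<?php
// settings.php — hedged sketch. Forces Features-exported file fields to use
// the public file system on this environment instead of S3. The value being
// a scheme name ('public', 's3') is an assumption.
$conf['amazons3_file_uri_scheme_override'] = 'public';
```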

CORS Upload

See (note this still needs to be updated to work with the 7.x-2.0 version of this module)


API

You can modify the generated URL and its properties. This is very useful for setting Cache-Control and Expires headers (as long as you aren't using CloudFront).

You can also alter the metadata for each object saved to S3 with hook_amazons3_save_headers(). This is very useful for setting the Content-Disposition header to force file downloads when they're delivered through CloudFront presigned URLs.
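A minimal sketch of such a hook implementation. The hook name comes from the text above, but the parameter list and header keys here are assumptions; consult amazons3.api.php for the real signature:

```php
<?php
/**
 * Implements hook_amazons3_save_headers().
 *
 * Sketch only: the $local_path parameter and the returned header array are
 * assumed shapes — check amazons3.api.php for the actual signature.
 */
function mymodule_amazons3_save_headers($local_path, array $headers) {
  // Force a download dialog for zip archives delivered via CloudFront.
  if (substr($local_path, -4) === '.zip') {
    $headers['Content-Disposition'] = 'attachment; filename="' . basename($local_path) . '"';
  }
  return $headers;
}
```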

See amazons3.api.php

Running PHPUnit tests

The included unit tests do not have any dependency on a Drupal installation. By using PHPUnit, you can integrate test results into your IDE of choice.

  • Run composer install in the module directory.
  • Run vendor/bin/phpunit tests.

In PHPStorm, it's easiest to configure PHPUnit to use the autoloader generated in vendor/autoload.php. It's also good to mark the vendor directory as excluded, if you already have a vendor directory indexed from composer_manager.
