Only a few weeks back, AWS CloudFront announced support for signed cookies to secure access to private content.

Getting signed cookies to work with CloudFront in my particular test scenario made me trip over a couple of pitfalls, caused partly by my lack of CloudFront knowledge and partly by documentation that wasn’t helping.

The highlight: Do not use PHP setcookie

PHP’s setcookie() URL-encodes the cookie value, and that breaks the signature: the signing process already produces a URL-safe encoding, so any further encoding corrupts it. The code example below uses header() for that reason.
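The encoding detail is easy to demonstrate. CloudFront expects its own URL-safe base64 variant (‘+’ → ‘-’, ‘=’ → ‘_’, ‘/’ → ‘~’), and the cookie value must reach CloudFront verbatim; PHP’s setcookie() runs the value through urlencode(), which percent-encodes ‘~’ as ‘%7E’, so the signature no longer matches. A minimal sketch of the encoding, in Python for illustration:

```python
import base64

def cf_safe_b64encode(data: bytes) -> str:
    """CloudFront's URL-safe base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'."""
    encoded = base64.b64encode(data).decode("ascii")
    return encoded.translate(str.maketrans("+=/", "-_~"))

def cf_safe_b64decode(text: str) -> bytes:
    """Reverse mapping, e.g. to inspect a CloudFront cookie value."""
    return base64.b64decode(text.translate(str.maketrans("-_~", "+=/")))

# bytes chosen so the standard base64 output contains '+' and '/'
print(cf_safe_b64encode(b"\xfb\xff\xfe"))  # -> -~~-
```

The ‘~’ characters in the output are exactly what a URL-encoding cookie API would mangle.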

Other more obvious things:

  • Using a CNAME for your CDN makes everything easier and cleaner.
  • Really read the documentation.
  • CloudFront uses its own private keys. CloudFront key pairs are created per AWS account, via your root account.
  • When CloudFront asks for the account ID of the signer while you set up a distribution’s ‘Behavior’, it refers to that root account ID. You cannot add that account ID as a signer on your own distribution’s ‘Behavior’; instead CloudFront uses “Self” and ignores the account ID. This is fine.
  • HTTPS/HTTP re-directions can break everything.

The simple case would be that you use a CNAME for your CloudFront distribution, which means that your website and your CloudFront distribution can share cookies.

In my case I have not yet set up the CNAME nor obtained the SSL certificate for it. The browser would reject any ‘’ cookies that do not come from a ‘’ domain. Therefore I use a two-step approach to be able to set a cookie.

  • The CDN distribution is set up with a ‘Behavior’ that lets CloudFrontSignedCookieHelper pass through to , with all its headers, cookies and URL parameters, and never caches it.
  • All other requests for  are handled via a default ‘Behavior’ which only allows access with a signed cookie (or URL). In my case that means all requests are passed on to S3 and cached forever. S3 itself is set up to only allow access from

When a user loads a gallery, the browser loads gallery.xml directly from , which forces authentication.

  • The returned gallery html includes a JS file reference to //
  • CloudFrontSignedCookieHelper is loaded through .
  • never caches that request and requests it from .
  • CloudFrontSignedCookieHelper on checks if the user is authenticated and then creates the CDN signature cookies with a domain of .
  • These CDN signature cookies are passed through the CDN to the user’s browser.
  • The user’s browser accepts these CDN signature cookies for .
  • For all future requests to  (in that browser session) the browser will send these cookies on to .
  • The rest of the gallery html will trigger image requests to .  will allow these requests because the signed cookie allows access.
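Concretely, the response from the helper carries three Set-Cookie headers roughly like the following (the hostname cdn.example.com and the cookie values are made-up placeholders, not output from a real distribution):

```
Set-Cookie: CloudFront-Policy=eyJTdGF0ZW1lbnQi...; path=/; domain=cdn.example.com; secure; httpOnly
Set-Cookie: CloudFront-Signature=dGhpcy1pcy1ub3QtcmVhbA...; path=/; domain=cdn.example.com; secure; httpOnly
Set-Cookie: CloudFront-Key-Pair-Id=APKAEXAMPLE; path=/; domain=cdn.example.com; secure; httpOnly
```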

The following CloudFrontSignedCookieHelper.php code would also work for the more appropriate CNAME scenario as part of the first authentication.

Apart from isAuthorized() this is a working example. You can handle authentication in many different ways, so I won’t elaborate on that part.

&lt;?php
// site.secrets.php sets CLOUDFRONT_KEY_PAIR_ID and CLOUDFRONT_KEY_PATH as well as CDN_HOST
// e.g.
// define('CLOUDFRONT_KEY_PATH', '/etc/secrets/pk.APSOMEOROTHERA.pem');
// define('CDN_HOST', '');
require_once ('/etc/secrets/site.secrets.php');

class CloudFrontSignedCookieHelper {

   public static function rsa_sha1_sign($policy, $private_key_filename) {
      // CloudFront signed cookies use an RSA-SHA1 signature over the policy
      $signature = "";
      openssl_sign ( $policy, $signature, file_get_contents ( $private_key_filename ) );
      return $signature;
   }

   public static function url_safe_base64_encode($value) {
      // CloudFront's URL-safe base64 variant: '+' => '-', '=' => '_', '/' => '~'
      $encoded = base64_encode ( $value );
      return str_replace ( array ('+','=','/'), array ('-','_','~'), $encoded );
   }

   public static function getSignedPolicy($private_key_filename, $policy) {
      $signature = CloudFrontSignedCookieHelper::rsa_sha1_sign ( $policy, $private_key_filename );
      $encoded_signature = CloudFrontSignedCookieHelper::url_safe_base64_encode ( $signature );
      return $encoded_signature;
   }

   public static function getNowPlus1DayInUTC() {
      // policy expiry: one day from now, as a Unix epoch timestamp
      $dt = new DateTime ( 'now', new DateTimeZone ( 'UTC' ) );
      $dt->add ( new DateInterval ( 'P1D' ) );
      return $dt->format ( 'U' );
   }

   public static function setCookie($name, $val, $domain) {
      // using our own implementation because PHP's setcookie() URL-encodes
      // the value and then AWS CloudFront fails to verify the signature
      header ( "Set-Cookie: $name=$val; path=/; domain=$domain; secure; httpOnly", false );
   }

   public static function setCloudFrontCookies() {
      $cloudFrontHost = CDN_HOST;
      $cloudFrontCookieExpiry = CloudFrontSignedCookieHelper::getNowPlus1DayInUTC ();
      // custom policy: allow everything under this host until the expiry time
      $customPolicy = '{"Statement":[{"Resource":"https://' . $cloudFrontHost .
            '/*","Condition":{"DateLessThan":{"AWS:EpochTime":' . $cloudFrontCookieExpiry . '}}}]}';
      $encodedCustomPolicy = CloudFrontSignedCookieHelper::url_safe_base64_encode ( $customPolicy );
      $customPolicySignature = CloudFrontSignedCookieHelper::getSignedPolicy ( CLOUDFRONT_KEY_PATH,
            $customPolicy );
      CloudFrontSignedCookieHelper::setCookie ( "CloudFront-Policy", $encodedCustomPolicy, $cloudFrontHost);
      CloudFrontSignedCookieHelper::setCookie ( "CloudFront-Signature", $customPolicySignature, $cloudFrontHost);
      CloudFrontSignedCookieHelper::setCookie ( "CloudFront-Key-Pair-Id", CLOUDFRONT_KEY_PAIR_ID, $cloudFrontHost);
   }
}

if (isAuthorized()) {
   CloudFrontSignedCookieHelper::setCloudFrontCookies ();
}
// this file is requested as a script, so its body is a single line of JavaScript
echo 'var cloudFrontCookieSet=true;';
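To sanity-check what the helper produces, the CloudFront-Policy cookie value can be decoded back into its JSON policy by reversing the ‘-’, ‘_’, ‘~’ substitutions. A quick inspection sketch, in Python for illustration; the policy string and hostname here are assumed examples, not output from a real distribution:

```python
import base64
import json
import time

def decode_cf_policy(cookie_value: str) -> dict:
    """Reverse CloudFront's URL-safe base64 and parse the policy JSON."""
    raw = base64.b64decode(cookie_value.translate(str.maketrans("-_~", "+=/")))
    return json.loads(raw)

# assumed example policy, mirroring what setCloudFrontCookies() builds
policy = ('{"Statement":[{"Resource":"https://cdn.example.com/*",'
          '"Condition":{"DateLessThan":{"AWS:EpochTime":1500000000}}}]}')
cookie = base64.b64encode(policy.encode()).decode().translate(str.maketrans("+=/", "-_~"))

decoded = decode_cf_policy(cookie)
expiry = decoded["Statement"][0]["Condition"]["DateLessThan"]["AWS:EpochTime"]
print(decoded["Statement"][0]["Resource"])  # https://cdn.example.com/*
print("expired" if expiry < time.time() else "still valid")
```

This kind of round-trip check is handy when debugging a 403 from the CDN: it tells you immediately whether the resource pattern or the expiry in the cookie is the culprit.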

During a few short performance tests I used CloudWatch for the first time.

I don’t like it.

I was looking around and checked other metrics tools but that didn’t help because I don’t like them either.

So it’s not really a CloudWatch problem. It’s me.

I believe I want both the raw data as well as fast aggregated data. I would like to be able to get CPU usage for the last thirty minutes at ten-second resolution, and I would like to get all the information about all individual calls to a certain service in the last 24 months. And I don’t want to use CloudWatch for the former and logstash or similar for the latter; I want to have both in one interface. (Though I also see an additional need for logstash, but that is another story.)

I am wondering if I am wrong about my expectations or if my understanding of the tools that are around is wrong.

OpenTSDB seems to be up my alley in terms of providing both, but supposedly does not perform well in some scenarios. With Netflix Atlas, I don’t like that it does not support other backends, and I’m also not sure how flexible the set-up is, but I believe that if it were possible to connect Atlas to OpenTSDB it would be what I am after. With Graphite I don’t like the storage mechanism, since it doesn’t cover my long-term storage “needs”. With InfluxDB I am not happy about the lack of clustering (there is an experimental version), but I also find it does too much, even though I think I would like its query interface.

There are several paid options, but my understanding of which metrics I will want a year from now is too limited to decide on one of them today.

Well, there is no way around it, I need to try things out.