While watching the first season of True Detective, I decided to check out how to patch YAML configuration files as part of Packer AMI builds.

Say you want to install some package that uses YAML for its configuration files, and in a thousand lines of YAML configuration there are a few settings you want to change.

Using patch is a pain: whenever you get a new version of these files, the patch gets rejected. Using something like sed is painful too, since you have to refer to some key that might be five lines up and three layers deep.

Say you want to change authentication_options.other_schemes.internal to secluded and authentication_options.transitional_mode to enabled in a file like this:

authentication_options:
    enabled: false
    default_scheme: kerberos
    other_schemes:
        - internal
    scheme_permissions: true
    allow_digest_with_kerberos: true
    plain_text_without_ssl: warn
    transitional_mode: disabled

Using patchYamlConfig.py you can supply a patch looking like this:

authentication_options:
    other_schemes:
        - secluded
    transitional_mode: enabled

Apply the patch with

 patchYamlConfig originalFile patchFile

and you're done.
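patchYamlConfig.py itself is not shown here, but the core of such a tool is just a recursive merge of the patch document into the original. Below is a minimal sketch of that idea in Python with PyYAML; it assumes that values in the patch (including lists) simply replace their counterparts in the original, which the real script may handle differently.

#!/usr/bin/env python
# Rough sketch of the idea behind patchYamlConfig.py (not the actual script):
# load both YAML files, recursively merge the patch into the original,
# and write the merged result back to the original file.
import sys

import yaml  # PyYAML


def merge(original, patch):
    # Overlay the patch onto the original; scalars and lists from the patch
    # replace whatever the original had under the same key.
    if isinstance(original, dict) and isinstance(patch, dict):
        merged = dict(original)
        for key, value in patch.items():
            merged[key] = merge(original.get(key), value) if key in original else value
        return merged
    return patch


if __name__ == '__main__':
    original_file, patch_file = sys.argv[1], sys.argv[2]
    with open(original_file) as f:
        original = yaml.safe_load(f)
    with open(patch_file) as f:
        patch = yaml.safe_load(f)
    with open(original_file, 'w') as f:
        yaml.safe_dump(merge(original, patch), f, default_flow_style=False)

Applied to the example above, this only overwrites other_schemes and transitional_mode and leaves enabled, default_scheme and the remaining keys untouched.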

Only a few weeks back AWS announced signed cookies for CloudFront to secure access to private content.

Getting signed cookies to work with CloudFront in my particular test scenario made me trip over a couple of pitfalls that were partly caused by my lack of CloudFront knowledge and partly because the documentation wasn’t helping.

The highlight: Do not use PHP’s setcookie()

PHP’s setcookie() URL-encodes the cookie value and that breaks the signature; the signing process already URL-safe encodes the signature anyway. The code example below uses header() for that reason.

Other more obvious things:

  • Using a CNAME for your CDN makes everything easier and cleaner.
  • Really read the documentation.
  • CloudFront uses its own key pairs. These CloudFront key pairs are created per AWS account via your root account.
  • When CloudFront asks for the account ID of the signer while you set up a distribution’s ‘Behavior’, it refers to that root account ID. You cannot add that account ID as a signer to a ‘Behavior’ of your own distribution; instead it will use “Self” and ignore the account ID. This is fine.
  • HTTPS/HTTP re-directions can break everything.

The simple case would be that you use a CNAME for your CloudFront distribution, which means that your website and your CloudFront distribution can share cookies.

In my case I have not yet set up the CNAME nor obtained the SSL certificate for it. The browser would reject any ‘cloudfront.net’ cookies that do not come from a ‘cloudfront.net’ domain. Therefore I have a two-step approach to be able to set a .cloudfront.net cookie.

  • The CDN distribution dsmxmpl.cloudfront.net is set up with a ‘Behavior’ that lets CloudFrontSignedCookieHelper pass through with all its headers and cookies and URL parameters to www.example.com and never caches it.
  • All other requests for dsmxmpl.cloudfront.net are handled via a default ‘Behavior’ which only allows access with a signed cookie (or URL). In my case that means all requests are passed on to S3 and cached forever. S3 itself is set up to only allow access from dsmxmpl.cloudfront.net.

When a user loads a gallery, the browser loads gallery.xml directly from example.com; this forces authentication.

  • The returned gallery html includes a JS file reference to //dsmxmpl.cloudfront.net/CloudFrontSignedCookieHelper.php.
  • CloudFrontSignedCookieHelper is loaded through dsmxmpl.cloudfront.net.
  • dsmxmpl.cloudfront.net never caches that request and requests it from www.example.com.
  • CloudFrontSignedCookieHelper on www.example.com checks if the user is authenticated and then creates the CDN signature cookies with a domain of dsmxmpl.cloudfront.net.
  • These CDN signature cookies are passed through the CDN to the user’s browser.
  • The user’s browser accepts these CDN signature cookies for dsmxmpl.cloudfront.net.
  • For all future requests to dsmxmpl.cloudfront.net (in that browser session) the browser will send these cookies on to dsmxmpl.cloudfront.net.
  • The rest of the gallery html will trigger image requests to dsmxmpl.cloudfront.net, which will allow these requests because the signed cookie grants access.

The following CloudFrontSignedCookieHelper.php code would also work for the more appropriate CNAME scenario as part of the first authentication.

Apart from isAuthorized() this is a working example. You can handle authentication in many different ways, so I won’t elaborate on it.

<?php
/**
 * site.secrets.php sets CLOUDFRONT_KEY_PAIR_ID and CLOUDFRONT_KEY_PATH as well as CDN_HOST, e.g.
 *   define('CLOUDFRONT_KEY_PAIR_ID', 'APSOMEOROTHERA');
 *   define('CLOUDFRONT_KEY_PATH', '/etc/secrets/pk.APSOMEOROTHERA.pem');
 *   define('CDN_HOST', 'dsmxmpl.cloudfront.net');
 */
require_once ('/etc/secrets/site.secrets.php');

class CloudFrontSignedCookieHelper {
   public static function rsa_sha1_sign($policy, $private_key_filename) {
      $signature = "";
      // CloudFront expects an RSA-SHA1 signature of the policy
      openssl_sign ( $policy, $signature, file_get_contents ( $private_key_filename ), OPENSSL_ALGO_SHA1 );
      return $signature;
   }
   public static function url_safe_base64_encode($value) {
      $encoded = base64_encode ( $value );
      return str_replace ( array ('+','=','/'), array ('-','_','~'), $encoded );
   }
   public static function getSignedPolicy($private_key_filename, $policy) {
      $signature = CloudFrontSignedCookieHelper::rsa_sha1_sign ( $policy, $private_key_filename );
      $encoded_signature = CloudFrontSignedCookieHelper::url_safe_base64_encode ( $signature );
      return $encoded_signature;
   }
   public static function getNowPlus2HoursInUTC() {
      $dt = new DateTime ( 'now', new DateTimeZone ( 'UTC' ) );
      // expire the signed cookies two hours from now (epoch seconds, UTC)
      $dt->add ( new DateInterval ( 'PT2H' ) );
      return $dt->format ( 'U' );
   }
   public static function setCookie($name, $val, $domain) {
      // using our own implementation because
      // using php setcookie means the values are URL encoded and then AWS CF fails
      header ( "Set-Cookie: $name=$val; path=/; domain=$domain; secure; httpOnly", false );
   }
   public static function setCloudFrontCookies() {
      $cloudFrontHost = CDN_HOST;
      $cloudFrontCookieExpiry = CloudFrontSignedCookieHelper::getNowPlus2HoursInUTC ();
      $customPolicy = '{"Statement":[{"Resource":"https://' . $cloudFrontHost .
            '/*","Condition":{"DateLessThan":{"AWS:EpochTime":' . $cloudFrontCookieExpiry . '}}}]}';
      $encodedCustomPolicy = CloudFrontSignedCookieHelper::url_safe_base64_encode ( $customPolicy );
      $customPolicySignature = CloudFrontSignedCookieHelper::getSignedPolicy ( CLOUDFRONT_KEY_PATH, 
            $customPolicy );
      CloudFrontSignedCookieHelper::setCookie ( "CloudFront-Policy", $encodedCustomPolicy, $cloudFrontHost);
      CloudFrontSignedCookieHelper::setCookie ( "CloudFront-Signature", $customPolicySignature, $cloudFrontHost);
      CloudFrontSignedCookieHelper::setCookie ( "CloudFront-Key-Pair-Id", CLOUDFRONT_KEY_PAIR_ID, $cloudFrontHost);
   }
}

if (isAuthorized()){
   CloudFrontSignedCookieHelper::setCloudFrontCookies ();
}
?>
var cloudFrontCookieSet=true; // the helper is loaded as a JS file, so it emits a tiny bit of JavaScript after the PHP block
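To sanity-check this flow outside the browser, a small script can emulate what the browser does: request the helper once through the CDN, keep the cookies it sets, and then try a protected object. This is only a rough sketch in Python using the requests library; the image path is made up, and it assumes isAuthorized() lets the test request through.

#!/usr/bin/env python
# Hypothetical smoke test for the signed-cookie flow described above.
# The image path is a placeholder; authenticating against www.example.com is out of scope here.
import requests

CDN = 'https://dsmxmpl.cloudfront.net'

session = requests.Session()

# First request: the pass-through 'Behavior' forwards this to www.example.com,
# which responds with the three CloudFront-* Set-Cookie headers.
helper = session.get(CDN + '/CloudFrontSignedCookieHelper.php')
helper.raise_for_status()
print(sorted(session.cookies.keys()))
# expected: ['CloudFront-Key-Pair-Id', 'CloudFront-Policy', 'CloudFront-Signature']

# Second request: the default 'Behavior' requires the signed cookies,
# which the session now sends along automatically.
image = session.get(CDN + '/some/protected/image.jpg')
print(image.status_code)  # 200 if the signature checks out, 403 otherwise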

During a few short performance tests I used CloudWatch for the first time.

I don’t like it.

I was looking around and checked other metrics tools but that didn’t help because I don’t like them either.

So it’s not really a CloudWatch problem. It’s me.

I believe I want both the raw data as well as fast aggregated data. I would like to be able to get CPU usage over the last thirty minutes at ten-second resolution, and I would like to get all the information about all individual calls to a certain service in the last 24 months. And I don’t want to use CloudWatch for the former and logstash or similar for the latter. I want to have both in one interface. (Though I also see an additional need for logstash, but that is another story.)

I am wondering if I am wrong about my expectations or if my understanding of the tools that are around is wrong.

OpenTSDB seems to be up my alley in terms of providing both, but supposedly does not perform well in some scenarios. With Netflix Atlas, I don’t like that it does not support other backends, and I am also not sure how flexible it is in the set-up, but I believe that if it were possible to connect Atlas to OpenTSDB it would be what I am after. With Graphite I don’t like the storage mechanism, since it doesn’t cover my long-term storage “needs”. With InfluxDB I am not happy about the lack of clustering (there is an experimental version), but I also find it is doing too much, even though I think I would like its query interface.

There are several paid options, but my understanding is too limited to know which metrics I will want in a year, so I cannot decide on one now.

Well, there is no way around it, I need to try things out.

After reading about it a few times I decided that I am too lazy to build one, so we bought a ~$10 version of the Google Cardboard.

It took about a month to arrive and was quickly unpacked and the app downloaded.


After playing with the Google Cardboard for about an hour our eyes hurt and we were slightly happier than before.


It is such a neat and simple idea (a different approach). And it works very well with the Nexus 5.

After a while we took off the part that holds the lenses and moved it a bit further (1.5 cm) away from the screen to get the picture sharp. Could be the wrong lenses, bad eyes, etc.

We checked out my Hallstatt and other spheres, I flew to Vienna in Google Earth and back to Sydney, and Martina checked out her parents’ home. We connected a Bluetooth headset, mouse and keyboard (lacking a controller), checked out some more and generally enjoyed it.