The services I am using in this post include:

  • S3
  • CloudFront
  • Route 53
  • AWS Certificate Manager

Prerequisites:

  • An existing static website hosted in S3 using a custom domain registered with Route 53

Jump to the Good Stuff

After deploying a version or two of my website, I started bombarding my friends to check it out and give me feedback. While everyone likes hearing what a great job they’ve done, I had to start telling them, “No, don’t tell me how great it is - tell me what’s wrong with it”.

One friend got back to me with a simple text: ‘gotta set up https’. Oof. My lack of security was showing. However smart and confident I’d been feeling pulling all of my content together and applying things I’ve learned over the past five years, I realized there is still so much I don’t know. I understand networking and content delivery in theory, but I’d only learned them in educational settings without real experience - and I should probably rectify that. The actual implementation turned out to be pretty simple. Understanding it was a different challenge.

So here I am going to walk through the basic steps of how I set up HTTPS on a static site hosted on S3 with public access AND include information about each service used and what is actually happening from a logical standpoint.

Now let’s go through what we are going to do:

  • Before setting up HTTPS, we are accessing the static site through the S3 website endpoint of the bucket it is located in
  • This bucket is allowing public access
  • Instead, we are going to use Route 53 to direct traffic to a CloudFront distribution that will serve the website
    • So S3 is hosting the static site and CloudFront is serving it through a distribution
  • This allows us to attach an SSL certificate to the CloudFront distribution that fronts our public site
  • We create a certificate for our domain names, then redirect traffic from our domains to the distribution instead of directly to the S3 endpoint
  • Now the site will no longer have that pesky “Not Secure” tag in the URL bar
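You can see the before/after from the command line. A hedged sketch (the domain and bucket endpoint below are placeholders, not the real values for my site):

```shell
# Before: the S3 website endpoint only answers over plain HTTP.
curl -I http://example.com.s3-website-us-east-1.amazonaws.com/

# After: the custom domain resolves to CloudFront and serves HTTPS.
# Look for "via: ... cloudfront" and "x-cache" headers in the response.
curl -I https://www.example.com/
```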

The actual steps to do this were very straightforward because it was all self-contained within the AWS ecosystem. I simply:

  1. Requested a Certificate in AWS Certificate Manager
    • Request a public certificate
    • provided the domain name for my site:
      • you can add multiple additional names to capture all versions
      • ex: www.reecelincoln.me and reecelincoln.me
    • Left the validation method (DNS validation) and key algorithm (RSA 2048) as the default values
  2. Verify ownership by DNS - (see domain ownership link)
    • Inside AWS Certificate Manager, select the created Certificate ID with the pending validation status
    • Validate with Route 53 by ‘Create DNS records in Amazon Route 53’
    • Wait for the ‘Pending validation’ status to update - approximately 30 minutes
  3. Link to CloudFront Web Distribution
    • Create a CloudFront web distribution
    • Alternate domain name should include your custom domain names
    • Use the website endpoint for the Origin Domain value.
      • this should be the website endpoint, not the bucket itself, because S3 website endpoints don’t support HTTPS
    • Under settings and Custom SSL certificate select the newly created certificate and create the distribution
    • Edit under Behaviors to change the viewer protocol policy to ‘Redirect HTTP to HTTPS’
    • Go back to Route 53 and edit the domain to route traffic to the CloudFront distribution
      • Select the record with type A and edit it
      • Change the Route traffic to value to Alias to CloudFront distribution
      • Enter the domain name of the distribution and save
  4. Wait for things to deploy and verify your site is on HTTPS!
    • Previous DNS entries need to expire depending on the TTL value
    • This can be anywhere from 2-48 hours so be patient!
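For reference, the console steps above can also be done with the AWS CLI. This is a hedged sketch, not exactly what I ran: the certificate ARN, hosted zone ID, and JSON file names are placeholders you’d fill in yourself. One detail worth knowing: a certificate used by CloudFront must be requested in the us-east-1 region.

```shell
# 1. Request the certificate (CloudFront requires us-east-1).
aws acm request-certificate \
  --domain-name reecelincoln.me \
  --subject-alternative-names www.reecelincoln.me \
  --validation-method DNS \
  --region us-east-1

# 2. Check the validation status (takes roughly 30 minutes after the
#    DNS records are created; ARN below is a placeholder).
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE \
  --region us-east-1 \
  --query 'Certificate.Status'

# 3. Create the distribution from a JSON config that sets the S3 *website*
#    endpoint as the origin, plus the Aliases and the ACM certificate.
aws cloudfront create-distribution \
  --distribution-config file://distribution-config.json

# 4. Point the A record at the distribution via an alias change batch.
#    The alias target hosted zone ID for CloudFront is always Z2FDTNDATAQYW2.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZEXAMPLE12345 \
  --change-batch file://alias-record.json
```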

Now you’ve set up HTTPS on your site. One thing to note: if you update your site in your bucket, the changes will not take effect immediately. What does CloudFront do? It caches our content at an edge location, so it will not pull from our S3 bucket until the cache expires (every 24 hours by default). The files cached at the edge location are what are being served to users now, SO our site will not update until 1. the cached copies expire (default 24 hours) AND 2. a user requests the files from that edge location, at which point CloudFront pulls the new versions from S3.
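If you don’t want to wait for the cache to expire, you can tell CloudFront to evict paths explicitly with an invalidation (the distribution ID below is a placeholder; the first 1,000 invalidation paths per month are free):

```shell
# Force the edge caches to drop everything and re-fetch from S3 on the
# next request. "/*" matches every path in the distribution.
aws cloudfront create-invalidation \
  --distribution-id E1EXAMPLE2ABCD \
  --paths "/*"
```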

This isn’t a huge issue but one thing to keep in mind is the idea that files served can now be ‘out-of-sync’ with one another for a small period of time. For example:

  • Imagine there are two pages on your site named PageA and PageB
    • We will denote what version they are in brackets - PageA (V1) and PageB (V1)
  • We have deployed our site on S3 and are serving it through a cloudfront distribution
    • User123 accesses the site through Edge Location Alpha and navigates to PageA (V1)
    • PageA (V1) is now cached in Edge Location Alpha
  • You update the site and deploy so now your two pages are PageA (V2) and PageB (V2)
    • (V2) is located in the S3 location
    • Edge Location Alpha only has PageA (V1) cached, PageB was never requested so it is not in the Edge Location
  • User123 accesses the site through Edge Location Alpha after you have deployed (V2) but before Edge Location Alpha’s cache has expired
    • User123 navigates to PageB and the CloudFront Distribution pulls PageB (V2) from the S3 bucket because PageB is not cached
    • Now PageB (V2) is cached in Edge Location Alpha
    • User123 navigates to PageA and the CloudFront Distribution uses the cached PageA (V1) because it has not expired

Now we have a situation where a user can be served different versions of different pages for as long as your cache takes to expire. This may not be an issue, but it is an interesting consequence of how CloudFront works.
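The scenario above can be captured in a toy simulation: `origin` stands in for the S3 bucket and `edge` for Edge Location Alpha’s cache (names are mine, and there is no TTL expiry in this toy model). It needs bash 4+ for associative arrays.

```shell
# origin = the S3 bucket; edge = Edge Location Alpha's cache
declare -A origin=( [PageA]="V1" [PageB]="V1" )
declare -A edge=()

fetch() {
  # Serve from the edge cache when present; otherwise pull the page from
  # the origin and cache it at the edge.
  local page=$1
  if [[ -z "${edge[$page]}" ]]; then
    edge[$page]="${origin[$page]}"
  fi
  echo "$page (${edge[$page]})"
}

fetch PageA                               # prints "PageA (V1)"; V1 now cached
origin[PageA]="V2"; origin[PageB]="V2"    # you deploy V2 to S3
fetch PageB                               # cache miss: prints "PageB (V2)"
fetch PageA                               # stale hit: still prints "PageA (V1)"
```

The last two lines are the out-of-sync state: the same user sees PageB at V2 but PageA at V1 until the cached copy of PageA expires.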

Further reading:

  • Updating existing content with a CloudFront distribution
  • Managing how long content stays in the cache (expiration)