Blog Move

Relocation

A few of you may have noticed that my blog layout has changed a little bit. My WordPress renewal had come up, and while it wasn’t hugely expensive, I am from Yorkshire. On top of that, I wanted to learn some more about AWS.

So, inspired by definit.co.uk, grantorchard.com, thehumblelab.com and many more, I set about looking at Hugo as a static site generator. It didn’t take long for its simplicity to win me over. The next question was where to host it. As I already knew I wanted to learn more about AWS, it made sense to look at hosting the site in an S3 bucket.
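If you haven’t played with Hugo before, the basics really are that simple. A rough sketch of getting a site running locally looks something like this (the theme and post name are just examples, not what I’m actually running):

```bash
# Create a new Hugo site, add a theme, write a post and preview it locally
hugo new site myblog
cd myblog
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke
echo 'theme = "ananke"' >> config.toml
hugo new posts/first-post.md   # creates content/posts/first-post.md
hugo server -D                 # serve locally at http://localhost:1313, drafts included
```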

Without going into the nuts and bolts of each stage (there’s plenty of content online for that), this is roughly what I went through.

The Process

The target decided, the next question was how to get there. On the Hugo site there was a link to a WordPress to Hugo exporter. The only problem was that I didn’t have access to the WordPress backend, as my site was on a WordPress-hosted solution. So I exported my site from the hosted platform, built a WordPress container in the lab and imported the export into it.
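For anyone wanting to do the same, a throwaway WordPress in the lab is a few minutes’ work with Docker. This is roughly the shape of it (container names, image tags and passwords are placeholders, not what I actually used):

```bash
# Database and WordPress containers on a private network
docker network create wp-lab
docker run -d --name wp-db --network wp-lab \
  -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wp \
  -e MYSQL_PASSWORD=changeme -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.7
docker run -d --name wp --network wp-lab -p 8080:80 \
  -e WORDPRESS_DB_HOST=wp-db -e WORDPRESS_DB_USER=wp \
  -e WORDPRESS_DB_PASSWORD=changeme -e WORDPRESS_DB_NAME=wordpress \
  wordpress
# Browse to http://localhost:8080, finish the install wizard,
# then use Tools -> Import to pull in the export from the hosted site
```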

Once I’d got that far I could install the exporter plugin and run the export to get the content out in Hugo (markdown) format. The problem I had then was the images. To be honest, I had to go through every page manually and resize the images to fit the screen. I’m sure there’s a better solution out there, but this was where I left it.

Once the site was on S3 I could test everything, and once I was happy it was time to change the live DNS entry and wait. Everything looked good throughout the switchover. It’s not a particularly high-traffic site, so I couldn’t say whether I dropped many views or not.
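The S3 side boils down to a bucket with static website hosting switched on and a sync of Hugo’s output. A sketch of the sort of commands involved (the bucket name is a placeholder, and the bucket policy still needs to allow public reads):

```bash
aws s3 mb s3://blog.example.com                       # create the bucket
aws s3 website s3://blog.example.com \
  --index-document index.html --error-document 404.html
hugo                                                  # build the site into ./public
aws s3 sync public/ s3://blog.example.com --delete    # upload, removing anything deleted
# Test against the S3 website endpoint before touching live DNS,
# e.g. http://blog.example.com.s3-website.eu-west-2.amazonaws.com
# (the exact endpoint format varies by region)
```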

Integrations

Once the site was across and everything was looking good, I started searching for ways to improve it.

The obvious improvement was to use CloudFront to cache the site nearer to the edge. This also has the added benefit of serving fewer requests straight out of S3 (and saving money). As my WordPress subscription was coming to an end, I also needed to move to a different DNS service and domain registrar. Funnily enough, that went across to Route 53.
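With the distribution in place, pointing the domain at it from Route 53 is a single record change, roughly like this (the hosted zone ID, domain and distribution hostname are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront aliases):

```bash
# UPSERT an alias A record pointing the blog at the CloudFront distribution
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE12345 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "blog.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```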

Chuck a certificate into the mix, for free, and we were all set.
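The free certificate is AWS Certificate Manager (ACM) doing its thing. Requesting one that CloudFront can use looks roughly like this (the domain is a placeholder, and the certificate has to be requested in us-east-1 for CloudFront to see it):

```bash
# CloudFront only picks up ACM certificates from the us-east-1 region
aws acm request-certificate \
  --domain-name blog.example.com \
  --validation-method DNS \
  --region us-east-1
# ACM hands back a validation CNAME; add it in Route 53 and the certificate
# validates (and later renews) automatically
```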

Publishing workflow

Next I started looking at my publishing workflow. At this point it was a case of creating the new markdown file, building the site with Hugo and synchronising the output with the S3 bucket. More often than not I found I was running a CloudFront invalidation as well.
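In shell terms, every post boiled down to something along these lines (the bucket name and distribution ID are placeholders):

```bash
hugo                                                  # build the site into ./public
aws s3 sync public/ s3://blog.example.com --delete    # push the changes to the bucket
aws cloudfront create-invalidation \
  --distribution-id E1234567890ABC \
  --paths "/*"                                        # force CloudFront to fetch fresh copies
```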

I couldn’t help feeling this could be a bit slicker, so I had a browse through the internets and found the developer services on AWS. I made use of AWS CodeBuild to build the site, publish it to S3 and invalidate the CloudFront distribution. This was then wired into a pipeline with AWS CodePipeline.
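I won’t reproduce my exact buildspec here, but the build container only needs a Hugo binary and the AWS CLI, and the build phase then runs much the same commands as the manual workflow above. A stripped-down sketch (the Hugo version, bucket and distribution ID are placeholders):

```bash
# Install phase: drop a Hugo release binary onto the build container
curl -sL -o hugo.tar.gz \
  https://github.com/gohugoio/hugo/releases/download/v0.55.6/hugo_0.55.6_Linux-64bit.tar.gz
tar -xzf hugo.tar.gz hugo && mv hugo /usr/local/bin/hugo

# Build phase: same steps as before, just running inside CodeBuild
hugo
aws s3 sync public/ s3://blog.example.com --delete
aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"
```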

After some tinkering my workflow is now near enough automated:

I write a post and push it to the GitHub repository; AWS CodePipeline is watching the repo, picks up the change and kicks off the AWS CodeBuild process.
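So from my side, publishing is now just the git bit (the file name and remote here are placeholders):

```bash
hugo new posts/blog-move.md            # write the post (creates content/posts/blog-move.md)
git add content/posts/blog-move.md
git commit -m "Add blog move post"
git push origin master                 # CodePipeline spots the commit and takes it from here
```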

Gotchas

The only real gotcha I hit was with CodePipeline. Kicking off CodeBuild manually worked fine, but when the build ran through CodePipeline it failed with a git error:

```
Failed to read Git log: fatal: not a git repository (or any parent up to mount point /codebuild)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
```

After a bit of investigation I found that CodePipeline copies the contents of the git repo into its source artifact, but doesn’t actually git clone it, so none of the .git metadata is there. My Hugo build was referencing git metadata that didn’t exist, and so it failed. I had a choice: strip the git metadata out of the Hugo build, or run a git clone as part of the pipeline. I elected to remove the metadata, as I couldn’t remember why I’d put it there in the first place. I figured that meant it wasn’t important!
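For anyone who does want to keep the git metadata (Hugo’s enableGitInfo / .GitInfo feature is the usual reason it’s in a build in the first place), the clone route I decided against would be roughly an extra step at the start of the build. The repo URL here is a placeholder and any credential handling is left out:

```bash
# Re-create a real .git directory inside the CodeBuild container before building
git clone https://github.com/example/blog.git /tmp/blog-src
cd /tmp/blog-src
hugo   # .GitInfo now has a proper git history to read
```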

Summary

All in all I’m pretty pleased with how it turned out. I might remove the CodePipeline & CodeBuild parts, as they don’t really save me any time, but it was interesting to figure it all out!