
Hosted GoStatic now runs on OpenResty

I spent a lot of my time this weekend looking into a long overdue infrastructure upgrade for all of our hosted GoStatic sites and found some awesome tools along the way. Something I've wanted to learn for a long while is the Lua programming language, due to its small footprint and simplicity.

For the last 18 months, all GoStatic clients have run from a very simple file system with Nginx in front. While that served its purpose, it was very inflexible, slow to deploy and hard to distribute globally. I've always envisioned how I wanted the setup to look, but I just never had the time to implement it.

OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies.
OpenResty website

I was set on the idea of sticking with Nginx; I just needed more flexibility, performance and scalability. OpenResty offers all of this, as it comes packaged as an application server with helpful modules like Lua and Redis support built right in. It took a few hours, but I was able to put together a reverse proxy backed by Amazon S3 with Redis-controlled routing. With these changes in place I was able to change the config of any live client site within milliseconds, in multiple regional locations.
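To make that a little more concrete, here's a rough sketch of the OpenResty side of this kind of setup. The paths, bucket name and cache size are placeholders of mine rather than the production config; the idea is simply that a Lua router handles every request and an internal location proxies cache misses through to S3.

http {
  # Shared memory zone used to cache files fetched from S3
  lua_shared_dict page_cache 64m;

  server {
    listen 80;
    # Catch-all: the Lua router resolves ngx.var.host against Redis
    server_name _;

    location / {
      content_by_lua_file conf/lua/router.lua;
    }

    # S3 origin, only reachable via ngx.location.capture from Lua
    location /s3/ {
      internal;
      rewrite ^/s3/(.*)$ /$1 break;
      proxy_pass http://example-bucket.s3.amazonaws.com;
    }

    # Platform-level 404 page used by the routing snippet below
    location = /404-platform.html {
      internal;
      root /var/www/platform;
    }
  }
}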

Here is the complete stack I now have:

- OpenResty (Nginx + Lua) as the application server and reverse proxy
- Redis holding per-host routing and configuration data
- Amazon S3 as the origin store for every client's files
- An in-memory cache at the application layer, sitting in front of S3

Amazon S3 isn't known for its speed, so all of the files are cached at the application layer in RAM. The cache for a client is then instantly invalidated on each deploy by changing the hashed key in Redis. My second reason for this kind of caching is that I can now distribute these servers across geographical regions closer to end users, regardless of their distance from the origin (S3). This gives all GoStatic clients instant CDN capabilities for their entire sites, not just their assets.
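To illustrate the invalidation trick, here's a simplified sketch using a lua_shared_dict; the names (page_cache, the /s3 location and so on) are mine and error handling is trimmed. Because the build hash is baked into the cache key, pushing a new hash to Redis on deploy makes every previously cached entry unreachable without any explicit purging:

-- "client" is an established lua-resty-redis connection
local cache = ngx.shared.page_cache

-- The current build hash for this host lives in Redis and changes on deploy
local build = client:hget("gonestatic:host:" .. ngx.var.host, "build")

-- New deploy => new hash => old cache keys are never looked up again
local key = build .. ":" .. ngx.var.uri

local body = cache:get(key)
if not body then
  -- Cache miss: fetch the file from the S3-backed internal location
  -- (the real path mapping would include the client's app and build)
  local res = ngx.location.capture("/s3" .. ngx.var.uri)
  body = res.body
  cache:set(key, body)
end

ngx.say(body)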

Here is a snippet that returns a platform level 404 if the requested domain doesn't route to a client's project:

-- Look up this host's configuration in a Redis hash
local host_config, err = client:hmget("gonestatic:host:" .. ngx.var.host, "appid", "cache", "build", "redirect")
if not host_config or host_config[1] == ngx.null then
  -- No project is mapped to this host, so serve the platform-level 404
  ngx.log(ngx.INFO, "PLATFORM 404")
  local res = ngx.location.capture("/404-platform.html")
  ngx.header['Content-Type'] = "text/html"
  ngx.header['Content-Length'] = string.len(res.body)
  ngx.status = 404
  ngx.say(res.body)
  return
end

This checks Redis (using a Hash data type) to see if the host key maps to a project configuration. If it does not, a subrequest is made to a custom 404 page and the request is halted. The routing logic is as follows: look up the host's configuration in Redis, serve the platform 404 if no project is mapped, honour any redirect configured for the host, and otherwise serve the requested file for the client's current build from the cache, falling back to S3 on a miss.
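The redirect step is driven by the redirect field pulled back in that same hmget: if a host has one configured (a naked domain pointing at its www counterpart, say), the Lua layer can answer before touching the cache or S3 at all. A rough sketch, reusing the host_config table from the snippet above; treating the field as the destination URL is my reading rather than the confirmed behaviour:

-- host_config holds { appid, cache, build, redirect } from the hmget above
local redirect = host_config[4]
if redirect ~= ngx.null then
  -- Send the visitor on to the configured destination and stop routing here
  return ngx.redirect(redirect, ngx.HTTP_MOVED_PERMANENTLY)
end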

The last piece of the puzzle is a master<>slave Redis setup, as that's now the only component that isn't automatically distributed. Hopefully I'll have time to implement that next weekend.
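When that happens, the basic shape should be simple enough: each regional Redis instance follows the master and only ever serves reads to its local OpenResty layer. A placeholder example (the hostname isn't real):

# redis.conf on each regional replica
slaveof redis-master.example.com 6379

# Replicas only answer reads from the local routing layer
slave-read-only yes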

I plan to write about the stack in more detail in specific posts, so do let me know if you have any questions or thoughts about this setup. I've had a lot of fun experimenting with it and learned a lot along the way.

Also, in case you were curious, this site is running entirely on the new setup!

If you have any questions about this post, or anything else, you can get in touch on Twitter or browse my code on GitHub.