Hosted GoStatic now runs on OpenResty

I spent a lot of my time this weekend looking into a long-overdue infrastructure upgrade for all of our hosted GoStatic sites and found some awesome tools along the way. Something I've wanted to learn for a long while is the Lua programming language, due to its small footprint and simplicity.

For the last 18 months, all GoStatic clients have run from a very simple file system with Nginx in front. While that served its purpose, it was very inflexible, slow to deploy and hard to distribute globally. I've always envisioned how I wanted the setup to look, but I just never had the time to implement it.

"OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies."
(OpenResty website)

I was set on the idea of sticking with Nginx; I just needed more flexibility, performance and scalability. OpenResty offers all of this, packaged as an application server with helpful modules like Lua and Redis built right in. It took a few hours, but I was able to put together a reverse proxy backed by Amazon S3 with Redis-controlled routing. With these changes in place I can change the config of any live client site within milliseconds, across multiple regional locations.
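In broad strokes, the wiring looks something like the fragment below. This is a simplified sketch rather than the production config: the shared dict name, file paths and the internal /s3 location are placeholders I've made up for illustration.

```nginx
# Hypothetical, simplified fragment -- names and paths are illustrative.
http {
    lua_shared_dict page_cache 128m;        # in-RAM file cache

    server {
        listen 80;

        location / {
            # All routing decisions (Redis lookup, cache checks,
            # redirects, 404 handling) happen in Lua.
            content_by_lua_file /etc/openresty/router.lua;
        }

        # Internal location the Lua code uses to fetch cache misses
        # from S3 with proxy buffering.
        location /s3 {
            internal;
            proxy_buffering on;
            proxy_pass https://s3.amazonaws.com;
        }
    }
}
```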

Here is the complete stack I now have:

  • OpenResty (Nginx, Redis + Lua modules) to handle the logic
  • Redis for host configuration
  • Amazon S3 to store all the actual files
  • Amazon EC2 for the application servers
  • Tutum/Docker for deployment

Amazon S3 isn't known for its speed, so all of the files are cached at the application layer in RAM. The cache for a client is then instantly invalidated on each deploy by changing the hashed key in Redis. My second reason for this kind of caching is that I can now distribute these servers into geographical regions closer to end users, regardless of their distance from the origin (S3). This gives all GoStatic clients instant CDN capabilities for their entire sites, not just their assets.
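In Lua terms the idea looks roughly like this. The shared dict name and key scheme are assumptions of mine for illustration; `build` stands for the per-deploy hash read from Redis:

```lua
-- Hypothetical sketch of hash-keyed caching with ngx.shared.DICT.
-- "page_cache" and the key layout are assumed, not the real setup.
local cache = ngx.shared.page_cache

-- Including the deploy hash in the key means a fresh deploy simply
-- misses the old entries, so the stale cache is abandoned rather
-- than explicitly purged.
local key = build .. ":" .. ngx.var.uri

local body = cache:get(key)
if not body then
  -- Cache miss: fetch from S3 via the internal proxied location,
  -- then store the body in shared memory for subsequent requests.
  local res = ngx.location.capture("/s3" .. ngx.var.uri)
  if res.status == 200 then
    cache:set(key, res.body)
    body = res.body
  end
end
```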

Here is a snippet that returns a platform level 404 if the requested domain doesn't route to a client's project:

local host_config, err = client:hmget("gonestatic:host:" .. ngx.var.host, "appid", "cache", "build", "redirect")
if host_config[1] == ngx.null then
  ngx.log(ngx.INFO, "PLATFORM 404")
  -- Subrequest to the platform-level 404 page
  local res = ngx.location.capture("/404-platform.html")
  ngx.header['Content-Type'] = "text/html"
  ngx.header['Content-Length'] = string.len(res.body)
  ngx.status = 404
  ngx.print(res.body)
  return ngx.exit(ngx.HTTP_NOT_FOUND)
end
This checks Redis (using a Hash data type) to see if the host key maps to a project configuration. If it does not, then a subrequest is made to a custom 404 page and the request is then halted. The routing logic is as follows:

  • Host is not configured: Platform level 404 encouraging user to claim the URL
  • Host is configured to redirect: The request is redirected to the configured target, with variables such as $uri available for interpolation
  • File is found in the local cache: Served out of RAM with custom headers
  • File is not found in the local cache: Served from S3 using proxy buffering then cached
  • File is not found: A custom 404 page is served from the project (cache or S3)
  • Custom 404 is not found: A templated, application-level 404 page is served
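Put together, the steps above boil down to a chain of checks in the Lua handler. This is a condensed sketch, not the production code; the helper functions (serve, cache_get, serve_platform_404 and friends) are hypothetical names standing in for the real logic:

```lua
-- Hypothetical, condensed version of the routing chain described above.
local appid, cache_flag, build, redirect =
  host_config[1], host_config[2], host_config[3], host_config[4]

if appid == ngx.null then
  return serve_platform_404()                   -- host not configured
elseif redirect ~= ngx.null then
  return ngx.redirect(redirect .. ngx.var.uri)  -- host-level redirect
end

local body = cache_get(build, ngx.var.uri)
if body then
  return serve(200, body)                       -- hit: served out of RAM
end

local res = ngx.location.capture("/s3" .. ngx.var.uri)
if res.status == 200 then
  cache_set(build, ngx.var.uri, res.body)
  return serve(200, res.body)                   -- miss: S3, then cached
end

-- File not found: try the project's own 404 page (cache or S3),
-- then fall back to the application-level template 404.
return serve_project_404_or_template()
```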

The last piece of the puzzle is a master/slave Redis setup, as that's now the only component that isn't automatically distributed. Hopefully I'll have time to implement that next weekend.
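For reference, pointing a regional replica at the master is a one-line affair in redis.conf (the directive in Redis versions of this era is slaveof; the address below is a placeholder):

```
# redis.conf on each regional replica -- address is a placeholder
slaveof 10.0.0.1 6379

# Refuse writes on replicas so only the master mutates host config
slave-read-only yes
```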

I plan to write about the stack in more detail in future posts, so do let me know if you have any questions or thoughts about this setup. I've had a lot of fun experimenting with it and learned a lot along the way.

Also, in case you were curious, this site is running entirely on the new setup!

If you have any questions about this post, or anything else, you can get in touch on Twitter or browse my code on Github.