I'm down this rabbit hole.
My business debit card has expired. I have purposefully allowed this to happen so that I can close the account. The services that are still attached to it and that I want to keep alive need to be moved to a new account. One of those services is web hosting.
I noticed that of all of the expenses, web hosting is the largest one. I spend $25 per month for one virtual server at Rackspace, while simultaneously spending $20 per month for three virtual servers at Digital Ocean. One of the Digital Ocean servers sits empty, waiting for the transfer of content from the server at Rackspace.
So, to save $300 per year, I've decided to migrate all of my personal web content to the preallocated Digital Ocean server. Last night.
The new server runs Ubuntu LTS. All of my web sites (there are 97 of them) are served by Nginx on it, whereas the old server ran Apache. The trade-off is configurability for performance: Apache is easier to configure, but Nginx is less resource-hungry.
Nginx uses fastcgi to route requests to php-fpm. I've configured fastcgi to use TCP instead of unix sockets because, counterintuitively to me, TCP is faster and more reliable.
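In the Nginx site config, that routing amounts to something like this sketch (the loopback address and port 9000 are assumptions; php-fpm would need a matching `listen = 127.0.0.1:9000` in its pool config):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # Hand PHP requests to php-fpm over TCP rather than a unix socket
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```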
Incoming requests first hit HAProxy, which proxies the request to Varnish, which proxies the request to Nginx. Varnish caches built pages only on asymptomatic.net, since I haven't taken the time to see what other domains could benefit from a caching proxy -- there are probably many. The manual expiry of cached pages is quite complex, even on this domain.
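For the curious, the manual-expiry plumbing boils down to handling PURGE requests in VCL. A sketch for VCL 4.x -- my real config is more involved, and the ACL address is an assumption:

```vcl
vcl 4.0;

# Only allow purges from the box itself
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Evict the cached object for this URL
        return (purge);
    }
}
```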
I have managed to configure HAProxy to fail over to proxying directly to Nginx when Varnish doesn't answer. This would have come in handy last night, when Varnish decided to vanish for about 7 hours. Routing directly to Nginx is not ideal, but it's better than all of the sites going down.
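The failover itself is just a backup server line in the HAProxy backend -- a sketch, with ports as assumptions; the health check is what decides that Varnish has stopped answering:

```haproxy
backend web
    option httpchk GET /
    # Normal path: hand requests to Varnish
    server varnish 127.0.0.1:6081 check
    # If Varnish fails its health check, go straight to Nginx
    server nginx   127.0.0.1:8080 check backup
```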
What I'm looking for now is a simple monitoring tool that will alert me when any of these services goes down. Pingdom will alert me when the site itself isn't available, but it won't tell me when Varnish isn't doing its job and HAProxy is circumventing it.
Of the 97 domains hosted on the server, a handful still need to move. Some are simply other domains that need to be re-pointed at this server. Others are more complicated Node-based services that I will need to route through HAProxy directly, so that they don't have to live on an odd port.
I am pleased with the configuration of the environment so far. Apart from the mysterious Varnish outage, things are running smoothly. I have bumped the available memory on the server to increase the grease between rubbing components.
What I might consider, as an experiment, is using Varnish's file storage backend instead of the malloc backend. Since Digital Ocean's servers are all using SSDs, write speeds should be reasonably quick. If reads are likewise speedy, there might not be a lot of difference between malloc and file based stores, and I'd be able to keep more cache alive. This would allow me to use the cache on more domains, since memory is in limited supply, whereas disk space is more easily obtained.
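Switching backends would be a change to the storage flag that varnishd is started with -- a sketch, with the sizes and file path as assumptions:

```
# malloc backend: cache kept in RAM, limited by available memory
varnishd -s malloc,256m

# file backend: cache backed by a file on the SSD, far more room
varnishd -s file,/var/lib/varnish/storage.bin,4G
```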