I recently moved 25 projects from one dedicated server to 8 different VPS instances on DigitalOcean. Here is what I learned!
History
For the last 8 years, I hosted all of my personal and professional web projects on a single dedicated server. Dedicated hardware has a lot of benefits, especially ample, fixed resources and total root control of your setup. But it also has a lot of downsides: higher expense, rare but inevitable hardware issues, and a lack of separation between projects. Fixed resources can be a double-edged sword, too. MySQL clobbers the box trying to back up a huge 100GB database? Oops, everything goes offline.
More than anything, the expense of the server was becoming a personal burden as my requirements diminished and the server itself became overkill. Blurst was our largest project, but we halted development years ago. I found myself paying $250/month for a server that hosted a bunch of small-scale projects (including a lot of friends’ personal sites). It was time for a change.
No More Bare Metal
Virtual machine and hypervisor technologies have advanced significantly in recent years. It’s now feasible to run production hosting on virtual machines. In moving away from our dedicated server, I had two basic options for moving into the cloud:
1) Switch to a monolithic virtual instance. I’d move all sites to a single instance, with enough storage/CPU/memory to handle hosting all sites.
2) Move each site (or group of sites) to its own instance, with instance specs tailored to it.
#1 seemed like my best option for an easy transition, in terms of doing the actual move, but it also had many of the same downsides as dedicated hardware: I didn’t want to end up paying for resources I didn’t need, or have changes or issues with one site affect all the others.
#2 fit a lot better with how I was actually using the old server, but I was worried about overhead from managing multiple virtual servers. I’m not a sysadmin by trade, and my time is already spent working on Aztez full-time plus a bunch of demanding hobbies.
Ultimately, two technologies made #2 the best option and the easiest option: DigitalOcean and ServerPilot.
DigitalOcean
DigitalOcean is a VPS hosting company with a focus on fast hardware and an easy-to-use backend for managing instances (which they refer to as “droplets”). It hits a lot of my needs:
– Cheap! Their lowest tier costs $5/month. Many of my projects can actually live just fine at the $5/month specs.
– Flexible. Droplets can be resized anytime, so if a project goes up or down in traffic I can adjust accordingly. This is particularly good for projects like the IGF judging backend which actually sit idle half the year (and really only spike in traffic occasionally for deadlines).
– Full control. A freshly-provisioned droplet is a full operating system of your choice, running on a hypervisor. This is an important distinction: some cheaper VPS solutions use container virtualization, which shares a kernel with the host OS.
The backend is really nice. Here’s the interface for a new instance:
They do provide an API, too, in case your infrastructure gets fancy enough to load balance instances on the fly (instances are billed hourly up until a monthly cap).
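For instance, creating the cheapest droplet through the v2 API is a single authenticated HTTP call. This is just a sketch: the token, droplet name, and slugs below are placeholder values, and the command is printed rather than sent (drop the leading echo to run it for real).

```shell
#!/bin/sh
# Hypothetical sketch of the DigitalOcean v2 API: create the cheapest
# droplet with one HTTP call. TOKEN, the name, and the region/size/image
# slugs are placeholders; remove the "echo" to actually send the request.
TOKEN="your-api-token"   # generated in the DigitalOcean control panel
echo curl -X POST "https://api.digitalocean.com/v2/droplets" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"name":"example","region":"sfo1","size":"512mb","image":"ubuntu-14-04-x64"}'
```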
An obvious question: Why not Amazon AWS?
For me, AWS is overkill. AWS is a fantastic solution if you’re a startup aiming to build an entire business around your technology, with thousands of active users out of tens of thousands of total users.
DigitalOcean lacks many of the robust scalability features that AWS provides, like Elastic IP addresses and Elastic Block Store. If the hypervisor backing your droplet dies–and I assume I’ll eventually experience an instance failure in the next few years–you’re looking at some downtime until you can work around it.
Because of the flexibility of AWS, their backend interface is also complicated and multifaceted. There’s a lot of overhead in learning how things work, even for simple tasks like getting a single EC2 instance online.
ServerPilot
DigitalOcean can give you the IP of a fresh Linux install in 60 seconds. But a fresh Linux install doesn’t get you very far with hosting an actual website: You’ll need to install some kind of web server (probably Apache), some kind of database server (probably MySQL or MariaDB), and then configure virtual hosts, paths, logins, passwords, a firewall, etc…
And in practice, a stock LAMP configuration has a lot of performance bottlenecks. Really, you’ll want some kind of reverse proxy in front of Apache to manage connection pooling, PHP configured with FastCGI and an opcode cache/accelerator, and so on. This left me with a few options for configuring multiple VPS instances:
1) Configure a single instance with an initial setup, and then use DigitalOcean’s snapshot system to spin up additional instances from there. Making changes after the initial setup phase would be painful and manual.
2) Roll my own multi-server management scripts to aid in adding new sites, applying updates/etc. While this is a genuinely appealing side project, I just don’t have time for something like this.
3) Use an all-in-one server management suite like cPanel. cPanel has per-server licensing costs, and is also a gigantic system that makes a mess of your server configuration. Once you’ve moved a server to cPanel, you’re basically stuck with it for any administration task (and therefore locked into the monthly cost).
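Whichever of those routes you take, the reverse-proxy arrangement described above boils down to something like this nginx sketch. The port, paths, and domain here are hypothetical, not any tool’s actual generated configuration:

```nginx
# Hypothetical sketch: nginx in front of Apache, with Apache moved to a
# local port (8080 here is an assumption).
server {
    listen 80;
    server_name example.com;

    # Serve static files directly; nginx handles many slow client
    # connections far more cheaply than Apache worker processes.
    location ~* \.(css|js|png|jpg|gif)$ {
        root /srv/www/example.com/public;
        expires 7d;
    }

    # Everything else is proxied back to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```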
In looking around for something lighter than cPanel, but still far more automated than rolling my own systems, I bumped into ServerPilot. It’s pretty great:
– It bootstraps your server into a very competent initial state
– Their documentation already covers common configuration changes
– Their free pricing tier includes unlimited servers/sites, which works very well for this kind of single-server-to-multiple-VPS transition
ServerPilot setup is very, very quick. You paste their setup script into a fresh install, and a few minutes later everything is ready to be managed from their web UI. Once that process finishes, adding a new website is just this simple form:
Performance
So how do things perform?
TIGSource is a good example of one of the decently-active projects on the server. The forums have one million posts, with 200-300 active users at any given time. This is now hosted on the $10 tier (1GB RAM, 1 CPU). It was actually performing well on the $5/month tier, until the server hit the 512MB memory limit and load ran away.
On a $5/month Droplet, I tested incoming network speeds (SFO location):
–2015-04-28 17:55:00– http://speedtest.wdc01.softlayer.com/downloads/test500.zip
Resolving speedtest.wdc01.softlayer.com (speedtest.wdc01.softlayer.com)… 208.43.102.250
Connecting to speedtest.wdc01.softlayer.com (speedtest.wdc01.softlayer.com)|208.43.102.250|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 524288000 (500M) [application/zip]
Saving to: ‘/dev/null’
100%[==============================================================================>] 524,288,000 78.0MB/s in 9.2s
2015-04-28 17:55:09 (54.6 MB/s) – ‘/dev/null’ saved [524288000/524288000]
And outgoing network speeds (downloading from a RamNode VPS in Seattle):
–2015-04-28 22:59:39– http://mwegner.com/test.zip
Resolving mwegner.com (mwegner.com)… 104.236.164.55
Connecting to mwegner.com (mwegner.com)|104.236.164.55|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 524288000 (500M) [application/zip]
Saving to: ‘/dev/null’
100%[==============================================================================>] 524,288,000 86.9MB/s in 8.1s
2015-04-28 22:59:48 (61.5 MB/s) – ‘/dev/null’ saved [524288000/524288000]
And a quick test of the SSD disk speeds (again, this is the cheapest possible droplet):
hdparm -Tt /dev/vda1
/dev/vda1:
Timing cached reads: 15380 MB in 2.00 seconds = 7696.35 MB/sec
Timing buffered disk reads: 1558 MB in 3.00 seconds = 518.76 MB/sec
Monitoring
The DigitalOcean backend provides some simple bandwidth and CPU performance graphs:
But for more advanced resource monitoring, New Relic has great stats, and their free pricing tier includes 24-hour data retention for unlimited servers:
Their at-a-glance page is especially useful for managing multiple VPS instances together:
Tips and Tricks
– DigitalOcean resizing is a little confusing. If you resize upwards and increase disk size, that resize operation is permanent. However, you can select a “flexible” resize, which increases CPU/memory resources but leaves your disk alone, letting you resize back down later. (Put another way, increasing disk size is a one-way operation.)
– DigitalOcean backups can only be enabled at droplet creation. You can turn them off later, though, so if you’re unsure whether you’ll need them, it’s best to enable them initially. Backups cost an extra 20% of your instance price and take weekly snapshots of the entire OS image.
– You can transfer DigitalOcean snapshots to other customers. This is actually super appealing to me, because it means I can deploy a contract job to a separate VPS and then potentially transfer it entirely to the client after the project.
– I ended up using my own backup system. Each VPS instance runs a nightly cron job, which exports my control scripts out of a Subversion repository and runs them. Right now, I’m backing up databases nightly and web files weekly. Backups go directly into a Dropbox folder via this excellent Bash script.
– It’s worth mentioning that I deploy all of my own systems via Subversion already. A staging copy of a website exists as a live checkout, and I export/move it to production. With ServerPilot this means I just have a “source” directory living next to each app’s “public” folder.
– ServerPilot seems to aim their setup at developers. You’ll probably want to disable display_errors, or at least suppress warnings/notices, in production. They have a per-website php.ini configuration you can use for this. Ideally, I’d like to see a “development/production” toggle in their app settings.
– ServerPilot serves apps alphabetically on a server. If you request the IP directly, or a hostname that isn’t configured in any app’s domain settings, you’ll get the first app alphabetically. (Some of my curation systems use wildcard DNS entries with URLs configured in the backend, so this mattered quite a lot! I just ended up prefixing that app with “aaa”, which feels a bit sloppy, but hey, it worked.)
– ServerPilot puts the MySQL root password in ~root/.my.cnf, and requires localhost for connections by default. I manage database development with a graphical client that lets me SSH tunnel in to the server so I can connect as root on localhost.
– ServerPilot will push security fixes and the like to your servers.
– New Relic has application monitoring in addition to their server daemon; it operates via a PHP extension that injects JavaScript into your output. It’s pretty neat, and lets you see how long each page request spends in each area (PHP/MySQL/memcached/etc.), in case you’re developing your own tech and chasing performance issues.
– Uptime Robot is a free monitoring service (5-minute intervals; the paid plan goes down to 1-minute). I just set this up, though, so I can’t speak to its reliability yet! Previously I was monitoring the single dedicated server, but most monitoring services are priced per-host, so the VPS exodus didn’t work out so well there.
– The DigitalOcean links in this post use my referral code (you get $10 credit, I get $25 credit once you spend $25 there). Just an FYI!
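For the display_errors bullet above, the per-app php.ini override only needs a few directives. These values are a sketch of sensible production defaults, not ServerPilot’s shipped configuration:

```ini
; Hypothetical production overrides for a per-app php.ini:
; hide errors from visitors, but keep them in the log.
display_errors = Off
log_errors = On
; Silence notices and warnings without losing real errors
error_reporting = E_ALL & ~E_NOTICE & ~E_WARNING
```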
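To make the backup bullet above concrete, a nightly script along these lines would cover the file side. The paths, database name, and one-week retention are placeholder assumptions, not my exact setup:

```shell
#!/bin/sh
# Hypothetical nightly backup sketch; the paths, the mysqldump line,
# and the one-week retention window are placeholder assumptions.
SRC="${1:-/srv/users/serverpilot/apps/example}"   # what to archive
DEST="${2:-$HOME/backups}"
STAMP=$(date +%Y-%m-%d)

mkdir -p "$DEST"
if [ -d "$SRC" ]; then
    tar -czf "$DEST/files-$STAMP.tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
fi

# Databases get similar treatment, e.g.:
#   mysqldump --single-transaction example_db | gzip > "$DEST/db-$STAMP.sql.gz"
# ...and $DEST syncs up to Dropbox from here.

# Prune archives older than a week
find "$DEST" -name '*.tar.gz' -mtime +7 -delete
```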
Conclusion
At the end of the move, I ended up with a $50/month recurring bill at DigitalOcean. Not bad!
(There’s actually another $20/month on top of that for Blurst itself, but that’s a temporary situation, and the droplet size is due to the disk space requirements of its database.)