This post is a follow-up to “Why we moved off the cloud.”
As a company, we want to do reliable backups on the cheap. By “cheap” I mean in terms of cost and, more importantly, in terms of developers’ time and attention. In this article, I’ll discuss how we’ve been able to accomplish this and the factors that we consider important.
Backups are an insurance policy. Like conventional insurance policies (e.g. renter’s), you want peace of mind that your stuff is covered if disaster strikes, while paying the best price you can from the available options.
Backups are similar. Both your team and your customers can rest a bit more easily knowing that you have your data elsewhere in case of unforeseen events. But on the flip side, backups cost money and time that could be better applied to improving your product — delivering more features, making it faster, etc. This is good motivation for keeping the cost low while still being reliable.
Our backup machine has 2 quad-core CPUs, 12GB RAM, and 24 2TB drives in a hardware RAID 6 configuration. This is clearly not a cookie-cutter configuration — one of the benefits of dedicated hosting. The main draw is the ample disk space, but with the CPU and RAM provided you can still get real work done as well.
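To put a number on “ample disk space,” here’s a back-of-the-napkin sketch of usable capacity, assuming all 24 drives sit in a single RAID 6 array (the exact array layout is an assumption; the post doesn’t specify it):

```shell
# RAID 6 sacrifices two drives' worth of capacity to parity,
# so usable space is (drives - 2) * per-drive size.
drives=24
drive_tb=2
usable_tb=$(( (drives - 2) * drive_tb ))
echo "usable: ${usable_tb} TB"   # ~44 TB before filesystem overhead
```

That’s a rough upper bound; real usable space shrinks a bit further once the filesystem takes its cut.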
Of course, we follow all the usual best practices — RAID, replicated topologies (master-master, master-slave, etc), input logs, etc. Thus, we already have multiple copies of the important bits. But backups help round this picture out, and we want to do it right. Price is a concern, but speed and convenience are important to us as well.
How cheap is our storage, you ask? Well, the following is a rough back-of-the-napkin comparison of our current cost per GB per month against some alternatives:
- Tarsnap (a system on top of S3) – $0.30
- Amazon S3 – $0.11 (1-49TB)
- Our Softlayer machine – $0.025
- WD RE4 (priced on Newegg), 2 yr lifespan – $0.0094
These numbers are only guideposts. For example, drives can obviously last longer than 2 years, but colo has its own costs and challenges as well. Also, the additional cost of Tarsnap over S3 might be justified by the convenience and compression it offers. The point, though, is that Softlayer is giving us a very competitive price. Beating Amazon’s price by more than 4x is pretty impressive given that Amazon has massive economies of scale. Whether Amazon takes that margin as overhead or profit is up for speculation.
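For what it’s worth, here’s how a per-GB rate falls out of a fixed monthly bill. The $1,100/month figure below is an assumption for illustration only; the post states the resulting rate ($0.025/GB), not the underlying bill:

```shell
# Derive $/GB/month from an assumed monthly server cost.
# $1,100/month is a hypothetical figure chosen for illustration.
monthly_usd=1100
usable_gb=44000   # ~44 TB usable after RAID 6 parity
rate=$(awk -v c="$monthly_usd" -v g="$usable_gb" 'BEGIN { printf "%.3f", c / g }')
echo "about \$${rate} per GB per month"
```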
Keeping prices low is a goal, but the real resource of interest here is developers’ time and attention. It takes time to develop and maintain backup systems, to ensure that they are working properly, to manage space, and for engineers to make context switches between backups and their primary work responsibilities. Also, the goal is to keep our engineers happy and motivated so that their 120% effort goes towards taking the company to the next level. Backups are important, but don’t drag the work out — get it done and get back to the really cool stuff.
Thus, the cost of backup storage is simply not our dominant monthly cost, and a developer’s time is worth a handy multiple of it. Bandwidth pricing is another factor to consider:
- Tarsnap – $0.30 per GB
- Amazon S3 – free inbound, pay for outbound and requests
- Softlayer – free between their data centers
Not having to worry about bandwidth gives us great flexibility. We can back up daily, weekly, monthly, etc. without too much worry.
We have our main servers in the Dallas data center and the backup server in San Jose. Softlayer provides a gigabit connection between them. If you’re not used to this kind of speed across a good stretch of the continent, you might be pleasantly startled at first.
You’d also be hard pressed to even come close to these transfer speeds over more vanilla topologies. Even though Amazon has its Sneakernet-style data import/export service, do you really want to spend time mailing hard disks around?
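To put the gigabit link in perspective, a quick estimate of how long a terabyte takes at line rate (an idealized figure; real-world throughput will be somewhat lower once TCP and encryption overhead are factored in):

```shell
# 1 Gbit/s is 125 MB/s at line rate, so a terabyte moves in a
# couple of hours rather than days.
hours=$(awk 'BEGIN { printf "%.1f", 1e12 / (1e9 / 8) / 3600 }')
echo "~${hours} hours per TB"
```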
Speed has manifold benefits. Backups share resources with the production site, so the sooner it’s done the better. Also, speed simplifies. It takes engineering time to make fine-grained decisions on full vs incremental backups, tweak locking and transactions, serialize backup order to take advantage of limited bandwidth, etc. With a fast connection, it’s much easier to do “dumb dumps” initially and refine later in a prioritized way.
Even if we save a few hours a month with these choices, that’s a huge win in the startup world. Getting more time for feature work helps our ability to grow fast and lead instead of playing catch-up.
Room for growth
One disadvantage of having a dedicated machine is that you’re paying for a fixed amount of resource up front. Right now, we pay for more capacity than we’ll need even for the next few months. This is generally touted as one advantage of cloud computing — pay only for what you use, and provision up easily.
In our case, it’s not a big issue. The amount we’d save over, say, the next half year isn’t worth the inconvenience of migrating data, provisioning and decom’ing machines, etc. Instead, we have runway as our data footprint continues to grow, and we can spend our time elsewhere. Like speed, space simplifies as well.
Simplicity & Flexibility
At the end of the day, a machine is a machine. There’s no new API to learn, access accounts to set up, API keys to provision, or client libs to integrate. Just deploy some SSH keys, drop a cron that kicks off some scp’s, rsync’s or mysqldump’s, gzip away and you’re done. It’s nice to use the toolchain we’re already familiar with, and to keep it as simple and uniform as possible across our whole infrastructure. Whether that simplicity comes in the form of uniform server creation scripts, monitoring tools, or even payment accounts, simpler is better.
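A minimal sketch of what this looks like in practice. The hostname `backup-sj`, the database name `appdb`, the paths, and the schedule are all hypothetical; the shape — a cron entry driving a short script of dumps and rsyncs — is what the paragraph describes:

```shell
#!/bin/sh
# Nightly backup sketch. backup-sj, appdb, and all paths are
# hypothetical placeholders. Crontab entry that would drive it:
#   30 3 * * * /opt/backup/nightly.sh >> /var/log/backup.log 2>&1

BACKUP_HOST=backup-sj            # backup box in the other data center
DEST=/backups/$(date +%F)        # one dated directory per run

nightly() {
  # "Dumb dump": consistent full dump, compressed in flight over ssh.
  # --single-transaction keeps InnoDB reads consistent without locking.
  mysqldump --single-transaction appdb \
    | gzip \
    | ssh "$BACKUP_HOST" "mkdir -p $DEST && cat > $DEST/appdb.sql.gz"
  # Static files: rsync only ships what changed since the last run.
  rsync -az /srv/uploads/ "$BACKUP_HOST:$DEST/uploads/"
}
```

With a fast pipe and plenty of space on the far end, this is about as much machinery as the job needs; refinements like incremental dumps can come later, prioritized like any other work.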
Part of the win we’ve achieved has to do with dedicated hosting, and we’re happy so far. Remember to shop around and get the best prices you can. But more importantly, by making certain decisions you can vastly simplify an important part of your infrastructure, keep costs down, and free up engineering time to work on the good stuff.