I have multiple web servers with the same content, hosted across different providers, but I can't seem to find a nice, simple failover solution. Load-balancing software (Pound, HAProxy, etc.) is unnecessary here, and I need the flexibility to manage 100+ domains, so the paid DNS failover services I've found are too expensive.
The simplest solution I've come up with so far is to set a very low TTL (30 min – 1 hr) on each zone entry on my nameservers (running BIND), then continuously monitor each server and temporarily remove failed servers from the zone entries. But this seems like something that should already exist.
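The monitor-and-update idea above can be sketched roughly as follows: probe each backend over TCP, then feed `nsupdate` a script that replaces the A records with only the servers that answered. This is a minimal illustration, not a production monitor; the zone name, hostnames, IPs, and key path are all hypothetical examples.

```python
# Sketch: TCP health check + building an nsupdate script that keeps
# only the healthy servers in the zone, with a low TTL.
# All names/IPs below are hypothetical placeholders.
import socket

def is_alive(ip, port=80, timeout=3):
    """Return True if a TCP connection to ip:port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def build_nsupdate(zone, name, ips, ttl=300):
    """Build an nsupdate script that deletes the existing A records
    for `name` and re-adds one A record per healthy IP."""
    lines = [f"zone {zone}", f"update delete {name} A"]
    lines += [f"update add {name} {ttl} A {ip}" for ip in ips]
    lines.append("send")
    return "\n".join(lines) + "\n"

# Usage (run from cron every few minutes, roughly):
#   servers = ["192.0.2.10", "192.0.2.20"]            # example addresses
#   healthy = [ip for ip in servers if is_alive(ip)]
#   script = build_nsupdate("example.com.", "www.example.com.", healthy)
#   ...then pipe `script` into `nsupdate -k /path/to/ddns.key`
```

This assumes the BIND zone allows dynamic updates (e.g. via a TSIG key); with a purely file-based zone you would rewrite the zone file and reload instead.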
I only have root access to different VPSes running CentOS. Any suggestions? Thanks!
We do something similar with one of our systems. DNS is served by MyDNS, so all the records are stored in MySQL, which makes updates nice and simple. The TTLs are also kept very low, as even a 5-minute outage can be a pain.
The system basically works by checking heartbeats every few minutes and updating the records accordingly.
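Since MyDNS reads records straight out of SQL, the failover step reduces to an `UPDATE` on the records table. A minimal sketch of that idea, using the stdlib `sqlite3` module to stand in for MySQL (swap in a real MySQL driver in production); the table layout, hostnames, and IPs here are simplified, hypothetical examples:

```python
# Sketch: heartbeat-driven failover against a MyDNS-style records table.
# sqlite3 stands in for MySQL; schema, names, and IPs are hypothetical.
import sqlite3

def failover(db, name, dead_ip, live_ip):
    """Repoint A records for `name` from a dead host to a live one.
    MyDNS will serve the new value on its next lookup."""
    db.execute(
        "UPDATE rr SET data = ? WHERE name = ? AND type = 'A' AND data = ?",
        (live_ip, name, dead_ip),
    )
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rr (name TEXT, type TEXT, data TEXT, ttl INTEGER)")
db.execute("INSERT INTO rr VALUES ('www.example.com.', 'A', '192.0.2.10', 300)")

# Heartbeat monitor decides 192.0.2.10 is down; repoint to the survivor.
failover(db, "www.example.com.", dead_ip="192.0.2.10", live_ip="192.0.2.20")
print(db.execute("SELECT data FROM rr").fetchone()[0])  # → 192.0.2.20
```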
It's not perfect: when a host goes down, users who still have the old record cached (or who sit behind proxies with stupid DNS cache policies) will see an outage until it expires. The only way around that is to cluster the hosts together at each location in a sort of HA setup.