CPanel Webserver on EC2 (or How I learned to stop worrying and love the cloud)

It’s been a long time coming, but my company is now fully based on Amazon Web Services EC2 for our web hosting. It’s been quite a journey to get here.

For more than 15 years we’ve cycled between a variety of providers who offered different things in different ways. First our sites were hosted by Gradwell, on a single shared server where we paid a per-domain cost for additional hosting. We left them in 2004 after six years and, following a very brief flirtation with Heart Internet, we moved to a now-defunct IT company called Amard. This gave us our first taste of CPanel as a way of centralising and easily managing our hosted sites. It was great – so great that it made moving to ServerShed (also defunct) in 2005 very easy.

This was our first dedicated server with CPanel, and we enjoyed that greater level of control while still having the convenience of a largely self-managing service. In 2007 we jumped ship to PoundHost, where we remained until last week. Three or four times in the last eight years we’ve re-negotiated for a new dedicated server and battled with the various frailties of ageing hardware and changing software. Originally we had no RAID and downloaded backups manually; then came software RAID0, then hardware RAID0, and most recently S3 backups on top.

Historically ‘the cloud’ was a bit of an impenetrable conundrum. You knew it existed in some form, but didn’t really understand what it meant or how its infrastructure was set up. Access to the AWS Cloud in the early days, as far as I understand, was largely command-line (CLI) based and required a lot of knowledge to get going on the platform. It required a lot of mental visualisation and I’m sure the complexity would have been beyond me back then. Some services didn’t exist, others were in their infancy. Everything was harder.

In that respect I almost don’t mind being a bit late to this party. It’s only in the last two or three years that it seems the platform has been opened up to the less-specialised user. Most CLI functions have been abstracted into a pretty, functional web console. Services interact with each other more fluidly. Access controls that govern permissions to every resource in a really granular way have been introduced. Monitoring, billing, automated resource scaling, load balancing and a whole host of other features are now a reality and can boast really solid stability.

Even then, migration has not been a one-click process. CPanel do offer a VPS-optimized version of the Web Host Manager (WHM), and that’s the one we’re running. Its main boast is that it uses less memory than the regular version, apparently as a concession to the fact that a virtual machine is more likely to exist as a small slice of an existing server and won’t have as many resources allocated to it. It looked to be the best fit.

Then we needed to find a compatible Amazon Machine Image (AMI) to install it on. It seemed to be largely a toss-up between CentOS and Red Hat Enterprise Linux. We’d used CentOS 6 in the past with good results (and unlike RHEL it requires no paid-up subscription), so we fired it up and slowly started banging the tin into shape. Compared to other set-ups there is very little AWS-specific documentation on setting up a CPanel environment, so we were mostly sustained by a couple of good articles by Rob Scott and Antonie Potgieter. CPanel themselves wrote an article on how to set up the environment, but it didn’t quite cover enough, and already the screenshots and references there are out of date. I will write a comprehensive overview of ‘How to Set up CPanel on EC2’ in another article, but to conclude here I will talk about what made this transition so tentative for us that we waited almost a year after setting up AWS before we went live with our main server on the platform:

Fear. Non-specific, but mostly of the unknown. It’s irrational, because when you’re buying managed hardware from resellers you never actually get to see the physical box you’re renting. It’s no more tangible to you than an ephemeral VM is, and yet there’s something oddly reassuring in knowing that your server is a physical brick of a thing loaded into a rack somewhere. If something goes wrong, an engineer that you can speak to on the phone can go up to it and plug in a monitor to see what’s happening. It’s not suddenly going to disappear without a trace.

Not so with the cloud. It’s all a logical abstraction. You don’t know, and you’re not allowed to know, precisely where the data centres are. You don’t know how the internal configuration works. Infrastructure is suddenly just software configuration, and we all know how easy it is to make a mistake. Click the wrong box during set-up and you might have your root device deleted accidentally, or be able to terminate the server with all of your precious data because you didn’t enable Termination Protection. If you’re a little careless with your access keys and they become public, you’ll find your account has been compromised to run clusters mining bitcoins at your expense.
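Both of those particular traps do have simple guards. Here’s a minimal sketch in Python with boto3 – my own illustration rather than anything from our actual setup, with a placeholder instance ID, an illustrative region and an assumed root device name – that switches on Termination Protection and stops the root volume being thrown away with the instance:

```python
# Sketch only: guard an instance against accidental termination and root-volume loss.
# The instance ID is a placeholder and "/dev/sda1" is an assumption; the root device
# name depends on the AMI.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region chosen for illustration
instance_id = "i-0123456789abcdef0"                 # placeholder

# Block the TerminateInstances API call against this instance (Termination Protection).
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    DisableApiTermination={"Value": True},
)

# Keep the root EBS volume around even if the instance is terminated anyway.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sda1", "Ebs": {"DeleteOnTermination": False}}
    ],
)
```

Neither of these is switched on by default, which is rather the point.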

Terrifying if you don’t know what you’re doing. So you investigate, learn slowly, play on the Free Tier. Make things, break things. Migrate one small site to make sure it doesn’t explode. Back it up, kill it, restore it, and be absolutely confident that this is really going to work.

And it does. For the first time I feel like we’re our own hosting provider. I spec the server, I buy it by the hour, I configure and deploy it, and Amazon is merely the ‘manufacturer’. Except that everything happens in minutes and seconds instead of weeks and days. Hardware failure is handled by simply stopping and restarting the instance, which redeploys it on different hardware. If you’ve architected for failure as you should, backups can be spun up in less than ten minutes. If I get nervous about spiralling running costs I can just flip the off switch and the charges stop, or the Trusted Advisor can offer suggestions on how to run my configuration more efficiently.
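For the curious, that stop-and-start cycle is about as small as operational tasks get. Here’s a minimal sketch using Python and boto3 – my own illustration, not the exact tooling we use, with a placeholder instance ID and an illustrative region:

```python
# Sketch only: stop and then start an EBS-backed instance so it comes back up
# on different underlying hardware. Instance ID and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
instance_id = "i-0123456789abcdef0"

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# On start, AWS places the instance on whichever healthy host has capacity.
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```

A stop and start (as opposed to a reboot, which keeps you on the same host) is what moves an EBS-backed instance onto fresh hardware, so it’s the right lever when the underlying machine itself is ailing.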

It’s telling that every company I’ve ever bought a physical server from is now pushing cloud-based offerings to its customers too. The era of procuring your own hardware is passing, replaced by more dynamic, more durable, more robust solutions.

You’d be forgiven for thinking this whole post was merely a sly advert for AWS, but I really am just a humble end user. Admittedly one that has been completely converted, and I haven’t even enumerated the full list of services that I now use, to say nothing of the others I haven’t yet had time to investigate. This is a great time to make the transition, because while adoption has been growing at a huge rate, I think it’s going to skyrocket to even greater heights in the next few years.

If you don’t have a pilot for the journey, it’s time to train one.
