cPanel WHM

Server-wide Referrer Spam Blocking for cPanel WHM

If, like me, you maintain a WordPress website, you might have recently noticed a number of referrals to your site coming from ‘buttons-for-website.com’

or some other unknown, dodgy-sounding website that clearly isn’t linking to you legitimately.

This is referral spam, which seeks to achieve a high Google PageRank by gaming the system to create the appearance of popularity. One major way to do that is to accumulate a high number of external backlinks and referring domains. These little spambots will visit your site and attempt to spam your WordPress installation with comments if they can, as well as injecting the appearance of referring traffic into your analytics and logs. It’s very annoying, because it’s not legitimate traffic and you’ll need to remember to exclude it.

It is possible to block these referrals on a per-site basis by adding the following to your site’s .htaccess file:

# Block Referrer Spam
RewriteEngine On
RewriteCond %{HTTP_REFERER} buttons\-for\-website\.com
RewriteRule ^.* - [F,L]
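If more spam domains crop up, the same block can be extended with extra conditions joined by [OR] – a sketch, where the second domain is purely illustrative:

```apacheconf
# Block Referrer Spam from multiple domains
RewriteEngine On
RewriteCond %{HTTP_REFERER} buttons\-for\-website\.com [NC,OR]
RewriteCond %{HTTP_REFERER} another\-spam\-domain\.example [NC]
RewriteRule ^.* - [F,L]
```

The [NC] flag makes the match case-insensitive, and [OR] chains the conditions so a match on any of them triggers the block.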

This is great, but what if you have a whole server of WordPress domains? Adding this directive manually to each one would be very tedious. Luckily this becomes easy in cPanel with a couple of configuration changes.

1) Follow this guide on creating a custom VirtualHost template within your cPanel installation. Once you have created both the vhost.default and ssl_vhost.default files, open them up and add the following lines to the VirtualHost directives:

RewriteEngine On
RewriteOptions Inherit

2) In WHM, go to ‘Apache Configuration -> Include Editor’ and add the blocking directive (the ‘# Block Referrer Spam’ block referenced above) to the ‘Pre-Virtual Host Include’ area for All versions.
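The Pre-Virtual Host Include ends up holding the same block as the per-site version, just in the main server context:

```apacheconf
# Block Referrer Spam
RewriteEngine On
RewriteCond %{HTTP_REFERER} buttons\-for\-website\.com
RewriteRule ^.* - [F,L]
```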

So what we’re doing here is creating a custom VirtualHost template that tells Apache to inherit any rewrite rules from the main server context into each virtual host, and then specifying the blocking rule in that main server context so that every hosted site inherits it.

In this way every site on the server is tested against the referral block, applying it globally. If you’re maintaining a client server this is the ideal way of taking care of the problem automatically. If you spot any more insidious referrals to your sites in future, just update the Pre-Virtual Host Include to check for the appropriate rewrite condition.

If you were curious, I checked and the ‘Buttons for website’ site has some 800,000 external backlinks from over 10,000 domains – that’s a lot of effort invested in trying to bump up its rank. As of today, it has a Google PageRank of 0. Spam shouldn’t and seemingly doesn’t pay!

Amazon Web Services

Amazon Web Services (AWS) now provides billing in local currency

Developers and businesses around the world will be breathing a huge sigh of relief today as Amazon Web Services finally announced the ability to have invoices generated and charged in one of 11 new local currencies.

Since its launch in 2006, AWS has exclusively billed in US dollars worldwide, causing a currency-conversion and bank-fee headache for all but U.S. customers. For a multi-billion dollar, multinational operation, this feature is long overdue, and it brings AWS into line with other providers such as Microsoft Azure in offering bills in the currency you’re most familiar with.

The new list of supported currencies, when using an eligible Visa or MasterCard to pay your bill, is:

  • Australian Dollars (AUD)
  • Swiss Francs (CHF)
  • Danish Kroner (DKK)
  • Euros (EUR)
  • British Pounds (GBP)
  • Hong Kong Dollars (HKD)
  • Japanese Yen (JPY)
  • Norwegian Kroner (NOK)
  • New Zealand Dollars (NZD)
  • Swedish Kronor (SEK)
  • South African Rand (ZAR)

This new preference can be set in your AWS ‘Billing and Cost Management’ account under ‘Account settings’. The changes take effect immediately and your estimated bill for the month is then presented to you in your local currency.


The one important factor to note is that AWS pricing remains firmly in US dollars, and all prices presented to you at purchase remain that way. Amazon appear to be offering an exchange rate that changes daily, so all estimates are based on that rate and will naturally be susceptible to currency fluctuations – CFO headaches aren’t gone entirely! Presumably the final bill for the month will be based on the exchange rate for the given currency on that day, but compared to the existing method – getting hammered by your bank for currency conversion fees at terrible transfer rates – this is substantially preferable and will result in lower overheads for all non-US AWS customers.

Raspberry Pi

Raspberry Pi 2 Model B with dnetc RC5-72 client

A little while ago I wrote up a summary of the RC5-72 project. One of my habits over the years has been to run the good old cow client on every new computer I’ve built just to see how the speed compares.

So when I picked up a new Raspberry Pi 2 Model B this week, this habit held true. For this I installed the ARM/EABI client v2.9110.519 (sadly dated 2012 – there have been very few client updates in the last few years), and it ran without any issues. By default it doesn’t detect the Pi 2’s quad-core architecture, so the Performance options must be set manually to use all 4 cores.

It’s not exactly speedy, but then this is a computer that I can fit in my back pocket.


Four simultaneous crunchers took 1 hour 3 minutes to complete 4 stat units, at a combined keyrate of 4.5Mkeys/sec. At that rate the Pi could crunch around 91 stat units per day. That means that running by itself, the Pi could complete the remaining work on the project in a mere 32 million years. I don’t think the warranty lasts that long.
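As a sanity check on those numbers – a quick sketch, assuming (as the project’s stats do) that one stat unit covers 2^32 keys, with roughly 96.6% of the keyspace still remaining:

```python
# Back-of-envelope check of the Raspberry Pi 2 figures quoted above.
keys_per_unit = 2 ** 32                        # one RC5-72 stat unit = 2^32 keys
seconds = 63 * 60                              # 1 hour 3 minutes for 4 stat units
keyrate = 4 * keys_per_unit / seconds          # ~4.5 Mkeys/sec combined
units_per_day = 4 * (24 * 3600) / seconds      # ~91 stat units per day
total_units = 2 ** 72 / keys_per_unit          # ~1.1 trillion stat units
years_remaining = total_units * 0.966 / units_per_day / 365  # ~32 million years
```

Reassuringly, the arithmetic reproduces the 4.5 Mkeys/sec, 91 units/day, and 32-million-year figures.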

The CPU temperature held fairly steady at 54°C, and I will also note that I didn’t overclock the Pi, so this was running at the default 900MHz.

While the Pi does boast a Broadcom GPU alongside its 1GB of RAM (shared with the CPU), there are no compatible crunching clients for a GPU test, and the project is in such a lacklustre state that I don’t foresee any budding development on one soon. This was still a fun little test, and the result isn’t that much slower than a decent home PC would have managed 15 years ago.

Amazon Web Services

How to modify the ‘Delete on Termination’ flag for an EBS volume on a running EC2 instance

When launching an EC2 instance on Amazon Web Services, its EBS volume is set to ‘Delete on Termination’ by default. Most of the time this is fine: you’d often rather take a snapshot of the volume so you can boot multiple copies of the same instance, and if you’re a developer creating and terminating instances regularly, it would be a nightmare to have orphaned EBS volumes cluttering up your account.

But what do you do when you have a running instance where the EBS was set to Delete and you’ve changed your mind and want to keep it? The AWS documentation is surprisingly vague about this, mostly talking about setting the flag on launch. So how do you do it for a running instance? And for that matter, how can you tell if you set an existing instance to Delete on Termination? It might have been a while and it’s hard to remember if you ticked that checkbox.

How to check the EBS ‘Delete on Termination’ flag

It’s a little buried. Go to your EC2 management console and click on ‘Instances’. Click on the instance you’re curious about, and then under the ‘Description’ tab, scroll down to ‘Block devices’, and click on the appropriate EBS volume. This will pop up an attribute box which will state the Delete on Termination flag. This seems to be the only place in the whole AWS console to check this information!
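If you prefer the command line, the same information can be pulled with the AWS CLI using a JMESPath query (the instance ID below is a placeholder) – something like:

```shell
# List each block device on an instance with its DeleteOnTermination flag
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].BlockDeviceMappings[].{Device:DeviceName,DeleteOnTermination:Ebs.DeleteOnTermination}" \
  --output table
```

This needs the CLI configured with credentials that can describe the instance, as covered in the note further down.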



How to modify the EBS ‘Delete on Termination’ flag

The only way to do this is with the AWS CLI; at the time of writing there’s no way to do it from the web console.

You can do this in one of two ways: either by specifying a JSON file containing the modifications you want to apply to the instance, or by passing the JSON inline with escape characters.

aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings file:///path/to/file.json

with a .json file in a format such as:

[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]

Or inline, without using a .json file:

aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings "[{\"DeviceName\": \"/dev/sda1\",\"Ebs\":{\"DeleteOnTermination\":false}}]"

Note: You will need to configure the AWS CLI first (run ‘aws configure’ from a terminal), either with user credentials attached to a policy that grants permissions on the appropriate EC2 instance, or by running the CLI on an instance that has an EC2 IAM role with those permissions.

Now hopefully you’ll be able to save that EBS volume after changing your mind about the termination flag!

Amazon Web Services

RC5-72 on Amazon Web Services (AWS) EC2 Cluster – A modern solution to an old problem


Anyone kicking around the internet since the early days will have heard of distributed.net’s RC5-72 distributed computing project. Arising from RSA Labs’ Secret-Key Challenge, the project sought to utilise distributed computing power to perform a brute-force attack in an attempt to decrypt a series of secret messages encrypted using the RC5 block cipher. Sponsored by RSA Labs, a $10,000 reward was offered to the participant whose machine was responsible for finding the correct key for each of the challenges, starting at 56-bit encryption and scaling up through 64, 72, 80, 88, 96, 104, 112, 120, and 128 bits.

I started contributing to the 64-bit instance of the project (termed RC5-64) back in 2001, and with the combined computing power of around 300,000 participants, the key was cracked after almost 5 years of work in July 2002. This project required the testing of 68 billion key blocks (2^64 keys) and found the correct key after searching 82.7% of the total keyspace.

A new project to tackle the next message, encrypted with a 72-bit key, was started on December 2nd 2002. This project requires the testing of 1.1 trillion key blocks (2^72 keys) – 256 times larger than the original project that had taken 5 years to complete. After a few years it became apparent that this was going to take ‘a very long time’, and RSA Labs withdrew the challenges in May 2007, along with the $10,000 prizes. Shortly after this news, distributed.net announced that they would continue to run the project and would fund an alternative $4,000 prize.


As of today the project has made it through 3.378% of the keyspace after 12 years of work. At the current rate the project anticipates hitting 100% in a mere 219 years.

You can imagine that so many years of effort has dulled the enthusiasm of its participants. The statistics report that there have been some 94,000 unique participants in RC5-72 (significantly down on the 300,000 of the previous project), but that only 1,200 of them remain active. With the rise of other distributed computing projects in the early 2000s, such as SETI@home, Folding@home and many others, this humble project has been rather forgotten by the internet at large, and the advent of Bitcoin and other e-currencies has led people to turn their spare processing power to more profitable ends.

And yet, the RC5-72 project’s overall keyrate is higher today than it has ever been. The reason is the development and widespread use of powerful GPUs in home computers. I’m the first to admit I don’t really understand the finer points of how computer hardware works, but GPUs turned out to be roughly 1,000 times faster than even the fastest commercial CPU at crunching through keys. Traditional CPU crunching now makes up less than 10% of the total daily production, with the vast majority coming from GPUs: ATI Stream technology (70%), NVIDIA CUDA (5%), and the more recent OpenCL, supported on both ATI and NVIDIA hardware (14%).

As much as I salute distributed.net for continuing to maintain the project, run the supporting key servers that distribute the work, and keep the stats, I have to say I’ve never seen any active effort to promote it. The website is rather dated and, to my memory, has the exact same design it had in 2001 when I first started. There are no social media buttons, no methods of incentivisation, and even some of the basics, like the keyrate graphs, have been broken for months (or years) without anyone worrying about fixing them.

A possible solution

I know that it must be difficult to commit any real effort to something that has been a back-burner operation for almost 10 years, but the completionist in me is desperate to somehow mobilise the modern massive internet to attack the project with gusto and get it to 100% in a month. I know that’s a laughably ridiculous suggestion, because the scale of the work required is massive. If you had a reasonably powerful ATI graphics card that could churn through one billion keys per second, you might be able to crunch around 19,000 work units per day with the GPU dedicated to the task 24/7. Even at that rate, a single card would take around 150,000 years to complete the project.

So 1200 people aren’t going to get this done, even though the top 100 of those can pump out almost 10 million work units a day.

The advent of ‘the cloud’ has made potentially unlimited computing power available to anyone in the world – at a cost. Amazon Web Services (AWS) have the largest compute infrastructure of any provider and I’ve spent quite a while familiarising myself with the platform. It occurred to me that a key-crunching test of RC5-72 was in order.

The Test

For the basic test, I provisioned a compute-heavy c3.8xlarge instance running a traditional Linux x86 CPU client, and a GPU g2.2xlarge instance running the CUDA 3.1 client, and I latterly also tested the OpenCL client.

The keyrate and cost results were as follows:

Instance Type   dnetc Client              Keyrate (Mkeys/sec)   EC2 Spot Price ($/hr)
c3.8xlarge      v2.9109.518               180                   $0.32
g2.2xlarge      v2.9109.518 (CUDA 3.1)    423                   $0.08
g2.2xlarge      v2.9109.520 (OpenCL)      432                   $0.08


The OpenCL client was the winner, offering 432 million keys/sec for 8 cents an hour. It should be noted that this falls far short of the best recorded speed from a GPU: an ATI Radeon HD 7970 can do a stunning 3.6 billion keys/sec – although that benchmark list is at least a year old, so it’s probable there exist cards out there that are even more powerful. Compared to that, a mere 0.43 billion keys/sec is only around 12% as fast.
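Another way to read the results is keys crunched per dollar of spot spend – a quick comparison using the figures from the table above:

```python
# Keys per dollar of spot price for each tested configuration
# (keyrate in keys/sec, price in $/hr, from the results table).
options = {
    "c3.8xlarge (CPU)":    (180e6, 0.32),
    "g2.2xlarge (CUDA)":   (423e6, 0.08),
    "g2.2xlarge (OpenCL)": (432e6, 0.08),
}
keys_per_dollar = {
    name: rate * 3600 / price for name, (rate, price) in options.items()
}
```

On that measure the OpenCL GPU client works out nearly ten times more cost-effective per key than the CPU instance.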

The potential advantage of the AWS cloud is seemingly not in its raw speed, but its scale. I can’t run 10 graphics cards at home but I can run 10 instances of the dnetc client. So that’s what I did for around 36 hours. The cost of 10 instances for 1 day equated to $0.80 x 24 = $19.20. Not bank-breaking but quite a lot to achieve a total speed of 4.32 billion keys/sec. That singular 24 hour effort put me at #26 in the top 100 rankings for the day, and the 87,000 work units completed bolstered my grand total to 1.4 million units over the 12 years I’ve been working on the project. A fairly hefty chunk relative to my total effort, but still a tiny drop in an enormous ocean.

Getting to 100%?

After a few calculations, I estimated that I would require 346,000 nodes like this, running 24/7 for one year, to complete the project. Assuming I could maintain a spot price of 8 cents an hour, it would cost ‘only’ $232 million to provision the cloud to complete this project. I think it’s pretty unlikely I could crowd-source that from the internet in order to complete an old cryptography project, but there are other, simpler solutions.
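The arithmetic behind that estimate, roughly, taking one ‘node’ to be a single g2.2xlarge at the 432 Mkeys/sec measured earlier:

```python
# Rough reproduction of the nodes-for-one-year and total-cost estimate.
keys_per_unit = 2 ** 32                             # one stat unit = 2^32 keys
units_total = 2 ** 72 / keys_per_unit               # ~1.1 trillion stat units
units_per_node_day = 432e6 * 86400 / keys_per_unit  # one g2.2xlarge, per day
nodes = units_total / (units_per_node_day * 365)    # nodes to finish in a year
cost = nodes * 0.08 * 24 * 365                      # at $0.08/hr spot pricing
```

This lands at roughly 347,000 nodes and about $240 million – the same ballpark as the figures above, with the small difference down to rounding and the slice of keyspace already searched.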

1) Better GPUs. AWS aren’t the only cloud provider and one of the many others may provide GPU instances with better computational power – but none of them offer AWS’ novel ‘spot pricing’ model to get instances at rock-bottom prices, so this is unlikely to be cheaper.

2) Don’t use the cloud at all. Somehow compel millions of people to start running the client on their high-end graphics cards at home and work too. Might happen… but not without a concerted social media effort, refresh of the website, and some kind of modern fun ‘game’ incentivisation thing. Doing it for the fun of cryptographic curiosity is unlikely to motivate too many casual users.

3) More bespoke supercomputers like this one. The Center for Scientific Computing and Visualization Research at the University of Massachusetts have built a bit of a Frankenstein supercomputer from old PlayStation consoles and AMD Radeon graphics cards. It’s seemingly used for other computational purposes, but the excess capacity is put into the RC5-72 project, and recently it has been churning out an eye-popping 1.2 million work units a day (more than 10% of the total work of the top 100) – cool, huh?

4) Maybe Amazon would like to take on the challenge directly to demonstrate the power of their cloud. They’ve done things like this in the past (such as creating the #72 supercomputer in the world, with 26,496 cores rated at 593 teraflops) and would presumably do it for free as a bit of a boast, but then there would also be the fear that the other tenacious users on the project would feel a bit cheated by having a giant multinational come along and solve the problem without them. But it’s also possible that the joy of having it complete would be worth the disappointment of having not achieved it personally.

This entire experiment and post was a bit of nostalgic indulgence for me. RC5-72 has been a tempting Everest of distributed computing, and back in the day I wanted to be part of the pioneering team that conquered it. Now there are only a few of us left, and we haven’t even reached base camp yet. At this point I’d gladly hop on a helicopter to the summit, except I don’t know where to get one.