Client IP with X-Forwarded-For across multiple proxies

When you're running HAProxy or Nginx in front of Apache, you lose client IP address information. The TCP connections to your Apache server come from Nginx, so all your logs reflect one single client IP address. This is a problem if you run a forum, since forum software typically counts site visitors by IP address. Worse, it may trip the flood controls in some forum packages. At minimum, it throws off log analysis.

X-Forwarded-For is an HTTP header that allows Layer 7 (HTTP) proxies to pass along the original, external client IP to the next destination. To use it, your reverse proxy, caching server, or load balancer must be configured to add that header to HTTP requests, and the destination server must be configured to look for it.

Everyone has a tutorial for configuring X-Forwarded-For across two servers (say, Nginx+Apache, or HAProxy+Apache). But what happens when you have an HAProxy load balancer balancing between three Nginx caches, which in turn forward to Apache for PHP/MySQL? That client IP address needs to be passed across three different HTTP servers.


Assuming that HAProxy has address 1.2.3.4, Nginx is running on another server at 1.2.3.5, and Apache is at 1.2.3.6:

HAProxy will need to be configured with this option so it passes the connecting client's IP in the X-Forwarded-For header:
option forwardfor
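
For context, here's a minimal sketch of where that option lives in haproxy.cfg; the section names and the single backend server are placeholders, not taken from any real configuration:

defaults
    mode http

backend nginx_caches
    option forwardfor    # append the client IP in an X-Forwarded-For header
    server cache1 1.2.3.5:80 check

The option can also be placed in the defaults or frontend section to apply it globally.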


Nginx will need the following to both *receive* the X-Forwarded-For header from the HAProxy server, and then *add* the X-Forwarded-For header to the new connection to Apache.

In your main nginx.conf file:

set_real_ip_from 1.2.3.4; # this is the HAProxy connecting IP address
real_ip_header X-Forwarded-For; # the specific header to be read

In your proxy configuration, either in nginx.conf or in a separate include file, you'll need something similar to this:

location / {
    proxy_pass http://1.2.3.6;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}


Finally, Apache can use the mod_rpaf module to read the X-Forwarded-For header from Nginx:


# Enable mod_rpaf and trust the Nginx connecting IP address
RPAFenable On
RPAFproxy_ips 1.2.3.5
# Apache will treat the value of this header as the "client IP"
RPAFheader X-Forwarded-For
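
A quick sanity check of the whole chain: request the site through the load balancer from an outside machine, then confirm that your public IP (rather than 1.2.3.5) shows up in Apache's access log. The log path here is an assumption; adjust it to your layout.

# from an outside machine, through the load balancer
curl -s -o /dev/null http://1.2.3.4/

# then, on the Apache server (1.2.3.6)
tail -n 5 /var/log/httpd/access_log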

References:
http://haproxy.1wt.eu/download/1.5/doc/configuration.txt
http://stderr.net/apache/rpaf/
http://wiki.nginx.org/HttpRealIpModule#set_real_ip_from
http://wiki.nginx.org/HttpProxyModule#Variables

Rsync to Fat32 drives

I regularly provide one of my clients with a backup of his data in the form of an external hard drive. Since his server runs CentOS Linux and his computer is a Windows machine, I need to provide a drive formatted as Fat32 so he can plug it into his computer and access the data without problems.

# mount -t vfat /dev/sdc1 /mnt/usb -o shortname=mixed -o utf8

The “shortname=mixed” option keeps the case preserved; otherwise vfat will convert any filename that's 8 characters or less to lower case (the default behavior is “shortname=lower”), which causes problems for rsync. The “utf8” option ensures filenames are handled the same way Windows handles them (the Linux default is to translate them to iso-8859-1, even though the underlying vfat filesystem stores long filenames in Unicode).
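
If this drive gets mounted often, the same options can live in /etc/fstab; the device name and mount point below simply mirror the example above, and “noauto” keeps the system from trying to mount the removable drive at boot:

/dev/sdc1   /mnt/usb   vfat   shortname=mixed,utf8,noauto   0 0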

My normal mirror command, “rsync -az /home /mnt/usb”, doesn't work because -a is a shortcut for the following options:


-a, --archive      archive mode; same as -rlptgoD (no -H)
-r, --recursive    recurse into directories
-l, --links        copy symlinks as symlinks
-p, --perms        preserve permissions
-t, --times        preserve times
-o, --owner        preserve owner (super-user only)
-D                 same as --devices --specials
-g, --group        preserve group

Using -o will cause errors, as rsync will copy each file and then chown (change owner) it. Fat32 doesn't support Unix ownership or permissions, so rsync will error on every file that is copied. Ditto for -p and -g. Symlinks aren't supported either, and we don't want -L to copy the target of each symlink (that would produce multiple copies of a file or directory, which isn't desirable in this particular instance). The -D option is irrelevant because we are only copying website data, so we don't need special devices (/dev/*).

That leaves -r (recursive) and -t (preserve times) as our options for vfat. There's no need for compression (-z) since we're not syncing across the network.

So the best command to copy from ext3 to a Fat32 drive is something like this:


rsync -rtv /home /mnt/usb

I like using -v for verbosity, unless I'm running this within a shell script.
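
One refinement worth considering, not from the original write-up: FAT stores file modification times with only two-second resolution, so on subsequent runs rsync may decide every file has changed and copy it again. The --modify-window option tells rsync to treat timestamps that differ by no more than the given number of seconds as equal:

rsync -rtv --modify-window=1 /home /mnt/usb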

A good reference for further reading on Fat32 with Linux:
http://www.osnews.com/story/9681/The_vfat_file_system_and_Linux/

Bad Proxies Causing Apache to Reach MaxClients

Recently, I was called to assist with a server that was constantly being bombarded with HTTP connections, causing Apache to hit MaxClients. It took a couple of minutes to track down the IPs with the most connections using this little command:


netstat -tpnC | grep httpd | awk '{print $5}' | cut -f1 -d: | sort | uniq -c
1 65.55.25.149
4 67.141.163.123
1 67.195.111.40
1 71.247.16.159
28 137.244.215.55

This showed a large number of connections from a single IP, 137.244.215.55.
Since we have ExtendedStatus enabled in httpd.conf, it's a simple matter to find which site is getting hit:


# lynx -dump -width=160 http://localhost/server-status | grep -e '...[1-9].*' | grep -v OPTIONS

Sure enough, there's the IP and the site getting hit, along with the request URI. I like to use this to see when a comment spammer is POSTing to a WordPress blog or otherwise trying something malicious. In this case, the site getting hit was a Fantasy Football site.

However, I was a little concerned because the IP that had a large number of connections was a .mil IP address.


# host 137.244.215.55
55.215.244.137.in-addr.arpa domain name pointer uhhz-wpa-001.robins.af.mil.

So I simply used ConfigServer Firewall (a nice front-end to iptables) to block the address for an hour.


csf -td 137.244.215.55 3600


9:45:00 AM Other Tech: block the .mil!
9:45:45 AM Me: blocked 'em.
9:45:49 AM Me: just for an hour though.
9:46:39 AM Me: in case it's the airforce cybercommand thingy that's investigating a terror suspect and enemy combatant.
9:47:05 AM Me: because a fantasy sports site is where all the terrorists hang out....

All was well and good, but shortly after blocking it, another .mil address began hitting the same site with a large number of connections:


netstat -tpnC | grep httpd | awk '{print $5}' | cut -f1 -d: | sort | uniq -c
1 124.115.4.197
1 186.18.102.180
1 38.99.98.113
1 65.55.25.144
1 68.216.159.34
1 68.240.147.150
32 131.51.128.21

After blocking that one, a wellsfargo.com address showed up, then a few other random corporate addresses. This was beginning to concern me, since it resembled some sort of DDoS behavior from a bunch of infected PCs (and the idea of .mil and Wells Fargo computers being infected didn't sit well with me). But it wasn't a very effective attack, since only one or two IPs would hit at the same time. And why would someone attack a Fantasy Football site with military and banking computers? Surely there are better targets than that!

The next step was to watch the logs for a bit (tail -f /path/to/domain/access_log). I noticed some odd behavior. Usually, a browser hits a site, requests a page, and then requests all the supporting files (CSS, JavaScript, images, media files, etc.), usually listing the original referring URL along the way. This was a WordPress blog, so most traffic was fairly normal along these lines. But grepping for the specific IP addresses in the log showed a more unusual pattern: a single request with a generic user-agent string (“Mozilla/4.0 (compatible;)”), followed by a large number of requests for all the links on the page. Something like this:


131.50.151.28 - - [16/Oct/2010:11:39:28 -0400] "GET /baseball/wp-content/themes/SomeTheme/style.css HTTP/1.1" 200 404 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=201009 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=201007 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200910 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200912 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=201006 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200908 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200911 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/xmlrpc.php HTTP/1.1" 200 404 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=201010 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200909 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=201008 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200905 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200811 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/xmlrpc.php?rsd HTTP/1.1" 200 408 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200809 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?p=5 HTTP/1.1" 200 408 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200808 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200810 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200904 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200906 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?p=1961 HTTP/1.1" 200 411 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200812 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?p=1989 HTTP/1.1" 200 411 "-" "Mozilla/4.0 (compatible;)"
131.50.151.28 - - [16/Oct/2010:11:15:43 -0400] "GET /football/?m=200907 HTTP/1.1" 200 413 "-" "Mozilla/4.0 (compatible;)"

I watched this for a while, scratching my head. The “Mozilla/4.0 (compatible;)” user agent was suspicious. It was immediately obvious that it was some kind of bot or spider; bad bots like to disguise themselves or pass themselves off as real browsers to avoid detection or redirection based on their behavior. So I was beginning to think this was a really bad indexing/search spider. Except that it was hitting this one single site, and there were also referrer links in some of the lines indicating traffic from other sites. And bots are usually operated from a single IP address – they don't spring up from other IPs when the first one is blocked.

Troubleshooting is often a team effort, and it certainly helps to discuss a problem and brainstorm ideas.


11:08:53 AM Me: i'm beginning to think that our military is not infected with bots, but are goofing off with fantasy football, which scares me even more.
11:10:31 AM Other Tech: The useragent is weird though
11:11:23 AM Me: yeah. maybe they're behind a proxy server that grabs all the linked pages for immediate caching.

Sure enough, a quick Google for “Mozilla/4.0 (compatible;)” turned up reports of exactly that. Behind the corporate doors of Wells Fargo and various Air Force bases are a bunch of people reading up on Fantasy Football, and their collective proxy servers (probably Blue Coat) are slamming the server with a ton of requests to pre-fetch all the other linked URLs from the first page, so that each visitor hits the server with dozens of connections.

Since this is a shared server, the traffic was affecting not only the customer site in question but every other site hosted on it. Obviously we couldn't keep blocking by IP address, and there were too many sources to block entire ranges (which is a bad idea to begin with).

My solution was to redirect on the User-Agent string until the issue died down. I created a single HTML page in the server's main DocumentRoot (outside the customer's virtual host) and added the following lines to the customer's .htaccess file:


RewriteCond %{HTTP_USER_AGENT} ^Mozilla\/4\.0\ \(compatible;\)$
RewriteRule .* http://host.example.com/denied/ [R]
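
If mod_rewrite isn't already switched on in that .htaccess file, the rule needs a RewriteEngine line above it, and it doesn't hurt to make the redirect code and the last-rule flag explicit. A slightly fuller version of the same rule:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^Mozilla\/4\.0\ \(compatible;\)$
RewriteRule .* http://host.example.com/denied/ [R=302,L]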

The actual index.html page at the redirect URL sums it up like this:


Your "web accelerator" proxy is causing problems with our servers and customer sites.
Sorry, but you will not be able to access content here.
Please contact your IT Support department for assistance.


Addendum:
I found the script on this page very helpful when testing the User-Agent string in my .htaccess rule. While I am quick to telnet to a webserver to pass an HTTP request and simulate a browser visit, I don't know all the details of the HTTP protocol (including the format of the User-Agent string). Scripting this to quickly connect to localhost, pass the request and the User-Agent, and see whether I received a 200 or a 302 redirect was extremely helpful.
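
The original script isn't reproduced here, but a rough equivalent with curl would look something like this; the Host value is a placeholder for the affected site:

# should print 302 for the proxy's generic user agent...
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: fantasysite.example.com" -A "Mozilla/4.0 (compatible;)" http://localhost/

# ...and 200 for a normal browser user agent
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: fantasysite.example.com" -A "Mozilla/5.0" http://localhost/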

Subversion dependencies on cPanel servers using Yum

If you've ever had to manage a cPanel server, you've probably had a request from a customer to install Subversion, and you probably tried the Yum package manager to install it.


root@server [~]# yum install subversion
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* addons: mirror.cogentco.com
* extras: yum.singlehop.com
Excluding Packages in global exclude list
Finished
Setting up Install Process
Resolving Dependencies
[...]
subversion-1.4.2-4.el5_3.1.x86_64 from base has depsolving problems
--> Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.x86_64 (base)

cPanel is written in Perl, so the cPanel software maintains its own Perl installation and blocks Yum from installing Perl dependencies. The cPanel /scripts/perlinstaller utility will show that the needed module is installed, yet Yum will not proceed.

I used to download the perl-URI RPM from rpmfind.net for my version of CentOS and install it manually. But an associate of mine showed me an easier way.

Edit /etc/yum.conf.
The very first line:
exclude=apache* bind-chroot courier* dovecot* exim* httpd* mod_ssl* mysql* nsd* perl* php* proftpd* pure-ftpd* ruby* spamassassin* squirrelmail*

Remove “perl*” from that line. Save and quit.
Now, install subversion. It should resolve the perl-URI dependency just fine.
Once it's completed, don't forget to edit the yum.conf file and restore the “perl*” to the exclude line, so Yum doesn't interfere with cPanel's Perl modules.
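
Depending on the yum version, you may be able to skip editing the file entirely: the --disableexcludes option tells yum to ignore the exclude line for a single transaction. Treat this as an alternative to verify on your own server rather than a cPanel-documented procedure:

yum --disableexcludes=main install subversion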

Yum Transaction Check Errors

A client tried to update his CentOS installation on his virtual server by running “yum update” and got the following error:

Transaction Check Error:
file /usr/share/X11/XKeysymDB from install of libX11-1.0.3-11.el5 conflicts with file from package libX11-1.0.3-9.el5
file /usr/include/popt.h from install of popt-1.10.2.3-18.el5 conflicts with file from package popt-1.10.2-48.el5

The quick and easy fix is to remove the “installed” package that presents the conflicting files, then let yum reinstall it with the appropriate versions and dependencies.


[root@vps ~]# rpm -e libX11-1.0.3-11.el5
[root@vps ~]# rpm -e popt-1.10.2.3-18.el5
[root@vps ~]# yum -y update
...
[root@vps ~]# cat /etc/redhat-release
CentOS release 5.4 (Final)
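
Conflicts like these are usually leftovers from an interrupted yum transaction that left duplicate packages behind. If the yum-utils package is available, its package-cleanup tool can list and remove the duplicates for you, which may be safer than picking rpm -e targets by hand (verify the list it prints before cleaning):

package-cleanup --dupes
package-cleanup --cleandupes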

cPanel Account copy: Package over quota

Many hosting providers use PHP built as an Apache module, so that when a hosting customer uploads files through a script, they are owned by the user “nobody”. This causes a major problem with quotas, since cPanel will only count files that are owned by the user. Files owned by nobody are not counted toward quotas. If a user installs an uploader script, then the user can extend their actual disk usage far beyond their quota.

This can be a problem when moving accounts from one server to another: if the account package is larger than the allowed quota, cPanel will not restore all the files to the new server. This usually results in accounts being restored with incomplete file sets. Even worse, cPanel will report that the account was restored successfully:




/bin/gtar: ./.fantasticodata/PerlDesk: Cannot mkdir: No such file or directory
/bin/gtar: ./.contactemail: Cannot write: Disk quota exceeded
/bin/gtar: ./fantastico_backups/blog.backup.1137314599.tgz: Cannot open: No such file or directory
/bin/gtar: Error exit delayed from previous errors
Done


Account Restore Complete
Unlocking password for user,
passwd: Success.
checked 107 files….

The simple workaround is to increase the customer's quota in WHM before packaging the account. This way, the account has a large enough quota to restore all the files.
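
The quota can also be raised from the shell with cPanel's /scripts/editquota; the exact invocation below is from memory (the value should be in megabytes), so confirm it against your cPanel version before relying on it:

/scripts/editquota USER 2000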

If the account is already packaged and access to the origin server is no longer available, then the only option is to edit the quota file stored in the cPanel package so the new server restores the account with a larger quota limit, ensuring all files are restored. The trick is to extract the quota file from the package, raise the limit, and append the edited copy back to the tarball; since tar extracts entries in order, the appended copy of USER/quota overrides the original value during the restore. In the following real-world example, the backup package has been copied to the /home directory on the destination server, ready to be restored. The account holds approximately 1.5GB of files, but its quota is set to 700MB.


root@server [/home]# gunzip USER.tar.gz
root@server [/home]# tar -f USER.tar -x USER/quota
root@server [/home]# cat USER/quota
700
root@server [/home]# echo 2000 > USER/quota
root@server [/home]# tar -rf USER.tar USER/quota
root@server [/home]# rm -rf USER/
root@server [/home]# gzip USER.tar
root@server [/home]# /scripts/restorepkg USER

Of course, you can solve this issue in advance by enabling PHP to run under suPHP or FastCGI, so that scripts will create files owned by the cPanel user, rather than nobody. Then, the user will be alerted when they hit quota.

Search and Destroy: Removing core dump files with find

Recently, I made a change to PHP on a server. The change adversely affected PHP, causing it to terminate with a core dump frequently. Since PHP runs non-persistently through suPHP, core files ended up scattered throughout various clients' home directories.

I was alerted to this when one of my resellers called me asking how several of his WordPress sites filled up their disk quota so quickly, even after he raised their quotas.

Upon investigation, I found lots of files like the following:


core.14453
core.1334
core.19962
core.122

I needed to remove these (after fixing the PHP problem).

Here was my search and destroy command:


find /home -regex '.*/core.[0-9]*$' -exec rm -f {} \;

If you know find, “find /home” tells find where to start searching. The “-exec rm -f {} \;” tells find what to do with each file that matches.

The -regex option allows find to match filenames by regular expression. I could do something simpler like -name core\*, which would indeed match any filename like “core.12345”. However, it would also match “core.php”, which might screw up someone's website.

The pattern starts with .*/ which means “any single character repeated zero or more times, followed by a slash”. This is needed because find's -regex matches the entire path, so the pattern has to account for the directories leading up to the filename. The period following core matches the . in the filename, but technically it matches any other single character as well; I would have been better off putting a backslash in front of it to specify a literal period.

[0-9] matches any single digit with the * matching zero or more times. This means “core.1” and “core.12345566644” are both matched. Finally, the $ anchors it to the end, so that it only matches “core.12345” (and potentially “core-12345”), but not “core12345.php”.
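
For what it's worth, here's a slightly tightened version of the same command, assuming GNU find (whose -regex matches the entire path): it escapes the dot, requires at least one digit, and limits the match to regular files:

find /home -type f -regex '.*/core\.[0-9]+' -exec rm -f {} \;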