NGINX server configuration for a WordPress instance served from a URL’s subdirectory

Say you want to serve a WordPress instance on a website’s domain, but not at the root of the path; you want it under a subdirectory, for example “blog”, just like this blog:

https://www.gubatron.com/blog 

Here’s what my NGINX server block for ‘www.gubatron.com’ looks like at the moment (https/ssl hasn’t been configured yet):

server {
  server_name www.gubatron.com;
  listen 80;
  listen [::]:80;
  root /media/ebs/data/websites/gubatron.com/;
  index index.php index.html index.htm;

  # wordpress lives at gubatron.com/blog/...
  # let wp-admin requests through untouched ('last' stops rewrite processing here)
  rewrite ^/blog/wp-admin/(.*) /blog/wp-admin/$1 last;
  # search redirect
  rewrite ^/blog/(.*)s=(.*)$ /blog/index.php?s=$2 last;
  # everything else falls back to wordpress' front controller
  try_files $uri $uri/ /blog/index.php$is_args$args;

  # hand all .php requests to php-fpm over its unix socket
  location ~ \.php {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    include fastcgi_params;
  }
  # never serve anything under .git
  location ~ \.git {
    deny all;
  }
}
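After editing a server block like this one, it’s worth validating the config and reloading before moving on (standard nginx commands, assuming a systemd-based Ubuntu box):

sudo nginx -t                  # check the configuration for syntax errors
sudo systemctl reload nginx    # apply it without dropping live connections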

Here is the equivalent in lighttpd. Too bad lighttpd has no plans for HTTP/2; it’s much friendlier and more flexible to configure than nginx, in my humble opinion.

$HTTP["host"] =~ "^gubatron.com$|^www.gubatron.com$" {
  server.document-root="/media/ebs/data/websites/gubatron.com/"

  $HTTP["url"] =~ "\.git" {
     url.access-deny = ("")
  }

  url.rewrite = (
            "^/blog/wp-admin/(.*)" => "$0",
            "^/blog/(.*)\.(.+)$" => "$0",
            "^/blog/(.+)/?$" => "/blog/index.php/$1"
  )
}

I used to host this website and WordPress on lighttpd. lighttpd’s config file is very powerful; it’s all based on matching server variables and applying rules. I will miss it dearly, things like its compressed file cache and its flexibility, but I have to move on to nginx if I want to use HTTP/2: lighttpd has no plans for HTTP/2 support, and HTTP/2 is just much faster and more efficient than HTTP/1.1.

Fix high CPU usage by WordPress and MySQL

Today one of our WordPress sites had very high server load, and it was being caused by MySQL.

So I went to the MySQL console and looked up the process list:
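If you want to follow along, the same process list can be pulled from the shell with the mysql client (use whatever credentials apply to your setup):

mysql -u root -p -e 'SHOW FULL PROCESSLIST;'   # every connection and the query it is running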

So this guy is appearing a lot:
SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';

Let’s see how it’s behaving with EXPLAIN:
explain SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';

It’s scanning 226k rows to get its search results!

Probably some moronic plugin is doing this, and WordPress does not add an index to that column by default. The solution is simple: let’s add an index!

ALTER TABLE wp_options ADD INDEX (`autoload`);

Now let’s run EXPLAIN again.

From scanning 226k rows it went down to 408: almost three orders of magnitude fewer.

And now the CPU load is below 4%. Crisis averted.
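If you ever want to double-check that the index is still in place (plugins and migrations have a way of undoing these things), here is a quick look from the shell; the database name is a placeholder:

mysql -u root -p my_wp_database -e 'SHOW INDEX FROM wp_options;'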

How to enable PHP-FPM log output

This one had me for the longest time.

If you happen to be running a web server with php-fpm, sometimes you will run across an HTTP 500 error, and all you will get is a blank screen.

You will look at your server’s error log and your vhost’s error log, and you will see nothing.

At this point you will want to enable logging in php-fpm to see what’s up.

So you will go to your /etc/php/7.0/fpm/php-fpm.conf, start searching for “log”, and come across:

error_log = /var/log/php7.0-fpm.log

You will tail -f that log file, and nothing will come up.

You will go back to that config file and play with your log levels: still nothing. That’s because there’s this fucking obscure setting in your pool configuration that you’d never think of.

Let’s say you’re using the default www.conf pool config file (the one sitting at /etc/php/7.0/fpm/pool.d/www.conf). Open it and look for “workers”; you will see this:

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: on highloaded environement, this can cause some delay in the page
; process time (several ms).
; Default Value: no
;catch_workers_output = yes

Uncomment catch_workers_output = yes, restart your php-fpm service, tail -f your log, and you will see the stack trace you’re looking for.
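If you’d rather script the whole thing, something like this does it on the stock Ubuntu layout used above (adjust the paths and PHP version to match yours):

# uncomment catch_workers_output in the www pool, restart, and watch the log
sudo sed -i 's/^;catch_workers_output = yes/catch_workers_output = yes/' /etc/php/7.0/fpm/pool.d/www.conf
sudo systemctl restart php7.0-fpm
tail -f /var/log/php7.0-fpm.log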

You’re welcome.

Upgrading your WordPress blog to PHP 7.0 on Ubuntu Xenial

If you’re about to upgrade your Ubuntu server to 16.04 (Xenial), you might want to take advantage of the new PHP 7.0, which is as fast as or faster than Facebook’s HHVM. Or perhaps a few things broke during the upgrade process, and perhaps that’s why you’re here.

Make sure the following packages are installed.

sudo apt install php7.0-cli php7.0-common php7.0-curl php7.0-fpm php7.0-json php7.0-readline php7.0-mbstring php7.0-xml php7.0-mysql

Update your php-fpm web server configuration

I run lighttpd, but you’re more likely running nginx or apache.
If you use php-fpm and you’ve configured your pool to be accessed via a unix socket, you will have to update your server configuration from the old socket path "/var/run/php5-fpm.sock" to the new one, "/var/run/php/php7.0-fpm.sock".
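If you’re on nginx, here is a rough sketch of that swap from the shell; the vhost filename is just an example, point it at your actual config:

# point fastcgi_pass at the new php7.0-fpm socket, then test and reload
sudo sed -i 's|unix:/var/run/php5-fpm.sock|unix:/var/run/php/php7.0-fpm.sock|g' /etc/nginx/sites-available/example.conf
sudo nginx -t && sudo systemctl reload nginx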

This is how it looks for lighttpd:

fastcgi.server = ( ".php" =>
  (( "socket" => "/var/run/php/php7.0-fpm.sock",
     "broken-scriptfilename" => "enable",
     "allow-x-send-file" => "enable" )))

[SYSADMIN] Serve your WordPress cached pages directly with lighttpd and not PHP

Optimizing Your WordPress Cache Loads in Lighttpd.

If you don’t configure your WordPress virtual host properly in lighttpd, your WordPress cache will still make use of PHP.

Wouldn’t it be nice if all those cached requests were served directly from the webserver as the static files that they are, bypassing the CPU/memory load PHP can incur, and freeing those resources for other things?

Install and Enable mod_magnet

For this to work with lighttpd, you will need mod_magnet, so assuming you’re on an Ubuntu/Debian-based Linux distro, let’s make sure we have it installed:

sudo apt-get install lighttpd-mod-magnet

Then let’s make sure it’s enabled. You can do this manually in your lighttpd.conf by adding “mod_magnet” to the list of enabled modules…

server.modules = (
        "mod_fastcgi",
        "mod_access",
        "mod_alias",
        "mod_accesslog",
        "mod_compress",
        "mod_rewrite",
        "mod_redirect",
        "mod_status",
        "mod_proxy",
        "mod_setenv",
        "mod_magnet"
)

or you can do it the lighty way:

sudo lighty-enable-mod magnet

(this simply creates a symlink to the 10-magnet.conf file inside /etc/lighttpd/conf-enabled, which lighty checks upon startup)
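You can confirm the module was enabled by looking for that symlink:

ls -l /etc/lighttpd/conf-enabled/10-magnet.conf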

The cache logic script that will be executed by lighttpd

Now, in your WordPress directory, create a file called rewrite.lua and paste the following script into it:

-- helper: append a line to a log file so you can tail -f it and see what's happening
function log(str)
   fp = io.open("/path/to/some/lua.log","a+")
   fp:write(str .. "\n")
   fp:flush()
   fp:close()
end

-- serve the plain cached .html file if it exists on disk
function serve_html(cached_page)
    if (lighty.stat(cached_page)) then
        lighty.env["physical.path"] = cached_page
        return true
    else
        return false
    end
end

-- serve the pre-compressed .gz version of the cached file if it exists
function serve_gzip(cached_page)
    if (lighty.stat(cached_page .. ".gz")) then
        lighty.header["Content-Encoding"] = "gzip"
        lighty.header["Content-Type"] = ""
        lighty.env["physical.path"] = cached_page .. ".gz"
        return true
    else
        return false
    end
end

-- WP Super Cache names https pages with a different suffix
if (lighty.env["uri.scheme"] == "http") then
    ext = ".html"
else
    ext = "-https.html"
end

-- build the path to the supercache file for this request, e.g.
-- <doc-root>/wp-content/cache/supercache/<host>/<uri>/index.html
cached_page = lighty.env["physical.doc-root"] .. "/wp-content/cache/supercache/" .. lighty.request["Host"] .. lighty.env["request.orig-uri"] .. "/index" .. ext
cached_page = string.gsub(cached_page, "//", "/")
cached_page = string.gsub(cached_page, lighty.request["Host"] .. "/index.php", lighty.request["Host"])

attr = lighty.stat(cached_page)

if (attr) then
    -- never serve cached results for search queries (?s=...)
    query_condition = not (lighty.env["uri.query"] and string.find(lighty.env["uri.query"], ".*s=.*"))
    -- never serve cached pages to commenters or logged-in users
    user_cookie = lighty.request["Cookie"] or "no_cookie_here"
    cookie_condition = not (string.find(user_cookie, ".*comment_author.*") or (string.find(user_cookie, ".*wordpress.*") and not string.find(user_cookie,"wordpress_test_cookie")))

    if (query_condition and cookie_condition) then
        accept_encoding = lighty.request["Accept-Encoding"] or "no_acceptance"

        -- prefer the pre-gzipped file when the client accepts gzip
        if (string.find(accept_encoding, "gzip")) then
            if not serve_gzip(cached_page) then
                serve_html(cached_page)
            end
        else
            serve_html(cached_page)
        end
        --log('cache-hit: ' .. cached_page)
    end
else
    --log('cache-miss: ' .. cached_page)
end

Configuring your vhost in lighttpd for WordPress redirects and direct cache serving without PHP

Then in your vhost configuration in lighttpd.conf add the following towards the end (fix paths if you have to):

var.wp_blog = 1

magnet.attract-physical-path-to = ( server.document-root + "/rewrite.lua" )

url.rewrite-if-not-file = (
   # let wordpress core files, the sitemap and xmlrpc through untouched
   "^/(wp-.+).*/?" => "$0",
   "^/(sitemap.xml)" => "$0",
   "^/(xmlrpc.php)" => "$0",
   # everything else goes through the front controller
   "^/(.+)/?$" => "/index.php/$1"
  )

Restart lighttpd: sudo service lighttpd restart
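To check that cached pages really are being served off disk, request a post and confirm a supercache file exists for it (the hostname and document root below are placeholders):

curl -sI http://www.example.com/some-post/ | head -n 1
ls -l /path/to/docroot/wp-content/cache/supercache/www.example.com/some-post/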

Now watch how your PHP processes breathe a lot easier and your page loads get insanely faster.

You’re welcome 🙂

Command line speed test: see how fast your server’s connection is

Save the following script in a file called speed_test

#!/bin/bash

# Requirements
# sudo apt-get install lftp iperf
# (only lftp is used below; iperf is handy for point-to-point throughput tests)

# pget downloads the ISO in parallel segments and prints the average transfer rate
lftp -e 'pget http://releases.ubuntu.com/14.04.3/ubuntu-14.04.3-desktop-amd64.iso; exit; '

Make sure the file is executable: sudo chmod +x speed_test

Once you have installed lftp and iperf, make sure the script is somewhere in your $PATH.

The script basically downloads an Ubuntu ISO, and lftp does the math, printing the average transfer rate.

The output looks like this on an AWS m3.large instance:

$ speed_test
1054871586 bytes transferred in 14 seconds (70.37M/s)

Multiply by 8 to convert 70.37MB/s to megabits per second: 562.96 Mbit/s.
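The same arithmetic in shell form, if you ever want to fold it into the script:

echo '70.37 * 8' | bc   # 562.96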

So AWS’s download speed for m3.large instances was about half a gigabit per second as of January 2016 (or is that the upload speed of the Ubuntu ISO server?).

[bash scripting] How to get a file’s name without its extension(s).

Say you have an encrypted file file.foo.gpg and you want to make a shorthand command to decrypt it, with the resulting file named file.foo (without the .gpg). Or say you just want the bare name, with no extensions at all? You can use bash’s magic variable voodoo for that.

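The voodoo in question is bash parameter expansion: % strips the shortest matching suffix pattern, %% the longest. For example:

f=file.foo.gpg
echo "${f%.gpg}"   # file.foo  (strip the known .gpg extension)
echo "${f%.*}"     # file.foo  (strip the last extension)
echo "${f%%.*}"    # file      (strip all extensions)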

A simple version of that script, as a sketch using the same suffix-stripping expansion, would look something like this:
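#!/bin/bash
# decrypt: write the decrypted output to the same name minus .gpg
# (a minimal sketch; assumes GnuPG's standard --output/--decrypt flags)
in="$1"
gpg --output "${in%.gpg}" --decrypt "$in"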

AWS troubleshooting: how to fix a broken EBS volume (bad superblock on xfs)

As great as EBS volumes are on Amazon Web Services, they can break and never mount again. Even though your data may still be intact, a simple corruption in the filesystem structure can cause a lot of damage. In this post I’ll show you how to move all that data onto a new EBS drive, so keep calm and read slowly.

So, you try to mount your drive after some updates and you get an error like this in dmesg | tail:

[56439860.329754] XFS (xvdf): Corruption detected. Unmount and run xfs_repair

So you unmount your drive, invoke xfs_repair, and you get this…

$ sudo xfs_repair -n /dev/xvdf
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
..........................................

and no good secondary superblock is found.

Don’t panic, this is what you have to do next to solve this issue:

  1. Go to your AWS dashboard, EC2 section.
  2. Click on “Volumes”
  3. Find the broken volume.
  4. Create a snapshot of the broken volume (this takes a while).
  5. Create a new volume, the same size as (or larger than) your old drive, out of the snapshot you just created (this also takes a while).
  6. Attach the new volume to the same EC2 instance (no need to reboot or anything). If the old drive was mapped to /dev/xvdf, the new one will be mapped to /dev/xvdg (the last letter increases alphabetically).

Now here’s a gotcha: Amazon will not create your new drive using the same filesystem type (xfs); for some reason it will create it using the ext2 filesystem.

$ sudo file -s /dev/xvdg
/dev/xvdg: Linux rev 1.0 ext2 filesystem data (mounted or unclean), UUID=2e35874f-1d21-4d2d-b42b-ae27966e0aab (large files)

Here you have two options:
1. Live with the new ext2 filesystem. Make sure your /etc/fstab is updated to look something like this:
/dev/xvdg /path/to/mount/to auto defaults,nobootwait,noatime 0 0

or 2. Copy the contents of your drive to a temporary location, usually inside /mnt, which has plenty of space thanks to the ephemeral drive EC2 instances come with; then mkfs.xfs the new volume and copy the contents back. (This is what I did, as I chose to create a larger drive, and the ext2 filesystem that came on the new volume only spanned the size of the snapshot.)
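Here is a rough sketch of option 2; the device name, mount point, and backup directory are examples, so adapt before running anything:

# copy data off the ext2 volume, reformat it as xfs, copy the data back
sudo mkdir -p /data /mnt/ebs-backup
sudo mount /dev/xvdg /data
sudo rsync -a /data/ /mnt/ebs-backup/
sudo umount /data
sudo mkfs.xfs -f /dev/xvdg             # destroys whatever is on the volume
sudo mount /dev/xvdg /data
sudo rsync -a /mnt/ebs-backup/ /data/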

Hope this saved your ass, leave a note if I did.

Remember to never take any irreversible action until you have a disk snapshot; try your best to never lose data.

How to have a Play framework app autostart during boot on Elastic Beanstalk CentOS ec2 instances

So you’ve created an Elastic Beanstalk environment, and you have a Play framework distribution which you’ve created using play dist (either on your local environment or right there on the server, whatever you prefer).

play dist outputs a my-app-1.0.zip file which contains a self-contained version of your app with all the necessary libraries and a start script.

After you unzip it, you end up with a my-app-1.0/lib/ folder and a start script.

[ec2-user@ip-10-235-8-106 bullq-1.0]$ ls -l
total 24
drwxrwxr-x 2 ec2-user ec2-user 4096 Sep 27 15:35 lib
-rwxrwxr-x 1 ec2-user ec2-user 4328 Sep 27 15:35 start

Make sure the start script is executable with chmod +x start.

So now, all of this lives on the first EC2 instance of your Elastic Beanstalk environment. If, like me, you’ve used ubuntu/debian for your server management, things can be slightly different here, since Amazon preferred CentOS for their default image. Here I’ll show you how to make your Play app start automatically when the server boots, because you want every new machine that gets instantiated to have your app installed and the service started as soon as the machine is up.

Create a /etc/init.d/myappd script
(I’m using ‘myapp’ here as an example; your app can be named whatever it’s named, so replace accordingly):

#!/usr/bin/env bash
# myappd
# Script to start|stop|restart myappd from /etc/init.d/
# By Gubatron - @gubatron - gubatron@gmail.com

# replace 'myapp' accordingly in these variables with the name of your app
PID_FILE=/home/ec2-user/myapp/dist/myapp-1.0/RUNNING_PID
DAEMON_NAME=myappd
DAEMON_PATH=/home/ec2-user/myapp
DAEMON=$DAEMON_PATH/dist/myapp-1.0/start

test -x $DAEMON || exit 0

set -e

function killDAEMON() {
  echo "start kill daemon"
  # Play writes its process id to RUNNING_PID; kill that process
  kill -9 $(cat $PID_FILE)
  echo "end kill daemon"
}

function removePIDFile() {
  # Play refuses to start if a stale RUNNING_PID file is left behind
  if [ -e $PID_FILE ]
  then
    rm -f $PID_FILE
  fi
}

case $1 in
start)
  removePIDFile
  echo "Starting $DAEMON_NAME... $DAEMON"
  nohup $DAEMON &
  ;;
restart)
  echo "Hot restart of $DAEMON_NAME"
  killDAEMON
  removePIDFile
  echo "nohup $DAEMON &"
  nohup $DAEMON &
  ;;
stop)
  echo "Stopping $DAEMON_NAME"
  killDAEMON
  removePIDFile
  ;;
*)
  echo "Usage: $DAEMON_NAME {start|restart|stop}" >&2
  exit 1
  ;;
esac

exit 0

Wire it to autostart

The simplest way I found to have this script run when the server boots was to add it at the end of the /etc/rc.local file. (In ubuntu you’d register the new script with the update-rc.d command; see the sketch at the end.)

#!/bin/sh
#
# This script will be executed after all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

/etc/init.d/myappd start
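On ubuntu, the equivalent registration, instead of touching rc.local, would be something like this (assuming the same /etc/init.d/myappd script name):

sudo update-rc.d myappd defaults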