[CODE/PHP] JpGraph: How to output your graph as a base64 encoded image

Sometimes you just want to output the image created by your $graph object inline, without having to create a separate .php script that would need to receive a bunch of parameters.

Here’s a function you can pass your $graph object to, right where you would normally put the $graph->Stroke(); call; it echoes the chart as an inline, base64-encoded <img> tag instead.

function graphInSrc($graph, $width, $height) {
  $img = $graph->Stroke(_IMG_HANDLER);
  ob_start();
  imagepng($img);
  $img_data = ob_get_contents();
  ob_end_clean();

  echo '<img width="'.$width.'" height="'.$height.'" src="data:image/png;base64,'.base64_encode($img_data).'"/>';
}
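So instead of ending your script with $graph->Stroke(), you call the helper at that point. A minimal sketch (the chart setup is whatever you already had):

// ... build $graph as usual: data, scales, titles, plots ...
// then, where you would normally call $graph->Stroke():
graphInSrc($graph, 600, 400);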

HOW TO ENABLE PHP FPM LOG OUTPUT

This one had me for the longest time.

If you happen to be running a web server with php-fpm, sometimes you will run across an HTTP 500 error and all you will get will be a blank screen.

You will look at your server’s error log and your vhost’s error log, and you will see nothing.

At this point you will want to enable logging on php fpm to see what’s up.

So you will go to your /etc/php/7.0/fpm/php-fpm.conf

You will start searching for “log”, and you will come across
error_log = /var/log/php7.0-fpm.log
You will tail -f that log file, and nothing will come up.

You will go back to that config file, you will play with your log levels, and still nothing. That’s because there’s this fucking obscure setting in your pool configuration that you’d never think of.

Let’s say you’re using the default www.conf pool config file (the one sitting at /etc/php/7.0/fpm/pool.d/www.conf), open it and look for “workers”, you will see this:

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: on highloaded environement, this can cause some delay in the page
; process time (several ms).
; Default Value: no
;catch_workers_output = yes

Uncomment catch_workers_output = yes, restart your php-fpm service, tail -f your log, and you will see the stack trace you’re looking for.
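If you want to double-check that worker output is really being captured now, drop a throwaway script in your docroot that deliberately blows up and request it through the web server (the file name here is just an example):

<?php
// boom.php - request it through the browser; with catch_workers_output = yes
// the fatal error below should finally show up in /var/log/php7.0-fpm.log
this_function_does_not_exist();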

You’re welcome.

Upgrading your wordpress blog to PHP 7.0 on Ubuntu Xenial

If you’re about to upgrade your Ubuntu server to 16.04 (Xenial), you might want to take advantage of the new PHP 7.0, which is as fast as or faster than Facebook’s HHVM. Or perhaps a few things broke during the upgrade process, and that’s why you’re here.

Make sure the following packages are installed.

sudo apt install php7.0-cli php7.0-common php7.0-curl php7.0-fpm php7.0-json php7.0-readline php7.0-mbstring php7.0-xml php7.0-mysql
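Once those are in, a quick sanity check from the CLI doesn’t hurt; something along these lines (adjust the extension list to whatever your blog actually uses):

<?php
// check-php7.php - run with: php check-php7.php
echo 'PHP ' . PHP_VERSION . "\n";
foreach (array('curl', 'json', 'mbstring', 'xml', 'mysqli') as $ext) {
    echo $ext . ': ' . (extension_loaded($ext) ? 'loaded' : 'MISSING') . "\n";
}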

Update your php-fpm web server configuration

I run lighttpd, but you’re more likely running nginx or apache.
If you use php-fpm and you’ve configured your pool to be accessed via a unix socket, you will have to update your web server configuration from the old socket path "/var/run/php5-fpm.sock" to the new one, "/var/run/php/php7.0-fpm.sock".

this is how it looks for lighttpd:

fastcgi.server = ( ".php" =>
(( "socket" => "/var/run/php/php7.0-fpm.sock",
"broken-scriptfilename" => "enable",
"allow-x-send-file" => "enable")))

[SYSADMIN] Serve your WordPress cached pages directly with lighttpd and not PHP

Optimizing Your WordPress Cache Loads in Lighttpd.

If you don’t configure your wordpress virtual host properly in lighttpd, requests for your wordpress cache will still go through PHP.

Wouldn’t it be nice if all those cached requests were served directly by the webserver as the static files that they are, bypassing the CPU/memory load PHP can incur, and freeing those resources for other things?

Install and Enable mod_magnet

For this to occur with lighttpd, you will need mod_magnet, so assuming you’re on an Ubuntu/Debian-based Linux distro, let’s make sure we have it installed.

sudo apt-get install lighttpd-mod-magnet

Then let’s make sure it’s enabled. You can do this manually in your lighttpd.conf by adding “mod_magnet” to the list of enabled modules…

server.modules = (
        "mod_fastcgi",
        "mod_access",
        "mod_alias",
        "mod_accesslog",
        "mod_compress",
        "mod_rewrite",
        "mod_redirect",
        "mod_status",
        "mod_proxy",
        "mod_setenv",
        "mod_magnet"
)

or you can do it the lighty way:

sudo lighty-enable-mod magnet

(this simply makes a symlink to the 10-magnet.conf file inside /etc/lighttpd/conf-enabled which lighty will check upon startup)

The cache logic script that will be executed by lighttpd

Now, in your wordpress directory, create a file called rewrite.lua and paste the following script in it:

function log(str)
   -- wanna tail -f a log to see what's happening    
   fp = io.open("/path/to/some/lua.log","a+")
   fp:write(str .. "\n")
   fp:flush()
   fp:close()
end

function serve_html(cached_page)
    if (lighty.stat(cached_page)) then
        lighty.env["physical.path"] = cached_page
        return true
    else
        return false
    end
end

function serve_gzip(cached_page)
    if (lighty.stat(cached_page .. ".gz")) then
        lighty.header["Content-Encoding"] = "gzip"
        lighty.header["Content-Type"] = ""
        lighty.env["physical.path"] = cached_page .. ".gz"
        return true
    else
        return false
    end
end

if (lighty.env["uri.scheme"] == "http") then
    ext = ".html"
else
    ext = "-https.html"
end

-- WP Super Cache writes the static copy of a page as
-- .../supercache/<host>/<permalink>/index.html (or index-https.html over SSL)
cached_page = lighty.env["physical.doc-root"] .. "/wp-content/cache/supercache/" .. lighty.request["Host"] .. lighty.env["request.orig-uri"] .. "/index" .. ext
cached_page = string.gsub(cached_page, "//", "/")
cached_page = string.gsub(cached_page, lighty.request["Host"] .. "/index.php", lighty.request["Host"])

attr = lighty.stat(cached_page)

if (attr) then
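    -- don't serve the cache for search requests (?s=) nor for visitors that have
    -- commented or are logged in (comment_author / wordpress_* cookies)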
    query_condition = not (lighty.env["uri.query"] and string.find(lighty.env["uri.query"], ".*s=.*"))
    user_cookie = lighty.request["Cookie"] or "no_cookie_here"
    cookie_condition = not (string.find(user_cookie, ".*comment_author.*") or (string.find(user_cookie, ".*wordpress.*") and not string.find(user_cookie,"wordpress_test_cookie")))

    if (query_condition and cookie_condition) then
        accept_encoding = lighty.request["Accept-Encoding"] or "no_acceptance"

        if (string.find(accept_encoding, "gzip")) then
            if not serve_gzip(cached_page) then 
                serve_html(cached_page) 
            end
        else
            serve_html(cached_page)
        end
        --log('cache-hit: ' .. cached_page)
    end
else
    --log('cache-miss: ' .. cached_page)
end

Configuring your vhost in lighttpd for WordPress redirects and direct cache serves without php.

Then on your vhost configuration in lighttpd.conf add the following towards the end.
(Fix paths if you have to)

var.wp_blog = 1

magnet.attract-physical-path-to = ( server.document-root + "/rewrite.lua" )

url.rewrite-if-not-file = (
   "^/(wp-.+).*/?" => "$0",
   "^/(sitemap.xml)" => "$0",
   "^/(xmlrpc.php)" => "$0",
   "^/(.+)/?$" => "/index.php/$1"
  )

Restart your lighttpd: sudo service lighttpd restart

Now watch how your PHP processes breathe a lot better and your page loads get insanely faster.

You’re welcome 🙂

How to process thousands of WordPress posts without hitting or raising memory limits.

So you need to write a script that processes all the posts in your wordpress database. You don’t need the stupid wordpress loop because you’re not writing web-facing code; you might just need to fix some metadata on each one of the posts. But every time you iterate through your posts, no matter if you use the wordpress API or fetch the post IDs directly with MySQL and then call get_post($id), after a few hundred posts your PHP interpreter dies when it has used all the memory you’ve given it.

You Google it, and every other clueless php-wordpress-noob that thinks himself a programmer will give you the “Raise your PHP memory limit” or “Do it in parts” answer.

WTF is it with these noobs?

You, a programmer used to real programming languages, start to curse in frustration at having to get dirty with this awful language and the badly documented wordpress “API”. The noobs don’t understand that you might have to deal with tens of thousands of posts that couldn’t possibly fit in memory, and you know…

Is the solution to the problem to free memory?

The WordPress geniuses created this WP Object Cache, which is used by default whenever you invoke get_post() or other functions. The bastards didn’t bother to mention it in the function documentation, nor did they bother to put a little note on how to disable it in case you don’t need it. This is what happens when you get people that don’t think about all the possible uses of an API.

If you start iterating through a list of IDs, invoking get_post($somePostId, 'OBJECT') on each one, and printing how much memory is in use, you will see that get_post() does keep the posts in memory. If you read get_post() and dig further you will see the objects being cached in the in-memory WP Object Cache. A half-assed solution would be to invoke wp_cache_flush() every now and then:

[php]
// mysqli_query() needs a connection handle; the DB_* constants come from wp-config.php
$link = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
$post_ids_cursor = mysqli_query($link, "select ID from wp_posts where post_status='publish' order by post_date desc");
$n = 0;
$last_memory_usage = memory_get_usage();

while ($row = mysqli_fetch_row($post_ids_cursor)) {
    //this son of a bitch caches the post object.
    //nowhere in the WordPress documentation for the function does it say so
    //http://codex.wordpress.org/Function_Reference/get_post
    $post = get_post($row[0], 'OBJECT');

    $memory_usage = memory_get_usage();
    $delta_memory = $memory_usage - $last_memory_usage;
    $last_memory_usage = $memory_usage;

    echo "($n) " . $memory_usage . " ($delta_memory) \n";

    $n++;

    //flush wordpress' object cache every 100 posts, and let's see what happens.
    if ($n % 100 == 0) {
        wp_cache_flush();
        echo "Flush!\n";
    }
}
[/php]

[bash]
//N post - Memory used - Delta memory used
(0) 30254136 (13984) //start about 28.85MB
(1) 30262280 (8144)
(2) 30269592 (7312)
(3) 30277656 (8064)
(4) 30285720 (8064)
(5) 30293784 (8064)
(6) 30301848 (8064)
(7) 30309928 (8080)
(8) 30318056 (8128)
(9) 30326120 (8064)
(10) 30334184 (8064)

(93) 31054104 (8104)
(94) 31062168 (8064)
(95) 31070232 (8064)
(96) 31078344 (8112)
(97) 31086440 (8096)
(98) 31094552 (8112)
(99) 31102632 (8080) //already here at 29.66MB
Flush!
(100) 29816984 (-1285648) //bam we’ve freed 1.22MB with the flush call
[/bash]

However, this solution is slow: WordPress will allocate and free memory unnecessarily, and it will check the cache, without ever getting a hit, every time you fetch a new object, which is exactly what happens in a linear scan like the one we have to do to batch-process our posts.

Luckily someone on the wordpress team added a way to disable caching, so the real solution is…

To not use dynamic memory at all if you don’t need it

When you read the code of WP_Object_Cache you will see that every time $wp_object_cache->add() is invoked, it first checks whether caching has been suspended, using a function called wp_suspend_cache_addition():

[php]
function add( $key, $data, $group = 'default', $expire = '' ) {
    if ( wp_suspend_cache_addition() )
        return false;
    // ...
[/php]

This function can be used to turn the freaking caching off. That way you can iterate through all your posts much faster: every object fetched from the database lives only inside your loop’s scope, and by not needing to flush or check the cache your processing will be much, much faster.

This is how you turn it off:

[php]
wp_suspend_cache_addition(true);
[/php]
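Putting it all together, the batch loop from before becomes something like this (a minimal sketch: the wp-load.php path is yours to fill in, and fixMyPostMeta() is just a stand-in for whatever processing you actually need):

[php]
require_once '/path/to/your/blog/wp-load.php';

// tell the object cache to stop hoarding every post we touch
wp_suspend_cache_addition(true);

$link = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME);
$post_ids_cursor = mysqli_query($link, "select ID from wp_posts where post_status='publish' order by post_date desc");

while ($row = mysqli_fetch_row($post_ids_cursor)) {
    $post = get_post($row[0], 'OBJECT');

    // do your batch work on $post here, e.g.:
    // fixMyPostMeta($post);

    // nothing gets added to the object cache; $post simply goes out of scope
    // on the next iteration, so memory usage stays flat
}
[/php]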

Hope this helped you process your posts in batch efficiently, leave a tip on the way out if I saved your ass.