Robert Eisele
Systems Engineer, Architect and DBA

Faster PHP behind FastCGI

A few years ago, Jan Kneschke came up with the idea of using lighttpd's X-Sendfile to send dynamic content without copying it several times. I liked the idea and used it as the basis of my framework. It seems there are now some immature implementations of this idea on the lighttpd bug tracker. All of these implementations also use shared memory, with the difference that I haven't used PHP's tempnam() function, but rather exported lighttpd's client file descriptor to the PHP scope in order to use it as the file name of the temporary file. It could also be the IP or something else, but the client fd is the most unique identifier inside the webserver <-> PHP construct.
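For readers who haven't seen X-Sendfile before, the basic mechanism is simple: the backend sends no body at all, it only names a file in a response header, and the webserver then streams that file to the client itself. A minimal sketch, assuming X-Sendfile is enabled for the FastCGI backend and with a purely hypothetical path:

header('Content-Type: text/html');
// lighttpd strips this header and serves the named file directly
header('X-Sendfile: /static/report.html');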

I've just published my own PHP version with a lot of improvements and optimizations. To write the content quickly, I also added a new function, ob_fwrite(), which writes the contents of the ob-buffer to an opened file descriptor, like this:

// Collect the output in PHP's output buffer (0x20000 is the chunk size).
ob_start(null, 0x20000);
echo 'Write the content into the buffer';

// The temporary file in the tmpfs is named after the exported client fd.
$fd = fopen('/pipe/' . $_SERVER['CFD'], 'wb');

if (!BUFFER_CONTENTS) {
	// ob_fwrite() writes the output buffer straight into the file descriptor.
	ob_fwrite($fd);
	ob_end_clean();
} else {
	// Keep the buffered content in PHP if it is still needed.
	$buffer = ob_get_clean();
}
fclose($fd);

// Taking a reference avoids an "undefined index" notice if the client sent no Accept-Encoding header.
$ref = &$_SERVER['HTTP_ACCEPT_ENCODING'];

if (CACHE_CONTENTS) {
	// Move the finished response from the tmpfs into the cache and
	// keep a gzipped copy next to it.
	$path = System::getCachePath($_SERVER['CFD']);
	rename('/pipe/' . $_SERVER['CFD'], '/cache/' . $path);
	exec('gzip -c /cache/' . $path . ' > /cache/' . $path . 'z');

	if (isset($ref) && false !== strpos($ref, 'gzip')) {
		header('Content-Encoding: gzip');
		$path .= 'z';
	}
	header('X-Sendfile: /cache/' . $path, true, $http_status);

} else {

	$path = '/pipe/' . $_SERVER['CFD'];

	// Compress on the fly if the client accepts gzip.
	if (isset($ref) && false !== strpos($ref, 'gzip')) {
		header('Content-Encoding: gzip');
		exec('gzip -c ' . $path . ' > ' . $path . 'z');
		$path .= 'z';
	}
	header('X-Sendfile: ' . $path, true, $http_status);
}
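ob_fwrite() only exists in my published PHP version. On a stock PHP, a rough userland stand-in would look like the following; it performs exactly the extra copy of the buffer that ob_fwrite() is meant to avoid, so it is only a functional fallback, not an equivalent in performance:

// Userland approximation: copy the output buffer into the opened
// file descriptor, then let the caller discard the buffer as before.
function ob_fwrite_fallback($fd)
{
	fwrite($fd, ob_get_contents());
}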

Please note that /pipe is a tmpfs mountpoint!
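Such a mountpoint can be created with a single fstab entry; the size and mode below are only example values and should be adjusted to the expected amount of in-flight responses:

# /etc/fstab - keep /pipe entirely in RAM
tmpfs  /pipe  tmpfs  size=64m,mode=0700  0  0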

The same works, of course, with nginx's X-Accel-Redirect. Reducing the number of memory copies improves the performance a lot! There are several copies before the content is finally transferred to the client, which the following sketch illustrates:

PHP ECHO > FCGI SAPI > WEBSERVER > CLIENT

The new path the content takes looks like this:

PHP ECHO > OB BUFFER > SHM > WEBSERVER > CLIENT

..., or if the content is already cached:

PHP > CACHE HIT > WEBSERVER > CLIENT

Thick arrows in the sketches above are expensive copies.
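For nginx, the mentioned X-Accel-Redirect works against an internal location rather than an arbitrary filesystem path, so the tmpfs directory has to be exposed as such a location (the /cache directory would need the same treatment). A minimal sketch, with the paths taken from the example above:

# nginx: reachable only via X-Accel-Redirect, never by a direct request
location /pipe/ {
    internal;
    alias /pipe/;
}

On the PHP side only the header name changes:

header('X-Accel-Redirect: /pipe/' . $_SERVER['CFD'], true, $http_status);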

The whole thing runs contrary to the best practices published by Yahoo!: it's not possible to send the buffer early this way, which could mean that the client can't start downloading the style sheets and scripts as early as possible. But as far as I can see, most of the time should not be spent in the backend but in sending the data to the client. Everything else can be optimized away, like fixing hanging database queries and so on. I have already raised this idea a while ago on the Yahoo! developer forum.

Incidentally, this optimization only makes sense for single-threaded webservers that communicate over the slow FastCGI bridge. Apache and other thread-based servers that integrate PHP as a module cannot save much here. Additionally, Apache would need the X-Sendfile module.
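With mod_xsendfile installed, the Apache side would roughly need the following directives; the paths simply follow the examples above:

# Apache + mod_xsendfile: honour the backend's X-Sendfile header,
# but only for the white-listed directories
XSendFile on
XSendFilePath /pipe
XSendFilePath /cache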

Update of June 12

I used the $_SERVER["CFD"] variable in the example above. In order to make use of it, lighttpd and nginx must be patched. You could also use tempnam() instead, but this is annoying because of the additionally required delete operation on the file. If every file descriptor gets its own unique file, this is much cleaner. Here you can get the patch for lighttpd 1.4.28.
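For completeness, such a tempnam()-based variant would look roughly like this; it assumes the output buffering from the example above and shows exactly why I find it annoying: the temporary file must not be deleted before the webserver has read it, so some external cleanup (a cron sweep, for instance) is still required:

// Fallback without the CFD patch: let PHP pick a unique file name in /pipe.
$tmp = tempnam('/pipe', 'resp');

$fd = fopen($tmp, 'wb');
fwrite($fd, ob_get_clean());
fclose($fd);

header('X-Sendfile: ' . $tmp, true, $http_status);
// Do NOT unlink $tmp here - the webserver still has to read it.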


3 Comments on „Faster PHP behind FastCGI”

li

robert,
please publish this patch
thank you in advance

Robert

li, I patched both nginx and lighttpd. I posted a lot of patches to the lighttpd bug tracker and some are already merged into the current version. Anyway, I think you're asking for a patch which exports the client fd to the PHP scope, right? This can be done very easily; if you need this small patch, I'll publish it.

li

do you have any changes to the lighttpd code base? if so, can you publish them?

