Glad to hear that someone has taken this up. It would be great if nginx support was included.

To purge the page http://mysite.com/node/1 from the nginx cache, you just need to visit http://mysite.com/purge/node/1. The "/purge/" part is configurable, but that seems to be the convention: http://labs.frickle.com/nginx_ngx_cache_purge/
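For reference, the ngx_cache_purge documentation pairs this /purge/ convention with a location block along these lines. This is a sketch, not a drop-in config: the cache zone name "tmpcache" is a placeholder, and it assumes a proxy_cache_key of $host$uri$is_args$args in the caching location.

```nginx
# Allow purging from localhost only; /purge/node/1 removes the cache
# entry whose key was built from /node/1.
location ~ /purge(/.*) {
    allow              127.0.0.1;
    deny               all;
    proxy_cache_purge  tmpcache $host$1$is_args$args;
}
```

The key passed to proxy_cache_purge must be assembled exactly like the proxy_cache_key of the location that populated the cache, or nothing will ever be found to purge.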

There are already two WordPress plugins that support nginx cache purging, so a Drupal one would be great. (http://wordpress.org/extend/plugins/nginx-manager/ and http://wordpress.org/extend/plugins/nginx-proxy-cache-purge/)

Thanks.

Comments

brianmercer’s picture

Might be able to do this with a rewrite. I'll look into that as well.

SqyD’s picture

Sounds like a good feature to have indeed.

I'll look into it. I'm not too familiar with nginx yet, but you just gave me the best excuse ;-)
It would be nicer if you could get nginx to accept the PURGE request... maybe something like (pseudo code):

location / {
  if ($request_method = PURGE) {
    proxy_cache_purge  tmpcache  $uri$is_args$args;
  }
}
brianmercer’s picture

Thanks for looking. That does look promising.

Maybe something like this: define a separate local port that clears the cache for all your domains, and put the domain in the key:

server {
  listen 127.0.0.1:8888;
  proxy_cache_purge  tmpcache  $host$uri$is_args$args;
}

Let me know when the code is ready for testing. :)

SqyD’s picture

That config was just a blind guess from some googling; I'll set up an instance with nginx later this week to actually play with it a bit. It should be doable...
And yes, thanks. That curl example has an error and should read:
curl -X PURGE -H "Host:example.com" http://192.168.1.23/node/2
When you use just one server, you don't need to set the header explicitly; just use:
curl -X PURGE http://example.com/node/2
I'll fix it in the Readme.

Thanks!

brianmercer’s picture

OK, I found a config that works.

## Cache purging
server {
  listen 127.0.0.1:8888 default_server;
  #access_log /var/log/nginx/caching.access.log;
  error_page 405 = $uri;
  location / {
    fastcgi_cache_purge mycache $host$request_uri;
  }
}

In response to something like:

curl -X PURGE -H "Host:test.brianmercer.com" "http://127.0.0.1:8888/content/dolor-persto-distineo"

it will properly purge the cache and return a 200 if the page exists in the cache and a 404 if the page doesn't exist in the cache.

I installed the 6.x-1.x-dev but it didn't send anything to the web server. I assume the module is not yet ready for community testing.

Thanks again for taking this up.

SqyD’s picture

It is ready for testing, I believe, but it could be that this new use case surfaces new issues to solve.
- Did the purge module and the expire module put entries in the watchdog log? Please provide both.
- Please provide the url you used to configure the purge module.

Some test ideas:
- Maybe just put "http://test.brianmercer.com" as the proxy url and see if you can find something in the logs.
- Your current setup doesn't check for the PURGE request type and should work with a normal GET request as well. You could test this by commenting out the line in purge.inc that sets it and seeing if that fixes things.
- My nginx experiment will probably have to wait till Sunday (drupaldevdays Brussels on Saturday :-). I still hope there is a clean way to implement the PURGE request just like the others. Your approach may work, and if it doesn't require changes in the code I am all for it. I'll take the idea of the /purge/* url prefix into account for future refactoring.

brianmercer’s picture

Trying the web address as the proxy url showed me what was happening. Even though I have "http://127.0.0.1:8888" set at admin/settings/purge, the requests are still going to the actual web site, and the purge requests show up in the log for test.brianmercer.com. However, test.brianmercer.com is listening on its public IP: 69.164.210.108:80.

I can't see what curl command the php5-curl module is generating. The odd thing is that in the test.brianmercer.com logs, the requests appear to come from the public IP and not from localhost, i.e.:

69.164.210.108 - - [02/Feb/2011:18:05:07 -0500] "PURGE /content/test-cache HTTP/1.1" 200 5552 "-" "-"
69.164.210.108 - - [02/Feb/2011:18:05:07 -0500] "PURGE /node/3039 HTTP/1.1" 301 5 "-" "-"
69.164.210.108 - - [02/Feb/2011:18:05:07 -0500] "PURGE /forums/loquor-haero/camur-ibidem-quadrum HTTP/1.1" 200 18575 "-" "-"

The expire module is adding two watchdog entries that look ok. The purge module is logging an error message for those urls that result in a 301 because of aliases and the Global Redirect module. On my site /node will always result in a 301 to /, and node/3039 will result in a 301 to /title-of-node. The redirect behavior is by design. You'll have to decide how to handle non-200 but correct responses in watchdog when someone is using Global Redirect.

On a different, simpler test site (c.brianmercer.com) I am getting one of these errors for each url:

Warning: curl_setopt() expects parameter 2 to be long, string given in purge_urls() (line 46 of /usr/local/drupal/modules6/purge/purge.inc).

That might be a PHP 5.3 issue. I run into those occasionally. Oddly I don't get that error on the test.brianmercer.com site.

admin/settings/performance/expire results in a white page. Dunno if that's an expire module issue.

nginx doesn't recognize the PURGE request method; if you send it a PURGE request it returns 405 Method Not Allowed. That is why I need the "error_page 405" directive in there, so that the request is replayed internally as a GET.

SqyD’s picture

Ok, thanks for the update. It's clearer to me now what is going on.
The more I think about it, the more I tend to go for the nginx "native" approach of adding the /purge/ prefix to the path. They must have done this for a reason... The native method seems a good way to avoid all the redirect problems we are running into now. I can't turn up anything useful on Google about this.

I have an idea for a relatively simple way to add this native purge feature without having to rewrite half of the module. It's high on my todo list, after drush and rules integration. With that I can drop the hard dependency on expire, so we can rule out causes of problems there. It shouldn't take that long; the basics of those two features are there, they just need some polishing. (Or should I have said "varnishing" in this context? ;-)

brianmercer’s picture

This fixed it:

-      $proxy_purge_url = str_replace($purge_url_parts['scheme'] . $purge_url_host , $proxy_url , $purge_url);
+      $proxy_purge_url = str_replace($purge_url_parts['scheme'] . "://" . $purge_url_host , $proxy_url , $purge_url);
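To see why the missing "://" mattered: parse_url() returns the scheme without the separator, so the search string built from scheme plus host never occurs in the purge URL, and str_replace() returns it unchanged — the request then goes to the real site instead of the proxy, as observed earlier in this thread. A standalone demonstration, using hypothetical values from this thread rather than the module's actual variables:

```php
<?php
// Hypothetical purge URL and proxy URL, as used earlier in this thread.
$purge_url = 'http://test.brianmercer.com/node/3039';
$proxy_url = 'http://127.0.0.1:8888';

$parts = parse_url($purge_url);
$host  = $parts['host'];

// Before the fix: the search string 'httptest.brianmercer.com' never
// occurs in $purge_url, so nothing is replaced.
$broken = str_replace($parts['scheme'] . $host, $proxy_url, $purge_url);

// After the fix: the full 'http://test.brianmercer.com' prefix is swapped
// for the proxy address.
$fixed = str_replace($parts['scheme'] . '://' . $host, $proxy_url, $purge_url);

echo $broken . "\n"; // http://test.brianmercer.com/node/3039 (unchanged)
echo $fixed . "\n";  // http://127.0.0.1:8888/node/3039
```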

The nginx config still isn't working, so I'll post back when I figure out that part.

brianmercer’s picture

OK, found the other one:

-      curl_setopt($purge_requests[$current_purge_request], CURLOPT_HTTP_HEADER, array("Host: " . $purge_url_host));
+      curl_setopt($purge_requests[$current_purge_request], CURLOPT_HTTPHEADER, array("Host: " . $purge_url_host));
brianmercer’s picture

Seems to be working wonderfully now.

Thanks again for this project. This is something I've wanted for nginx since the expire module appeared, but I didn't have the skill to create it.

SqyD’s picture

Great skills as far as I can see :-) Thank you for finding these embarrassing bugs... I should have tested this piece of code better after my last refactoring. I'll commit your changes, plus a first try at drush and rules integration, tonight...

SqyD’s picture

Status: Active » Needs review

Hi,

The .dev release should now have basic "native" nginx support. I still need to update the docs and GUI help messages. Here's how it works:

  • Set up nginx according to http://labs.frickle.com/nginx_ngx_cache_purge
  • Configure your nginx proxy under Site Configuration/Purge Settings like:

    <scheme>://<host>[:port][/path][?purge_method=<purge|get>] where
    • scheme: http or https (required)
    • host: hostname or ip (required)
    • port: port of the http(s) service, defaults to 80 (optional)
    • path: path prefix prepended to the purged paths; ignored by the default purge method (optional)
    • purge_method: "purge" (default) or "get" (optional)

Example for nginx:
http://localhost:8080/purge?purge_method=get

In other news: the error logging code has been improved, and I've added drush integration to the expire module, so all should be good for a test drive. I'll commit after I've updated the docs etc. and done a bit more testing myself.

Cheers!

SqyD’s picture

It's now in the 1.1 release and mentioned in the Readme and on the project page. I would still appreciate an independent confirmation that this works before I close this one.

crea’s picture

Subscribing

omega8cc’s picture

Status: Needs review » Needs work

Nginx 1.0.5
ngx_cache_purge-1.3
PHP-FPM 5.2.17
MariaDB 5.2.7
Debian Squeeze

Nginx and Purge configured per readme etc.

Any attempt to add a node or comment results in a WSOD because of an Nginx crash. The node or comment is still added; only the "purge" request results in:

Jul 23 07:49:24 linode kernel: nginx[41018]: segfault at 3 ip 000000000043d859 sp 00007fffadea1178 error 4 in nginx[400000+80000]
Jul 23 07:49:24 linode kernel: nginx[41015]: segfault at 3 ip 000000000043d859 sp 00007fffadea1178 error 4 in nginx[400000+80000]
Jul 23 07:49:24 linode kernel: nginx[41019]: segfault at 3 ip 000000000043d859 sp 00007fffadea1178 error 4 in nginx[400000+80000]
Jul 23 07:49:24 linode drupal: http://d6.o3.linode.us.host8.biz|1311421764|purge|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/1|http://d6.o3.linode.us.host8.biz/comment/reply/1|1||1 errors have been encountered when purging these URLs. !purge_log
Jul 23 07:49:24 linode drupal: http://d6.o3.linode.us.host8.biz|1311421764|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/1|http://d6.o3.linode.us.host8.biz/comment/reply/1|1||Input: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[node] =&gt; node/1#012&nbsp;&nbsp;&nbsp;&nbsp;[front] =&gt; &lt;front&gt;#012)#012  Output: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; http://d6.o3.linode.us.host8.biz/node/1#012&nbsp;&nbsp;&nbsp;&nbsp;[1] =&gt; http://d6.o3.linode.us.host8.biz/#012&nbsp;&nbsp;&nbsp;&nbsp;[2] =&gt; http://d6.o3.linode.us.host8.biz/rss.xml#012&nbsp;&nbsp;&nbsp;&nbsp;[3] =&gt; http://d6.o3.linode.us.host8.biz/node#012)#012  Modules Using hook_expire_cache(): Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; purge#012)
Jul 23 07:49:24 linode drupal: http://d6.o3.linode.us.host8.biz|1311421764|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/1|http://d6.o3.linode.us.host8.biz/comment/reply/1|1||Node 1 was flushed resulting in 4 pages being expired from the cache
Jul 23 07:49:24 linode drupal: http://d6.o3.linode.us.host8.biz|1311421764|content|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/1|http://d6.o3.linode.us.host8.biz/comment/reply/1|1|view|Comment: added erghrehg.
Jul 23 07:49:24 linode kernel: nginx[41017]: segfault at 3 ip 000000000043d859 sp 00007fffadea1178 error 4 in nginx[400000+80000]
Jul 23 07:49:24 linode kernel: nginx[41954]: segfault at 3 ip 000000000043d859 sp 00007fffadea11d8 error 4 in nginx[400000+80000]
Jul 23 07:49:24 linode kernel: nginx[41953]: segfault at 3 ip 000000000043d859 sp 00007fffadea11d8 error 4 in nginx[400000+80000]
Jul 23 07:49:24 linode drupal: http://d6.o3.linode.us.host8.biz|1311421764|purge|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/1|http://d6.o3.linode.us.host8.biz/comment/reply/1|1||1 errors have been encountered when purging these URLs. !purge_log
Jul 23 07:49:24 linode drupal: http://d6.o3.linode.us.host8.biz|1311421764|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/1|http://d6.o3.linode.us.host8.biz/comment/reply/1|1||Input: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[node] =&gt; node/1#012&nbsp;&nbsp;&nbsp;&nbsp;[front] =&gt; &lt;front&gt;#012)#012  Output: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; http://d6.o3.linode.us.host8.biz/node/1#012&nbsp;&nbsp;&nbsp;&nbsp;[1] =&gt; http://d6.o3.linode.us.host8.biz/#012&nbsp;&nbsp;&nbsp;&nbsp;[2] =&gt; http://d6.o3.linode.us.host8.biz/rss.xml#012&nbsp;&nbsp;&nbsp;&nbsp;[3] =&gt; http://d6.o3.linode.us.host8.biz/node#012)#012  Modules Using hook_expire_cache(): Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; purge#012)
Jul 23 07:49:24 linode drupal: http://d6.o3.linode.us.host8.biz|1311421764|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/1|http://d6.o3.linode.us.host8.biz/comment/reply/1|1||Node 1 was flushed resulting in 4 pages being expired from the cache

Not sure if this is ngx_cache_purge being broken with Nginx 1.0.5 (will test again with 1.0.1) or a problem with the purge module.

omega8cc’s picture

Just tested it with Nginx 1.0.1 and it results in exactly the same segfault.
What can be wrong here?

BTW: Nginx config used: http://drupalcode.org/project/barracuda.git/blob/HEAD:/aegir/conf/nginx_...

omega8cc’s picture

Ah, we probably need to put this in a separate server { } on a different port, so it doesn't call itself from within the same server { }.

omega8cc’s picture

This still fails for me.

We use Proxy Url: http://127.0.0.1:8888/purge?purge_method=get with separate vhost: http://drupalcode.org/project/barracuda.git/blob/HEAD:/aegir/conf/nginx_...

Still the same:

Jul 23 10:28:58 linode kernel: nginx[20858]: segfault at 3 ip 000000000043d859 sp 00007fff08cb7a28 error 4 in nginx[400000+80000]
Jul 23 10:28:58 linode kernel: nginx[21566]: segfault at 3 ip 000000000043d859 sp 00007fff08cb7a28 error 4 in nginx[400000+80000]
Jul 23 10:28:58 linode kernel: nginx[21565]: segfault at 3 ip 000000000043d859 sp 00007fff08cb7a28 error 4 in nginx[400000+80000]

Ideas?

Fidelix’s picture

omega8cc, I tested with the same config as your first attempt (except the PHP version, which is 5.3 here).
It's not segfaulting here, but I get some random "Not Found" errors, which I'm trying to fix.

SqyD’s picture

Thanks for debugging this. No time this week to help out myself, but here are a few ideas to isolate the problem:
- Try to hit the purge url outside of drupal with (for instance) curl or wget and compare results on the nginx side. This bypasses PHP.
- You can try the drush expire commands: drush xp node/1

omega8cc’s picture

Now that is really weird. We changed *only* the Nginx error logging level from crit to debug, and it no longer segfaults (why?), but it also doesn't purge anything:

Jul 23 13:40:35 linode drupal: http://d6.o3.linode.us.host8.biz|1311442835|purge|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/2|http://d6.o3.linode.us.host8.biz/comment/reply/2|0||1 errors have been encountered when purging these URLs. !purge_log
Jul 23 13:40:35 linode drupal: http://d6.o3.linode.us.host8.biz|1311442835|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/2|http://d6.o3.linode.us.host8.biz/comment/reply/2|0||Input: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[node] =&gt; node/2#012&nbsp;&nbsp;&nbsp;&nbsp;[front] =&gt; &lt;front&gt;#012)#012  Output: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; http://d6.o3.linode.us.host8.biz/node/2#012&nbsp;&nbsp;&nbsp;&nbsp;[1] =&gt; http://d6.o3.linode.us.host8.biz/#012&nbsp;&nbsp;&nbsp;&nbsp;[2] =&gt; http://d6.o3.linode.us.host8.biz/rss.xml#012&nbsp;&nbsp;&nbsp;&nbsp;[3] =&gt; http://d6.o3.linode.us.host8.biz/node#012)#012  Modules Using hook_expire_cache(): Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; purge#012)
Jul 23 13:40:35 linode drupal: http://d6.o3.linode.us.host8.biz|1311442835|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/2|http://d6.o3.linode.us.host8.biz/comment/reply/2|0||Node 2 was flushed resulting in 4 pages being expired from the cache
Jul 23 13:40:35 linode drupal: http://d6.o3.linode.us.host8.biz|1311442835|content|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/2|http://d6.o3.linode.us.host8.biz/comment/reply/2|0|view|Comment: added erbhre.
Jul 23 13:40:35 linode drupal: http://d6.o3.linode.us.host8.biz|1311442835|purge|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/2|http://d6.o3.linode.us.host8.biz/comment/reply/2|0||1 errors have been encountered when purging these URLs. !purge_log
Jul 23 13:40:35 linode drupal: http://d6.o3.linode.us.host8.biz|1311442835|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/2|http://d6.o3.linode.us.host8.biz/comment/reply/2|0||Input: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[node] =&gt; node/2#012&nbsp;&nbsp;&nbsp;&nbsp;[front] =&gt; &lt;front&gt;#012)#012  Output: Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; http://d6.o3.linode.us.host8.biz/node/2#012&nbsp;&nbsp;&nbsp;&nbsp;[1] =&gt; http://d6.o3.linode.us.host8.biz/#012&nbsp;&nbsp;&nbsp;&nbsp;[2] =&gt; http://d6.o3.linode.us.host8.biz/rss.xml#012&nbsp;&nbsp;&nbsp;&nbsp;[3] =&gt; http://d6.o3.linode.us.host8.biz/node#012)#012  Modules Using hook_expire_cache(): Array#012(#012&nbsp;&nbsp;&nbsp;&nbsp;[0] =&gt; purge#012)
Jul 23 13:40:35 linode drupal: http://d6.o3.linode.us.host8.biz|1311442835|expire|217.114.215.250|http://d6.o3.linode.us.host8.biz/comment/reply/2|http://d6.o3.linode.us.host8.biz/comment/reply/2|0||Node 2 was flushed resulting in 4 pages being expired from the cache
omega8cc’s picture

Using curl/wget will not bypass php-fpm here, because this is not a proxy in front of a backend Nginx; it is the fastcgi_cache_purge method, running in the same Nginx server.

For drush it shows only:

root@linode:/data/disk/o3/distro/001/pressflow-6.22-prod/sites/d6.o3.linode.us.host8.biz# drush xp node/1 -d
Bootstrap to phase 0. [0.03 sec, 2.19 MB]                            [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drush() [0.03 sec, 2.42 MB] [bootstrap]
Bootstrap to phase 5. [0.08 sec, 5.79 MB]                                                           [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_root() [0.08 sec, 5.8 MB]                           [bootstrap]
Loading drushrc "/data/disk/o3/distro/001/pressflow-6.22-prod/drushrc.php" into "drupal" scope.     [bootstrap]
[0.08 sec, 5.8 MB]
Initialized Drupal 6.22 root directory at /data/disk/o3/distro/001/pressflow-6.22-prod [0.11 sec,      [notice]
7.22 MB]
Drush bootstrap phase : _drush_bootstrap_drupal_site() [0.11 sec, 7.22 MB]                          [bootstrap]
Initialized Drupal site d6.o3.linode.us.host8.biz at sites/d6.o3.linode.us.host8.biz [0.11 sec, 7.23   [notice]
MB]
Loading drushrc                                                                                     [bootstrap]
"/data/disk/o3/distro/001/pressflow-6.22-prod/sites/d6.o3.linode.us.host8.biz/drushrc.php" into
"site" scope. [0.11 sec, 7.23 MB]
Drush bootstrap phase : _drush_bootstrap_drupal_configuration() [0.13 sec, 7.69 MB]                 [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_database() [0.13 sec, 7.75 MB]                      [bootstrap]
Successfully connected to the Drupal database. [0.13 sec, 7.75 MB]                                  [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_full() [0.14 sec, 8.16 MB]                          [bootstrap]
Bootstrap to phase 6. [0.24 sec, 16.16 MB]                                                          [bootstrap]
Drush bootstrap phase : _drush_bootstrap_drupal_login() [0.25 sec, 16.16 MB]                        [bootstrap]
Successfully logged into Drupal as Anonymous (uid=0) [0.25 sec, 16.17 MB]                           [bootstrap]
Found command: expire-path (commandfile=expire) [0.25 sec, 16.17 MB]                                [bootstrap]
Initializing drush commandfile: user [0.25 sec, 16.17 MB]                                           [bootstrap]
WD purge: 1 error has been encountered when purging URLs Array                                          [error]
(
    [0] => http://d6.o3.linode.us.host8.biz/purge/node/1 on
http://d6.o3.linode.us.host8.biz/purge?purge_method=get 0
)
 [0.26 sec, 17.05 MB]
WD expire: Input: Array                                                                                [notice]
(
    [0] => node/1
)
  Output: Array
(
    [0] => http://d6.o3.linode.us.host8.biz/node/1
)
  Modules Using hook_expire_cache(): Array
(
    [0] => purge
)
 [0.26 sec, 17.04 MB]
Command dispatch complete [0.26 sec, 17 MB]                                                            [notice]
 Timer  Cum (sec)  Count  Avg (msec) 
 page   0.136      1      135.91     

Peak memory usage was 17.24 MB [0.26 sec, 17 MB]                                                       [memory]
root@linode:/data/disk/o3/distro/001/pressflow-6.22-prod/sites/d6.o3.linode.us.host8.biz# 
omega8cc’s picture

Hmm.. it seems a bit random, as now it segfaults again, also with drush xp node/1.

SqyD’s picture

The error log indicates an "error 0", which is weird; it should be 200, 403, etc. What would be the result of a manual command like:
curl -v http://d6.o3.linode.us.host8.biz/purge/node/1
At least the ACL works, since I get a 403 here. Do you get a 403 in the Drupal log when you exclude localhost from the ACL?

Fidelix’s picture

This is the Purge Proxy Url I'm using:

http://www.anbient.net/purge?purge_method=get
dumb@anbient:~# curl -v http://www.anbient.net/purge/node/433
* About to connect() to www.anbient.net port 80 (#0)
*   Trying 94.23.74.122... connected
* Connected to www.anbient.net (94.23.74.122) port 80 (#0)
> GET /purge/node/433 HTTP/1.1
> User-Agent: curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: www.anbient.net
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.0.5
< Date: Sat, 23 Jul 2011 21:15:18 GMT
< Content-Type: text/html
< Content-Length: 168
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.0.5</center>
</body>
</html>
* Connection #0 to host www.anbient.net left intact
* Closing connection #0
omega8cc’s picture

You will get a 404 now, because I reimaged that server; the exact config I'm testing now is:

###
### Support for http://drupal.org/project/purge module.
###
server {
  listen      127.0.0.1:8888;
  server_name _;
  access_log  /var/log/nginx/speed_purge.log;
  allow       127.0.0.1;
  allow       69.164.222.168;
  deny        all;
  root        /var/www/nginx-default;
  index       index.html index.htm;
  location / {
    try_files $uri =404;
  }
  location ~ /purge(/.*) {
    fastcgi_cache_purge speed $host$1$is_args$args;
  }
}

And when I try it locally:

curl -vv -H "Host:d6.o2.linode.us.host8.biz" "http://127.0.0.1:8888/purge/node/2"

It gives:

root@linode:~# curl -vv -H "Host:d6.o2.linode.us.host8.biz" "http://127.0.0.1:8888/purge/node/2"
* About to connect() to 127.0.0.1 port 8888 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8888 (#0)
> GET /purge/node/2 HTTP/1.1
> User-Agent: curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Accept: */*
> Host:d6.o2.linode.us.host8.biz
> 
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server
* Closing connection #0
root@linode:~# 

And of course in /var/log/syslog it results in:

Jul 24 15:45:31 localhost kernel: nginx[15035]: segfault at 3 ip 000000000043ef7e sp 00007fff1d1934b8 error 4 in nginx[400000+84000]

And because of this, nothing is logged in /var/log/nginx/speed_purge.log.

Normally, when Nginx segfaults it means we are doing something really bad in its config. But what can be wrong here?

omega8cc’s picture

And here is a full debug output:

2011/07/24 16:03:09 [debug] 15254#0: post event 0000000001528D78
2011/07/24 16:03:09 [debug] 15254#0: delete posted event 0000000001528D78
2011/07/24 16:03:09 [debug] 15254#0: accept on 127.0.0.1:8888, ready: 0
2011/07/24 16:03:09 [debug] 15254#0: posix_memalign: 0000000000F0ECE0:256 @16
2011/07/24 16:03:09 [debug] 15254#0: *4992 accept: 127.0.0.1 fd:4
2011/07/24 16:03:09 [debug] 15254#0: *4992 event timer add: 4: 60000:1311537849003
2011/07/24 16:03:09 [debug] 15254#0: *4992 epoll add event: fd:4 op:1 ev:80000001
2011/07/24 16:03:09 [debug] 15254#0: *4992 post event 0000000001528EB0
2011/07/24 16:03:09 [debug] 15254#0: *4992 delete posted event 0000000001528EB0
2011/07/24 16:03:09 [debug] 15254#0: *4992 malloc: 000000000110AD00:1264
2011/07/24 16:03:09 [debug] 15254#0: *4992 posix_memalign: 0000000000F05E20:256 @16
2011/07/24 16:03:09 [debug] 15254#0: *4992 malloc: 0000000001200E80:32768
2011/07/24 16:03:09 [debug] 15254#0: *4992 posix_memalign: 0000000000F0FAF0:4096 @16
2011/07/24 16:03:09 [debug] 15254#0: *4992 http process request line
2011/07/24 16:03:09 [debug] 15254#0: *4992 recv: fd:4 177 of 32768
2011/07/24 16:03:09 [debug] 15254#0: *4992 http request line: "GET /purge/node/2 HTTP/1.1"
2011/07/24 16:03:09 [debug] 15254#0: *4992 http uri: "/purge/node/2"
2011/07/24 16:03:09 [debug] 15254#0: *4992 http args: ""
2011/07/24 16:03:09 [debug] 15254#0: *4992 http exten: ""
2011/07/24 16:03:09 [debug] 15254#0: *4992 http process request header line
2011/07/24 16:03:09 [debug] 15254#0: *4992 http header: "User-Agent: curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18"
2011/07/24 16:03:09 [debug] 15254#0: *4992 http header: "Accept: */*"
2011/07/24 16:03:09 [debug] 15254#0: *4992 http header: "Host: d6.o2.linode.us.host8.biz"
2011/07/24 16:03:09 [debug] 15254#0: *4992 http header done
2011/07/24 16:03:09 [debug] 15254#0: *4992 event timer del: 4: 1311537849003
2011/07/24 16:03:09 [debug] 15254#0: *4992 generic phase: 0
2011/07/24 16:03:09 [debug] 15254#0: *4992 rewrite phase: 1
2011/07/24 16:03:09 [debug] 15254#0: *4992 test location: "/"
2011/07/24 16:03:09 [debug] 15254#0: *4992 test location: ~ "/purge(/.*)"
2011/07/24 16:03:09 [debug] 15254#0: *4992 using configuration "/purge(/.*)"
2011/07/24 16:03:09 [debug] 15254#0: *4992 http cl:-1 max:104857600
2011/07/24 16:03:09 [debug] 15254#0: *4992 rewrite phase: 3
2011/07/24 16:03:09 [debug] 15254#0: *4992 rewrite phase: 4
2011/07/24 16:03:09 [debug] 15254#0: *4992 post rewrite phase: 5
2011/07/24 16:03:09 [debug] 15254#0: *4992 generic phase: 6
2011/07/24 16:03:09 [debug] 15254#0: *4992 generic phase: 7
2011/07/24 16:03:09 [debug] 15254#0: *4992 generic phase: 8
2011/07/24 16:03:09 [debug] 15254#0: *4992 access phase: 9
2011/07/24 16:03:09 [debug] 15254#0: *4992 access: 0100007F FFFFFFFF 0100007F
2011/07/24 16:03:09 [debug] 15254#0: *4992 access phase: 10
2011/07/24 16:03:09 [debug] 15254#0: *4992 post access phase: 11
2011/07/24 16:03:09 [debug] 15254#0: *4992 try files phase: 12
2011/07/24 16:03:09 [debug] 15254#0: *4992 http set discard body
2011/07/24 16:03:09 [alert] 30744#0: worker process 15254 exited on signal 11
2011/07/24 16:03:09 [debug] 15267#0: epoll add event: fd:7 op:1 ev:00000001
2011/07/24 16:03:09 [debug] 15267#0: epoll add event: fd:8 op:1 ev:00000001
2011/07/24 16:03:09 [debug] 15267#0: epoll add event: fd:9 op:1 ev:00000001
SqyD’s picture

From the "Empty reply from server" error in the curl output, I think it's safe to conclude that the purging module in nginx is the problem, not PHP, Drupal, etc. It should return a 200.

omega8cc’s picture

But then why does it also segfault with Nginx 1.0.1, which is known to work for others (WordPress etc.)?

Of course, since it fails purely on the Nginx level, it has nothing to do with PHP, Drupal or your module.

So I guess that possible problems are caused by one of:

1. Some specific settings in my global Nginx config: http://drupalcode.org/project/provision.git/blob/HEAD:/http/nginx/server...
2. A conflict with the upload progress module in Nginx (though I have no idea how they could be connected).

I guess that my next steps should be to remove (one by one) my global settings in Nginx and see where (if) it stops crashing...

omega8cc’s picture

Removed *all* my settings, so it is basically vanilla Nginx 1.0.5 with upload progress only; it still segfaults. Now to rebuild Nginx w/o upload progress :/

omega8cc’s picture

No difference, so either fastcgi_cache_purge is buggy or I'm doing something wrong that I have no idea about. I would appreciate any confirmation of a working config for Nginx 1.0.5, as I have already tested all possible combinations and none worked for me.

Fidelix’s picture

omega8cc, what about this?

Host:
http://pastebin.com/vtpfp3Rg

Nginx.conf (where your problem most likely lies):
http://pastebin.com/4Xp8CHHB

omega8cc’s picture

@Fidelix

What difference do you mean?

SqyD’s picture

@Fidelix Just reread the thread and noted your "Not Found" errors in #20. These are probably just normal nginx behaviour when the object is not in the cache. I would not worry about it.

Fidelix’s picture

SqyD, but I'm almost sure something about the cache was going wrong:

Editors posted something to homepage, and the homepage was not being updated.
Editors edited some post on the homepage, and the homepage was not being updated.
Users logged in, browsed 1 or 2 pages, and the login block returned.

And this kind of stuff. What could be the problem?

PS: I'm running boost too.

omega8cc’s picture

@Fidelix

So this config from #33 works for you w/o issues? I see it is mostly copied from my standard config, and I can't find anything that would make a difference there.

I think I'll try this on another VM maybe, as this is already driving me crazy ;)

omega8cc’s picture

@Fidelix

Ah, so disable Boost completely to avoid confusion and then let us know if the Nginx cache/purge works there.

omega8cc’s picture

@Fidelix

Also, do you include my global.inc in the site's settings.php? It is required, or the "speed booster" will never work correctly.

Fidelix’s picture

omega, I did not include global.inc; I never realized it was actually needed.

I only did this:

if (preg_match("/^\/(?:user|edit|admin|usr)/", $_SERVER['REQUEST_URI'])) {
	header('X-Accel-Expires: 0'); // do not cache it in the speed booster
}

The global.inc you are referring to is this one?
http://gitorious.org/aegir/barracuda-octopus/blobs/master/aegir/conf/ove...

omega8cc’s picture

@Fidelix

I mean http://drupalcode.org/project/barracuda.git/blob/HEAD:/aegir/conf/global...

Without Speed Booster related cookies/logic there you will end up with cached pages for logged in users, but "shared" between them, since there will be no $cookie_OctopusCacheID in the fastcgi_cache_key - see http://drupalcode.org/project/barracuda.git/blob/HEAD:/aegir/conf/nginx_...

What is even worse, any logged-in user will create cached pages, and their (logged-in) cached pages will be served to all anonymous visitors for the next hour. Normally those caches are separate: valid for one hour for anonymous visitors but only five minutes for logged-in users, and created per user, thanks to the unique $cookie_OctopusCacheID in the fastcgi_cache_key.

Plus, any cached pages created by anonymous visitors will also be served to logged-in users, causing the "logged out" syndrome.
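The separation described above boils down to including the per-user cookie in the cache key. A minimal sketch of the idea (fastcgi_cache_key is a real nginx directive; the exact key layout and the OctopusCacheID cookie name come from the Barracuda config referenced in this thread):

```nginx
# Without $cookie_OctopusCacheID, all users share one cache entry per URL;
# with it, each logged-in user gets separate entries with their own lifetime.
fastcgi_cache_key "$scheme$request_method$host$request_uri$cookie_OctopusCacheID";
```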

You should use Barracuda and Octopus, or at least derive the required bits from my global.inc, to make it work properly.

SqyD’s picture

Status: Needs work » Needs review

Any progress on this?
From the thread above I conclude that so far no problem has been found in this project's code, and possible causes could be related to:
- the 3rd-party nginx module not being compatible with current nginx releases
- conflicts with complex configurations like Barracuda

Since I personally still don't use nginx, I can't spare the time to start testing this myself, but I would like to know:
- Does it work on plain vanilla nginx, and which versions are recommended?
- If not, or only barely, should we at least label this as experimental for now?

Thanks all for contributing!

attiks’s picture

I tried it as well and it seems to work; here's what I did:

1/ nginx.conf, same code as #1048000-27: nginx support
2/ downloaded purge into sites/all/modules/o_contrib
3/ added the following to local.settings.php

  // Optional: verbose error output while testing.
  error_reporting(E_ALL & ~E_NOTICE);
  ini_set('display_errors', TRUE);
  ini_set('display_startup_errors', TRUE);

  // Point the Purge module at the local nginx purge endpoint.
  if (file_exists('./sites/all/modules/o_contrib/purge/purge.inc')) {
    global $conf;
    $conf['purge_proxy_urls'] = "http://127.0.0.1:8888/purge?purge_method=get";
  }

4/ did the following as root

cd /opt/tmp
# Fetch the nginx source plus the two extra modules.
wget http://nginx.org/download/nginx-1.0.6.tar.gz
tar xvf nginx-1.0.6.tar.gz
mv nginx-1.0.6/ core-nginx
wget http://labs.frickle.com/files/ngx_cache_purge-1.3.tar.gz
tar xvf ngx_cache_purge-1.3.tar.gz
git clone git://github.com/masterzen/nginx-upload-progress-module.git
cd core-nginx/
# Show the flags the currently installed binary was built with.
nginx -V
./configure --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --user=www-data --group=www-data --with-http_realip_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_ssl_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_ssi_module --without-http_scgi_module --without-http_uwsgi_module --with-debug --with-ipv6 --add-module=/opt/tmp/nginx-upload-progress-module --add-module=/opt/tmp/ngx_cache_purge-1.3/
/etc/init.d/nginx stop
make && make install
/etc/init.d/nginx configtest
/etc/init.d/nginx restart
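For completeness, a guess at the purge server block that the purge_proxy_urls setting in step 3 would talk to. The port comes from this thread, but the cache zone name "tmpcache" and the location regex are assumptions, not the tested #1048000-27 config:

```nginx
## Sketch only - zone name "tmpcache" and the regex are illustrative assumptions.
server {
  listen 127.0.0.1:8888 default_server;

  # GET /purge/node/1 removes the cached entry for /node/1 on the given host.
  # Use fastcgi_cache_purge instead if the zone is a fastcgi cache.
  location ~ /purge(/.*) {
    proxy_cache_purge tmpcache $host$1$is_args$args;
  }
}
```

The key passed to proxy_cache_purge must match the proxy_cache_key (or fastcgi_cache_key) used when the entry was stored, or the purge will silently miss.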
SqyD’s picture

Status: Needs review » Reviewed & tested by the community

Thanks for your helpful post, Attiks. I'll link to it from a few places so other nginx users can find it too.

attiks’s picture

@SqyD: this was done on a development server and isn't really stress tested, so I don't know how it will handle real-life situations.

SqyD’s picture

Status: Reviewed & tested by the community » Needs review

OK, good to know. I'll revert it to "needs review" until I hear about a successful production rollout.

attiks’s picture

Status: Needs review » Needs work

I tried the latest Barracuda version, BOA-1.4S (nginx 1.0.8), with ngx_cache_purge 1.3, 1.4 and the HEAD version, but all result in: segfault at 3 ip 000000000043e119 sp 00007fff92ddf808 error 4 in nginx[400000+80000]

attiks’s picture

Tried with nginx 1.0.9 as well, but same problem.

SqyD’s picture

I would recommend sending bug reports upstream to the frickle.com people who provide the ngx_cache_purge module for nginx. I don't think there's anything I can do about these issues here.

attiks’s picture

@SqyD it was FYI. I'm debugging right now and have it working again by doing it manually; for reference:

# Check out the cache purge module at its 1.3 tag.
git clone git://github.com/FRiCKLE/ngx_cache_purge.git
cd ngx_cache_purge/
git checkout 1.3
cd ..

git clone git://github.com/omega8cc/nginx-upload-progress-module.git
wget -q -U iCab http://nginx.org/download/nginx-1.0.8.tar.gz
tar -xzf nginx-1.0.8.tar.gz
# configure must be run from inside the unpacked source tree.
cd nginx-1.0.8/
sh ./configure --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --user=www-data --group=www-data --with-http_realip_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_ssl_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_ssi_module --without-http_scgi_module  --without-http_uwsgi_module --with-debug --with-ipv6 --add-module=/root/BARRACUDA/ngx_cache_purge  --add-module=/root/BARRACUDA/nginx-upload-progress-module
make
make install
attiks’s picture

Got it all working again by commenting out one line inside BARRACUDA; for reference:

omega8cc’s picture

Version: 6.x-1.x-dev » 6.x-1.4
Status: Needs work » Reviewed & tested by the community

It works great, thanks!

See also: http://drupal.org/node/1329770#comment-5378232

SqyD’s picture

Status: Reviewed & tested by the community » Closed (fixed)

Awesome. I'll close this feature request. From now on, please create separate support requests for upstream problems, and bug reports when there are strong indications the underlying problem lies within this project's code.

Thanks to all who contributed!