Please help test the file caching patch with writes and docs

Hi, here are some new tips for testing this patch with writes, not just reads.

http://drupal.org/node/45414#comment-76471

Also, please review the settings.php configuration instructions for the performance testing.
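
As a general point of reference (not the patch-specific instructions behind the link above), performance-related overrides in settings.php go through the $conf array. A minimal sketch, assuming only core's page-cache variable:

// settings.php excerpt: variable override used while benchmarking.
// 'cache' is core's page-cache toggle; any settings specific to the
// file caching patch belong with the linked instructions, not here.
$conf = array(
  'cache' => 1,   // serve cached pages to anonymous visitors
);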

Cheers,
Kieran

cache intensive taxonomy calls

I have some blocks that make fairly intensive calls on the database, with several taxonomy_get_tree() and sorting calls, etc. These calls are noticeably slowing Drupal down. However, the taxonomies in question are only updated about once a week. How can I go about caching these blocks? I guess there is no way within Drupal to ask it to cache certain blocks but not others?
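
A minimal sketch of what per-block caching could look like with the Drupal 4.6/4.7 cache API (cache_get($cid) returning an object with a ->data member, and cache_set($cid, $data, $expire)); the module name, cache key, and one-week lifetime are made-up examples:

// Build (or reuse) the rendered contents of one taxonomy-heavy block.
function mymodule_taxonomy_block_content($vid) {
  $cid = 'mymodule_taxonomy_block_'. $vid;

  // Serve the rendered list straight from the cache table when possible.
  if ($cached = cache_get($cid)) {
    return $cached->data;
  }

  // The expensive part: fetch and walk the whole vocabulary tree.
  $items = array();
  foreach (taxonomy_get_tree($vid) as $term) {
    $items[] = str_repeat('-', $term->depth) .' '. $term->name;
  }
  $output = theme('item_list', $items);

  // Keep it for a week, since the vocabulary only changes about that often.
  cache_set($cid, $output, time() + 7 * 24 * 60 * 60);
  return $output;
}

When the vocabulary does change, cache_clear_all('mymodule_taxonomy_block_'. $vid) drops the stale copy instead of waiting for the expiry.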

MySQL performance tuning snippet

http://drupal.org/node/50291

Hi, we are looking for some people to test this snippet of code and see whether it helps you tune your MySQL database. Please provide feedback in the comments.

Cheers,
Kieran

Why 'GROUP BY' when fetching a comment?

Here is part of the code from the comment module, in the function comment_render():

=====================================
// ...

if ($cid) {
  // Single comment view.
  $result = db_query('SELECT c.cid, c.pid, c.nid, c.subject, c.comment, c.format, c.timestamp, c.name, c.mail, c.homepage, u.uid, u.name AS registered_name, u.picture, u.data, c.score, c.users FROM {comments} c INNER JOIN {users} u ON c.uid = u.uid WHERE c.cid = %d AND c.status = %d GROUP BY c.cid, c.pid, c.nid, c.subject, c.comment, c.format, c.timestamp, c.name, c.mail, u.picture, c.homepage, u.uid, u.name, u.picture, u.data, c.score, c.users', $cid, COMMENT_PUBLISHED);

  if ($comment = db_fetch_object($result)) {
    $comment->name = $comment->uid ? $comment->registered_name : $comment->name;
    $output .= theme('comment_view', $comment, module_invoke_all('link', 'comment', $comment, 1));
  }
}
else {
  // Multiple comment view.
  $query .= "SELECT c.cid as cid, c.pid, c.nid, c.subject, c.comment, c.format, c.timestamp, c.name, c.mail, c.homepage, u.uid, u.name AS registered_name, u.picture, u.data, c.score, c.users, c.thread FROM {comments} c INNER JOIN {users} u ON c.uid = u.uid WHERE c.nid = %d AND c.status = %d";

  $query .= ' GROUP BY c.cid, c.pid, c.nid, c.subject, c.comment, c.format, c.timestamp, c.name, c.mail, u.picture, c.homepage, u.uid, u.name, u.picture, u.data, c.score, c.users, c.thread';
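
For comparison only, here is the single-comment query with the clause dropped; since the statement uses no aggregate functions and c.cid identifies a single row of {comments}, the long GROUP BY list does not change the result set here, and reads like a leftover from an earlier version of the query:

// Sketch: single comment view without the GROUP BY clause.
$result = db_query('SELECT c.cid, c.pid, c.nid, c.subject, c.comment, c.format, c.timestamp, c.name, c.mail, c.homepage, u.uid, u.name AS registered_name, u.picture, u.data, c.score, c.users FROM {comments} c INNER JOIN {users} u ON c.uid = u.uid WHERE c.cid = %d AND c.status = %d', $cid, COMMENT_PUBLISHED);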

Dimensioning question

I know this kind of question has been posted before, but usually answers depend on the hardware being used.

I wanted to get a rough estimate of the number of concurrent users and nodes that could be served by a P4 Xeon 3.4 GHz, 2 GB RAM server running Linux with MySQL. The modules we plan to provide initially are blog and photo albums.

Lighttpd, fastcgi and Drupal

Hi

I'm becoming convinced that:

a) Drupal (particularly 4.6) with mod_php and Apache/Apache 2 simply cannot scale beyond 2 pages per second on a site with fast-changing content, such as busy, large forums, if you only have one server to run it;

b) replacing Apache in the chain with lighttpd/FastCGI stands a good chance of increasing that performance enough for a single box to cope.
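
For anyone who wants to try b), a minimal lighttpd 1.4-style FastCGI sketch follows; the socket path, php-cgi location, and process counts are illustrative guesses, and Drupal's clean URLs still need their own rewrite handling on top of this:

# Hand .php requests to a small pool of PHP FastCGI processes.
server.modules += ( "mod_fastcgi" )
fastcgi.server = ( ".php" => ((
  "socket"      => "/tmp/php-fastcgi.socket",
  "bin-path"    => "/usr/bin/php-cgi",
  "max-procs"   => 2,
  "bin-environment" => (
    "PHP_FCGI_CHILDREN"     => "4",
    "PHP_FCGI_MAX_REQUESTS" => "1000"
  )
)))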
