See the Mailing lists or the Drupal issue queue. There are also various working groups on groups.drupal.org.

Status of Postgres support and patches

We are working on the Eduforge.org project, which offers free hosting for education-related FOSS software and CC-licensed content projects on a custom GForge install. As part of the setup, we are starting to use Drupal with a Postgres backend.

We found a few minor SQL syntax issues, and you'll see Patrick Lee posting patches and comments about them. Please let us know if they need rework or polishing before they can be merged. We also found a few pending Postgres-related patches, which we are keen to try. Is there any reason they have not been merged?

regards,

martin

Speed optimization for many revisions

The default behavior of node_load() is to return the latest revision; however, all of the revisions are retrieved from the database in every case. This can be slow if the serialized revision data is large (i.e., when a node has many revisions).

It would be faster not to select the revisions field when $revision is NULL (the query currently selects node.*), especially for nodes with many revisions.
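A minimal sketch of the idea, with an assumed helper name (node_load_fields) and an illustrative column list rather than the actual node_load() implementation:

```php
<?php
// Hypothetical helper: choose the column list based on whether a
// specific revision was requested. Skipping the potentially huge
// serialized "revisions" column avoids transferring it on every load.
function node_load_fields($revision) {
  if ($revision === NULL) {
    // Latest revision only: no need to pull the revision history.
    return 'n.nid, n.title, n.body, n.changed';
  }
  // A specific revision was requested, so fetch everything.
  return 'n.*';
}

$fields = node_load_fields(NULL);
$sql = "SELECT $fields FROM node n WHERE n.nid = %d";
// $result = db_query($sql, $nid);  // Drupal's query wrapper
```

The common path (no $revision argument) then never touches the serialized data, and the full SELECT n.* only runs when a caller actually asks for a revision.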

Database Scaling

It appears as though there are a few problems with deploying Drupal in a large replicated MySQL environment because of the amount of database traffic it generates.

What I would recommend is supporting an optional read-only database connection alongside the read/write database connection.
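A rough sketch of what this split could look like; the host names, credentials, and the db_pick() helper are assumptions for illustration, not existing Drupal code:

```php
<?php
// Keep two connection handles: a read/write master and a read-only
// replica. Route plain SELECTs to the replica, everything else to
// the master.
$db_rw = mysql_connect('master.example.com', 'drupal', 'secret');
$db_ro = mysql_connect('replica.example.com', 'drupal', 'secret');

function db_pick($sql) {
  global $db_rw, $db_ro;
  // Writes (and anything ambiguous) must go to the master.
  return preg_match('/^\s*SELECT/i', $sql) ? $db_ro : $db_rw;
}

// mysql_query($sql, db_pick($sql));
```

With replication lag, a query that must see its own just-written data would also need to go to the master, so a real implementation would want an explicit override rather than relying purely on the SELECT check.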

Caching architecture weaknesses

I was reviewing the Drupal caching code, and it looks as though Drupal will never really be able to scale under serious load. While it is capable of caching data in a database and returning it to a user, serving that cache still requires an Apache process to fork and start the PHP interpreter. Even with a PHP opcode cache, you can't get the kind of performance available through static object caching such as Squid provides.

Some of the scalability claims listed on this site mention being able to push 60-100 hits per minute with Drupal. I need 300 hits/second on frequently accessed static objects to be able to consider Drupal as a platform, and that's not a hard number to obtain. In a couple of initial tests, I was able to get 38 hits/second out of Drupal with caching enabled.

My proposal would be to add a cache manager that interoperates with Squid, allowing an administrator to set headers so that certain objects can be cached by a Squid cache.

The current code does something like:

if ($cache) {
  /* done */
}
else {
  header("Expires: Sun, 19 Nov 1978 05:00:00 GMT");
  header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
  header("Cache-Control: no-store, no-cache, must-revalidate");
  header("Cache-Control: post-check=0, pre-check=0", FALSE);
  header("Pragma: no-cache");
}

/* finish processing */
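By contrast, a cache-manager approach could emit proxy-friendly headers for objects the administrator has marked as cacheable. A minimal sketch; $max_age is an assumed, admin-configurable setting, not an existing Drupal variable:

```php
<?php
// Hypothetical cacheable path: tell Squid it may serve this object
// for $max_age seconds without revalidating against Apache/PHP.
$max_age = 300;  // assumed admin-configured lifetime, in seconds
header('Cache-Control: public, max-age=' . $max_age);
header('Last-Modified: ' . gmdate('D, d M Y H:i:s') . ' GMT');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $max_age) . ' GMT');
```

For those objects, repeat hits would then be answered entirely by Squid, which is where the hundreds-of-hits-per-second numbers come from.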

Reviewing old bugs/features

What do you think about starting to review old bug/feature entries, to close all those that are implemented in some form in the current version, or that simply don't matter anymore?
I'm asking because I was recently looking through the bug/feature list and found some that are already implemented by contributed modules, but I don't know whether I can close them.

Thanks

Suggestion for Module/Taxonomy/Node Access Control

I've noticed a lot of discussion on group-based, fine-grained access control:

