Here is a patch that stores the executed db queries in an abstracted way (without values), similar to the MySQL query log. The code is currently MySQL-specific.
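To illustrate the "abstracted way" idea, here is a minimal sketch (hypothetical Python, not the patch's actual PHP code): literal values are replaced with placeholders before hashing, so every query of the same shape collapses to one logged entry regardless of its arguments.

```python
import hashlib
import re

def abstract_query(sql: str) -> str:
    """Strip literal values so identical query shapes match.
    Hypothetical helper; the real patch normalizes queries in PHP."""
    sql = re.sub(r"'(?:[^'\\]|\\.)*'", "'S'", sql)  # string literals -> 'S'
    sql = re.sub(r"\b\d+\b", "N", sql)              # numeric literals -> N
    return sql

def query_hash(sql: str) -> str:
    """Hash the abstracted form for use as a lookup key."""
    return hashlib.md5(abstract_query(sql).encode()).hexdigest()

print(abstract_query("SELECT * FROM node WHERE nid = 42 AND title = 'foo'"))
```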


Comments

killes@www.drop.org’s picture

FileSize
464 bytes

here is the sql file

moshe weitzman’s picture

what do others think of this? useful? i am thinking yes.

maybe you can batch insert the entries into the devel_times table. see the mysql docs for loading CSV data. i'd hate to add 100+ INSERTs to each page. we already are adding that many SELECTs.
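The batching concern above is what MySQL "extended inserts" address: one INSERT statement carrying many VALUES tuples instead of one statement per logged query. A rough sketch of building such a statement (hypothetical Python; real code must escape values through the db layer, not repr):

```python
def extended_insert(table: str, rows: list[tuple]) -> str:
    """Build a single multi-row INSERT (MySQL extended-insert syntax).
    Illustrative only: column names and repr()-quoting are assumptions."""
    values = ", ".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} (hash, time) VALUES {values}"

sql = extended_insert("devel_times", [("abc", 1.2), ("def", 0.4)])
```

One round trip to the server per page instead of 100+, at the cost of the syntax not being portable to PostgreSQL of that era, as the thread later notes.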

killes@www.drop.org’s picture

FileSize
7.84 KB

Here is an updated patch. Some bugs removed; changed the settings page so that you can collect info without displaying it. It also uses extended inserts as suggested.

moshe weitzman’s picture

a few typos: "rest won'r work", "t('Stored query statstics deleted.')"

also, i don't yet understand the special form of db_query. is that for extended inserts? i was thinking that you could build up some CSV text and then run a single query into devel_times

also, i don't want to make this module unusable on postgres. we might need to switch on db_type.

killes@www.drop.org’s picture

Fixed the typos locally; I will speak to Cvbge about the pgsql stuff. Yes, the extended inserts work like this. They also don't work on pgsql.

Dries’s picture

+1! (Not tested.)

To reduce overhead, we could use sampling and only record a query once every x queries.

Cvbge’s picture

Here's a comment on the issue we discussed on IRC.

I suggested to killes to use just one table instead of two; it will probably be faster.

First, you won't need to do SELECT first to see if the query already is in the {devel_queries}.
Second, you won't need to INSERT if it isn't.
Third, you won't need any indexes on the tables => much faster INSERTs.

It would be possible to make delayed inserts of all queries.

The downside is that the statistics page would be slower. How much? I don't know, maybe not much. An admin could wait a couple of seconds (or even a minute). It could be benchmarked, right?

But if you prefer to have two tables, I'd say just use db_next_id() for PostgreSQL. There are ways to make it faster, but they're kind of ugly, and besides, after a while every query will be in the table, so you won't need to insert it again.
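Cvbge's single-table argument boils down to: logging becomes one blind INSERT (no SELECT-then-maybe-INSERT round trip, no indexes to maintain), and aggregation is deferred to the statistics page. A small sketch of that design using sqlite3 as a stand-in database (table and column names are assumptions, not the module's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Denormalized single table, no indexes: inserts stay cheap.
conn.execute("CREATE TABLE devel_times (hash TEXT, query TEXT, time REAL)")

def log_query(hash_, query, elapsed):
    # One blind INSERT per sampled query; no existence check needed.
    conn.execute("INSERT INTO devel_times VALUES (?, ?, ?)",
                 (hash_, query, elapsed))

log_query("h1", "SELECT * FROM node WHERE nid = N", 0.01)
log_query("h1", "SELECT * FROM node WHERE nid = N", 0.02)

# The statistics page pays the aggregation cost instead.
rows = conn.execute(
    "SELECT hash, COUNT(*), AVG(time) FROM devel_times GROUP BY hash"
).fetchall()
```

This is exactly the trade Cvbge describes: fast writes on every page load, slower reads on the (rare) admin statistics page.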

moshe weitzman’s picture

what's going on with this? @killes: feel free to commit when you are satisfied.

moshe weitzman’s picture

when i said build some CSV and do a single query, i refer to mysql's LOAD DATA INFILE statement: http://dev.mysql.com/doc/refman/5.0/en/load-data.html
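The LOAD DATA INFILE approach moshe refers to would mean accumulating the page's log entries as CSV, writing them to a file, and bulk-loading with one statement. A sketch of the CSV-building half (hypothetical Python; the file path and column layout are made up for illustration):

```python
import csv
import io

def build_csv(rows: list[tuple]) -> str:
    """Serialize accumulated log rows to CSV text for a bulk load."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

# MySQL bulk-load statement (real syntax; path is hypothetical):
LOAD_SQL = (
    "LOAD DATA LOCAL INFILE '/tmp/devel_times.csv' "
    "INTO TABLE devel_times "
    "FIELDS TERMINATED BY ','"
)

text = build_csv([("h1", 0.01), ("h2", 0.02)])
```

Like extended inserts, this is MySQL-specific, which feeds the thread's concern about keeping the module usable on PostgreSQL.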

let's discuss and get this one in

killes@www.drop.org’s picture

FileSize
9.29 KB

Here's the patch, which includes the sampling that Dries wanted. It does not apply against today's devel.module.

killes@www.drop.org’s picture

FileSize
7.52 KB

Updated patch; needs an install update.

moshe weitzman’s picture

Title: store database queries » store database queries - query not properly stored or hashed
Status: Needs review » Active

still one more bug here. killes will chase it down. we are not logging the SQL query properly.

don't enable this just yet, folks. the stats are not correct. nothing bad happens to your install though.

moshe weitzman’s picture

Status: Active » Fixed

fix committed to 4.7 and HEAD. feel free to enable this feature after a trip to update.php

Anonymous’s picture

Status: Fixed » Closed (fixed)