MergeQuery_mysql breaks the MergeQuery contract, as described in MergeQuery::key():

If the record does not yet exist, they will be inserted into the table along with the values set in the fields() method. If the record does exist, these fields will be used in the WHERE clause to select the record to update.

MergeQuery_mysql completely ignores the keys specified by MergeQuery::key(), and simply executes an INSERT ... ON DUPLICATE KEY UPDATE .... The behavior of this query is completely different: the update is only executed if there is a duplicate key error, regardless of the values passed to MergeQuery::key().

You might believe the difference is not that big, because MergeQuery::key() will typically contain a unique or primary key of the table. But even this case can result in unexpected behavior. For example, _drupal_session_write() executes the following query when in HTTPS mode:

<?php
db_merge('session')
  ->key(array('sid', 'ssid'))
  ->fields(array(
    'uid' => $user->uid,
    'cache' => ...
  ))
  ->execute();
?>

... here both sid and ssid are unique keys.

On MySQL this will fail on a collision from *either* sid or ssid. On any other engine it will fail on a collision from *both* sid and ssid.
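The divergence is easy to reproduce outside of Drupal. Below is a small sketch in Python/sqlite3 (a stand-in, not Drupal's code: SQLite's upsert syntax, available in SQLite 3.24+, emulates MySQL's ODKU for a named unique index; the table mirrors the session example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (sid TEXT PRIMARY KEY, ssid TEXT UNIQUE, uid INT)")
conn.execute("INSERT INTO sessions VALUES ('a', 'x', 1)")

# Contract behavior: SELECT on *all* key() columns, then INSERT or UPDATE.
# ('b', 'x') matches no existing row on (sid, ssid), so the contract INSERTs...
assert conn.execute(
    "SELECT 1 FROM sessions WHERE sid = 'b' AND ssid = 'x'").fetchone() is None
try:
    conn.execute("INSERT INTO sessions VALUES ('b', 'x', 2)")
except sqlite3.IntegrityError as e:
    print("contract path fails:", e)  # ssid alone collides

# ODKU-style behavior: the partial collision silently becomes an UPDATE of
# the *existing* row (sid = 'a') instead.
conn.execute(
    "INSERT INTO sessions VALUES ('b', 'x', 2) "
    "ON CONFLICT(ssid) DO UPDATE SET uid = excluded.uid")
print(conn.execute("SELECT sid, uid FROM sessions").fetchall())  # -> [('a', 2)]
```

Same key(), same data: one strategy raises a duplicate key error, the other quietly rewrites a different session's row.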

Files:

#108  715108-sqlite.patch (3.52 KB) by Damien Tournoud
      PASSED: [[SimpleTest]]: [MySQL] 21,944 pass(es).
#106  715108-sqlite.patch (3.52 KB) by Damien Tournoud
      PASSED: [[SimpleTest]]: [MySQL] 21,937 pass(es).
#98   merge.query_.patch (4.43 KB) by beejeebus
      PASSED: [[SimpleTest]]: [MySQL] 21,942 pass(es).
#94   sqlite.merge_.patch (3.46 KB) by beejeebus
      PASSED: [[SimpleTest]]: [MySQL] 21,240 pass(es).
#86   merge.query_.patch (9.42 KB) by beejeebus
      PASSED: [[SimpleTest]]: [MySQL] 20,933 pass(es).
#82   715108.patch (9.37 KB) by Damien Tournoud
      PASSED: [[SimpleTest]]: [MySQL] 20,928 pass(es).
#72   715108.patch (10 KB) by Damien Tournoud
      PASSED: [[SimpleTest]]: [MySQL] 20,933 pass(es).
#68   715108.patch (9.22 KB) by Damien Tournoud
      PASSED: [[SimpleTest]]: [MySQL] 20,886 pass(es).
#66   715108-66.patch (8.66 KB) by chx
      PASSED: [[SimpleTest]]: [MySQL] 20,857 pass(es).
#57   715108.patch (9.35 KB) by Damien Tournoud
      PASSED: [[SimpleTest]]: [MySQL] 20,425 pass(es).
#40   715108-select-for-update_always2.patch (5.52 KB) by Berdir
      FAILED: [[SimpleTest]]: [MySQL] 18,485 pass(es), 1,403 fail(s), and 4,329 exception(es).
#36   715108-select-for-update_only_for_multiple_keys.patch (5.31 KB) by Berdir
      FAILED: [[SimpleTest]]: [MySQL] 14,480 pass(es), 6,970 fail(s), and 1,584 exception(es).
#36   715108-select-for-update_always.patch (5.32 KB) by Berdir
      FAILED: [[SimpleTest]]: [MySQL] Drupal installation failed.
#35   715108-select-for-update.patch (3.44 KB) by Josh Waihi
      PASSED: [[SimpleTest]]: [MySQL] 20,249 pass(es).
#32   301036-Optimize-Merge-queries.patch (3.87 KB) by Josh Waihi
      FAILED: [[SimpleTest]]: [MySQL] Invalid PHP syntax in query.inc.
#24   715108-mysql-merge2.patch (1.8 KB) by Berdir
      PASSED: [[SimpleTest]]: [MySQL] 19,201 pass(es).
#19   715108-mysql-merge.patch (1.24 KB) by Crell
      FAILED: [[SimpleTest]]: [MySQL] 19,200 pass(es), 1 fail(s), and 1 exception(es).
#14   pg7-session-error.png (66.01 KB) by andypost
#11   715108-dumb-out-merge-queries.patch (1.92 KB) by Damien Tournoud
      PASSED: [[SimpleTest]]: [MySQL] 17,901 pass(es).

Comments

David Strauss’s picture

We ought to be using a transaction on InnoDB and REPLACE INTO on MyISAM. Neither approach operates correctly and efficiently on both engines. We could optimize to use INSERT ... ON DUPLICATE KEY UPDATE if the key is equivalent to a unique key.

Crell’s picture

Oh for the love of Pete... MergeQuery was designed to model INSERT ODKU.

REPLACE INTO AFAIK does two queries internally, so it's not truly atomic. (It does a delete/insert every time.)

I'm not sure I follow what Damien is saying in the first case above. The value specified in key() is getting used. The different failure logic on multi-column keys is a problem, though. I'd almost prefer to convert our degenerate case (used by PostgreSQL and SQLite) to match MySQL's behavior, since it's the only one with a native implementation.

Josh Waihi’s picture

wow, so the entire MergeQuery layer doesn't work as expected, huh? (#301036: Optimize Merge queries in Postgres driver)

With the use of #669794: Use savepoints for nested transactions, we could make the default MergeQuery work atomically and be used by all three layers.

Damien Tournoud’s picture

Quick comparison:

Operation                                              MergeQuery  MergeQuery_mysql
A row conflicts with all the columns of key(),
when all the columns of key() are unique keys
of the table                                           UPDATE      UPDATE
A row conflicts with one of the columns of key(),
when all the columns of key() are unique keys
of the table                                           INSERT      UPDATE
A row conflicts with a column of key() that is
not a unique key of the table                          UPDATE      ERROR
A row conflicts with a unique key of the table
that is not specified in key()                         ERROR       UPDATE
Any other case                                         INSERT      INSERT

We really have a problem.

David Strauss’s picture

REPLACE INTO AFAIK does two queries internally, so it's not truly atomic. (It does a delete/insert every time.)

Atomicity is generally irrelevant in a system with MyISAM.

Crell’s picture

Even at that, then, I've generally been warned away from REPLACE INTO because internally it morphs into "DELETE ... INSERT" rather than doing a single operation. That does other odd things to the table, the indexes, and the query cache. (It's been a long time since I used it, so I don't recall exactly what the issue was, just that I was encouraged by other MySQLites to use ODKU instead.)

What are REPLACE's semantics with the table Damien posted in #4?

Damien Tournoud’s picture

REPLACE will have the same semantic problems: it only considers UNIQUE and PRIMARY keys, and considers *all* of them.

I would be happy with just removing the whole MergeQuery_mysql implementation and relying on the default SELECT/UPDATE/INSERT. It is far from atomic when run on MyISAM, but do we really care (given that we already default to InnoDB if available)?

Damien Tournoud’s picture

Alternative solution: we could use INSERT ... ON DUPLICATE KEY UPDATE ... in the only case where it works (key() has only one column, which is the only UNIQUE/PRIMARY KEY of the table) [this is the more frequent case], or else fall back to the parent implementation.
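For illustration, here is a sketch of that single-key happy path, where ODKU and the key()-based contract agree (Python/sqlite3 stand-in; SQLite's ON CONFLICT upsert, 3.24+, plays the role of MySQL's ODKU, and the variable-table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variable (name TEXT PRIMARY KEY, value TEXT)")

# Single-column key() that is also the table's only unique key: here
# "update if the key row exists, else insert" and ON DUPLICATE KEY UPDATE
# produce the same result.
for v in ("first", "second"):
    conn.execute(
        "INSERT INTO variable (name, value) VALUES ('cron_last', ?) "
        "ON CONFLICT(name) DO UPDATE SET value = excluded.value", (v,))

print(conn.execute("SELECT value FROM variable").fetchall())  # -> [('second',)]
```

The first iteration inserts, the second collides on the one unique key and updates in place, which is exactly what the contract specifies for this case.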

David Strauss’s picture

Alternative solution: we could use INSERT ... ON DUPLICATE KEY UPDATE... in the only case it works (key() has only one column, which is the only UNIQUE/PRIMARY KEY of the table) [this is the more frequent case], or else fall-back to the parent implementation.

That's exactly what I suggested here:

We could optimize to use INSERT ... ON DUPLICATE KEY UPDATE if the key is equivalent to a unique key.

Or we can change MergeQuery's defined behavior to only work with keys that are enforced as UNIQUE in the database. That would make the default/PostgreSQL/sqlite implementations go *beyond* the defined behavior. This solution may be best because MergeQuery mostly makes sense only for unique keys.

Crell’s picture

If changing the defined behavior of MergeQuery to match its current actual behavior resolves this issue, I am happy with that approach. Far and away the most common use case for it is when the key field(s) is/are the primary key of the table, which shouldn't have a problem either way.

Damien Tournoud’s picture

Status:Active» Needs review
New file: 1.92 KB
PASSED: [[SimpleTest]]: [MySQL] 17,901 pass(es).

If we are to dumb out MergeQuery, the heart of it is to change our current "SELECT, then INSERT or UPDATE" into an INSERT and, if that fails, an UPDATE.

Here is a minimal patch for that. That, together with two other open issues, should make PostgreSQL pass 100%, but frankly, I hate that idea.

andypost’s picture

Why not use the savepoints proposed in #669794: Use savepoints for nested transactions?
If transactions are enabled by default, the whole try/catch could run inside a transaction. On MyISAM we never have atomic actions anyway.

Crell’s picture

We used transactions in the original DBTNG patch, but they broke horribly because one b0rked query would force the entire transaction to be rolled back. We cannot use a transaction here unless we have savepoints implemented. (I'm assuming that will let this approach work, anyway.)

andypost’s picture

Status:Needs review» Needs work
New file: 66.01 KB

Patch #11 breaks UI - after each page I get another page with

PDOException: SQLSTATE[25P02]: In failed sql transaction: 7 ERROR: current transaction is aborted, commands ignored until end of transaction block: UPDATE p_sessions SET uid=:db_update_placeholder_0, cache=:db_update_placeholder_1, hostname=:db_update_placeholder_2, session=:db_update_placeholder_3, timestamp=:db_update_placeholder_4 WHERE (sid = :db_condition_placeholder_0) ; Array ( [target] => default [return] => 2 [already_prepared] => 1 ) in _drupal_session_write() (line 168 of /var/www/d7/includes/session.inc).
shunting’s picture

Same here, on a different table.

PDOException: SQLSTATE[25P02]: In failed sql transaction: 7 ERROR: current transaction is aborted, commands ignored until end of transaction block: DELETE FROM {semaphore} WHERE (name = :db_condition_placeholder_0) AND (value = :db_condition_placeholder_1) ; Array ( [:db_condition_placeholder_0] => menu_rebuild [:db_condition_placeholder_1] => 16455391364ba2ed67066644.51457089 ) in lock_release() (line 227 of /Library/WebServer/Documents/drupal7/includes/lock.inc).

moshe weitzman’s picture

re #13 - we have savepoints now. can we get a new patch here?

Berdir’s picture

I'm wondering: doesn't "INSERT, if failed, UPDATE" classify as using exceptions for flow control, which is a bad thing, not to mention that it is *slow* (throwing exceptions always is)?

I mean, the "fail" case will not be rare: we use db_merge() for session/variable/cache saving, so the chances are IMHO quite high that we throw an exception on most page loads, and saving an admin page with dozens of settings would result in dozens of variable_set() calls and therefore dozens of thrown exceptions. This at least needs benchmarks for those cases...

Maybe something like the following pseudo-code might work, I assume it's still faster than throwing exceptions:

try {
  if (SELECT 1 FROM ...) {
    UPDATE
  }
  else {
    INSERT
  }
}
catch (Exception $e) {
  // If we got a duplicate key exception, we had a race condition. In that
  // case, isn't the behavior unspecified anyway? Who is supposed to win: the
  // first, or the latter?
  // Anyway, we could still try an UPDATE, or throw a proper exception
  // indicating that we had a race condition.
}

Another approach might be to assume that we update more often than we insert, and insert only if the update affected no rows. But this is AFAIK problematic too, because affected_rows != rows_matched_by_update_conditions, so we'd still need a SELECT 1 FROM query; that would mean a worst case of three queries for new entries, but fast updates.

Crell’s picture

Yes, exceptions are a poor use here for various reasons. Berdir is correct.

If we can't use ODKU (which is very sad, because it's a really really nice approach; are we sure we can't make it work?), the base implementation already has a no-exception implementation that is fine aside from being always 2 queries and not perfectly atomic. Any driver-specific implementation needs to be better than the default or it's not worth it.

So the options I see are:

1) Throw out MergeQuery_mysql entirely; everyone uses the 2 query approach.

2) Make MergeQuery_mysql use ODKU for the single-key case and fall back to parent for the multi-key case. Most of our merge queries are single key, I think, so that gets us atomicity in the common case at least.

3) Figure out if REPLACE INTO would actually work and be preferable to ODKU. I'm uncertain if it would.

4) ?

I think I'm leaning toward #2 right now.

Crell’s picture

Status:Needs work» Needs review
New file: 1.24 KB
FAILED: [[SimpleTest]]: [MySQL] 19,200 pass(es), 1 fail(s), and 1 exception(es).

Here's a stab at approach #2.

Status:Needs review» Needs work

The last submitted patch, 715108-mysql-merge.patch, failed testing.

Crell’s picture

I suspect the failing test is assuming the old, technically wrong behavior and it just needs to be updated. Someone who knows that system please confirm and reroll. :-)

Toktik’s picture

Status:Needs work» Needs review

#19: 715108-mysql-merge.patch queued for re-testing.

Updated: sorry. please delete this post.

Status:Needs review» Needs work

The last submitted patch, 715108-mysql-merge.patch, failed testing.

Berdir’s picture

Status:Needs work» Needs review
New file: 1.8 KB
PASSED: [[SimpleTest]]: [MySQL] 19,201 pass(es).

Ok, the issue is that we have a discrepancy between the code and the database configuration, which is actually the issue that DamZ pointed out in the initial issue (this already fails on PostgreSQL)

- In our code, we assume that sid and ssid are not duplicate on their own, but only if they occur together.

- However, in the database, we have a PRIMARY KEY on sid and a UNIQUE INDEX on ssid. So if either one of these values is duplicated, the query fails.

I first tried to make a PRIMARY KEY on both columns, but that didn't work because of multiple NULL values in ssid (apparently; it doesn't make any sense to me). So the current patch just converts the UNIQUE index to a normal index. I'm a bit confused, as I can't find an update function that adds ssid; it doesn't seem to exist in D6 at all...?

Anyway, the updated patch should pass now, but we might want another fix here, I'm not sure.....

EDIT: Me writes strange stuff, I need sleep...

Berdir’s picture

Also, can we really assume that there is a PRIMARY KEY/UNIQUE INDEX on the provided $key? Nobody stops someone from calling it with a $key that is not unique, I'd say...

catch’s picture

Title:MergeQuery_mysql should not use INSERT ... ON DUPLICATE KEY UPDATE ...» Enforce atomicity for merge queries
Status:Needs review» Needs work

per andypost, marking #301036: Optimize Merge queries in Postgres driver as duplicate, issues are close enough they should be fixed together even if they're not exactly the same fix.

chx’s picture

Title:Enforce atomicity for merge queries» MergeQuery_mysql should not use INSERT ... ON DUPLICATE KEY UPDATE ...
Status:Needs work» Needs review

On which planet are MySQL and PostgreSQL issues duplicates?

Berdir’s picture

Maybe on this one when they need the same code to be fixed? :)

This patch uses the default implementation for MySQL when there is more than one key to merge on. That means that MySQL has the same atomicity issues as PostgreSQL. I don't think we can actually optimize the merge on PostgreSQL, but we at least need a way to make it atomic or a way to handle conflicts.

As I explained in #17, "try INSERT and, if an exception is thrown, UPDATE" is not a good idea, but we probably need a try/catch to handle the cases where we have a conflict. I still have no idea what should happen then. Break, since we don't know the expected behavior? Let the first insert win and don't update? Execute an update?

Damien Tournoud’s picture

The base implementation can be made perfectly atomic by just changing the SELECT into a SELECT FOR UPDATE. This construct is not supported by SQLite, but not required there because SQLite locks the entire database making row-level locking redundant.

Crell’s picture

I am not familiar with SELECT FOR UPDATE. Is that ANSI SQL? (If not, it shouldn't be in the base implementation but I'm open to it in driver-specific implementations if it works.)

Berdir’s picture

I used Google, so correct me if any of the following is wrong...

- To use SELECT FOR UPDATE, we need a transaction, so we need to start one if there isn't one yet. I don't think we'd need a savepoint, since we don't want to roll back; we just want the locking to persist at least until the end of the function.

- I have not found any information that FOR UPDATE is ANSI SQL, but it seems to be supported by almost every DBMS (read: MySQL, PostgreSQL, Oracle, DB2; not sure about others) except SQL Server (http://stackoverflow.com/questions/1483725/select-for-update-with-sql-se...)

So what about the following:

We keep the default implementation, document that it is not guaranteed to be atomic, and implement the SELECT FOR UPDATE for PostgreSQL and MySQL if count(keys) > 1 (maybe always, since there is no guarantee that the key is a primary or unique key...)

Josh Waihi’s picture

New file: 3.87 KB
FAILED: [[SimpleTest]]: [MySQL] Invalid PHP syntax in query.inc.

So what about this? It will apply to both MySQL and PostgreSQL and is about as close to atomic as we can get (especially for PostgreSQL).

Status:Needs review» Needs work

The last submitted patch, 301036-Optimize-Merge-queries.patch, failed testing.

Damien Tournoud’s picture

@Josh Waihi: the INSERT, if fails, UPDATE has the same semantic as INSERT ON DUPLICATE KEY UPDATE and this is *not* what we want. We want UPDATE if a row matching the key exists, else INSERT, which is completely different as I outlined above.

The following strategy is perfectly atomic and perfectly fine (we can probably move that to a stored procedure to improve performance):

BEGIN TRANSACTION
SELECT FOR UPDATE on the key
if there are rows: UPDATE
else: INSERT
COMMIT

(on exception: ROLLBACK to close the transaction, but there should be no modification to actually rollback)
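A minimal executable sketch of this strategy (Python/sqlite3 as a stand-in; SQLite has no SELECT ... FOR UPDATE, so BEGIN IMMEDIATE plays the role of the row lock here, and the table and column names are illustrative):

```python
import sqlite3

def merge(conn, name, value):
    # BEGIN TRANSACTION, taking the write lock up front (stand-in for the
    # SELECT ... FOR UPDATE row-level lock on MySQL/PostgreSQL).
    conn.execute("BEGIN IMMEDIATE")
    try:
        row = conn.execute(
            "SELECT 1 FROM variable WHERE name = ?", (name,)).fetchone()
        if row:  # if there are rows: UPDATE
            conn.execute("UPDATE variable SET value = ? WHERE name = ?",
                         (value, name))
        else:    # else: INSERT
            conn.execute("INSERT INTO variable (name, value) VALUES (?, ?)",
                         (name, value))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")  # on exception: close the transaction
        raise

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE variable (name TEXT PRIMARY KEY, value TEXT)")
merge(conn, "site_name", "first")   # no row yet -> INSERT
merge(conn, "site_name", "second")  # row exists -> UPDATE
print(conn.execute(
    "SELECT value FROM variable WHERE name = 'site_name'").fetchone()[0])
```

Because the lock is taken before the SELECT, a concurrent merge on the same key blocks until the first transaction commits, and then sees the freshly written row.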

Josh Waihi’s picture

Status:Needs work» Needs review
New file: 3.44 KB
PASSED: [[SimpleTest]]: [MySQL] 20,249 pass(es).

PostgreSQL Docs:

If the FOR UPDATE or FOR SHARE clause is specified, the SELECT statement locks the selected rows against concurrent updates.

Awesome. Patch attached.

Berdir’s picture

Status:Needs review» Needs work
New file: 5.32 KB
FAILED: [[SimpleTest]]: [MySQL] Drupal installation failed.
New file: 5.31 KB
FAILED: [[SimpleTest]]: [MySQL] 14,480 pass(es), 6,970 fail(s), and 1,584 exception(es).

This looks good. Can we mark #301036: Optimize Merge queries in Postgres driver as a duplicate (again)?

After all, the patch in #35 actually only addresses the PostgreSQL issue, not this one. We need to at least add the bits from #24 back.

Repeating my question from #25, can we safely assume that there is a UNIQUE or PRIMARY KEY on the defined key for MySQL? It isn't really the expected behavior imho, since you are not required to define your keys as UNIQUE/PRIMARY, according to the definition, are you?

Adding a merged patch of #35 and #24 and also a version which always uses the default merge query implementation, just for testing....

Berdir’s picture

Status:Needs work» Needs review

Status:Needs review» Needs work

The last submitted patch, 715108-select-for-update_always.patch, failed testing.

Berdir’s picture

Ok, I either did something wrong (AFAIK not, I used the same code as in #24, which passed) or the current implementation does not work very well :)

Berdir’s picture

New file: 5.52 KB
FAILED: [[SimpleTest]]: [MySQL] 18,485 pass(es), 1,403 fail(s), and 4,329 exception(es).

Ok, updated the implementation:

- Using fetchField() instead of fetch() (the latter always evaluates to TRUE).
- The patch did not add any arguments but dropped the countQuery() call; adding a "1" expression.

I now get a strange "undefined index: plid" in menu.module (5x for every test function), not sure if it is a local issue. Trying with the test bot... Only attaching the "always" patch.

Berdir’s picture

Status:Needs work» Needs review
Damien Tournoud’s picture

We need to drop the current MergeQuery_mysql implementation completely: it just doesn't work in most cases (see #4). We need to use the same strategy as the base implementation (UPDATE if a row matches the conditions, or else INSERT), potentially moving this logic to a stored procedure to avoid sending a bunch of queries remotely.

I'll work on the stored procedure part later today.

Berdir’s picture

Ok, I think I agree :)

The "always" patch is basically the same as dropping MergeQuery_mysql completely; I always call parent::execute(). I just made it like this to make it easier to switch between both approaches.

Status:Needs review» Needs work

The last submitted patch, 715108-select-for-update_always2.patch, failed testing.

Crell’s picture

OK, I'm confused. Why are we discussing Postgres in a MySQL issue in the MySQL component? :-)

As I understand it, in the single-key case MySQL's ODKU implementation *does* work. It's when there's multiple keys that things get screwy.

Postgres is its own odd animal that may need a stored procedure.

And now we're talking about a more optimized generic implementation as well.

These all sound like separate issues to me. Can someone explain why we can't just apply #24 and then deal with the generic fallback and Postgres implementation in separate issues where they belong, and which already exist?

Josh Waihi’s picture

Crell, it's my understanding that, at least, MySQL and PostgreSQL can use the same code here. Hence the reason why #301036: Optimize Merge queries in Postgres driver could be a duplicate.

Damien Tournoud’s picture

@Crell#45: no, ODKU never works, even in the single key case. The key(s) specified in the merge query will get ignored, and the choice will be made based on the Primary/Unique key(s) of the table, which is not at all what we want.

Damien Tournoud’s picture

I studied whether it is possible to use a stored procedure on MySQL, but it is a no-go: MySQL doesn't support stored procedures with a variable number of arguments, and our merge query implementation sends a maximum of two queries anyway, which doesn't make it worth studying other options.

Crell’s picture

OK, so ODKU only works if we're using the primary key(s) of the table as our key fields. Are there any cases where we'd actually want to do otherwise?

I don't mean to harp on ODKU, but it did serve as the original model for the Merge API and it's a single-query implementation. Remember that one of the most common merge queries we run is cache writes, which can be long queries. So anything we can do to keep down the size and quantity of the queries is a good thing.

Berdir’s picture

- We only send the keys twice, they should not be that big...

- Benchmarks would be good however, to have an idea of the impact...

- But first, we need a working patch :)

- Maybe we can get away with documenting that the key needs a PK or UK but even then we need a working, atomic way for multiple keys

Crell’s picture

It's not the key size that is the issue; it's the other fields, like the cache value, which can be over a megabyte. We want to make sure we don't send that twice.

Berdir’s picture

Yes, and we only send them once, so it's fine? That's what I wrote above. We only ever execute a SELECT (with only the keys as conditions, obviously) and then either an INSERT or an UPDATE.

Crell’s picture

OK, I looked over the most recent patch. I think I get what's going on there, but I don't get why it's failing so many tests. Is someone still working on this?

Josh Waihi’s picture

Could this be it?

diff --git modules/system/system.install modules/system/system.install
index 0d73c64..0693dd5 100644
--- modules/system/system.install
+++ modules/system/system.install
@@ -1393,8 +1393,6 @@ function system_schema() {
     'indexes' => array(
       'timestamp' => array('timestamp'),
       'uid' => array('uid'),
-    ),
-    'unique keys' => array(
       'ssid' => array('ssid'),
     ),
     'foreign keys' => array(
Berdir’s picture

See #24, that was at least back then required and necessary.

Damien Tournoud’s picture

#40 fails tests because it doesn't change anything, except the schema of the session table. At a minimum, we need to drop the MergeQuery_mysql implementation if we want this to work.

Damien Tournoud’s picture

Status:Needs work» Needs review
New file: 9.35 KB
PASSED: [[SimpleTest]]: [MySQL] 20,425 pass(es).

Rerolled.

It turns out we also need to make sure that TruncateQuery uses a DELETE when we are in a transaction, or there is an implicit COMMIT happening and the transaction stack gets out of sync with the database connection.

Berdir’s picture

@DamZ: #40 *did* use the default implementation; there was a hardcoded call to the parent implementation. I just didn't delete the whole file, to make it easier to switch when testing.

But that DELETE/TRUNCATE change might do the trick.

Damien Tournoud’s picture

@Berdir: my bad, I failed to see the call in the middle of the commented lines.

Damien Tournoud’s picture

yay!

Berdir’s picture

There is a speed difference between those, obviously.

I called the same db_merge() query 5000 times, below are the results:

Without patch:

Needed: 3.00339s
Needed: 3.08914s
Needed: 3.11475s

Average time per query:
0.0006s

With patch:

Needed: 11.22645
Needed: 10.78288
Needed: 10.87379

Average time per query:
0.00214s

The code, if someone wants to reproduce this:

<?php
function conc_test_page() {
  $start = microtime(true);
  for ($i = 0; $i < 5000; $i++) {
    db_merge('variable')
      ->key(array('name' => 'conc_test_variable'))
      ->fields(array('value' => serialize('value_' . $i)))
      ->execute();
  }

  return t('Needed: %time', array('%time' => round(microtime(true) - $start, 5)));
}
?>
chx’s picture

Status:Needs review» Needs work

wow, a DB issue that yanks a unique key from ssid. Is that still necessary? Also, what's with ->addExpression('1')? If that's necessary then a big, big fat comment is necessary.

Berdir’s picture

The unique key only on ssid is definitely wrong, but changing it to a normal key is probably wrong too; it was the only solution I could get working. Defining the PK on sid and ssid gave me errors.

chx’s picture

Would hate to stall this issue, care to enlighten me why ssid unique is wrong?

Damien Tournoud’s picture

I opened a spin-off issue for the merge query in _drupal_session_write(): #813492: HTTPS sessions use an invalid merge query

chx’s picture

Status:Needs work» Reviewed & tested by the community
New file: 8.66 KB
PASSED: [[SimpleTest]]: [MySQL] 20,857 pass(es).

sessions has been fixed! Go :) Note that I have not changed anything in the patch, just rerolled without the system.install hunk, which now obviously fails.

chx’s picture

Status:Reviewed & tested by the community» Needs work

Um sorry, not yet, I had a question above for the addExpression('1')

Damien Tournoud’s picture

Status:Needs work» Needs review
New file: 9.22 KB
PASSED: [[SimpleTest]]: [MySQL] 20,886 pass(es).

Added the requested comment. This looks good to go for me.

chx’s picture

Status:Needs review» Reviewed & tested by the community

I like it too :)

Crell’s picture

I want to have a look at this tonight when I get home before it gets committed. Flagging for myself.

Crell’s picture

Title:MergeQuery_mysql should not use INSERT ... ON DUPLICATE KEY UPDATE ...» Make Merge queries more consistent and robust
Status:Reviewed & tested by the community» Needs work

The complete "SELECT 1 FOR UPDATE" query should move inside the try { block. Right now it's partially inside it, partially outside.

Also, this comment makes no sense at all: // Only run the update if there are no fields or expressions to update.

Since the default implementation is now non-sucky, we should remove the comment block that says "this is sucky, real drivers should reimplement it". :-)

I talked with chx and DamZ in IRC and DamZ should be fixing these shortly. It's sad that we can't use ODKU, but we go with what works.

Also, retitling.

Damien Tournoud’s picture

Status:Needs work» Needs review
New file: 10 KB
PASSED: [[SimpleTest]]: [MySQL] 20,933 pass(es).

Moved the SELECT query inside the try/catch block, and fixed the comment (to be fair, this one was already there :p).

Crell’s picture

Component:mysql database» database system
Status:Needs review» Reviewed & tested by the community

Looks good to me. The bot shouldn't have any problems either, I wager.

Dries, if you'd care to do the honors?

beejeebus’s picture

Component:database system» mysql database
Status:Reviewed & tested by the community» Needs review

doesn't this still leave us not handling the case where two inserts race each other?

as far as i can see, the current code will let the second insert fail with a duplicate key error without trying an update.

i say as far as i can see, because i'm also getting fatal errors with this patch.

[Wed Jun 16 20:02:50 2010] [apc-error] Cannot redeclare class insertquery_mysql in /var/www/code/drupal/7/clean/includes/bootstrap.inc on line 2705.

investigationing.

beejeebus’s picture

Status:Needs review» Needs work

woops, that's need work.

beejeebus’s picture

Component:mysql database» database system
Damien Tournoud’s picture

Component:database system» mysql database
Status:Needs work» Reviewed & tested by the community

doesn't this still leave us not handling the case where two inserts race each other?

as far as i can see, the current code will let the second insert fail with a duplicate key error without trying an update.

No, that's exactly why we run a SELECT FOR UPDATE query, which triggers row-level locking.

[Wed Jun 16 20:02:50 2010] [apc-error] Cannot redeclare class insertquery_mysql in /var/www/code/drupal/7/clean/includes/bootstrap.inc on line 2705.

investigationing.

That looks pretty much unrelated.

Back to RTBC, except if you have any other objections.

beejeebus’s picture

Component:mysql database» database system
Status:Reviewed & tested by the community» Needs work

and the missing class would be the not-yet-updated registry trying to load the code removed by the patch. awesome.

Damien Tournoud’s picture

Component:database system» mysql database
Status:Needs work» Needs review

Oh, two INSERTs racing each other. Yes, that's a decent objection.

beejeebus’s picture

Status:Needs review» Needs work

as far as i understand it on mysql, the two "SELECT ... FOR UPDATE" queries will succeed, and both threads will try to insert.

then, one of them hangs, one of them wins the race and inserts, and the other blows up. now, we can either ignore that because it's rare, or try to catch the error that means "i lost the race" and fire an update.

Berdir’s picture

Status:Needs review» Needs work

I'm not 100% sure, but doesn't "FOR UPDATE" (in contrast to "LOCK IN SHARE MODE") lock the row for reading too, so the second SELECT will wait until the first transaction is committed and then see the updated row and retry the update from the beginning?

Damien Tournoud’s picture

Status:Needs work» Needs review
New file: 9.37 KB
PASSED: [[SimpleTest]]: [MySQL] 20,928 pass(es).

Indeed. What about this?

  • SELECT FOR UPDATE
  • if returns no rows, try to insert
    • if fails, re-SELECT FOR UPDATE and proceed to UPDATE if returns rows
  • else UPDATE
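That retry branch can be sketched as follows (Python/sqlite3 stand-in; with a single connection the race itself cannot fire here, so the losing-the-race path is only exercised by the duplicate-key handler; names are illustrative):

```python
import sqlite3

def merge(conn, name, value):
    """SELECT (FOR UPDATE on MySQL/PostgreSQL); UPDATE if a row exists,
    else INSERT, falling back to UPDATE on a duplicate-key error."""
    if conn.execute("SELECT 1 FROM variable WHERE name = ?", (name,)).fetchone():
        conn.execute("UPDATE variable SET value = ? WHERE name = ?", (value, name))
        return
    try:
        conn.execute("INSERT INTO variable (name, value) VALUES (?, ?)",
                     (name, value))
    except sqlite3.IntegrityError:
        # A concurrent merge won the race between our SELECT and INSERT.
        # Re-run the SELECT; if a row now exists, proceed to UPDATE,
        # otherwise re-raise the original error.
        if not conn.execute("SELECT 1 FROM variable WHERE name = ?",
                            (name,)).fetchone():
            raise
        conn.execute("UPDATE variable SET value = ? WHERE name = ?", (value, name))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variable (name TEXT PRIMARY KEY, value TEXT)")
merge(conn, "x", "1")  # INSERT branch
merge(conn, "x", "2")  # UPDATE branch
print(conn.execute("SELECT value FROM variable WHERE name = 'x'").fetchone()[0])
```

The re-SELECT distinguishes "I lost the race" (a row now exists, so update it) from a genuine constraint violation (no row, so re-throw).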
Damien Tournoud’s picture

Status:Needs work» Needs review

@Berdir#81: the SELECT FOR UPDATE will have no effect if there are no rows returned, which is the case for the INSERT branch.

beejeebus’s picture

much better :-)

if the bot returns green, this is back to RTBC in my opinion, but i'll wait for Crell to make that call.

chx’s picture

Status:Needs review» Needs work
<?php
// Retry the select query; if it returns a row, ignore the error and continue
// with the update query below.
if ($this->connection->query($sql, $arguments)->fetchField()) {
  throw $e;
}
?>

I think the if needs a ! because this re-throws the error instead of ignoring if a row is returned.

beejeebus’s picture

Status:Needs work» Needs review
New file: 9.42 KB
PASSED: [[SimpleTest]]: [MySQL] 20,933 pass(es).

yes, yes, it should. patch attached with the one-character fix chx noticed in #85.

Damien Tournoud’s picture

Status:Needs review» Reviewed & tested by the community

Let's get this in.

It is not really possible to test the branch about the race condition (that I got wrong), but at least the code is now sound.

Dries’s picture

Status:Reviewed & tested by the community» Fixed

Committed to CVS HEAD. Thanks.

Pedro Lozano’s picture

Status:Fixed» Needs work

After this patch I cannot install Drupal using sqlite.

SQLSTATE[HY000]: General error: 1 near "FOR": syntax error
Pedro Lozano’s picture

http://www.sqlite.org/cvstrac/wiki?p=UnsupportedSql

SELECT ... FOR UPDATE OF ... is not supported. This is understandable considering the mechanics of SQLite in that row locking is redundant as the entire database is locked when updating any bit of it. However, it would be good if a future version of SQLite supports it for SQL interchangeability reasons if nothing else. The only functionality required is to ensure a "RESERVED" lock is placed on the database if not already there.
Crell’s picture

1) We should just write an alternate version for SQLite that skips the FOR UPDATE part and accept that it's non-atomic.

2) This is why we need at least some level of testing framework for SQLite and Postgres.

3) I hate databases.

beejeebus’s picture

i can tackle 1). 2) + 3) amen to that.

pwolanin’s picture

after this patch, an existing HEAD install has a fatal:

Fatal error: Cannot redeclare class InsertQuery_mysql in ~/drupal-7/includes/database/mysql/query.inc on line 88
Call Stack:
0.0003 57396 1. {main}() ~/drupal-7/index.php:0
0.0111 505184 2. drupal_bootstrap()

Follow-up issue: #839006: SPL autoloader should use require_once to be more robust.

beejeebus’s picture

Assigned:Unassigned» beejeebus
Status:Needs work» Needs review
FileSize: 3.46 KB (new)
PASSED: [[SimpleTest]]: [MySQL] 21,240 pass(es).

ok, here's an implementation of MergeQuery_sqlite.

i'm not 100% sure about how sqlite transactions work, so it might need further tweaking, but this gets drupal installing again on sqlite.

Crell’s picture

#94 looks OK to me on visual inspection. The comment on the class, though, should also mention that, as a result of the lack of FOR UPDATE support, its Merge implementation may not be atomic.

We should also get one of the SQLite maintainers to try running it and give it a nod of approval.

webchick’s picture

Component:mysql database» sqlite database
Issue tags:+sql

Let's try that.

webchick’s picture

Issue tags:-sql

Oops. Didn't mean to tag that.

beejeebus’s picture

FileSize: 4.43 KB (new)
PASSED: [[SimpleTest]]: [MySQL] 21,942 pass(es).

docs-only update to the patch. updated the docs as per Crell's comments in #95, and removed a duplicate comment line from the base MergeQuery implementation.

Max K.’s picture

Status:Needs review» Reviewed & tested by the community

Fixed all issues for me. There are unrelated sqlite issues throughout this codebase, however, and I believe this will only be the beginning of our woes. Modules will be developed with mysql in mind, and alternative engines will suffer. Anyway, it works.

webchick’s picture

Nice docs! I asked Damien to take a look.

Max K: If you find other problems with Drupal's SQLite support, please by all means file bug reports about them! (if they don't already exist ;)) We seem to have a really hard time finding SQLite testers, and I'm sure the bugs would get fixed much faster if you were able to chime in on issues with your test results (and if you can write patches to help solve the issues, even better!).

Furthermore, if we were able to get the full test suite passing on SQLite, we'd be able to set up an automated testbot that would constantly test patches to make sure we don't break SQLite going forward. There's a crew working on this for Pgsql already. Would be great to have some folks dedicated to this on the SQLite side!

Damien Tournoud’s picture

Status:Reviewed & tested by the community» Needs work

For SQLite, we should rather do:

- UPDATE
- if no matched rows, INSERT

That's because the first UPDATE will acquire a RESERVED lock on the database, protecting the INSERT from race conditions.

Actually, I'm not sure why we don't use the same strategy for the base implementation. Probably because of the inconsistency between the database engines with the handling of matched vs. modified rows. But we could probably do something like this:

- UPDATE
- if no matched rows, and the database engine is known to return only modified rows, SELECT 1
- if no matched rows, INSERT
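The UPDATE-first SQLite strategy could be sketched as below. Again this is illustrative, not the committed patch: the statement variables are placeholders, the STATUS_* constant names may differ, and both statements are assumed to run inside a single transaction so the RESERVED lock taken by the UPDATE still covers the INSERT.

```php
<?php
// Sketch only. On SQLite the UPDATE acquires a RESERVED lock on the whole
// database, so as long as both statements share one transaction, the
// INSERT below cannot race with another connection.
$affected = $this->connection->query($update_sql, $update_arguments)->rowCount();
if (!$affected) {
  // Nothing matched the key: create the row instead.
  $this->connection->query($insert_sql, $insert_arguments);
  return MergeQuery::STATUS_INSERT;
}
return MergeQuery::STATUS_UPDATE;
?>
```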
Crell’s picture

Update-fail-Insert was the standard logic in D6. That fails in D7 because of PDO, which returns the rows modified, not matched, by a query. We only tested that with MySQL but I think it's across PDO. Merge queries were introduced in the original DBTNG patch exactly for that reason.

Damien Tournoud’s picture

@Crell: I think that's MySQL-specific (we have code in UpdateQuery_sqlite to return the number of rows modified not matched). Anyway, we could very well implement UPDATE / SELECT 1 / INSERT in that order. That way, we don't have to deal with the vendor-specific row locking at all.

Crell’s picture

That also needlessly complicates the code and adds a third query. We started this issue with one query. Let's not go to three.

Unless UPDATE/SELECT 1/INSERT is considerably faster than what is in HEAD now, let's leave well enough alone. All we need here is a working SQLite-specific version and then we can close this critical. Anything else is for a separate non-critical issue, if at all.

Damien Tournoud’s picture

We already have three queries. Just not in the same order.

Damien Tournoud’s picture

Status:Needs work» Needs review
FileSize: 3.52 KB (new)
PASSED: [[SimpleTest]]: [MySQL] 21,937 pass(es).

That was easy enough.

Berdir’s picture

+++ includes/database/sqlite/query.inc
@@ -116,6 +119,65 @@ class UpdateQuery_sqlite extends UpdateQuery {
+ * The first UPDATE query with acquire a RESERVED lock on the database.

I guess that should read ... will acquire...

37 critical left. Go review some!

Damien Tournoud’s picture

FileSize: 3.52 KB (new)
PASSED: [[SimpleTest]]: [MySQL] 21,944 pass(es).

Indeed.

chx’s picture

I would like to understand why we need FOR UPDATE in the default implementation. If there is an existing record and two processes race to write that record, there is no official word on which one should win. If there is no existing record, we have code that already deals with racing INSERTs. What's the FOR UPDATE for? What's the use case? What are we trying to solve here?

Josh Waihi’s picture

Damien, doesn't that sqlite implementation make it non-atomic? Or can SQLite only ever have one connection to it?

Damien Tournoud’s picture

I explained in the comments why this is atomic. SQLite only has full-database locks.

chx’s picture

I am talking of the default, not the SQLite implementation. What is "atomic" when you are firing one query? Again, what is the reason we so need a lock here?

ddorian’s picture

With the july2.drupal7.dev it didn't install with sqlite, but with the #108 patch from Damien it installed perfectly. Will test more later.

David Strauss’s picture

I'm just working off the summary of the issue chx gave me, but here are my thoughts. We can model potential MERGE race conditions as effects on the row being updated between SELECT and INSERT/UPDATE. Let's say there are two threads, A and B.

## Scenario: other thread deletes the row

A: SELECT (with decision to UPDATE)
B: DELETE
A: UPDATE

This results in the same data, even if we lock, as this:

A: SELECT (with decision to UPDATE)
A: UPDATE
B: DELETE

## Scenario: other thread updates the row

A: SELECT (with decision to UPDATE)
B: UPDATE
A: UPDATE

This results in the same data, even if we lock, as this:

B: UPDATE
A: SELECT (with decision to UPDATE)
A: UPDATE

## Scenario: other thread inserts the row

A: SELECT (with decision to INSERT)
B: INSERT
A: INSERT (which fails)
A: UPDATE (because our code falls back to UPDATE on INSERT failure)

This results in the same data, even if we lock, as this:

B: INSERT
A: SELECT (with decision to INSERT)
A: UPDATE

So, in all possible race conditions, our code could have equivalent behavior -- even with locking and proper behavior -- if thread B has queries scheduled certain ways relative to thread A's queries.

David Strauss’s picture

To be concise, I also support dropping the FOR UPDATE from the MERGE implementation. FOR UPDATE functions as a pessimistic lock, making us pay the cost of locking even though race conditions should be rare. Switching strategy to UPDATE from INSERT on failure functions as an optimistic lock, making us pay a larger cost for race conditions but only in the rare scenarios they occur.

Damien Tournoud’s picture

So, in all possible race conditions, our code could have equivalent behavior -- even with locking and proper behavior -- if thread B has queries scheduled certain ways relative to thread A's queries.

Actually, that's not exactly true in the case of two concurrent updates that affect different sets of columns.

If you start with (a = 'X', b = 'Y', key = 'K'):

A: UPDATE a = 'A' WHERE key = 'K'
B: UPDATE a = 'B', b = 'B' WHERE key = 'K'

is not the same as:

B: UPDATE a = 'B', b = 'B' WHERE key = 'K'
A: UPDATE a = 'A' WHERE key = 'K'

That said I'm not sure this matters.
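The two orderings above can be checked with a trivial simulation. This is a self-contained illustration in plain PHP (no Drupal involved): each UPDATE is modelled as a merge of field arrays onto the row.

```php
<?php
// Model an UPDATE as merging the SET fields onto the current row.
function apply_update(array $row, array $fields) {
  return array_merge($row, $fields);
}

$start = array('a' => 'X', 'b' => 'Y', 'key' => 'K');

// A first, then B: B's values win for both columns.
$a_then_b = apply_update(apply_update($start, array('a' => 'A')),
                         array('a' => 'B', 'b' => 'B'));
// B first, then A: A's value for column a wins.
$b_then_a = apply_update(apply_update($start, array('a' => 'B', 'b' => 'B')),
                         array('a' => 'A'));

var_dump($a_then_b == $b_then_a); // bool(false): 'a' ends up 'B' vs 'A'.
?>
```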

David Strauss’s picture

@Damien Tournoud

I did discuss two threads updating different columns with chx, but it turned out not to be relevant to the issue as long as we have the optimistic locking logic (try UPDATE if INSERT fails). The scenario I was afraid of was thread A INSERTing into col1 and thread B INSERTing into col2 when both had intended to MERGE. That would be a real race condition, because no ordering of the operations with FOR UPDATE locking would result in a row with only one of those columns filled. (Proper locking would always result in the latter thread using an UPDATE, producing a row with both columns filled.)

The orderings you're suggesting are included in "...if thread B has queries scheduled certain ways relative to thread A's queries." If you have thread A and B running the updates you mention, there's no reason the first ordering is more or less correct than the second.

The question I'm answering is, "If we remove FOR UPDATE locking, what unique race conditions do we introduce that we didn't have before?" Yes, threads will clobber each other's data. But, it looks like we don't introduce any unique scenarios of clobbering data by dropping FOR UPDATE locking.

Max K.’s picture

Status:Needs review» Reviewed & tested by the community

webchick: You asked for it, you got it. Consider me part of the sqlite testing squadron.

#108 works for me. Recommending implementation, since locking, despite the performance hit, is the correct approach here. Although the update on fail may handle most race conditions (except, possibly some edge cases we're not considering) this does not address what would happen in the unlikely event of a three-way race. Regardless of the astronomically low chance of such an event occurring, we cannot have logic like Ubercart handling sensitive information on top of a foundation that is not fundamentally secure.

beejeebus’s picture

ok, two parts to this. first, i defer to DamZ on sqlite locks etc. #108 looks fine, though i'm curious about the scope of the RESERVED lock. is it released when the transaction is committed/rolled back?

as to chx's questions. i'll try to present the discussion we had in #drupal about why i'd prefer we keep the lock, or change the documentation/return value of the "SELECT --> UPDATE" code path.

here's my example "competition" edge case, and assumes a well written application, because all bets are off otherwise.

we only enter the MergeQuery block if

<?php
if ($beforeDeadline) {
?>

process A, tick 1, MergeQuery runs select part (say email = 'foo@example.com'), returns 1. the row in question has token = NULL, and the update is intended to set it to a valid token
process B, tick 2, runs query to close competition, deletes all entries where token = NULL
process A, tick 3, runs the update path of MergeQuery to set token = "somevalidtokenforthecompetition", this affects no rows, because the row was deleted at tick 2, but returns STATUS_UPDATED

as far as i can tell, without the lock, a return status of STATUS_UPDATED tells the MergeQuery caller nothing at all about whether the row exists or not. if process A assumes that a successful MergeQuery with a valid token should trigger a "congratulations on your entry" email, then we would send an email out for a non-existent entry.

with the lock, process A can safely send the email (or whatever action follows a successful entry).

so if we take out the lock, we have to document very clearly that STATUS_UPDATED, even in a well-behaved application, may not mean you've updated a row at all, and you need to a) loop back and check its existence and/or b) make sure you structure your "timed out" conditions carefully. i even wonder if we should consider another return status for "the row changed between us checking it and trying to update it".

i'm not convinced this is worth it for what i think is a small performance consideration.
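For what it's worth, without the lock a caller needing the stronger guarantee would have to re-check on its own, along these lines. This is a hypothetical sketch: the competition_entry table and the surrounding logic are invented for illustration, and the exact STATUS_* constant name may differ.

```php
<?php
// Hypothetical caller-side pattern, only needed if the lock is removed.
$status = db_merge('competition_entry')
  ->key(array('email' => 'foo@example.com'))
  ->fields(array('token' => $token))
  ->execute();
if ($status == MergeQuery::STATUS_UPDATE) {
  // "Updated" may not mean the row still exists: re-check before acting.
  $exists = db_query('SELECT 1 FROM {competition_entry} WHERE email = :email',
    array(':email' => 'foo@example.com'))->fetchField();
  if ($exists) {
    // Safe to send the "congratulations on your entry" email.
  }
}
?>
```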

Crell’s picture

Let's commit #108, as that brings us up to SQLite feature parity with MySQL and Postgres. That ends the criticality of this issue and we can mark it fixed and be done with it.

We can then continue refining merge queries over here: #844186: Clarify merge queries

webchick’s picture

Status:Reviewed & tested by the community» Fixed

Sounds good. Committed #108 to HEAD.

Max K: YAY! :D

Status:Fixed» Closed (fixed)

Automatically closed -- issue fixed for 2 weeks with no activity.