Steps to reproduce:
- create a non-multiple text field
- share it between two content types
- create one node for each content type sharing the field.
- remove the field from one content type
=> the storage layout is broken: no new columns are created in the 'per type' table to store the field for the remaining content type (plus a buggy table with an incomplete name gets created)
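For context, CCK uses two storage layouts, and deleting an instance can force a switch between them. A schematic illustration (table and column names are illustrative, not the exact CCK schema):

```sql
-- Illustrative layouts only, not the exact CCK schema.

-- A field shared between content types lives in its own 'per field' table:
CREATE TABLE content_field_foo (
  vid INT NOT NULL,
  nid INT NOT NULL,
  field_foo_value TEXT
);

-- Once only one content type uses it, the same data is expected to move
-- into columns of that type's 'per type' table:
CREATE TABLE content_type_bar (
  vid INT NOT NULL,
  nid INT NOT NULL,
  field_foo_value TEXT
);
```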

Looking into it, it appears to be a mix of several complex things (schema refresh, batch processing...). Since I'm not too familiar with the new CRUD / mask material yet, I'll stop here and report my findings. Sorry for the painful read :-) :

content_field_instance_delete() calls content_alter_schema() twice:
- content_alter_schema($field, array()), which calls content_alter_db($field, array()), which drops the data table completely, wiping the data stored by other types sharing that field.
- if there is only one instance left, content_alter_schema($instance, $new_instance), which then adds the columns to the per-type table; but by then the data is gone and there is nothing left to migrate.
(If more than one instance remained, the table is not recreated.)
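The flow above can be paraphrased like this (pseudocode in PHP syntax; the function body and the count_remaining_instances() helper are schematic, not the actual module code):

```php
// Schematic paraphrase of the reported deletion path -- not the real CCK code.
function content_field_instance_delete_sketch($field, $instance, $new_instance) {
  // First call: passes an empty new field, so content_alter_db() treats the
  // whole field as deleted and drops {content_field_foo} -- including rows
  // belonging to the other content type that still uses the field.
  content_alter_schema($field, array());

  // Second call, only when a single instance remains: adds the per-type
  // columns for the surviving instance, but the data that should be
  // migrated into them was just dropped.
  if (count_remaining_instances($field) == 1) {
    content_alter_schema($instance, $new_instance);
  }
  // If more than one instance remained, the table is not recreated.
}
```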

This harsh content_alter_schema($field, array()) code has no equivalent in D5: content_field_instance_delete() only had the second call ("if there is only one instance left..."), to handle data migration.
So I tried to comment it out.

That works better (after fixing $new_instance not being initialized correctly), except that by the time we reach the content_alter_db() call that should handle data migration, the field is still registered as STORAGE_PER_FIELD in {content_node_field}. The db schema is therefore incorrect, and drupal_write_record() fails to migrate the data.
So I added a line to update the db storage in {content_node_field} before calling content_alter_schema() (similar code existed in D5).
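The added line looks roughly like this (a sketch, assuming the constants and columns named elsewhere in this thread; the actual query may differ):

```php
// Sketch only: flip the field back to per-type storage in {content_node_field}
// *before* content_alter_schema() runs, so that drupal_write_record() sees a
// schema that matches the actual tables during data migration.
db_query("UPDATE {content_node_field} SET db_storage = %d WHERE field_name = '%s'",
  CONTENT_DB_STORAGE_PER_CONTENT_TYPE, $field['field_name']);
```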

That works much better: the data is migrated and the db layout is correct at the end of the operation. Except now it's the node_load() / node_save() cycle, performed in content_mask_batch_update() before we get to the content_alter_db() part, that triggers warnings,
because the db state does not yet match the new schema that's been recreated at the very end of content_field_instance_delete() (but *before* the batch processing actually starts).

Attached patch contains the modifications described above. I'm not saying it goes in the right direction :-)


Comments

KarenS’s picture

Actually the two calls to content_alter_schema() are right, if the field's values are right. The first one deletes the current instance. When it gets to content_alter_db(), the table is only dropped if the storage type is set to content type storage, which should only happen if this is the last instance of the field. After that instance is deleted, the database is checked to see if there are other instances; if so, they must be altered too, which is the second call to content_alter_schema(), and that one should only happen if there is another instance.

I suspect that there is some problem with the setting of content storage causing the problem, and/or some value stuck in the cache or the schema that needs to be updated.

Anyway, I'm trying to track through the processing to see if I can pin down where things go wrong.

yched’s picture

"When it gets to content_alter_db(), the table is only dropped if the storage type is set to content type storage"

That's not how I read the code :-)

  // Deleting this field entirely is a simple case.
  if (empty($new_field)) {
    if ($previous_field['db_storage'] == CONTENT_DB_STORAGE_PER_FIELD) {
      db_drop_table($ret, $previous_table);
    }

When a field is shared, it does use CONTENT_DB_STORAGE_PER_FIELD, and content_alter_schema($field, array()) thus drops the table.
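A guard along these lines (a sketch, not the committed patch) would only drop the per-field table when no other instance of the field remains; the shape of the content_field_instance_read() return value is assumed here:

```php
// Sketch only: check for surviving instances before dropping the table.
if (empty($new_field)) {
  $others = content_field_instance_read(array('field_name' => $previous_field['field_name']));
  if ($previous_field['db_storage'] == CONTENT_DB_STORAGE_PER_FIELD && count($others) == 0) {
    db_drop_table($ret, $previous_table);
  }
}
```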

KarenS’s picture

Ahah! You're right, I had that logic wrong. It was doing exactly what I intended it to do, I just had the wrong intentions :)

OK, I'll try to get this working right.

KarenS’s picture

Status: Active » Fixed

OK, I think I have this working right now. Now that the batch operation has been removed from the processing and with the changes you noted above, it looks to me like deleting shared fields is working right.

FWIW, the reason for the extra content_alter_schema() call was to provide an opportunity to remove the extra values from the shared table via the batch processing. The error was in then going ahead and trying to alter the schema, too.

This should get us back to the way things work in D5, but there is still some quirkiness in the process that I think we can clean up (for both D5 and D6) and I think we can add back the batch processing of node_saves() as an extra step in the UI (not in the API), but I'll open separate issues for those things.

yched’s picture

OK, the problem now is that when the last instance of field_foo is deleted, content_alter_schema() is not called, and thus no action is taken on the db tables:
- if the field was multiple, {content_field_foo} is not dropped
- if the field was single, {content_type_bar} still contains the columns for field_foo
- if the field was the last field for this content type, {content_type_bar} is not dropped
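The three missing cleanups could be summarized like this (pseudocode in PHP syntax; the is_last_instance() and type_has_no_fields() helpers are hypothetical, as are the table and column names):

```php
// Pseudocode summary of the cleanup that content_field_instance_delete()
// currently skips -- not the committed fix.
if (is_last_instance($field)) {
  if ($field['multiple']) {
    // Multiple field: its dedicated table must be dropped.
    db_drop_table($ret, 'content_field_foo');
  }
  else {
    // Single field: its columns must be removed from the per-type table.
    db_drop_field($ret, 'content_type_bar', 'field_foo_value');
  }
}
if (type_has_no_fields($type)) {
  // Last field of the content type: the per-type table itself must go.
  db_drop_table($ret, 'content_type_bar');
}
```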

(er, yes, I started to play with simpletests ;-) )

yched’s picture

Title: content_field_instance_delete() broken with shared fields » content_field_instance_delete() does not clean tables
Priority: Critical » Normal
Status: Fixed » Active

Not critical anymore (no data loss), but leaves us with a messy db...

coltrane’s picture

Marked #260597: Field deletion not working as expected as a duplicate of this.

(I'm not sure whether I need to note on issues that dupes exist, or if it helps -- let me know if it's too much.)

pwolanin’s picture

subscribing - this shows up as failed tests.

pwolanin’s picture

Status: Active » Needs work
FileSize
1.05 KB

It seems to me that the attached patch is the right place to add the code, but so far the tests are still failing.

yched’s picture

Priority: Normal » Critical

Raising priority - RC blocker.

yched’s picture

Status: Needs work » Fixed
FileSize
1.72 KB

This is the right place for the content_alter_schema() call; it fixes the issue of field columns not always being removed from the tables.
In addition, there was simply no code to drop the per-type table when deleting the last field of a content type.

I committed the attached patch (along with some cleanup), tests now all pass without any fail - yay (ok, we don't have 100% test coverage, but yay all the same.)

Thx !

yched’s picture

Status: Fixed » Active

Reopening: the code I committed will drop the per-type table even if it contains data from a field whose module or widget module is disabled (this can happen especially when upgrading from D5, if some contrib field modules are not available yet).

I think the fix is to let content_field_instance_read() accept an additional $include_disabled_modules flag, that lets us test whether a content type still has *any* fields (not only active ones). No time tonight, will try to patch this tomorrow.

yched’s picture

Status: Active » Fixed

Closing again, let's deal with this remaining issue in http://drupal.org/node/198508

Anonymous’s picture

Status: Fixed » Closed (fixed)

Automatically closed -- issue fixed for two weeks with no activity.