Problem/Motivation
Automatic entity updates were initially introduced as part of @plach's vision to allow dynamically changing entity type revision and translation support. In this light, core would only be responsible for handling no-data scenarios, thus being consistent with the D7 Field API's original handling of field tables. OTOH, cases requiring data migrations were meant to be handled in contrib by a dedicated module that never came into being.
In the meantime we discovered that automatic entity updates were dangerous in the context of regular DB updates, so we decided to ban them from regular/real-life workflows before D8 was even released as stable. At that point automatic entity updates were only meant to improve DX during development. We didn't foresee people relying on `drush entup` to fix broken updates and making things even worse, but apparently that's exactly what people are doing, because they can.
Proposed resolution
Given the criticality of potential issues arising from automatic entity updates, we are justified in removing them, even if this is formally a hard BC break. Mitigating factors are:
- They are not really supposed to be used in any production scenario.
- They are not really useful, unless someone is trying to build an `entity_storage` module implementing the original vision. Even in this case, we now have `EntityDefinitionUpdateManagerInterface::updateFieldableEntityType()`, which is a good starting point for implementing this functionality without relying on them.
- `drush entup` will be removed from Drush core and added back via a contrib `devel_entity_updates` module, so the logic won't be lost, just (hopefully) moved away from production environments, where it's mostly harmful.
@amateescu and @plach agreed that as part of finally fixing entity updates we need to make sure we always rely on installed definitions, except when installing a new entity type, so we will have to fix this issue and all the related ones to finally find ourselves in a good place.
Remaining tasks
Review the patch, decide whether we want to fix it in 8.6.x or only in 8.7.x.
User interface changes
Nope.
API changes
- `EntityDefinitionUpdateManager::applyUpdates()` is now deprecated with no direct replacement; entity type and field storage definition updates must always be done explicitly in update functions, which has also been the recommended practice since 8.0.0;
- new method: `EntityDefinitionUpdateManagerInterface::getChangeList()`; `EntityDefinitionUpdateManager` already had it as a protected method and it has been promoted to public visibility;
- `EntityDefinitionUpdateManager::updateEntityType()` will no longer delete and re-create the entity schema if there is no data for the updated entity type;
- kernel tests need to install entity type schema definitions before anything else in their `setUp()` method.
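For kernel tests, that pattern can be sketched as follows (a minimal example assuming the core `entity_test` entity type; the exact entity types to install depend on the test):

```php
protected function setUp() {
  parent::setUp();
  // Install the entity schema first, before any other setup code that
  // might rely on the entity type's tables existing.
  $this->installEntitySchema('entity_test');
  // ... any remaining setup code goes after the schema installation.
}
```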
Data model changes
Entity type and field storage CRUD schema operations will no longer rely on the (live) definitions provided by the Entity Manager, and they will use the last installed definitions instead.
Release notes snippet
Starting with 8.7.0, Drupal core no longer provides support for automatic entity updates. Whenever an entity type or field storage definition needs to be created, changed or deleted, it has to be done with an explicit update function as provided by the Update API, and using the API provided by the entity definition update manager.
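As a sketch of what such an explicit update function could look like (hypothetical module name `mymodule` and field name `example_flag`, not taken from the patch):

```php
/**
 * Installs the 'example_flag' base field on nodes explicitly,
 * instead of relying on the removed automatic entity updates.
 */
function mymodule_update_8701() {
  $field = \Drupal\Core\Field\BaseFieldDefinition::create('boolean')
    ->setLabel(t('Example flag'))
    ->setDefaultValue(FALSE);
  // The entity definition update manager is the supported API for
  // entity type and field storage definition changes.
  \Drupal::entityDefinitionUpdateManager()
    ->installFieldStorageDefinition('example_flag', 'node', 'mymodule', $field);
}
```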
Comment | File | Size | Author |
---|---|---|---|
#112 | 2976035-112.patch | 162.72 KB | amateescu |
Comments
Comment #2
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
This is the sister issue of #2860654: Field storage CRUD operations must use the last installed entity type and field storage definitions :)
Comment #3
tstoeckler
Yes, that's a great point. In fact this is not just a problem with updating entity types, but also with creating them, when they are created as part of an update, as shown by #2976034: system_update_8004() accidentally installs the 'revision_default' field if run on a 8.5 codebase. Re-titling accordingly.
Comment #4
tstoeckler
So I looked into this a bit.
First I took a look at the code that was committed over in #2860654: Field storage CRUD operations must use the last installed entity type and field storage definitions. I was wondering why, even though the issue title mentions it, the code actually only deals with setting the entity type and not at all the field storage definitions.
We cannot directly set the field storage definitions at the moment, but we can now set the table mapping, so I thought the following might be enough to fix the issue:
I added that to the entity type CRUD methods in `EntityTypeListener` and thought that might fix (some of) the test fails over in #2841291: Fix NOT NULL handling in the entity storage and 'primary key' changes when updating the storage definition of an identifier field. No such luck, however. Even though the entity type is correctly persisted throughout the CRUD operations, the table mapping is not. This is because `SqlContentEntityStorage::setEntityType()` is called at various places down the chain and that (correctly, I guess) resets the table mapping.

So next I set out to avoid all those `setEntityType()` calls. That was not super easily done, but I was able to hack something together just to check whether that actually fixes the issue. The table mapping was then properly persisted without all the `setEntityType()` calls, but before I could check whether this fixes the test problem, this made `block_content_update_8400()` fail in the upgrade path tests, which turned out to be a very interesting case.

`block_content_update_8400()` adds a `published` entity key to Custom blocks and then calls `::updateEntityType()`. It then later adds the published field with `::installFieldStorageDefinition()`. With my hacked-together code, the `::updateEntityType()` call was now using the table mapping built from the last installed field storage definitions, which does not (yet) include the published field. `SqlContentEntityStorageSchema::getEntitySchema()`, however, adds an index on the published column for each entity type with a published key. This then leads to an exception when `$table_mapping->getFieldTableName($published_key)` is called, because at call time, no field storage definition can be found for the published key.

At this point, I gave up for now. I am really not sure what the expected behavior is, to be honest. Should we, instead of using the last installed field storage definitions, build up the field storage definitions from the runtime code but using the last installed entity type? Or is the behavior in `block_content_update_8400()` actually a bug, because it creates an entity key for a field that does not exist yet?

Comment #5
tstoeckler
So having thought about this for a bit, I think that theoretically using the last installed field definitions during updates is the only correct thing to do. However, I also think that `block_content_update_8400()` proves that we cannot actually do that. Even if we accepted the consequence of a behavior change (which is unlikely in its own right), we cannot possibly consider the code in `block_content_update_8400()` incorrect, as it's currently the only way to have a field get a `NOT NULL` constraint in the database. We are of course trying to fix that over in #2841291: Fix NOT NULL handling in the entity storage and 'primary key' changes when updating the storage definition of an identifier field, but nevertheless that is the current state of affairs, so we cannot blame people for using the APIs in the way they currently function, even if they are weird or arguably broken.

So in order to fix this, I think what we need to do is build the field definitions using the dynamic runtime code but with the passed-in (last installed) entity type during entity type CRUD. Will try to cook something up in that direction, but not sure when I will get to it.
Comment #6
tstoeckler
OK, moving this concrete discussion/problem over to #2976040: default_revision can be left around in data table due to broken entity type CRUD after a discussion with @amateescu at DrupalHackCamp. The failing patch is now over there, so we can talk more concretely about the actual bug. Hopefully it will not be a requirement for fixing the bug here.
Comment #7
tstoeckler
As part of the discussion with @amateescu we also discussed the "add field first - then update the entity type" option and both agreed that theoretically it makes the most sense. So just to explore how feasible that actually is, I cooked up a patch for that and realized something interesting due to that. (The patch is over at #2274167-397: [ignore] Patch testing issue if anyone wants to follow along at home.) If we create the field storage definition first and only update the entity type afterwards, we need to make sure that upon installation the field schema is already created in the correct way, regardless of the entity type. This is currently a problem, because whether or not a field will get a `NOT NULL` constraint depends on whether or not it is an entity key. And with this approach a field cannot possibly be an entity key yet at the time of creation. This will be fixed by #2841291: Fix NOT NULL handling in the entity storage and 'primary key' changes when updating the storage definition of an identifier field, though, at which point whether or not a field is `NOT NULL` solely depends on the field itself. So being very optimistic, we can assume that this is at least theoretically possible and ignore it for now.

However, looking at the test failures revealed a similar issue which is (even) more tricky and is already somewhat hinted at in #2976040: default_revision can be left around in data table due to broken entity type CRUD and possibly in #2951279: After 8.5.0 upgrade: Unknown column revision.revision_default as well: with the "add field first - then update the entity type" flow, you cannot add a revision metadata key. When adding the field, it will not yet be a revision metadata key, so it will be added to the data table and data revision table. When updating the entity type and making it a revision metadata key, it is then expected to be in the revision table (and in the revision table only) from then on, but we have no code to actually move it over. And we cannot possibly do that, because that would require batching the update, but we have no way to batch the updating of an entity type.

So not really sure how to proceed here. Having written this, I just thought about an idea that I could try to pursue, not sure what others think about it. We could add the information on whether or not the field is "going to be used as a revision metadata key" as a parameter to `EntityDefinitionUpdateManager::installFieldStorageDefinition()` and have it adapt the generated schema automatically - probably by runtime-adapting the entity type it will pass down into the storage layer. We could have the same logic for the entity keys themselves per the first paragraph. So we could have something like: Having default values for those parameters would also make it backwards compatible.
Thoughts?
Comment #8
tstoeckler
Having thought about the proposal in #7 for a bit, I think it's actually not that bad ;-)

I thought about how to go about implementing it, and what would have to happen is that `EntityDefinitionUpdateManager`, when the `$entity_key` or `$revision_metadata_key` options are passed, would have to fetch the last installed entity type, modify it accordingly and then pass it down to the storage layer when actually creating the field schema (this is where the scope of this issue comes in). I then thought that while it could work, a downside of the approach would be working on "ephemeral"/temporary entity types, while the caller would then have to do the actual adding of the (revision metadata) key afterwards themselves.

But then it dawned on me that we could actually use this approach to fix the fundamental problem that we are facing here. Adding an entity key or revision metadata key must install a field storage definition and update the entity type. Currently we are updating the entity type first, and we discussed above that we should instead install the field storage definition first, but if we pursue the approach in #7 we could actually have `EntityDefinitionUpdateManager::installFieldStorageDefinition()` call `::updateEntityType()` at the end, so that instead of the entity type being "ephemeral" it will actually be the last installed entity type after the operation. That would alleviate the need for the calling code to update the entity type manually, but at the same time it should be fully backwards compatible.

Comment #9
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
I was also thinking about this in the last few days and in my mind I came to the same conclusion: `EntityDefinitionUpdateManager::installFieldStorageDefinition()` should call `updateEntityType()` at the end if any key or revision metadata key was passed in.

So a big +1 for #7 and #8, let's try to implement it and see how it looks :)
Comment #10
tstoeckler
Yay, thanks for that. I will see if I find some time for it. But it's good to have some reassurance.
Comment #11
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
I started poking around a bit to see whether we can use the last installed field storage definitions in `\Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema::onEntityTypeUpdate()` and I quickly found out from `\Drupal\KernelTests\Core\Entity\EntityDefinitionUpdateTest::testEntityTypeUpdateWithoutData()` that this approach is inherently incompatible with our "automatic entity updates" system, because automatic updates rely on the dynamic runtime field storage definitions, like you also observed in #4/#5.

After spending more than a day deep down in this code, the only sane conclusion that I could come up with is that we need to remove the automatic entity updates functionality (i.e. make `\Drupal\Core\Entity\EntityDefinitionUpdateManagerInterface::applyUpdates()` a no-op) and force every update to be explicit about which fields are being added/updated/deleted, otherwise we'll be stuck with a completely broken entity update system for the rest of D8's life cycle.

In other words, the only time when we can (and need to) use the live entity type and field storage definitions is when an entity type is first installed. Every other operation needs to be made through explicit entity type updates and field storage installations/updates in `hook_update_N()` functions.

The most recent example of why we cannot rely on dynamic runtime definitions is the major brokenness of `system_update_8501()` and its fallout: #2951279: After 8.5.0 upgrade: Unknown column revision.revision_default

Comment #12
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
FWIW, here's what I tried before giving up and venting in #11 :)
Comment #13
plach
I just had a decently long discussion about this with @amateescu: the outcome is that I'm definitely on board with ripping automatic entity updates out of core entirely.

Automatic entity updates were initially introduced as part of my vision to allow dynamically changing entity type revision and translation support. In this light, core would only be responsible for handling no-data scenarios, thus being consistent with the D7 Field API's original handling of field tables. OTOH, cases requiring data migrations were meant to be handled in contrib by a dedicated module that never came into being.

In the meantime we discovered that automatic entity updates were dangerous in the context of regular DB updates, so we already decided to ban them from regular/real-life workflows before D8 was even released as stable. At that point automatic entity updates were only meant to improve DX during development. We didn't foresee people relying on `drush entup` to fix broken updates and making things even worse, but apparently that's exactly what people are doing, because they can.

Given the criticality of potential issues arising from automatic entity updates, I think we'd be well justified in removing them, even if this is formally a hard BC break. Mitigating factors are:

- They are not really supposed to be used in any production scenario.
- They are not really useful, unless someone is trying to build an `entity_storage` module implementing the original vision. Even in this case, we now have the `SqlContentEntityStorageSchemaConverter`, which is a good starting point to implement this functionality without relying on them.
- The logic could be moved to `devel` or even a more obscure `devel_entity_updates` module along with the `drush entup` command, so it wouldn't be lost, just moved away from core, where it's mostly harmful.

@amateescu and I agreed that as part of finally fixing entity updates we need to make sure we always rely on installed definitions, except when installing a new entity type, so we will have to fix this issue and all the related ones to finally find ourselves in a good place.
Comment #14
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
If anyone wants to take a look, here's a WIP patch with my current approach after the discussion from #13 :)
Comment #16
hchonov
I like the patch and all the decisions that have been made.

One note: currently the live entity type is used when installing a new base field in `hook_update_N()`, while the base field definition is returned from `\Drupal\Core\Entity\ContentEntityBase::baseFieldDefinitions()` based on a specific key set on the entity type. If we use the original entity type, then the field will not be returned, but it is required by the storage when installing the field. This is solvable by requiring that in such cases first the entity type is updated through `\Drupal\Core\Entity\EntityDefinitionUpdateManager::updateEntityType()` and then the field is installed. This is something we would have to explicitly outline in the CR.

I see that @tstoeckler talked about this case where the new field can be an entity or revision metadata key, but custom code could return the field in `\Drupal\Core\Entity\ContentEntityBase::baseFieldDefinitions()` based on another property of the entity type. I think that the "add field first - then update the entity type" flow is not universally correct, and therefore it would be better to use it the other way around - or is there any reason why we can't?

----

I think the removed test is testing a use case where a field is being installed which is not being returned by `\Drupal\Core\Entity\ContentEntityBase::baseFieldDefinitions()`. Even with the decisions made here this test is still valid, so let's put it back and probably describe it better :)

Comment #17
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
I haven't posted my full train of thought in #14 because of the late hour, but you're right, I arrived at the same conclusion: entity type updates need to be done first, followed by field storage operations.

As for the removed test, it is no longer valid with this new paradigm of not using live definitions for entity type CRUD operations. Take the following example:

Let's say a field is added to the code base in Drupal 8.6.0, with a corresponding `hook_update_8601()` implementation. Then, the field is removed from the code base in Drupal 8.6.1, again with a corresponding `hook_update_8602()` implementation. A site that is updating from 8.5.x directly to 8.6.1 (where the field doesn't exist anymore) should be able to run successfully through both `hook_update_8601()` and `hook_update_8602()` (in that order), with the final result being that the field was added to and then removed from the last installed definitions.

Comment #18
hchonov
Currently our storage implementation requires that a field which is being installed is also returned either by `ContentEntityBase::baseFieldDefinitions()` or by some hook. You cannot remove it from the code base until `hook_update_8602()` has been executed, but you should also not return it in the code base after `hook_update_8602()` is executed - so writing such an update requires adding a condition to your code base so that the field is only returned if the schema version of the module is < 8602. What do you think - how many custom and contrib developers are aware of this?

I like this example because we could add a new test update testing exactly this behavior :)
Comment #19
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Can you point to the storage code where this happens? Because that would be a bug that needs to be fixed.
Comment #20
hchonov
First, to describe how I came to this conclusion: I was debugging problems for a colleague of mine. He had used `hook_entity_field_storage_info()` to introduce a new BaseFieldDefinition, but didn't manually set the name and entity type on the field definition itself. Later I found out that the EntityFieldManager does not do this automatically for fields introduced through this hook, but only for fields introduced by the hooks `entity_base_field_info` and `entity_bundle_field_info`. When the field was installed, I got exceptions from the storage because it had no name, which in itself didn't cause errors, but led to subsequent ones. In `\Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema::createEntitySchemaIndexes()` there is a loop in which the $specifier was an array with two NULL values, which crashed the following code, because there $item was NULL instead of a string or array.

The problem was solved by manually setting the name on the field definition returned by `hook_entity_field_storage_info()`.

This is how I came to the conclusion that the storage is accessing the field definitions as returned by the entity class and by the hooks.
Comment #21
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
The most important parts of the story are missing :) How was the field installed? What did you do after setting the name on the field storage definition in your `hook_entity_field_storage_info()` - just cleared the cache / reinstalled the field / nothing at all?
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
I've started to split out the test coverage updates needed by the solution agreed upon here; otherwise this patch would get way too big.
The first one is: #2982759: EntityUpdateToRevisionableAndPublishableTest wrongly assumes entities should be converted to revisionable and publishable in a single step
Comment #23
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
While looking into the next steps needed here, the problems that we need to overcome are tests like `\Drupal\KernelTests\Core\Entity\EntityDefinitionUpdateTest::testEntityTypeUpdateWithoutData()`, which calls `\Drupal\Tests\system\Functional\Entity\Traits\EntityDefinitionTestTrait::updateEntityTypeToRevisionable()` and then expects that calling `\Drupal\Core\Entity\EntityDefinitionUpdateManagerInterface::applyUpdates()` will automatically create all the revisionable tables using the live (in code) field storage definitions.

The major architectural change brought by the decision to stop using live definitions in `\Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema` is that a few methods from that class need to receive more parameters. For example:

- `SqlContentEntityStorageSchema::onEntityTypeCreate(EntityTypeInterface $entity_type)` - creating the schema for an entity type is an operation that needs to work with a given set of entity type and field storage definitions
- `SqlContentEntityStorageSchema::onEntityTypeUpdate(EntityTypeInterface $entity_type, EntityTypeInterface $original)` - similar to the above, updating the schema of an entity type also needs to receive a set of (possibly) updated field storage definitions
- `SqlContentEntityStorageSchema::onEntityTypeDelete(EntityTypeInterface $entity_type)` - this one doesn't need any change, because it should always work with the last installed entity type and field storage definitions

Changing the signature of those methods would be quite an API break, since they are provided by the very generic `EntityTypeListenerInterface`, so I think the only realistic option that we have is to improve `\Drupal\Core\Entity\Sql\SqlContentEntityStorageSchemaConverter` and make it the "official" API for converting an entity type to (non-)revisionable and (non-)translatable, with or without data, through a set of helper methods on `EntityDefinitionUpdateManager`.

Those new methods could be something like `updateEntityTypeToRevisionable($entity_type, $field_storage_definitions)`, `updateEntityTypeToTranslatable($entity_type, $field_storage_definitions)` and `updateEntityTypeToRevisionableAndTranslatable($entity_type, $field_storage_definitions)`.

Are there any other options that I missed? In any case, thoughts on the above would be very helpful :)

Edit: fixed typos.
Comment #24
hchonov
I've prepared a patch for you, so that you can test with it if you want. The places where there are if statements for debugging are where you can observe that the dynamic field storage definitions are being used when a field is installed.

1. Install Drupal. It is not necessary to create content.
2. Implement hook_entity_field_storage_info() without manually setting the name on the field definition.
3. Implement hook_update_N() to install the field through the entity update manager.
4. Rebuild the cache.
5. Run the update.
6. Watch how in `\Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema::createSharedTableSchema()` the call to `$column_names = $table_mapping->getColumnNames($created_field_name);` evaluates to an empty result.
7. Set the name on the field storage definition manually in the implemented hook_entity_field_storage_info() and watch how in step 6 the column names are properly populated.
Comment #25
hchonov
Now I've tested further, and I think that #2960046: Allow the table mapping to be instantiated with a custom entity type definition, on which your current patch seems to be based, fixes the problem I am talking about.
This is definitely going in the right direction. I am sorry if I've caused some confusion.
@amateescu++
Comment #26
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Yeah, all these patches improve quite a few things in this area; glad your problem is one of them :)
So we're back to "feedback needed" on #23.
Comment #27
hchonov
I personally agree with every point in #23. If we have to break the API to solve a serious drawback of the storage, then I think it is justified. I am not sure if we can do it on short notice in 8.6.x, but 8.7.x should be fine. The only problem that might arise is if there are updates in contrib relying on the current functionality.
What about if we decide to leave the old functionality there for already installed sites and use the new one for newly installed sites? Probably the overhead of doing this and maintaining both isn't worth it?
Comment #29
plach
After discussing this with @amateescu, it turns out this is "semi-blocked" on #2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data. If we don't do that first, we need to comment out some tests in order for the patch to be green.
Comment #30
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Re-uploading the patch from #14, which seems to apply just fine on 8.6.x.
Comment #31
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Comment #33
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
This should fix most of the upgrade and kernel tests. Let's see what's left :)
Comment #35
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
A few more test fixes.

Edit: `EntityUpdateToRevisionableAndPublishableTest` will be fixed by #2982759: EntityUpdateToRevisionableAndPublishableTest wrongly assumes entities should be converted to revisionable and publishable in a single step, so it would be nice to get that in :)

Comment #37
plach
#2982759: EntityUpdateToRevisionableAndPublishableTest wrongly assumes entities should be converted to revisionable and publishable in a single step was committed, triggering a retest.
Comment #38
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
@plach, note that #2982759: EntityUpdateToRevisionableAndPublishableTest wrongly assumes entities should be converted to revisionable and publishable in a single step was only committed to 8.7.x, while the patch in #35 is being tested for 8.6.x. Anyway, it's been reverted in the meantime, so we still have the same number of failures.
I'm starting to open separate issues for the rest of the fails, because at least some of them are pre-existing broken tests. The first one is #2989469: MediaUpdateTest is broken.
Comment #39
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
This will fix most of the test failures. After #2989469: MediaUpdateTest is broken gets in, the remaining failures are only those that test the functionality removed by this patch. We need to decide what we want to do about them, and we have two options:
1) if we want to fix this bug in 8.6.x, we can disable the tests and bring them back in #2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data, which is a 8.7.x-only issue
2) if we decide to fix this in 8.7.x, we have to postpone this issue on #2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data
In any case, the patch is ready for final reviews :) I'm not very happy with the hacks introduced by this interdiff to keep `SqlContentEntityStorageSchemaConverter` working, but I'm not very worried about them, because they will be removed in the issue mentioned above.

Updated the IS with the outcome of my discussion with @plach from #13.
Comment #41
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
#2989469: MediaUpdateTest is broken is in, we should be down to 7 fails now :)
Comment #42
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
And NR for #39.
Comment #43
plach
Comment #44
plach
Overall this looks good to me; there are a few lovely clean-ups as a bonus :)
Why are we removing this?
What about defining accessor methods for this property and making it protected? So we can formally declare it as @internal. Also, I was wondering whether there is any chance that the property might be serialized and stored in the latest installed definitions. If this were the case we could get weird bugs.
Huh? Is this a hack to temporarily exclude these fields from schema updates/testing? If so, can we document it somehow?
Can we rename `$original` to `$entity_type`? It would make the new code easier to follow.

Why are we removing this case? I don't understand why this is not supported with this patch.
Comment #45
plach
I discussed this with @alexpott and @catch and we all agreed that we would be more comfortable with this change happening in 8.7.x (hopefully ASAP), to have more time to test it and possibly get all the satellite issues in as well.
Comment #46
plach
Per #39, postponing this on #2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data.
Comment #47
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Re #44:
1. For no good reason :) Brought it back.
2. This was just a workaround in case we wanted to fix this without #2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data. Now that we have a decision to depend on that issue, the workaround will not be needed anymore and can be removed when we rebase this patch on top of that one.
3. That's not a hack at all, it's actually the right thing to do, because `\Drupal\Tests\field\Functional\Update\FieldUpdateTest` and `\Drupal\Tests\file\Functional\Update\FileUpdateTest` are not testing the values of those fields; they test field settings, so `drupal-8.views_entity_reference_plugins-2429191.php` and `drupal-8.file_formatters_update_2677990.php` are not even creating the database schema for those fields.
4. Sure thing, done.
5. See #17 for why that use-case is not valid anymore with this patch.
Comment #48
blazey CreditAttribution: blazey at Amazee Labs commented
Hey, here's the patch from #47 re-rolled on top of #2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data (#29). The test failures are the same as in #39 (report).
Comment #49
blazey CreditAttribution: blazey at Amazee Labs commented
Comment #50
blazey CreditAttribution: blazey at Amazee Labs commented
... and here's the diff file
Comment #53
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Now that #2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data is very close, here's what's needed to finish this one.
Comment #55
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Some tests were updated since the last patches were posted here and another one needs to use the entity_test_update entity type now.
Comment #56
plach
The parent issue is in.
Comment #57
jibran
#2984782: Add an API for converting entity type schemas to (non-)revisionable/(non-)translatable, with or without data is in, so these can be removed.
Comment #58
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Rerolled for now. I think that removing the _skip_schema_operation stuff is blocked on #3004642: Deprecate SqlContentEntityStorageSchemaConverter in favor of the new API provided by EntityTypeListenerInterface::onFieldableEntityTypeUpdate(), though.
Comment #59
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
That issue is now RTBC, so we can get rid of the _skip_schema_operation cruft. Not sending this patch to the testbot because it will fail until the other patch is committed.
Comment #60
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
To make this huge patch easier to review, I've split the test-related changes from the "interesting" part of the patch :)
Comment #61
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Also extracted the custom_storage: true change into its own issue: #3031086: Configurable fields that are created only for update path tests should be defined with custom_storage = TRUE
Comment #62
plach
I had another look at the review patch:
I have a feeling of deja-vu in asking this, but is it safe to rely on the field storage definitions coming from the object state while we use a passed entity type definition? Couldn't there be a mismatch?
Nit: can we rename this to entity_test_update_field_data for consistency?
Comment #63
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
The problem is... we don't really have a choice :)
SqlContentEntityStorageSchema::getEntitySchema() is one of the two main extension points of the storage schema, and most entity types subclass it and override that method, so we can't change its signature. But I think it's fine, because we set the proper field storage definitions in the object state whenever we need to: in onEntityTypeCreate() and onFieldableEntityTypeUpdate().
We could, but that means making *a lot* of changes to files/code that are not related to this issue, because \Drupal\system\Tests\Entity\EntityDefinitionTestTrait::updateEntityTypeToTranslatable() is the one which controls the name of that table in every definition update test :)
I've opened a separate issue for that: #3032611: Rename the data and revision data tables of the entity_test_update entity type to be consistent with the default table naming scheme
Comment #64
plach
Ah, sorry, didn't realize it was a far-reaching change. +1 to the follow-up.
Comment #65
plach
Both "blockers" are committed, let's get a new patch!
Comment #66
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Here we go.
Comment #68
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Forgot to include the test changes from #60.
Comment #69
plach
Final review (hopefully :)
According to the documentation, the hash MAY be reused when the object is destroyed, so it seems it's not guaranteed that a clone of the original object gets the same hash. Couldn't we use serialize() instead? Do we even need this cache?
Just curious: is this actually needed? It seems those methods don't rely on the internal state.
Shouldn't we actually check whether a live definition of the field exists before proceeding?
::class
We will need to mention we no longer support these use cases in the CR.
::createMock()
Comment #70
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Thanks for the final review! :P
Re #69:
EntityDefinitionUpdateTest and recording the duration in different scenarios:
Which shows that the cache is somewhat useful, but if you're really worried about spl_object_hash() collisions, it's better to just remove it than to use serialize(). That's what I did in this patch.
Here's the change record: https://www.drupal.org/node/3034742
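The spl_object_hash() reuse behavior that motivated removing the cache can be demonstrated in plain PHP; this standalone sketch is illustrative and not part of the patch:

```php
<?php

// spl_object_hash() is only unique among objects that are alive at the
// same time: once an object is destroyed, its internal handle (and thus
// its hash) may be reused by a newly created object.
$first = new stdClass();
$hash_of_first = spl_object_hash($first);

// Destroy the first object, freeing its handle.
unset($first);

// A new object created afterwards is often assigned the same handle and
// therefore the same hash, which is why a cache keyed by
// spl_object_hash() can return stale entries for a different object.
$second = new stdClass();
var_dump(spl_object_hash($second) === $hash_of_first);
```

Whether the hashes actually collide depends on the engine's allocation behavior, but the possibility alone is enough to make such a cache unsafe.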
Comment #71
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
While discussing this patch with @plach, he pointed out that I forgot to actually make \Drupal\Core\Entity\EntityDefinitionUpdateManagerInterface::applyUpdates() a no-op and deprecate it. So... here it is :)
Comment #73
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
I think I got them all.
Comment #75
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
I think that was a random fail in a JavaScript test.
Comment #76
plach
We're missing the deprecation test, if I'm not mistaken.
::requiresEntityDataMigration()
also checks whether the original storage class is defined; I think we should reuse that logic as well. Actually, I think we should also check that the storage class didn't change, as that will also imply an unsupported change.
No longer needed.
Instead of removing it, can we actually repurpose this test to provide test coverage for the table structure changes exception?
Comment #77
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Re #76:
applyUpdates()
so I don't think there's anything we can test...
Comment #79
plach
Thanks, the interdiff looks great!
But we can test that the error is triggered, right? Otherwise we might as well have an empty method body...
Comment #80
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Discussed with @plach and we decided to leave the additional check on whether the storage class has changed to a follow-up.
Added a deprecation test for EntityDefinitionUpdateManager::applyUpdates().
Comment #81
plach
Looks great, thanks!
Comment #82
plach
Comment #83
plach
Comment #84
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Updated the API changes section from the issue summary.
Comment #85
plach
Comment #86
plach
I spoke with Moshe about the plan proposed in the IS: he's +1 on removing entup from drush core, but -1 on adding it back to devel; he'd prefer a separate module, so that's the new plan to keep BC :)
Comment #87
larowlan
Doesn't apply to 8.7 anymore.
Shouldn't we add the class prefix here? These aren't global constants.
Any reason we can't make the first parameter required? It could possibly reduce the chance of side effects by making it explicit for callers.
I'm not sure checking static is enough here, because kernel tests do the magic 'walk through all the parents and merge to form a unique set of values for modules' logic. We might need to do something similar here for the sake of subclasses in contrib/client land, especially given these are base classes designed for extension.
Normally we also do the BC thing for people, so their code keeps working. Here we just trigger the error and bail out. Is there a reason for that? If so, we should have a comment saying why.
Also, the change record doesn't give an example of how to 'update manually instead'. Looking at the test trait, it appears to be pretty involved, so we should definitely be providing sample code for folks to follow.
Comment #88
larowlan
For my comment above.
Comment #89
catch
I'm not fully up to date with the comments yet, went straight to the patch, so apologies for any duplication:
nit: ID
Is this the right error message in the exception? It looks like the logic has changed but the message has not.
This might be explained in the issue somewhere, but why is this necessary? It could probably use its own mini change record. This seems like the more disruptive thing that the patch does. The actual change itself given it makes things more robust I think is good for a minor.
Comment #90
catch
Removing the release manager review tag; the change record requests already made here ought to be enough, I think. I can't think of special release management considerations here beyond that, really.
Comment #91
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Thank you both for the reviews!
@larowlan, re #87:
This is the part of the CR which tells what needs to be done instead:
I guess you think that's not explicit enough, right? I'll add some code examples there.
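Per the plan above, the CR examples show explicit update hooks instead of automatic updates. A minimal sketch of that pattern (the module name mymodule, the field name my_new_field, and the node entity type are placeholder assumptions, not from the patch):

```php
<?php

use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Installs a new base field definition explicitly, instead of relying
 * on the removed automatic entity updates. All names are hypothetical.
 */
function mymodule_update_8701() {
  $field_definition = BaseFieldDefinition::create('boolean')
    ->setLabel(t('My new field'))
    ->setDefaultValue(FALSE);

  \Drupal::entityDefinitionUpdateManager()
    ->installFieldStorageDefinition('my_new_field', 'node', 'mymodule', $field_definition);
}
```

The key difference from the old workflow is that the update hook states exactly which definition changes, rather than letting the system diff and apply everything it finds.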
@catch, re #89:
\Drupal\field\Entity\FieldStorageConfig::preSave()
triggers a call to \Drupal\Core\Field\FieldStorageDefinitionListener::onFieldStorageDefinitionCreate()
\Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema::onFieldStorageDefinitionCreate()
I'll add a separate CR for this, so keeping that tag for now.
Comment #92
plach
@amateescu:
Re #87.4, I was quickly discussing this in Slack with @alexpott and he suggested not to silence the deprecation error: given that we are providing no replacement for that functionality, we should be more explicit about the deprecation.
Comment #93
larowlan
Yep, code examples are what I'm asking for, so we tell people the right way to do it.
Comment #94
plach
Interdiff looks good to me. Aside from #92 and the CR updates, I think this is good to go, if the testbot agrees.
Comment #95
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Updated the CR from https://www.drupal.org/node/3034742 to include code examples and added https://www.drupal.org/node/3036689 for #89.3.
I also agree with not silencing the deprecation :)
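For context, a non-silenced deprecation means calling trigger_error() without the @ suppression operator, so the error always surfaces. A rough sketch of the pattern (the class and message below are illustrative, not the literal core code):

```php
<?php

// Sketch of a deprecated no-op method. Because trigger_error() is not
// prefixed with '@', the deprecation is never silenced and callers are
// always made aware that the functionality is gone.
class EntityDefinitionUpdateManagerSketch {

  public function applyUpdates() {
    trigger_error('EntityDefinitionUpdateManagerInterface::applyUpdates() is deprecated in 8.7.0 and will be removed before Drupal 9.0.0.', E_USER_DEPRECATED);
  }

}
```

Core normally silences deprecations with @ so that only opted-in environments see them; skipping the @ here is deliberate, since no replacement for the functionality is provided.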
Comment #97
plach
Crediting @alexpott for #92.
Comment #98
plach
Interdiff looks good to me and the CRs have plenty of information. Back to RTBC.
Comment #99
plach
Comment #100
catch
OK, I missed the change, but the new message just says 'can't change for a new entity type'; it gives no indication of why, which the older message did. Is there a way to make it more descriptive? (Leaving at RTBC, just asking.)
Comment #101
jibran
I think we should add examples of deleting an existing entity type and a field storage definition to https://www.drupal.org/node/3034742
Comment #102
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
@catch, I'm a bit confused by #100, so maybe it's easier if we compare the two messages next to each other, with an example entity type ID:
The SQL storage cannot change the schema for an existing entity type (block_content) with data.
The SQL storage cannot change the schema for an existing entity type: block_content.
So the only difference (aside from removing the parentheses) is the end of the message, the "with data" part. I'm not sure why you see the new message as problematic :)
@jibran: uninstalling entity types is broken (#2655162: Fatal error when updating or deleting an entity type after removing its class/definition from the codebase) and I wouldn't really advertise it in the CR. Added an example for uninstalling a field.
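The uninstall counterpart follows the same pattern: load the installed storage definition and hand it back to the update manager. A sketch with hypothetical names (mymodule, my_old_field, node):

```php
<?php

/**
 * Uninstalls a field storage definition explicitly. All names here are
 * placeholders, not from the patch or the CR.
 */
function mymodule_update_8702() {
  $definition_update_manager = \Drupal::entityDefinitionUpdateManager();
  $storage_definition = $definition_update_manager
    ->getFieldStorageDefinition('my_old_field', 'node');
  $definition_update_manager->uninstallFieldStorageDefinition($storage_definition);
}
```

Note that the definition is fetched from the *installed* definitions, which matches the direction agreed on in the issue summary: always rely on installed definitions except when installing a new entity type.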
Comment #103
catch
"The SQL storage cannot change the schema for an existing entity type": this isn't true all the time, right? Sometimes it can change the schema, otherwise we'd not be able to make entity types translatable/revisionable etc., and sometimes it can't. It gives the impression that it's impossible to change the schema (whether or not there's existing content), as opposed to specifically via this code path.
Comment #104
jibran
IMO #2655162: Fatal error when updating or deleting an entity type after removing its class/definition from the codebase is showing that only one use case is broken. After committing this patch, wouldn't this work if we don't remove the definition from the code?
If yes, then we should add this to the CR.
Comment #105
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
@catch, ahh, I finally got it! Discussed a bit with @plach and he suggested the change from this interdiff.
@jibran, ok, added it.
Comment #107
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Rerolled. HEAD moves fast these days :)
Comment #108
jibran
Thanks, @amateescu!
Maybe this is not relevant here, but I'm a little confused.
After fixing this bug, if I change the cardinality of an existing configurable field on staging, export my config and then import on prod; would it update the cardinality of the configurable field or do I have to write an update hook to make the same change on prod?
Comment #109
plach
Your regular workflows will remain unaffected; the only actual change is that drush entup won't work anymore until we reimplement it in contrib, which should be as easy as copying the previous function body of the ::applyUpdates() method. However, as the IS states, this code will live in a module that depends on devel, to make it clear that drush entup is a developer tool and should never be part of a production workflow.
Comment #110
plach
Added a paragraph to the CR summarizing #109.
Comment #112
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
Rerolled.
Comment #113
catch
Really nice to commit this one, I know how many issues it took to get here.
Committed fc7f649 and pushed to 8.7.x. Thanks!
Comment #114
larowlan
Comment #115
jibran
Not pushed yet.
Comment #116
jibran
Comment #117
alexpott
We did discuss this on the initiative call yesterday. I raised concerns about the breaking change. While I understand the motivation to remove hard-to-maintain code, especially when it is broken in some situations, there is no way we can possibly guarantee that no one depends on this functionality for their production site / custom code.
Just because someone is not supposed to doesn't mean that they won't.
We could trigger a recoverable warning if this code is used, to say that it shouldn't be used in production and will be removed in Drupal 9, instead of breaking our BC promise and removing it from Drupal 8.
Comment #118
alexpott
On the call, in response to my concerns, @amateescu said it would be possible to go back to a version of the patch that left the functionality in core but made the necessary changes so core does not use it.
Comment #119
Berdir
I brought up similar concerns yesterday in a discussion with @amateescu. I would be fine with restoring that in a follow-up; I think it is fine for that to come back during the alpha phase?
Comment #120
catch
I think having the code in core is dangerous; people have broken their sites using drush entup to 'fix' updates rather than restoring a backup. Having said that, I'm OK with it being in core with E_WARNING or E_ERROR. I pushed the patch before seeing these updates, though; can we open a follow-up? Or if people feel strongly we could revert and recommit, but I'll mark as fixed for now.
Comment #121
jibran
Published the change records.
Comment #122
Wim Leers
This broke two tests in JSON:API against 8.7.x:
\Drupal\Tests\jsonapi\Kernel\Context\FieldResolverTest
\Drupal\Tests\jsonapi\Functional\EntityTestTest::testGetIndividual()
See https://www.drupal.org/pift-ci-job/1222392.
Neither of the change records help me fix those failures :(
Comment #123
Wim Leers
🙏 Kudos to @amateescu for helping me fix JSON:API's tests: #3038604: Fix tests following a change in Drupal 8.7.0: #2976035. It would have taken me many hours to figure out the EntityTestTest solution!
Comment #124
amateescu CreditAttribution: amateescu for Pfizer, Inc. commented
I've updated one of the change records of this issue (https://www.drupal.org/node/3036689) to specify clearly that kernel tests must install the definitions of the entity types that they're testing.
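In practice that means a kernel test installs the schema explicitly in setUp() before creating or loading entities. A fragment of the pattern ('entity_test' is just an example entity type ID):

```php
// In a \Drupal\KernelTests\KernelTestBase subclass: install the entity
// type's storage schema explicitly, since it is no longer created
// implicitly by automatic entity updates.
protected function setUp() {
  parent::setUp();
  $this->installEntitySchema('entity_test');
}
```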
Comment #127
alexpott
Just to say I'm happy to be overridden by @catch and that the commit has occurred. Hopefully the beta update programme and contrib will surface anything serious before 8.7.0 and give us time to adjust as necessary. These things are a hard call, especially when the incumbent code is doing dangerous things.
Comment #128
jibran
DER 1.x and 2.x also failed after this change, so I committed https://cgit.drupalcode.org/dynamic_entity_reference/commit/?id=4b25207 and https://cgit.drupalcode.org/dynamic_entity_reference/commit/?id=fd19cd0
Comment #129
tstoeckler
Awesome that this landed! Impressive work by everyone, in particular @amateescu!
Now that we have fixed this, maybe someone can take a look at #2976040: default_revision can be left around in data table due to broken entity type CRUD
It's a problem in the update path that was caused by this bug. So with this in, we can now be sure that things like that will never happen again, but we still need to fix existing installations.
Comment #130
jhedstrom
This issue appears to have introduced a quite odd failure in kernel tests if they attempt to install config which depends on yet-to-be-installed entity schemas. I've opened #3041224: Kernel tests fail with bad error if attempt to install config before entity schemas.
Comment #131
plach
@jhedstrom:
That is documented in https://www.drupal.org/node/3036689.