I deleted a feed in the Feed Aggregator (after deleting the items in that feed), ran cron, and the feed reappeared without its title. I exhausted other remedies such as clearing caches, and even used Backup & Migrate to move to a newly installed database.
My fix was to search the MySQL database for a unique term in the immortal feed ("yahoo"), and then delete the matching rows in the queue and watchdog tables.
I do not know the function of the queue and watchdog tables. Perhaps scheduled feed-aggregator updates are stored in one or both. If so, deleting a feed should also clear its associated pending updates.
Also, under "Edit feed," the statement that the update interval "Requires a correctly configured cron maintenance task" needs clarification. Are feeds updated on the longer of the feed update interval and the cron interval? Can a feed update more frequently than cron runs?
Comment | File | Size | Author |
---|---|---|---|
#82 | 1067532-82.patch | 5.87 KB | dcam |
#38 | interdiff.txt | 1.16 KB | ParisLiakos |
#38 | aggregator-1067532-38.patch | 3.52 KB | ParisLiakos |
#36 | interdiff.txt | 1.05 KB | ParisLiakos |
#36 | aggregator-1067532-36.patch | 3.51 KB | ParisLiakos |
Comments
Comment #1
yoroy commented: Can't reproduce this on my local D7 and D8 installs. Does it still happen for you? If so, maybe more specific info about the PHP and MySQL versions used might help.
Comment #2
mr.baileys marked #1208912 ("Aggregator deleted feed magicaly reappears as a duplicate") as a duplicate of this issue.
Comment #3
Anonymous (not verified) commented: Same problem on my D6 site. The delete link is missing for the feed and feed-item content types.
To fix it, call the delete option directly in the address bar as a URL:
www./admin/content/node-type/Feed/delete
www./admin/content/node-type/Feed-item/delete
Comment #4
LIQUID VISUAL commented: I, too, find that a deleted feed reappears with no title. It is more of a problem here because a field has more than 255 characters, so an error message is produced. This is in D7.8.
Comment #5
LIQUID VISUAL commented: I was able to remove the feed by using phpMyAdmin to search all database tables for the title of the feed, then deleting the references in the aggregator and watchdog tables.
Comment #6
yoroy commented: Ok, this is really happening then :)
Comment #7
LIQUID VISUAL commented: Yes. If it helps: deleting only the aggregator table references deleted the aggregator items from the site, but running cron brought them back again. It was only after re-deleting the aggregator table references AND deleting the watchdog table references that the problem ceased to recur.
Comment #8
BenStallings commented: I can confirm that I was able to fix this problem in phpMyAdmin by A) deleting the problem feed from the aggregator_feed table, B) deleting all aggregator% entries from the queue table (since I was unable to distinguish which ones pertained to the problem feed), and C) running cron to verify that the problem was gone. I did not need to do anything to the watchdog table. However, I'm not sure how to get the rest of my feeds back into the queue, or whether that is necessary. Editing and saving each of the feeds did not cause them to reappear in the queue.
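For anyone more comfortable with Drupal's database API than with phpMyAdmin, steps A) and B) can be sketched as two Drupal 7 db_delete() calls (run, for example, via drush php-eval). This is only an illustration: it assumes the default D7 aggregator_feed and queue schema, and the URL below is a placeholder for the zombie feed's actual URL. Back up the database first.

```php
// Sketch of the cleanup described above, using the Drupal 7 database
// API. The URL is a placeholder; substitute the zombie feed's URL.
// Table prefixes are handled automatically by db_delete().
db_delete('aggregator_feed')
  ->condition('url', 'http://example.com/feed.xml')
  ->execute();

// Drop every queued aggregator job. Individual jobs are hard to tell
// apart, because the feed is serialized inside the data blob.
db_delete('queue')
  ->condition('name', 'aggregator%', 'LIKE')
  ->execute();
```

Running cron afterwards (step C) should re-queue the remaining feeds on their normal update schedule.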
Comment #9
Pomliane commented: Same issue in D7.12.
When the feed is corrupted, this leads to an error at each cron run: "The website encountered an unexpected error. Please try again later."
If one replaces the feed address, some feed items from the new source can be fetched before the new address eventually gets replaced by the old one and the issue comes back. In that case, the "aggregator/sources/source_id" page displays a source with feed items that do not correspond to it.
For both reasons, I think this bug has at least a priority of "normal", if not "major" (especially in the reduced scope of aggregator.module).
Comment #10
vineet.osscube commented: I have tested the issue and found that when I delete the feed and confirm the deletion, it reappears without any items under the feed name.
To solve this issue I have attached a patch file. Please find it attached and kindly review it; it might be a solution.
Comment #12
Québec commented: Hi,
I am having the same problem: a "ghost" feed that keeps coming back without a title, blocking (if I understand correctly) the cron process.
I have tried to erase the feed directly from phpMyAdmin, but I am still missing something: the feed keeps coming back. Which rows exactly need to be erased?
In the queue table I cannot identify the feed. Is there a way to do so?
Thanks.
Comment #13
Québec commented: Hi,
I have managed to erase all the feeds blocking the cron job. Cron jobs work well: there are no alerts in admin/reports.
But if I run cron manually, I get a white page. I have to refresh the browser page twice to see the cron page again, with a message telling me that the cron job went well (no error messages in the reports).
What could possibly cause this kind of behavior?
Thanks.
Comment #14
rsjaffe commented: I had the same problem, with the feed coming back without a title. Removing the feed from the aggregator_feed table resulted in it reappearing when cron was rerun; clearing the site cache didn't help.
BenStallings' approach in #8 worked for me.
Comment #15
saweyer commented: I too am having problems with a zombie feed causing errors.
I've just migrated to 7.14.
This feed worked fine in 6.24: http://feeds.feedburner.com/uua/Lxnn?format=xml
However, in 7.14 it's causing cron to fail and putting an error in the log:
PDOException: SQLSTATE[22003]: Numeric value out of range: 1264
Out of range value for column 'timestamp' at row 1: ... [:db_insert_placeholder_5] => -2209143600
I was also having problems removing the feed via the UI, and also directly from the aggregator_feed table,
until I found this thread and deleted everything from the queue table. Then cron worked fine and the zombie feed did not reappear.
I then re-added the above feed and got the same PDOException error message.
So now my two good feeds aren't updating (since I removed everything from the queue); I didn't see anything above about how to re-enable those.
And my bad feed is updating but causing errors, so I'll remove it again.
Suggestions?
Steve
Comment #16
saweyer commented: OK. I dropped the feeds and items from the aggregator_feed and aggregator_item tables.
I re-created my two good feeds, ran "update items", and they worked properly.
I tried adding the third feed, this time without the ?format=xml at the end, i.e.,
http://feeds.feedburner.com/uua/Lxnn
but I still got PDOException errors. Does this imply that the feed has something bad in it, even though it works fine in Drupal 6.26?
(Or maybe Drupal 7.14 is just processing it "properly" and getting the error?)
For now, I'll just leave that feed out.
Steve
Comment #17
Québec commented: I think it has something to do with the length of the URL:
- http://drupal.org/node/218004
- http://drupal.org/node/1622974
Deleting the feed from the DB broke something in my D7 install: I still get a white page when I run cron manually. I have to reload the page many times to make it work.
Hoping for a 7.15 version with a working feed module.
R.
Comment #18
SilviaT commented: I've got the same problem here: after deleting a feed, it reappears with an empty title. This prevents all the feeds from updating when cron runs and makes the aggregator module quite useless.
Comment #19
allaboutmuine commented: I could reproduce the problem this morning with Drupal 7.15.
Comment #20
Patricia_W commented: I tried most of the suggestions above but missed the recommendation to delete the aggregator rows from the queue table. That fixed my problem. Cron runs fine now.
Comment #21
femrich commented: I am experiencing the same problem with Drupal 7.21. I am not database savvy, so I am leery of the database-deletion suggestions here. It seems this should be fixed in core so that feeds deleted through the aggregator interface actually stay deleted...
Comment #22
femrich commented: I have tried the solution in #8: deleting the reappearing feeds from the aggregator_feed table and deleting all the aggregator entries from the queue table. I'm already seeing some progress, as some of my feeds have begun updating on cron runs. I'm not sure the problems are all gone, but this seems a step in the right direction.
Still, the ability to reliably delete a feed without delving into database tables is so basic that this should be a priority fix for the core aggregator module.
Comment #23
dcam commented: This issue is also mentioned as part of #1805282: Weird bug with the aggregation module: feed refuse to delete itself.
The above comments are correct. The feeds are reappearing due to leftover data in the queue table. This not only causes deleted feeds to reappear, but also results in edited feeds being overwritten by old settings. Here is the order of events:
1. aggregator_cron() marks a feed as needing to be updated. It saves an object representing the feed, with all of the feed's properties, in the queue table.
2. An administrator edits or deletes the feed.
3. When the feed is automatically updated by the queue system, aggregator_refresh() is called with the queued feed object passed as an argument.
aggregator_refresh() does not check whether the status of the feed has changed, i.e. whether it has been edited or deleted. If it fetches the feed and finds new items, it overwrites the settings stored in the aggregator_feed table with the settings of the feed object that was stored in the queue table.
It seems likely that this issue could affect D8. I'll check it next.
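The sequence above suggests one way the root cause could be avoided: queue only the feed ID, and have the worker reload the feed from the database, so edits and deletions made after queueing are respected. The following is an illustrative D7-style sketch, not the committed patch; aggregator_feeds_due_for_update() is a hypothetical helper.

```php
// Illustrative sketch only (not the committed fix): store just the feed
// ID in the queue so the worker always sees the feed's current state.
function aggregator_cron_sketch() {
  $queue = DrupalQueue::get('aggregator_feeds');
  // Hypothetical helper returning the feeds that are due for a refresh.
  foreach (aggregator_feeds_due_for_update() as $feed) {
    $queue->createItem($feed->fid);
  }
}

// Queue worker: reload by ID. A feed edited or deleted since it was
// queued is handled correctly, because nothing stale is deserialized.
function aggregator_refresh_by_id($fid) {
  if ($feed = aggregator_feed_load($fid)) {
    aggregator_refresh($feed);
  }
}
```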
Comment #24
LIQUID VISUAL commented: Thank you, thank you, thank you for this! I deleted all data in the aggregator tables and all name = 'aggregator_feeds' rows in the queue table, and cron now drives the aggregator properly again!
Happy New Year!
Comment #25
dcam commented: There are bugs in D8. As I described in #23, there are two issues occurring, though they both have the same root cause: when feeds are queued for update, the feed object is stored in the queue table.
If a feed is edited after being queued, then that feed's original settings will be restored from the queued object when the queue is run. This occurs in both D7 and D8.
If a feed is deleted after being queued, then what happens differs between D7 and D8. In D7, the feed is restored from the queued object when the queue is run, although the feed title will be blank. In D8, the error shown in the attached image is produced. This happened during a cron run. Presumably the change to the Entity system prevents the feed from being restored; instead, cron runs are broken.
I wrote D8 tests for the two issue cases. The feed-editing case is failing, as it should be. Unfortunately, the feed-deleting case is passing. I would expect the failed cron run to produce a test failure, but it isn't doing so locally. So I figured I should post the test for others to see.
Comment #27
ParisLiakos commented: Hm, sounds nasty, and probably major for D7 since it leads to data loss.
What we could do is pass just the ID to the queue worker instead of the whole feed?
Comment #28
dcam commented: I'm assuming something like that is how the issue will ultimately be fixed, but I haven't tried to do it yet. I've only tried to write tests thus far.
Comment #29
dcam commented: PSR-4 reroll.
After weeks of working on it, I still can't get the "queued & deleted" feed test to fail. I'm not sure where to go from here.
Comment #31
ParisLiakos commented: It seems that the delete error is triggered by the Edit module (judging by the screenshot),
so maybe the test is not failing because that module is not enabled in the test run?
Comment #32
dcam commented: @ParisLiakos
Thanks for the tip. Now that I see it, I should have realized it sooner.
Changes:
- Added the Editor module to the enabled modules array.
- Removed an unneeded assertResponse().
- Added asserts that check the number of records in the queue table.
- Removed an unneeded drupalGet().
Comment #33
dcam commented: Forgot the status.
Comment #35
dcam commented: Alright! This is what I'm seeing during manual testing: edited feeds being restored, and deleted feeds breaking cron runs in D8.
Comment #36
ParisLiakos commented: Thanks a lot for the tests! :)
Now let's see if this change fixes them.
Comment #38
ParisLiakos commented: Eh, of course, I need to adjust the test.
Comment #40
ParisLiakos commented: Well, it needs to clear the cache.
Comment #41
jhedstrom commented: Needs a reroll. It would also be nice to upload the new test separately to illustrate the current failure.
Comment #42
dcam commented: Yeah, nearly all the code the patch was written for has changed in the last 7 months. I've looked it over and the source of the problem still exists: the entire feed object is stored in the queue. Whether or not D8 still acts on that object in the same way (restoring deleted and edited feeds based on it), I can't say. So I've rerolled the original tests-only patch just to try to re-verify the failures. I can't test it locally. This could fail spectacularly, and not in the right way.
Comment #44
dcam commented: I think I fixed the problem.
Comment #45
dcam commented:
Comment #47
dcam commented (as a volunteer): I think this one will fail correctly now. I edited the wrong lines in #42 and #44.
Comment #49
jhedstrom commented: Now we need the patch with the actual fix from #40 and the test from #47 :)
Comment #50
dcam commented (as a volunteer): edit: Never mind this whole comment and any previous edits. I really wasn't thinking when I posted any of it. I'll upload the combined patch in a minute.
Comment #51
dcam commented (as a volunteer): This is a reroll of #40.
Comment #53
plasticdoc_E7m9eW commented (as a volunteer): I am still having this problem in Drupal 7.50:
1. In case you want to replicate it, the feed in question is: http://feeds2.feedburner.com/InvestingRss
2. Each time cron is run the watchdog logs errors like:
35919 15/Jul 10:20 cron error PDOException: SQLSTATE[22003]: Numeric value out of range: 7 ERROR: value "-62106566400" is out of range for type integer LINE 14: ... alt=""/>', 'http://www.investors.com/?p=249064', '-6
3. After deleting the feed, it returns after the cron with a blank title.
I wonder, has the fix been ported to Drupal 7?
JA
Comment #54
gisle commented: I also experience this problem in Drupal 7.50. I can verify that the method for removing the zombie feed described by BenStallings in #8 works. However, BenStallings had some reservations about this method; he wrote:
The {queue} table is used by the BatchQueue class. From its manpage:
So I don't think we need to worry about deleting the queue entries that are about the aggregator. This is junk that cron fails to clean up, because the zombie feed makes cron crash before it gets around to cleaning them up, which leads to cron trying to run them again the next time it is run. If your feed is imported without incident during a batch operation, it will not be registered in {queue}.
Out of curiosity, I examined the blob data in {queue} (it is data serialized as text, so you can inspect it in any text editor). In my case, it turned out that all the aggregator entries were about the zombie feed.
It will be nice to see the patch committed and backported to Drupal 7, but in the meantime, I think the procedure outlined in #8 is a safe way to kill a zombie.
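If you would rather not read the serialized blobs in a text editor, a few lines of D7-style code can list which feed each queued aggregator job refers to. This is a sketch: it assumes the default queue schema and that each data blob unserializes to a feed object with a url property, as described above.

```php
// List the queued aggregator jobs and the feed URL each one holds.
// The data column is a PHP-serialized copy of the feed object.
$result = db_query("SELECT item_id, data FROM {queue} WHERE name LIKE 'aggregator%'");
foreach ($result as $row) {
  $feed = unserialize($row->data);
  print $row->item_id . ': ' . (isset($feed->url) ? $feed->url : '(unknown)') . "\n";
}
```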
Comment #56
mareksal commented: Hello all,
I confirm the issue in Drupal 7.51. The procedure described in #8 worked for me.
Cheers,
Marek
Comment #58
mmenavas commented: I checked the code for the aggregator module in Drupal 7 and was unable to find a deleteItem() call. Could this be the source of the problem? Unless there is code to delete the aggregator_feeds queue items, I think they'll linger around forever.
Comment #59
Pavan B S commented (attribution: at Valuebound): Rerolled the patch.
Comment #68
quietone commented (attribution: at PreviousNext): The aggregator module has been removed from Core in 10.0.x-dev and now lives on as a contrib module. Issues in the Core queue about the aggregator module, like this one, have been moved to the contrib module queue.
Comment #69
larowlan commented: Needs a reroll for contrib.
Comment #70
dcam commented (as a volunteer): This is still a problem. Old settings are restored to edited feeds. Deleting a queued feed breaks queue runs, although I'm not sure it's broken in the same way. Here's a reroll.
Comment #71
dcam commented (as a volunteer):
Comment #72
larowlan commented: This looks great to me. The only thing I think we're missing is a hook_post_update to truncate the existing queue items that would have the entity stored, which is no longer compatible with the queue processor.
Thanks for working on this!
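The suggested hook_post_update could be as small as discarding the whole aggregator queue, since items created before the fix still hold a serialized entity that the reworked worker can no longer process. A sketch under those assumptions follows; the function name is illustrative, not the committed code.

```php
/**
 * Discard stale aggregator queue items that still contain a serialized
 * feed entity. (Illustrative sketch; the real update may differ.)
 */
function aggregator_post_update_discard_stale_queue_items() {
  // Deleting the queue removes all of its items; feeds are simply
  // re-queued on the next cron run.
  \Drupal::queue('aggregator_feeds')->deleteQueue();
}
```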
Comment #73
dcam commented (as a volunteer): @larowlan I'm sorry for all the test spam above. I'm not sure why, but it kept being weird. It queued up two identical tests at the same time for one branch. It tested the wrong branch repeatedly (and just did it again). I feel bad for taking up resources on the test server, but it wasn't something I did intentionally.
No problem. The other day I looked through my old unclosed tickets and saw these. It's disappointing that this had tests and a fix, but never got a proper review. I guess it just shows how little attention Aggregator got in Core.
Comment #74
larowlan commented: No worries. If you're interested, I'd be keen for a co-maintainer here; let me know.
Comment #75
dcam commented (as a volunteer): Here's an attempt at a post-update function and a test for it. I'm positive the test isn't going to work, but I can't test it locally. I've never written a post-update function before, let alone a test for one.
And yes, @larowlan, if you would like some help with the module, then I'm willing to co-maintain it. Thank you for the offer!
edit: The problem with this one is that I forgot to update the fixture file path.
Comment #77
larowlan commented: The test failure just looks like the path to the dump file isn't right.
Other than that, it looks good to me.
Comment #78
larowlan commented: Added #3358993: Add dcam as a co-maintainer.
Comment #79
larowlan commented: I've added you as a maintainer. Thanks!
Comment #80
dcam commented (as a volunteer): We don't actually need a fixture anyway. It's sufficient to create a queue item and then let it be deleted by the update. The update system just has to be tricked into running the update right after installing the module in the test setUp(). This one is passing for me locally.
Comment #81
larowlan commented: What's the significance of editor here?
Is \Drupal::entityTypeManager()->getStorage('aggregator_feed')->loadUnchanged($feed->id()) enough here? ::resetAll is a bit of a sledgehammer approach.
Can we use assertCount(0, Feed::loadMultiple()) instead of a DB query?
Can we use $query->numberOfItems() instead? (Here too.)
Neat trick!
Comment #82
dcam commented (as a volunteer): See comment #31. Years ago the test simply failed unless the Editor module was installed. We weren't sure why. But I just re-ran the test without it and it passed. So maybe it's no longer a problem outside of Core, or it was a symptom of a separate problem that got fixed. I'll upload a new patch without it.
Works for me.
That's a good idea.
I meant to change that, but forgot. Thanks for catching it. The only reason I can think of that I might have avoided it years ago was that the docblock for numberOfItems() says 'This is intended to provide a "best guess" count of the number of items in the queue' and it isn't clear why. So maybe I didn't trust it.
Thanks!
Comment #85
dcam commented (as a volunteer): This is fixed for the contrib Aggregator. The patch in #82 needs to be rerolled for Core.
Comment #88
rpayanm
Comment #89
rpayanm
Comment #90
catch commented: 9.5 only gets security fixes now; moving back to fixed.