For some time I have been trying out the Storage API module with Amazon S3,
but I would now like to disable the module again and move all of the files back to the default Drupal files directory.

I had one storage class ("Everything") with a single S3 container.

This is how far I have got:
I added a filesystem container to the class, propagated the files from the S3 container to the new filesystem container, and then deleted the S3 container.
Now I am left with a single filesystem container, which holds all of the files that were uploaded while I was testing the module.

How do I proceed from here?
Can I simply mark the filesystem container for removal, and will the files then be copied back to the Drupal files directory?

Thank you in advance!

Comments

Andre-B’s picture

I would add a new filesystem container to that specific class and have it populated. Afterwards you will have to process each file in that class and change the Storage API scheme to whichever scheme (public:// or private://) matches your needs.
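
For illustration only, a rough Drupal 7 sketch of that per-file scheme change (run for example via `drush php-script`) might look like the following. It assumes the migrated files are still registered in file_managed with a storage:// URI and that the Storage API stream wrapper still resolves; check your file_managed table first, as the actual scheme in your setup may differ.

```php
<?php

// Hypothetical helper script; assumption: Storage API files appear in
// file_managed with a storage:// URI. Adjust the scheme to match your table.
$result = db_query("SELECT fid FROM {file_managed} WHERE uri LIKE 'storage://%'");

foreach ($result as $record) {
  $file = file_load($record->fid);

  // Rebuild the same relative path under the default scheme (public://).
  $target = file_build_uri(file_uri_target($file->uri));

  // Make sure any subdirectories exist in the files directory.
  $dir = drupal_dirname($target);
  file_prepare_directory($dir, FILE_CREATE_DIRECTORY);

  // Copy the payload and point the managed file record at its new location.
  if (file_unmanaged_copy($file->uri, $target, FILE_EXISTS_REPLACE)) {
    $file->uri = $target;
    file_save($file);
  }
}
```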

Perignon’s picture

I would have to say that what you are asking for would be far too much to put into the module, because you would face PHP timeout issues attempting to do this. As a case in point, I store files that are 2 to 10 GB, so Drupal/PHP would time out while attempting to copy those files.

The best approach would be to add another container to the class now. Manually copy all of the files back to your server using an S3 client, and then you would have to code up an update to all of the entities and file fields to change the file stream on them.
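
As a rough illustration of that "change the file stream" step: once the files physically sit under the public files directory with matching paths, the managed file records could be rewritten in bulk along these lines. This again assumes the old URIs start with storage://; verify the scheme and paths in your file_managed table, and back up the database first.

```php
<?php

// Hypothetical bulk rewrite, run e.g. with `drush php-script`, AFTER the
// files have been copied into the public files directory at the same
// relative paths. Assumption: storage://some/path becomes public://some/path.
db_update('file_managed')
  ->expression('uri', "REPLACE(uri, :old, :new)", array(
    ':old' => 'storage://',
    ':new' => 'public://',
  ))
  ->condition('uri', 'storage://%', 'LIKE')
  ->execute();

// Flush the file entity cache so the new URIs are picked up.
entity_get_controller('file')->resetCache();
```

Note that in Drupal 7 file and image fields reference files by fid, so rewriting file_managed.uri is usually the main step; only hard-coded file URLs (for example inside body text) would need a separate pass.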

Andre-B’s picture

If you have your cron run from the CLI there shouldn't be a timeout issue, though I am still missing the core feature of having the Storage API queues processed directly by drush rather than relying on cron.
See #1984906: Is there a way to propagate files without having to run cron?
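
For reference, a minimal sketch of triggering cron from the CLI (roughly what `drush cron` does), which sidesteps the web server's request timeout:

```php
<?php

// Minimal sketch: run with `drush php-script`, or simply use `drush cron`.
// From the CLI there is no web-server request timeout, so large files can
// finish propagating in one go.
drupal_cron_run();
```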

Perignon’s picture

Status: Active » Fixed

Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.