After upgrading from 7.x-3.2 to 7.x-3.5, the PHP memory requirements have skyrocketed when backing up to a download. Previously I had memory_limit set to 128M; with this new version it was running out of memory, so I bumped the limit up in 128M increments until it worked.

It finally completed a backup/download when I set the memory_limit to 896M.
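
For anyone else hitting this, the limit can be raised per-site from settings.php rather than server-wide in php.ini; the value below is just what finally worked for me:

    // In sites/default/settings.php; 896M is simply the value that let the
    // backup/download complete on this site.
    ini_set('memory_limit', '896M');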


Comments

rwetherill created an issue. See original summary.

DamienMcKenna’s picture

Version: 7.x-3.5 » 7.x-3.x-dev

I'm hitting this problem myself. I bumped the backup_migrate_backup_memory_limit variable to 1gb, then increased it by 0.1gb each time until it finally ran without hitting an error; it only worked once I got to 2gb.
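
(For reference, I was bumping the variable between attempts with something like the following; the size-string format is an assumption based on the values above.)

    // Sketch: adjust the module's own memory limit variable; the '2G' size
    // string is an assumption about the expected format.
    variable_set('backup_migrate_backup_memory_limit', '2G');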

It also didn't matter whether I set the file to be compressed or not - I started with the file being compressed with 'zip', then changed it to 'uncompressed', but in both scenarios it failed at 1.9gb and succeeded at 2.0gb.

Gah.

This site's backup resulted in a 750mb file. One of the tables has 150,000 rows, and after reimporting the backup into a fresh database my db tool (Navicat) says that one table uses 826mb.

The backup configuration is not set to exclude any tables entirely; however, it is set to exclude the data of "accesslog", all "cache" tables, the tables for Search API DB and core's search module, and "sessions".

I wonder if something in how it compiles the data from the database keeps accumulating memory without ever releasing it?

DamienMcKenna’s picture

The database code works from the following:

    $data = $this->query("SELECT * FROM `" . $table['name'] . "`", array(), array('fetch' => PDO::FETCH_ASSOC));

This means it builds a large associative array of all of the data from the table, so if the table has hundreds of thousands, or even millions, of records it loads the entire thing into memory as an array before processing each row.

I wonder if we could do a ranged query instead?
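
For reference, DatabaseConnection::queryRange() takes an offset and row count on top of the normal query() arguments, so the fetch could be sliced; a minimal sketch (the offset of 0 and count of 1000 are placeholder values):

    // Current approach: the whole table is fetched at once.
    //   $data = $this->query("SELECT * FROM `" . $table['name'] . "`", array(), array('fetch' => PDO::FETCH_ASSOC));
    // Ranged alternative: fetch one slice at a time.
    $data = $this->queryRange("SELECT * FROM `" . $table['name'] . "`", 0, 1000, array(), array('fetch' => PDO::FETCH_ASSOC));

The usual caveat is that LIMIT/OFFSET paging without an ORDER BY doesn't guarantee a stable row order, so it would be worth confirming the dump still captures every row exactly once.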

DamienMcKenna’s picture

Let's at least improve this for the next release.

DamienMcKenna’s picture

The data is obtained using DatabaseConnection::query() inside _dump_table_data_sql_to_file(). Maybe we could change that entire function so that it does a queryRange(), using the $rows_per_line variable to control the range, and then jumps straight to the output; see the sketch below.
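
A rough sketch of what I mean (not the final patch; the per-value escaping and the $file write call are placeholders for whatever the existing function does):

    // Sketch only: pull $rows_per_line rows at a time and write each batch
    // straight to the output file, so the full result set is never held in
    // memory at once.
    $offset = 0;
    do {
      $data = $this->queryRange("SELECT * FROM `" . $table['name'] . "`", $offset, $rows_per_line, array(), array('fetch' => PDO::FETCH_ASSOC));
      $rows = array();
      foreach ($data as $row) {
        // Placeholder for the existing per-value escaping; addslashes() here
        // is illustrative only.
        $rows[] = "('" . implode("','", array_map('addslashes', $row)) . "')";
      }
      if ($rows) {
        // Assumes the same $file object / write() helper the function already uses.
        $file->write("INSERT INTO `" . $table['name'] . "` VALUES " . implode(",\n", $rows) . ";\n");
      }
      $offset += $rows_per_line;
    } while (count($rows) == $rows_per_line);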

DamienMcKenna’s picture

Status: Active » Needs review
FileSize
7.07 KB

WIP to use a queryRange().

DamienMcKenna’s picture

I set the memory limit to 512mb, set backup_migrate_data_rows_per_query to 1000, and the backup completed in just under 6 minutes with peak memory usage of 126mb. However, I don't think the memory report is accurate.
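
For anyone testing the patch, the batch size is controlled by the new backup_migrate_data_rows_per_query variable; setting it would look something like this (1000 is just the value from my test run):

    // Sketch: rows fetched per query during the dump; 1000 is the value used
    // in the test run described above.
    variable_set('backup_migrate_data_rows_per_query', 1000);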

DamienMcKenna’s picture

OK, I changed the memory size check to follow the same logic that Devel uses; it now reports 130mb while creating a 742mb uncompressed backup in just over 5 minutes.
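
Roughly speaking, the change is to report PHP's peak allocation rather than the usage at the moment of the check; a sketch, not the exact code from the patch (the watchdog() call is just illustrative):

    // Sketch: report peak memory (in MB) instead of current usage.
    $peak_mb = round(memory_get_peak_usage(TRUE) / 1024 / 1024, 2);
    watchdog('backup_migrate', 'Peak memory usage: @peak MB', array('@peak' => $peak_mb));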

Please give it a try.

DamienMcKenna’s picture

This updates the destination file too, and adds new items to hook_uninstall() to remove the three variables.
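
For reference, the uninstall side is just additional variable_del() calls; a sketch showing the two variable names mentioned earlier in this thread (the third isn't named here):

    // Inside backup_migrate_uninstall(), alongside the existing cleanup; only
    // the two variables named in this thread are shown.
    variable_del('backup_migrate_backup_memory_limit');
    variable_del('backup_migrate_data_rows_per_query');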

Status: Needs review » Needs work

The last submitted patch, 9: backup_migrate-n2943719-9.patch, failed testing.

DamienMcKenna’s picture

  • DamienMcKenna committed 612f153 on 7.x-3.x
    Issue #2943719 by DamienMcKenna: Reduce memory usage during MySQL...

DamienMcKenna’s picture

Status: Needs review » Fixed

Committed.

Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.