So this is rather tricky. Here is my use case. I have server B hosting a bunch of sites that talk to the database server D. Those are not yet managed by Aegir, which sits on server A and where I want to migrate all my sites. So I take one site (a single-site Drupal install), copy the files from server B over to server A and add it as a platform. The first problem I stumble on is #453540: consider the 'default' site like a regular site, with exceptions, but I will leave the details in that issue. Once past that, the site runs fine on server A. But when I try to migrate the site from that custom platform to my regular, standard platform, the migration script craps out on:
Database import failed: ERROR 1045 (28000): Access denied for user 'user'@'A.example.com' (using password: YES)
...
Revoking privileges of user@127.0.0.1 from database
So to clarify this: MySQL fails to import the database because it refuses access to 'user' connecting from server A. Then the migrate script rolls back and tries to remove the grant, but with a host part of 127.0.0.1. The host part itself is right: that is the IP configured for the web server. What's wrong here is that the grant is removed from the localhost database server. (It was probably also created on the localhost server in the first place, which is equally wrong.)
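In other words, both the GRANT and the later REVOKE must be issued on the database server the site actually uses, with the web server's address as the host part. A minimal illustrative sketch (hypothetical helper names, not Aegir's actual code) of the statement pair that has to end up on server D rather than localhost:

```python
def grant_statements(db_user, db_password, web_host, db_name="sitedb"):
    """Build the GRANT/REVOKE pair for a site's database user.

    The host part must be the web server's address *as seen from the
    database server* -- and both statements must be executed on that
    same database server, not on whatever 'localhost' happens to be.
    (Hypothetical sketch; Aegir's real code lives in provision's PHP.)
    """
    grant = (f"GRANT ALL ON `{db_name}`.* TO '{db_user}'@'{web_host}' "
             f"IDENTIFIED BY '{db_password}';")
    revoke = f"REVOKE ALL ON `{db_name}`.* FROM '{db_user}'@'{web_host}';"
    return grant, revoke

grant, revoke = grant_statements("user", "secret", "A.example.com")
```

The bug described above is not in the statements themselves but in which server they are sent to.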
Indeed, when looking at the site, the node page tells me that it's associated with the localhost database server, which is wrong: settings.php clearly states that it's on D.example.com.
So that's the first problem here: the imported site's database server is basically ignored and assumed to be 'localhost'.
The second problem here is much more tricky: if you have a localhost *and* a remote database server, you're screwed, because there's no single "right" IP address. As seen from the localhost server it's 127.0.0.1; as seen from the D server it's the public IP of the web server.
This could probably be worked around outside of Aegir (e.g. by binding the localhost db server to the public IP), but that's not really clean. I would be satisfied if we solved just the first part of this problem within this particular issue.
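The decision _provision_mysql_grant_host() has to make can be reduced to the following (a hypothetical Python sketch of the logic, not the actual PHP): the host part of the grant depends on where the database server sits relative to the web server.

```python
def grant_host(web_ip, db_host):
    """Pick the host part of the GRANT for a web/db server pair.

    If the database runs on the same machine as the web server, MySQL
    sees connections coming from 127.0.0.1; a remote database server
    sees the web server's public IP instead.  This is exactly why no
    single host value works for both a local and a remote db server.
    """
    if db_host in ("localhost", "127.0.0.1"):
        return "127.0.0.1"
    return web_ip
```

With a local db server this yields 127.0.0.1; with D.example.com it yields the web server's public address.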
Comment | File | Size | Author
---|---|---|---
#9 | 485646_hack.patch | 835 bytes | anarcat
Comments
Comment #1
anarcat commented

Comment #2

anarcat commented

So I'm not sure the problem lies in import anymore. Here the site doesn't actually show up as connected to the 'localhost' db server (which doesn't exist), nor is localhost in the drushrc. So the site seems to be imported properly. But somehow, 127.0.0.1 is still used in the backend:
That array is a print_r() I added before the GRANT to figure out the _provision_mysql_grant_host() logic error. So it seems it's the web_ip that doesn't get passed properly down to the migrate task. It is set properly in all of the platform's drushrc.php files, so I have no idea where it gets set to 127.0.0.1; really weird. I'll try to figure out where that is done.
Comment #3
anarcat commented

So I figured out where that IP is set, and it's in provision_apache_drush_init(). Using the following patch, I can successfully migrate the site:

Obviously, that's a crude hack, because then the web_ip is null, so that's not the right fix:
I don't understand why the IP doesn't go down to the migrate task...
Comment #4
anarcat commented

So I figured out part of this, I think... The issue is that migrate calls deploy, but doesn't pass along the right options. Here's a trivial patch that forces those options to be passed along:
I'm really not sure this is the right way; those things should get passed along... through the context? Anyway, I was able to migrate with that...
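The shape of the fix described in this comment is simply explicit option forwarding when one task invokes another. A toy sketch of the idea (these names are made up for illustration; the real code is a PHP/drush backend invocation):

```python
def deploy(web_ip=None, db_host=None):
    # Stand-in for the real deploy task: just report what it received.
    # Without forwarded options it would fall back to defaults such as
    # a 127.0.0.1 grant host.
    return {"web_ip": web_ip, "db_host": db_host}

def invoke_deploy(task, parent_options):
    # Explicitly hand the parent task's (migrate's) options down to the
    # sub-command instead of letting it fall back to per-server defaults.
    return task(**parent_options)

result = invoke_deploy(deploy, {"web_ip": "203.0.113.7",
                                "db_host": "D.example.com"})
```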
Comment #5
anarcat commented

I know this is not a release objective, but since this is a common configuration, I would very much like to get this into the tree. I've been running with this patch forever now and I can't see why we shouldn't commit it. It's just waiting for Adrian's review, since I'm unsure why this was failing in the first place.
Comment #6
adrian commented

Committed, thanks.
Comment #8
anarcat commented

So I'm seeing this bug rear its ugly head again:
It seems the grant is not being created properly somehow. This is while cloning a site, but I assume it's related.
Comment #9
anarcat commented

I have a crude hack I use here to clone sites through the command line, based on a suggestion by adrian. Basically, I pass the master_db* details on the command line, and I patched clone to pass those details along in the backend.
To be able to migrate, apply the attached patch, then use the following command:
Comment #10
anarcat commented

This hack was actually committed to both migrate and hosting: http://drupal.org/cvs?commit=278438
For this to be fixed properly, we need server verification that will properly store all database details and connect to the right one depending on the settings.php. See also #586000: make a "server verify" task.
Comment #11
anarcat commented

That last patch was reverted, as it introduced a new security hole: the options are passed on the command line, and therefore the master db password was showing up in the deploy command line. We need to find another way.
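The leak happens because argv is visible to every user on the machine (e.g. via ps), so a password passed as a command-line option is effectively public. A hedged sketch, in Python rather than the actual PHP/drush code in question, of the usual alternative: feed the secret over stdin.

```python
import subprocess

def run_with_secret(cmd, secret):
    """Run a command, passing the secret on stdin instead of argv.

    Anything placed in argv shows up in `ps` output for every local
    user, which is exactly how the master db password leaked here.
    stdin (or a mode-0600 file) is not visible that way.
    """
    proc = subprocess.run(cmd, input=secret, text=True,
                          capture_output=True, check=True)
    return proc.stdout

# `cat` stands in for the backend command reading its secret on stdin.
out = run_with_secret(["cat"], "master-db-password\n")
```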
Comment #12
adrian commented

We can make the drush_backend call to deploy use POST instead of GET as the mechanism, which would hide the parameters.
Comment #13
anarcat commented

I rolled that into http://git.koumbit.net/?p=drupal/modules/provision/.git;a=commitdiff;h=8...

Still, that's really not the right way...
Comment #14
adrian commented

So this stuff should be fixed now. The db credentials are being stored separately for each server, so they no longer need to be passed to the deploy command.
Comment #15
adrian commented

This is fixed in HEAD. There's no need to pass anything around anywhere. As long as you have already created the db server in the frontend, everything should just work.