
Comments

ohthehugemanatee’s picture

Status: Active » Needs review
FileSize
8.53 KB

Attaching a first stab at this. Unfortunately we have to use our own iterator to be able to handle header rows.

The source takes the following arguments:

* path (required, the path to your CSV)
* keys (required, an array of the key fields in your CSV)
* header_rows (optional, number of header rows in your CSV. Defaults to 0)
* fields (optional, any fields you want to manually specify for mapping. Shouldn't be necessary most of the time)
* delimiter (optional, the delimiter character for the file. Defaults to a comma ,)
* enclosure (optional, the enclosure character for the file. Defaults to a double quote ")
* escape (optional, the escape character for the file. Defaults to a backslash \)
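For illustration only, a source section wiring up these options might look like this (the path and key names are invented):

```yaml
source:
  plugin: csv
  path: /path/to/people.csv
  keys:
    - id
  header_rows: 1
  delimiter: ','
  enclosure: '"'
  escape: '\'
```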

Do we still need the computeCount method?

chx’s picture

Just a few notes at this hour. Thanks for the port!

We usually do not use SplFileObject directly; rather, we just do class CSVFileObject extends \SplFileObject.

public $headerRows = 0;

Please do not use naked public properties; that rarely works well. The setter/getter pattern is a strong preference, even though in this case the getter would go unused. Also, $file->headerRows = !empty($this->configuration['header_rows']) ? $this->configuration['header_rows'] : 0; can be reduced to an if: the property already defaults to 0, so why bother setting it to 0?

The CSV constructor does way too much work. It should also be the first method, for readability, and the class properties should precede even that. You probably want to move most of the logic to rewind() or initializeIterator().

Not quite sure why there are both getNextRow() and getNextLine() when they do exactly the same thing...?
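A minimal sketch of the subclass-plus-setter pattern described above (class and property names follow the patch under review; the bootstrap at the bottom is purely illustrative):

```php
<?php

// Sketch: extend \SplFileObject rather than using it directly, and
// keep the header-row count behind a setter instead of a naked
// public property.
class CSVFileObject extends \SplFileObject {

  /**
   * The number of header rows, 0 if none.
   *
   * @var int
   */
  protected $headerRows = 0;

  public function setHeaderRows($header_rows) {
    $this->headerRows = $header_rows;
  }

  public function getHeaderRows() {
    return $this->headerRows;
  }

}

// The ternary becomes an if: the property already defaults to 0.
$configuration = ['header_rows' => 2];
$file = new CSVFileObject('php://temp', 'w+');
if (!empty($configuration['header_rows'])) {
  $file->setHeaderRows($configuration['header_rows']);
}
```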

heddn’s picture

  1. +++ b/src/CSVFileObject.php
    @@ -0,0 +1,80 @@
    +  public function __construct ($filename) {
    

    The constructor here drops the other default argument values from http://php.net/manual/en/splfileobject.construct.php. That seems... wrong.

  2. I'd like to see some more comments about the allowed YAML configuration somewhere and their purposes. Either in the @file (preferable) or somewhere else. This is a pattern I see in a lot of the D7 migration source classes and I think it really helps out a lot.
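For instance, the @file docblock could enumerate the allowed keys along these lines (wording is only a suggestion, based on the argument list earlier in the thread):

```php
/**
 * @file
 * Contains the CSV migrate source plugin.
 *
 * Available source configuration keys:
 * - path: Path to the source CSV file (required).
 * - keys: Array of key fields in the CSV (required).
 * - header_rows: Number of header rows; defaults to 0.
 * - fields: Manually specified fields for mapping (optional).
 * - delimiter: Field delimiter; defaults to ','.
 * - enclosure: Field enclosure character; defaults to '"'.
 * - escape: Escape character; defaults to '\'.
 */
```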
mikeryan’s picture

I'm at an offsite meeting this week and probably won't have time to look carefully at this - just want to suggest that it would be great if someone could get started on #2451331: Add documentation to People example migration in migrate_source_csv_test, both to help demonstrate how to use this plugin and as a practical test.

heddn’s picture

Is this issue here a subset or child of #2451331: Add documentation to People example migration in migrate_source_csv_test? There seems to be a lot of similarity between the two right now.

mikeryan’s picture

Status: Needs review » Needs work

Working on #2451331: Add documentation to People example migration in migrate_source_csv_test, it would be helpful to give an example of how the .yml for a real source would look. I'm thinking it will be something like this?

source:
  plugin: csv
  delimiter: ','
  enclosure: '"'
  escape: '\'
  header_rows: 0
  embedded_newlines: 0
  keys:
    - start_date
    - home_team
    - home_game_number
  csvColumns:
    0:
      start_date: Date of game
    3:
      visiting_team: Visiting team
...
+++ b/src/Plugin/migrate/source/CSV.php
@@ -0,0 +1,201 @@
+    if ($this->headerRows && empty($this->configuration['csvColumns'])) {

Convention is for configuration keys to be lower case, underscore-separated (i.e., 'csv_columns').

ohthehugemanatee’s picture

Just noting that I will be all over this during drupalcon... but first I have a prenote and session to take care of. Meanwhile, here's a yaml example:

id: basics
label: Basic state information from the US Census
migration_groups:
  - US Census

source:
  plugin: csv
  path: '/var/www/drupal8/import/PEP_2014_PEPANNRES_with_ann.csv'
  header_rows: 2
  keys:
    - Id2
  name: Basics

process:
  title:
    plugin: concat
    source:
      - Geography
      - name
    delimiter: ': '
  field_population: Population Estimate (as of July 1) - 2014
  field_state_term:
    plugin: migration
    migration: states
    source:
      - Geography
  type:
    plugin: default_value
    default_value: basics

destination:
  plugin: entity:node
mikeryan’s picture

Thanks. I need to turn my attention elsewhere shortly, but will submit a patch to #2451331: Add documentation to People example migration in migrate_source_csv_test of my progress that you can work with.

+++ b/src/Plugin/migrate/source/CSV.php
@@ -0,0 +1,201 @@
+  public function computeCount() {

computeCount() was a D7 method, it's not in D8 - in D8, the counting is done by a count() method on the iterator (CSVFileObject in this case). To test, enable both migrate_example_baseball and migrate_plus and do drush ms.
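As a hedged sketch (not the actual patch code), a count() on the file object might look like this, with header rows excluded from the total:

```php
<?php

// Illustrative only: a countable CSV file iterator, so the D8 source
// can count rows via its iterator instead of a D7-style computeCount().
class CSVFileObject extends \SplFileObject {

  protected $headerRows = 0;

  public function setHeaderRows($header_rows) {
    $this->headerRows = $header_rows;
  }

  public function count() {
    // iterator_count() rewinds and walks every line of the file.
    return iterator_count($this) - $this->headerRows;
  }

}
```

With the READ_CSV, DROP_NEW_LINE, and SKIP_EMPTY flags set, that yields the number of data rows.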

mikeryan’s picture

  1. +++ b/src/Plugin/migrate/source/CSV.php
    @@ -0,0 +1,201 @@
    +      return new MigrateException('You must declare the "path" to the source CSV file in your source settings.');
    

    throw, not return.

  2. +++ b/src/Plugin/migrate/source/CSV.php
    @@ -0,0 +1,201 @@
    +      return new MigrateException('You must declare the "keys" the source CSV file in your source settings.');
    

    throw, not return.

heddn’s picture

Status: Needs work » Needs review
FileSize
7.34 KB

I had a chance (and interest) to work on this on my looonnng flight yesterday. I picked up the last bits of feedback from #9-10.

mikeryan’s picture

Status: Needs review » Needs work
  1. +++ b/src/CSVFileObject.php
    @@ -0,0 +1,118 @@
    +  public function __construct($file_name, $open_mode, $use_include_path, $context) {
    

    This is being called with only the $file_name argument, which fails with Missing argument 2 for Drupal\migrate_plus\CSVFileObject::__construct(). The other arguments need to have the same default values set as in SplFileObject::__construct().

  2. +++ b/src/Plugin/migrate/source/CSV.php
    @@ -0,0 +1,138 @@
    +      $row = $file->next();
    

    next() returns no value - current() is the way to get the current row of the file iterator.

Fixing the above locally, I'm getting a hang on the iterator_count() call - don't have time to further debug at the moment.

ohthehugemanatee’s picture

I'm working on this at @DrupalconLA. I've since fixed some other issues as well, so the next patch will combine my work with @heddn's.

ohthehugemanatee’s picture

I got @heddn's patch working, and applied @mikeryan's suggestions from #12.

I ran into a problem including the same __construct() arguments from SplFileObject in CSVFileObject. I tried making the arguments optional like this:

public function __construct($file_name, $open_mode = NULL, $use_include_path = NULL, $context = NULL) {
  parent::__construct($file_name, $open_mode, $use_include_path, $context);

But SplFileObject complains that $context has to be a resource; it cannot be NULL. Resources are a type of instance; you can't just say "new Resource".

@pwolanin suggested that since this class is exclusively for use in Drupal, we know it won't be called with the other arguments, so we're fine to just say __construct($file) { .

@crell suggested that we can satisfy pedantry by wrapping the parent constructor call in an if statement that checks to see if $context is null.

I chose the latter approach.

I got this working in testing, but there is still a problem: it tries to process the last line and errors out when it can't find an ID. The rest of the import goes fine, though. This patch is more of a progress report than anything.

ohthehugemanatee’s picture

@crell's solution didn't work for me either. Going back to limiting __construct to one argument, per @pwolanin.

The while loop that goes over every row and imports it wasn't failing $this->getIterator()->valid() once we were past the last line of the file. That was because we weren't setting the CSVFileObject::READ_AHEAD, CSVFileObject::DROP_NEW_LINE, and CSVFileObject::SKIP_EMPTY flags. That's fixed now... and it works!

tadaaaaaaaaaa!

Still needs unit tests, though.

heddn’s picture

Status: Needs work » Needs review
FileSize
11.67 KB

Added unit tests (100% coverage) for CSVFileObject. Next up is the CSV plugin.

heddn’s picture

And now we are at 100% coverage for all additions here.

ohthehugemanatee’s picture

The patch applies cleanly for me. But:

* there's only one test, CSVFileObjectTest. Did you miss the other file?
* trying to run CSVFileObjectTest locally throws a PHP fatal for me: "Cannot redeclare class Drupal\migrate_plus\UnitTests\CSVFileObjectTest in /vagrant/public/modules/migrate_plus/tests/src/Unit/CSVFileObjectTest.php on line 17"

I don't understand the fatal error; that name doesn't occur anywhere else in the codebase. I've rebuilt caches already. I don't see anything obvious in the code that looks wrong, though. Can you run the tests on your environment?

heddn’s picture

FileSize
18 KB

Oops, missed a file.

heddn’s picture

FileSize
17.83 KB

Some test clean-up.

ohthehugemanatee’s picture

I've got the new files now, but I get the same "cannot redeclare class" fatal error for both tests. @heddn, are you able to run the tests?

heddn’s picture

Maybe you are running into something like: http://stackoverflow.com/questions/2816173/cannot-redeclare-class-error-...

Updated patch includes a phpunit.xml that will (hopefully) help. But I've been able to run the unit tests in PHPStorm with no problem.

The other thing we should unit test is running count() on a massively large CSV; something like 20 GB should be large enough. I'm worried that iterator_count() on a file that large might not scale. If that is the case, then we need to revert to looping over every line in a foreach or something, so we don't have to hold the whole file in memory at once.
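For what it's worth, iterator_count() over an SplFileObject already streams line by line, so a huge file costs time rather than memory. A plain-PHP fallback that makes the streaming explicit might look like this (the function name is made up):

```php
<?php

// Constant-memory line count: read one line at a time rather than
// loading the file contents all at once.
function count_csv_lines($path) {
  $count = 0;
  $handle = fopen($path, 'r');
  while (fgets($handle) !== FALSE) {
    $count++;
  }
  fclose($handle);
  return $count;
}
```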

heddn’s picture

Got some eyes on this from @Xano. The namespace was all messed up on the unit tests. This fixes that and now tests run from the simpletest runner.

chx’s picture

+      // Only use rows specified in the defined CSV columns.
+      $row = array_intersect_key($row, $this->csvColumns);
+      // Set meaningful keys for the columns mentioned in $this->csvColumns.
+      foreach ($this->csvColumns as $key => $value) {

I believe the first comment meant to say "columns", not "rows".

But why do we call array_intersect_key() at all when we loop over csvColumns anyway...?

maijs’s picture

+  public function __construct($file_name) {
+    parent::__construct($file_name);
+
+    $this->setFlags(CSVFileObject::READ_CSV | CSVFileObject::READ_AHEAD | CSVFileObject::DROP_NEW_LINE | CSVFileObject::SKIP_EMPTY);
+  }

For the constructor of the CSVFileObject class, I'd suggest using a combination of func_get_args() and call_user_func_array() so that other arguments can be passed to the parent class if necessary.

public function __construct($file_name) {
  call_user_func_array(array('parent', '__construct'), func_get_args());

  $this->setFlags(CSVFileObject::READ_CSV | CSVFileObject::READ_AHEAD | CSVFileObject::DROP_NEW_LINE | CSVFileObject::SKIP_EMPTY);
}
heddn’s picture

FileSize
18.87 KB
1.08 KB

re #24

  1. I've fixed the comments. It was a copy/paste from D7, so in theory we could open a follow-up to fix it there as well; I don't think that's worth it.
  2. Doing array_intersect_key() reduces each row to only the columns listed in csvColumns. If I remove it, testInitializeIterator() fails on line 180 because the row data has extra columns that aren't expected.

re #25
Fixed. Tests still pass.

Lastly, can someone enable testing on this sandbox? I don't have the option to do that.

mikeryan’s picture

Lastly, can someone enable testing on this sandbox? I don't have the option to do that.

Neither do I ;). Sandboxes don't support testing.

Going to start testing this with the baseball example now - one immediate piece of feedback, as explained in https://www.drupal.org/node/2451331#comment-9945601, the csvColumns should support human-readable labels.

mikeryan’s picture

Status: Needs review » Needs work
  1. --- /dev/null
    +++ b/phpunit.xml.dist
    

    Is this really necessary? FWIW, I can't seem to run the tests locally with or without this file.

  2. +++ b/src/CSVFileObject.php
    @@ -0,0 +1,120 @@
    +   * The human-readable column headers, keyed by column index in the CSV.
    

    Rather than human-readable (since we're dealing here with machine names rather than human labels), maybe say alphanumeric?

  3. +++ b/src/CSVFileObject.php
    @@ -0,0 +1,120 @@
    +    call_user_func_array(array('parent', '__construct'), func_get_args());
    

    Ugly, but hard to see an alternative...

  4. +++ b/src/Plugin/migrate/source/CSV.php
    @@ -0,0 +1,134 @@
    +   * List of available source fields.
    

    This list should be keyed by machine name, with values being human-readable labels.

  5. +++ b/src/Plugin/migrate/source/CSV.php
    @@ -0,0 +1,134 @@
    +      throw new MigrateException('You must declare the "keys" the source CSV file in your source settings.');
    

    Message needs work, nonsensical as it is. 'You must declare "keys" as a unique array of fields in your source settings' - something like that.

  6. +++ b/src/Plugin/migrate/source/CSV.php
    @@ -0,0 +1,134 @@
    +    foreach ($this->getIterator()->getCsvColumns() as $column) {
    

    Right now, this is assuming something like

        0: start_date
        3: visiting_team
        6: home_team
    

    It should be supporting machine_name: label structure...

        0:
          start_date: Date of game
        3:
          visiting_team: Visiting team
        6:
          home_team: Home team
    

The good news is, I have migrate_example_baseball working with this plugin without modifying the plugin! (for the moment, I had to modify the csv_columns in migrate_example_baseball because the plugin isn't supporting labels)

The not-so-good (but not bad) news is #2491643: How should migrate_plus be split up? - pending feedback to the contrary, for now let's move this to a submodule within migrate_plus named migrate_source_csv (and I will follow by moving migrate_example_baseball to a submodule of migrate_source_csv).

Thanks!

heddn’s picture

Status: Needs work » Needs review
FileSize
18.15 KB
2.13 KB

re #28:
#1: It is helpful to have it if you want to run coverage; it filters coverage to only the specific folders mentioned. I've left it out of this latest patch for now, though. We can always bring it back.
#4 & #5: addressed in the latest patch.
#2 & #6:
I don't understand the need for indexing the rows by numeric index. I was expecting something like:

  header_rows: 1        # The number of rows at the beginning which are not data.
  csv_columns:
    start_date: Date of game
    visiting_team: Visiting team
    home_team: Home team
    home_game_number: Home team game number
    home_score: Home score
    visiting_score: Visiting score
    outs: Length of game in outs
    park_id: Ballpark ID
    attendance: Attendance
    duration: Duration in minutes

If header_rows is set to 0, then csv_columns defaults to numeric indexing, and the key/value pairings are a numeric machine name and a description.

Notes from the baseball example:

  • The name in the csv migration plugin is now csv_columns; no camel-cased 'csvColumns' any more.
  • embedded_newlines is no longer needed for calculating the count since we have a full iterable object now.
saltednut’s picture

Status: Needs review » Needs work

What is the source of the data.csv and edge-cases files? They look like they could contain real email addresses.

id,first_name,last_name,email,country,ip_address
1,Justin,Dean,jdean0@prlog.org,Indonesia,60.242.130.40
2,Joan,Jordan,jjordan1@tamu.edu,Thailand,137.230.209.171
3,William,Ray,wray2@sourceforge.net,Germany,4.75.251.71
4,Jack,Collins,jcollins3@patch.com,Indonesia,118.241.243.64
5,Jean,Moreno,jmoreno4@oracle.com,Portugal,12.24.215.20
6,Dennis,Mitchell,dmitchell5@so-net.ne.jp,Mexico,185.24.131.116
7,Harry,West,hwest6@vk.com,Uzbekistan,101.74.110.171
8,Rebecca,Hunt,rhunt7@ameblo.jp,France,253.107.6.23
9,Rose,Rogers,rrogers8@businesswire.com,China,21.2.126.228
10,Juan,Walker,jwalker9@fda.gov,Angola,192.118.77.225
11,Lois,Price,lpricea@nih.gov,Greece,231.185.100.19
12,Patricia,Bell,pbellb@narod.ru,Sweden,226.2.254.94
13,Gerald,Kelly,gkellyc@homestead.com,China,31.204.2.163
14,Kimberly,Jackson,kjacksond@blogspot.com,Thailand,19.187.65.116
15,Jason,Mason,jmasone@nature.com,Greece,225.129.68.203

^ from patch in #29

Should this contain something less specific to real persons?

heddn’s picture

I used https://www.mockaroo.com to generate the data.

saltednut’s picture

Status: Needs work » Needs review

Thanks for the clarification @heddn. Let #31 persist as the physical record of the content's OSS origin!

phenaproxima’s picture

Status: Needs review » Needs work

A couple of suggestions:

  1. +++ b/src/CSVFileObject.php
    +  /**
    +   * Number of header rows.
    +   *
    +   * @return int
    +   *   Get the number of header rows, zero if no header row.
    +   */
    +  public function getHeaderRows() {
    +    return $this->headerRows;
    +  }
    +
    +  /**
    +   * Number of header rows.
    +   *
    +   * @param int $header_rows
    +   *   Set the number of header rows, zero if no header row.
    +   */
    +  public function setHeaderRows($header_rows) {
    +    $this->headerRows = $header_rows;
    +  }
    +
    +  /**
    +   * CSV column names.
    +   *
    +   * @return array
    +   *   Get CSV column names.
    +   */
    +  public function getCsvColumns() {
    +    return $this->csvColumns;
    +  }
    +
    +  /**
    +   * CSV column names.
    +   *
    +   * @param array $csv_columns
    +   *   Set CSV column names.
    +   */
    +  public function setCsvColumns(array $csv_columns) {
    +    $this->csvColumns = $csv_columns;
    +  }
    

    Not the biggest deal in the world, but these method names are confusing. Can they be changed to getHeaderRowCount(), getColumnNames(), etc.? Names which better describe what they do?

  2. +++ b/tests/src/Unit/artifacts/data.csv
    +++ b/tests/src/Unit/artifacts/data_edge_cases.csv
        

    There's no need to create real files for testing purposes. It's preferable to mock the file using vfsStream, if possible.

heddn’s picture

#33:
I agree with #1. I was having a mental block on what to name those things.
I disagree with #2. A real file leads to a better-documented system. Yes, there is the baseball example, but the real files in this test are also another way to document what the various CSVs could look like.

Watch for an updated patch soon.

phenaproxima’s picture

@heddn: I'm not sure I see how real files meant strictly for test consumption (the choice of the word "artifacts" was apt) make the system better documented, especially in this case. CSVs, as a format, are pretty self-explanatory.

mikeryan’s picture

@heddn:

I don't understand the need for indexing the rows by numeric index. I was expecting something like:

  header_rows: 1        # The number of rows at the beginning which are not data.
  csv_columns:
    start_date: Date of game
    visiting_team: Visiting team
    ...

Here's what I have in the migrate_example_baseball config file:

  header_rows: 0        # The number of rows at the beginning which are not data.
  csv_columns:
    # So, here we're saying that the first field (index 0) on each line will
    # be stored in the start_date field in the Row object during migration, and
    # that name can be used to map the value below. "Date of game" will appear
    # in the UI to describe this field.
    0:
      start_date: Date of game
    3:
      visiting_team: Visiting team

The numeric indices are necessary because we're not taking all columns in order; we need to specify the column number of each field we're importing. We also need to assign each a machine name (since we have no column headers) and a user-friendly description.
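For illustration (a sketch, not the patch itself), the plugin could unpack that structure roughly like so; the variable names here are assumptions:

```php
<?php

// Resolve the numeric-index => [machine_name => label] structure into
// column positions, machine names, and UI labels.
$csv_columns = [
  0 => ['start_date' => 'Date of game'],
  3 => ['visiting_team' => 'Visiting team'],
];

$column_names = [];
$fields = [];
foreach ($csv_columns as $index => $data) {
  // Each entry maps a single machine name to its human-readable label.
  $machine_name = key($data);
  $column_names[$index] = $machine_name;
  $fields[$machine_name] = current($data);
}
```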

heddn’s picture

So, one issue is that initializeIterator() isn't statically cached. The nature of migrations is that you often have several migrations using the exact same CSV for different purposes: one pass for the base entity, a second for related entities, and maybe a third or tenth pass over the file for some field collections. When the file is fairly large, this means the very expensive initializeIterator() call runs multiple times on the exact same file. Is there a drupal_static-like pattern I could implement in Drupal\migrate_plus\Plugin\migrate\source\CSV to make this less expensive?

I'm dealing with this problem right now in D7: visiting /admin/content/migrate/groups/example_group causes D7's migrate to loop over this stuff 11 times, because I have eleven migrations all based off the same exact CSV. It isn't exactly the same situation, but it's close enough that I'd like to pose the question here.
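One possible shape for that, offered purely as a sketch (the class name is invented, and sharing an iterator also means sharing its file position, so callers would still need to rewind):

```php
<?php

// Hypothetical drupal_static-style cache: migrations reading the same
// CSV share one file object instead of re-opening it each time.
class CsvIteratorCache {

  protected static $iterators = [];

  public static function get($path) {
    if (!isset(static::$iterators[$path])) {
      static::$iterators[$path] = new \SplFileObject($path);
    }
    return static::$iterators[$path];
  }

}
```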

heddn’s picture

I had some time tonight to work on this. But I didn't get it rolled into a patch before I ran out of time. Tests still all pass.

Completed:
migrate_source_csv sub-module (#28)
#33.1: getHeaderRowCount & getColumnNames renames
#33.2: vfsStream mocks for CSVFileObject

Outstanding:
#33.2: vfsStream mocks for CSV
#36
#37: seeing if there are any performance tweaks I can throw at this.

I got bogged down trying to see what count() would return on a very large CSV file. I was wondering whether iterator_count() would scale to a several-megabyte or multi-gigabyte CSV. Unfortunately, org\bovigo\vfs\content\LargeFileContent creates an empty file, and count() only returns 1 on it. Creating the virtual file was also taking too long, even for a fairly small 100 MB file.

mikeryan’s picture

@heddn: As far as performance of large CSVs goes, about all I can offer is that CSVs are inherently not very scalable - at a certain point it becomes much more effective to load the CSV into a real database and run the migration off of that. In real-world projects, I've only used CSVs for small manually-maintained datasets (dozens of rows, not thousands).

tannerjfco’s picture

I've done some testing with the patch in #29 and encountered a couple issues:

1) It looks like all the sourceid fields in the migrate_map_* tables are set up as VARCHAR with a 255-character limit, which prevents long text fields from being migrated:

exception 'PDOException' with message 'SQLSTATE[22001]: String data, right truncated: 1406 Data too long for column 'sourceid2' at row 1' in
/var/www/wcet/releases/20150611194557/docroot/core/lib/Drupal/Core/Database/Statement.php:61

2) On one of my test migrations, I'm migrating into a content type with two date fields. I can migrate into one of the two, but if I add mappings for the other field, the migration won't show in migrate-status, and it throws a SQL syntax error if I try to run migrate-import on it anyway:

Migration failed with source plugin exception: SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 3072 bytes: CREATE TABLE {migrate_map_test_event} (
`sourceid1` VARCHAR(255) DEFAULT NULL,
`sourceid2` VARCHAR(255) DEFAULT NULL,
`sourceid3` VARCHAR(255) DEFAULT NULL,
`sourceid4` VARCHAR(255) DEFAULT NULL,
`sourceid5` VARCHAR(255) DEFAULT NULL,
`destid1` INT NULL DEFAULT NULL,
`source_row_status` TINYINT unsigned NOT NULL DEFAULT 0 COMMENT 'Indicates current status of the source row',

`rollback_action` TINYINT unsigned NOT NULL DEFAULT 0 COMMENT 'Flag indicating what to do for this item on
rollback',
`last_imported` INT unsigned NOT NULL DEFAULT 0 COMMENT 'UNIX timestamp of the last time this row was
imported',
`hash` VARCHAR(64) NULL DEFAULT NULL COMMENT 'Hash of source row data, for detecting changes',
PRIMARY KEY (`sourceid1`, `sourceid2`, `sourceid3`, `sourceid4`, `sourceid5`)
) ENGINE = InnoDB DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci COMMENT 'Mappings from source identifier
value(s) to destination…'; Array
(
)

Let me know if more info is needed on the 2nd one, I'm rather puzzled by it as I'm mapping that 2nd field the same way as the first one.

heddn’s picture

I've got two new interns in the office, so my time is getting sucked up. But here's what I have from #38. Hopefully soon I'll get a chance to respond to the rest of the feedback.

saltednut’s picture

Although I personally don't believe it should be a submodule, the direction from #28 states that migrate_source_csv should be one.

Right now we're just nesting the code under the folder name.

drush en migrate_source_csv
migrate_source_csv was not found. [warning]
No release history was found for the requested project (migrate_source_csv).

I believe a migrate_source_csv.info.yml file needs to be established in order for #41 to work.

dalin’s picture

This is looking great!

I was a bit confused by some of the config options in the YAML, so I suggest that we carry the DX improvements that we made to CSVFileObject all the way up to the YAML file. So
header_rows becomes header_row_count
csv_columns becomes column_names

And thus the chunk of code in the constructor would become:

// Figure out what CSV column(s) to use. Use either the header row(s) or
// explicitly provided column name(s).
if (!empty($this->configuration['header_row_count'])) {
  $file->setHeaderRowCount($this->configuration['header_row_count']);

  // Find the last header line.
  $file->rewind();
  $file->seek($file->getHeaderRowCount() - 1);

  $header_row = $file->current();
  foreach ($header_row as $header_label) {
    $header_label = trim($header_label);
    $column_names[] = $header_label;
  }
  $file->setColumnNames($column_names);
}
// An explicit list of column name(s) will override any header row(s).
if (!empty($this->configuration['column_names'])) {
  $file->setColumnNames($this->configuration['column_names']);
}
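With those renames applied, a source section would read something like this (illustrative values; the flat list matches the numeric array the header-row branch builds):

```yaml
source:
  plugin: csv
  path: /path/to/source.csv
  header_row_count: 1
  # An explicit list overrides names read from the header row(s).
  column_names:
    - start_date
    - visiting_team
```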
heddn’s picture

#43: +1

Those names are carry-overs from D7. No need to keep the cruft.

heddn’s picture

Status: Needs work » Needs review
FileSize
18.38 KB
40.59 KB

Let's see how this fares. The interdiff is almost useless.

I added the feedback from #43, plus renamed keys => identifiers. Why? Because that's what the migrate plugin calls the thing and I wanted to keep things consistent.

#36 is addressed.

  • mikeryan committed 85154dc on 8.x-1.x
    Issue #2458003 by ohthehugemanatee,heddn: Implement CSV source plugin...
mikeryan’s picture

Status: Needs review » Fixed

I've got my baseball migration (basically) working with this patch, looks good, committed!

Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.