If you run a large site (mine is around 200,000 pages), search engine traffic volume and the associated server load are significant issues, since search engines revisit each page periodically. It's best if you can direct search engines to read only the versions of pages you want indexed; telling them after they have fetched the URL that they shouldn't index the content means a lot of wasted traffic.
As such, it's best if the URLs for print pages, email-to-a-friend pages, and any other per-node accessory pages can be excluded as a set using robots.txt.
Currently the Print Friendly Pages module uses URLs like http://www.example.com/node/766/print, which can't be excluded by robots.txt because they lack a common prefix. By contrast, if it used a URL like http://www.example.com/print/node/766, then robots.txt could exclude everything under http://www.example.com/print/.
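With a common /print/ prefix, a single rule would cover every print page. A minimal sketch of the robots.txt entry (assuming robots.txt lives at the site's web root and the module adopts the /print/ prefix proposed above):

```
User-agent: *
Disallow: /print/
```

Because the Robots Exclusion Protocol matches by path prefix, this one Disallow line would keep well-behaved crawlers away from every print version, including any future ones, without listing nodes individually.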
If people specify URL aliases using the Path module, it would be nice to have the print module use corresponding URLs like http://www.example.com/print/user/specified/path/, so that aliased pages would be caught by the same robots.txt exclusion.