Well, for the record: I finished the first prototype of a new "scraper" module in November, but have not contributed it yet. Currently you run this on a test/dev site, then import the data into your production site. Requires: Drupal 4.7, PHP5, and the Tidy extension.
Here's how it works.
(1) You define the dynamic content you want to import into your website. Maybe it's prices of some products. Maybe it's weather data. Maybe it's your prospective fiancee's evolving criminal record.
(2) You create a new "scraper job". Using a combination of config settings, XPath, and PHP scripting, you encode the job with all the information needed to fetch and extract the desired data.
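To make that concrete, here's a rough sketch of the kind of thing a single job step boils down to. This is illustrative PHP only, not the module's actual API; the URL and the td[@class="price"] selector are made up:

  <?php
  // Illustrative only: fetch a page, clean it with Tidy, then pull
  // values out with an XPath expression.
  $html = file_get_contents('http://example.com/products');

  // Tidy repairs real-world tag soup so the DOM parser can load it.
  $tidy = tidy_parse_string($html, array('output-xhtml' => TRUE), 'utf8');
  tidy_clean_repair($tidy);

  $doc = new DOMDocument();
  @$doc->loadHTML(tidy_get_output($tidy));

  $xpath = new DOMXPath($doc);
  $prices = array();
  foreach ($xpath->query('//td[@class="price"]') as $cell) {
    $prices[] = trim($cell->nodeValue);
  }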
(3) The output of this module is currently a CSV file (though I could easily spit out some flavor of XML or whatever).
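If you're curious, the CSV step is about as simple as PHP's fputcsv() makes it (needs PHP 5.1+). The rows, column names, and file path here are all placeholders:

  <?php
  // Minimal sketch of the CSV output step. Everything here is a
  // placeholder: real jobs produce their own rows and columns.
  $rows = array(
    array('Widget A', '9.99', '2007-01-12'),
    array('Widget B', '14.50', '2007-01-12'),
  );
  $fh = fopen('/tmp/scraper_output.csv', 'w');
  fputcsv($fh, array('product', 'price', 'fetched'));  // header row
  foreach ($rows as $row) {
    fputcsv($fh, $row);
  }
  fclose($fh);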
(4) You print out the CSV file, sit on it, and spin. Or you use node_import to import the data as nodes. Whatever you want to do with it (as long as it's legal).
I have this sucker working for all of my ~dozen test sites. It has some pretty slick capabilities:
* ability to post form info (logins, search forms, etc; see the first sketch after this list),
* ability to read page/form info and traverse multiple pages of data,
* ability to recursively call other scraper jobs, to get data that is spread across multiple pages,
* ability to use regular expressions (and some regex helper functions) to extract, e.g., phone numbers or dates from a text field (see the second sketch after this list)
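First sketch: form posting. This is a hypothetical version of the login step using Drupal 4.7's drupal_http_request(); the URL and field names are assumptions about the target site, and whether you get a usable Set-Cookie back depends on the site:

  <?php
  // Hypothetical form post. The login URL and the name/pass field
  // names are assumptions, not anything the module dictates.
  $data = 'name=' . urlencode('myuser') . '&pass=' . urlencode('mypass');
  $headers = array('Content-Type' => 'application/x-www-form-urlencoded');
  $result = drupal_http_request('http://example.com/user/login', $headers, 'POST', $data);

  if (isset($result->headers['Set-Cookie'])) {
    // Carry the session cookie into subsequent page fetches.
    $cookie = $result->headers['Set-Cookie'];
  }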
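Second sketch: regex extraction. The patterns below are stand-ins for the module's helper functions, which I'm not showing here:

  <?php
  // Illustrative extraction: pull a US-style phone number and an
  // ISO date out of a blob of text.
  $text = 'Call 555-867-5309 before 2007-01-15 to confirm.';

  $phone = $date = NULL;
  if (preg_match('/\b\d{3}[-.]\d{3}[-.]\d{4}\b/', $text, $m)) {
    $phone = $m[0];  // "555-867-5309"
  }
  if (preg_match('/\b\d{4}-\d{2}-\d{2}\b/', $text, $m)) {
    $date = $m[0];   // "2007-01-15"
  }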
I have worked with and reverse-engineered three commercial products in this space, and applied those lessons here. But it is still pre-beta.