I am running a large scrape across thousands of pages, and I am running into a problem where my .xls file ends up at 0 bytes once it reaches a certain size.
I am running Win7 64-bit, ZennoPoster 4.5.02. My project visits multiple URLs, each containing several rows of info that I store in the table, so the file updates each time it visits a new URL.
When it first starts, my .xls file saves normally, slowly growing in size.
When the file gets to around 5 MB, I notice it starts alternating between 0 bytes and its actual size, though it is still growing.
After around 15 MB (around 55,000 rows, I believe), the file stays at 0 bytes even though the project still appears to be scraping. I've never let it reach the end of a FULL scrape (I estimate around 450,000 rows total) because of this 0-byte issue.
Is this a known issue?