Hi,
I need to read approx. 1.4M records within a single workbench process. There's no getting around it: all of them must be read, since they feed into a FeatureMerger later and any one of the features might be a candidate.
The reader starts out fine, reading 2,500 records at a time in under 10 seconds per batch. However, this slowly degrades: after 800K records each batch takes 30 seconds, and near the end close to 50. Soup to nuts, the entire read takes about 4 hours 45 minutes. RAM and network speed really aren't the bottleneck, so the gradual increase in per-batch time seems odd.
Is there any tip/trick for reading a large number of records (>300K) from a PostGIS (or any) reader so that the total time is significantly reduced? I was thinking of increasing the "Number of Records To Fetch At A Time" parameter, but I don't want to make matters worse. As a sanity check, I sketched a quick script (below) to see whether the slowdown is on the database side at all.
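For what it's worth, here's the minimal psycopg2 sketch I'm considering running outside FME to isolate the problem. It streams the table through a server-side (named) cursor in batches of 2,500 and prints the elapsed time every 100K rows; if this also degrades, the issue is PostgreSQL-side rather than the reader. The connection string and table name are placeholders for my setup:

import time
import psycopg2

# Placeholder connection details -- adjust to your environment.
conn = psycopg2.connect("dbname=gis user=pete host=localhost")

# A named cursor is server-side, so rows stream from PostGIS in
# batches instead of being materialized on the client all at once.
cur = conn.cursor(name="bulk_read")
cur.itersize = 2500  # rows fetched per round trip, mirroring the reader setting

cur.execute("SELECT * FROM my_table")  # hypothetical table name

count = 0
start = time.time()
for row in cur:
    count += 1
    if count % 100_000 == 0:
        # If seconds-per-100K grows as count grows, the database
        # side is degrading too, not just the workbench reader.
        print(f"{count} rows read, {time.time() - start:.1f}s elapsed")

cur.close()
conn.close()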
Thanks very much,
Pete