Hi all,
My question is about performance, actually...
My workflow reads data from shapefiles, trims it, performs change detection, and writes the result to an SDE geodatabase. Some of my datasets contain 5,000,000 features, so a full run takes forever, which is not a problem in production mode.
In the testing phase, though, it is a big hassle when I'm tweaking and changing settings. I have added a Tester to both my original data source and the revised data source to filter out a small portion of the data, but the workspace still has to read every single feature from both sources before the Testers can discard anything...
So my question is: is there a way to avoid reading everything from both datasets?
Thanks :-)