I have a workbench that has to process about 600,000 lines and apply the resulting updates to a master dataset (.gdb). The attached workbench is a small sample of the full process. The main issue I am running into is that the update detector is extremely slow: a sample run of about 1,000 lines took just over two hours. Any suggestions for speeding it up would be much appreciated.
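Conceptually, what the update detection step has to do for each incoming line is roughly the following (a hedged Python sketch with a made-up key field, just to illustrate the per-record comparison at this scale, not my actual workbench logic):

```python
def classify_updates(master_rows, incoming_rows, key="id"):
    """Split incoming rows into inserts, updates, and unchanged rows
    by comparing them against the master dataset on a key field.
    Building a dict first makes each lookup O(1) instead of a scan."""
    master = {row[key]: row for row in master_rows}
    inserts, updates, unchanged = [], [], []
    for row in incoming_rows:
        existing = master.get(row[key])
        if existing is None:
            inserts.append(row)       # no master record with this key
        elif existing != row:
            updates.append(row)       # key matches but attributes differ
        else:
            unchanged.append(row)     # identical record already in master
    return inserts, updates, unchanged
```

With 600,000 incoming lines, the difference between a keyed lookup like this and a per-record scan of the master dataset is what drives the runtime.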
I have also found that when I try to run the workbench against the complete dataset, the FeatureMerger rejects the suppliers with the rejection code EXTRA_REFERENCEE_FEATURE. This only seems to occur when the number of suppliers gets quite large. I have seen this rejection code mentioned in connection with duplicates, but I don't have any duplicates in the complete supplier dataset.
Any help on either issue would be much appreciated.
Sample of bench
Pic of whole bench