
I'm trying to compare two datasets by looking at the address attribute string used in both and computing fuzzy match ratios. The read process was fast: it loaded the 350,000 records in dataset 1 and the 14,000 records in dataset 2 in less than a minute. I then sort both lists separately and run them through the FuzzyStringCompareFrom2Datasets transformer. I have been running this workspace all day (about 5 hours so far) and it has output only 288 records. Is there a way to speed this up?

How large are the strings that you're comparing using the FuzzyStringCompare?

Which format are you writing to?

What happens if you disable the writer?

Also, do you really need the two Sorters?


FuzzyStringCompareFrom2Datasets does not appear to handle datasets this large. What it does is, for each of your 350,000 features, build a list of the 14,000 features, search through that list, and compare each string to the string attribute you chose to compare (approximately 4,900,000,000 comparisons). It then sorts every list by its ratio (350,000 sorts of a list of length 14,000) and picks the one with the greatest accuracy. This will of course be very time-consuming at the sizes you are operating with.
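
To make that cost concrete, here is a minimal sketch of the brute-force pattern in plain Python, using only the standard library's difflib. The transformer's internals aren't published, so this illustrates the algorithm described above rather than its actual code; the two small lists stand in for the real datasets:

from difflib import SequenceMatcher

dataset1 = ["123 MAIN ST", "456 OAK AVE"]   # master table (350,000 records in practice)
dataset2 = ["123 MIAN ST", "789 ELM RD"]    # entries to match (14,000 records in practice)

for addr1 in dataset1:
    # One full pass over dataset2 per dataset1 feature:
    # 350,000 x 14,000 = 4,900,000,000 ratio computations in total.
    scored = [(SequenceMatcher(None, addr1, addr2).ratio(), addr2) for addr2 in dataset2]
    scored.sort(reverse=True)             # sort all 14,000 candidates by ratio...
    best_ratio, best_match = scored[0]    # ...just to keep the single best one
    print(addr1, "->", best_match, round(best_ratio, 2))

Note that sorting the whole candidate list only to keep the top entry adds to the cost; a single max() pass would find the same answer.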


The strings are over 100 characters long because I'm combining address parts earlier in the workflow (street number, prefix, name, type, suffix). I'm writing to Excel. If I disable the writer or just connect it to an Inspector, it is just as slow. I don't need the two Sorters, but I added them after the first couple of runs thinking that sorting might make the transformer work more efficiently.
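
Since the comparison strings are concatenated address parts, normalizing them while combining (consistent case, collapsed whitespace) usually makes the ratios more meaningful. A small sketch, with the part names taken from the post but the function itself hypothetical:

import re

def combine_address(number, prefix, name, street_type, suffix):
    # Join whichever parts are present into one comparison string.
    combined = " ".join(p for p in (number, prefix, name, street_type, suffix) if p)
    combined = combined.upper()                    # case-insensitive comparison
    return re.sub(r"\s+", " ", combined).strip()   # collapse stray whitespace

print(combine_address("123", "N", "Main", "St", ""))   # -> 123 N MAIN ST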


I'm looking for a way to compare address entries (dataset 2) against a master address table (dataset 1). The addresses in dataset 2 initially did not have an exact match in the master table, so it would be nice to see any other problems with the entries, since users can enter anything they want. I have a few ways to narrow down dataset 2 (missing data, wrong city/county), but that only reduces it by about 20%. Ideally the output of this transformer would demonstrate that users are entering very dirty data that does not match well against our address table, but I need the evidence. I guess there are other ways to do this, but I was looking for something fast in FME that I could run multiple times a week.
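
If a PythonCaller is an option, one way to get runs that finish in minutes rather than hours is the third-party rapidfuzz library, which does the same one-against-many ratio search in optimized C++. This is a sketch under that assumption, not something from the thread; the 70% cutoff is an illustrative value:

from rapidfuzz import fuzz, process

master = ["123 N MAIN ST", "456 OAK AVE"]    # dataset 1: master address table
entries = ["123 N MIAN ST", "789 ELM RD"]    # dataset 2: user-entered addresses

for entry in entries:
    # extractOne scans the whole master list and returns the best match,
    # or None if nothing scores above the cutoff.
    result = process.extractOne(entry, master, scorer=fuzz.ratio, score_cutoff=70)
    if result:
        match, score, _ = result
        print(f"{entry} -> {match} ({score:.0f}%)")
    else:
        print(f"{entry} -> no match above 70% (candidate for the dirty-data report)")

Looping over the 14,000 entries instead of the 350,000 master records also yields one output row per entry, with its best match and score, which is the kind of evidence described above.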
