Solved

FuzzyStringCompareFrom2Datasets Slow

  • August 8, 2018
  • 4 replies
  • 178 views

I'm trying to compare two datasets by looking at the address attribute string used in both datasets and find the fuzzy matching ratios. The read process was fast and read the 350,000 records in dataset 1 and the 14,000 records in dataset 2 in less than a minute. I then sort both lists separately and then use the FuzzyStringCompareFrom2Datasets transformer. I have been running this workspace all day (about 5 hours so far) and it has only output 288 records. Is there a way to speed this up?

Best answer by paalped

FuzzyStringCompareFrom2Datasets does not look like it can handle datasets this big. For each of your 350,000 features it builds a list of the 14,000 features from the other dataset, searches through that list, and compares each string to the attribute you chose to compare, which comes to roughly 4,900,000,000 comparisons. It then sorts every list by its ratio (350,000 sorts of a 14,000-element list) and keeps the match with the highest ratio. This will of course be very time consuming at the sizes you are working with.


4 replies

david_r
Celebrity
  • 8391 replies
  • August 9, 2018

How large are the strings that you're comparing using the FuzzyStringCompare?

Which format are you writing to?

What happens if you disable the writer?

Also, do you really need the two Sorters?


paalped
Contributor
  • 130 replies
  • Best Answer
  • August 9, 2018

FuzzyStringCompareFrom2Datasets does not look like it can handle datasets this big. For each of your 350,000 features it builds a list of the 14,000 features from the other dataset, searches through that list, and compares each string to the attribute you chose to compare, which comes to roughly 4,900,000,000 comparisons. It then sorts every list by its ratio (350,000 sorts of a 14,000-element list) and keeps the match with the highest ratio. This will of course be very time consuming at the sizes you are working with.
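
A rough Python sketch of that brute-force pattern (not FME's actual implementation, just to show where the 4,900,000,000 figure comes from, using Python's difflib ratio as a stand-in for whatever similarity measure the transformer uses):

# Rough sketch of the brute-force pattern described above - not FME's code,
# just an illustration of the comparison count.
from difflib import SequenceMatcher

def fuzzy_ratio(a, b):
    # One common similarity score; the transformer may use a different metric.
    return SequenceMatcher(None, a, b).ratio()

def best_matches(dataset1_strings, dataset2_strings):
    results = []
    for s1 in dataset1_strings:                    # 350,000 features
        scored = [(fuzzy_ratio(s1, s2), s2)        # 14,000 comparisons per feature
                  for s2 in dataset2_strings]
        scored.sort(reverse=True)                  # one 14,000-element sort per feature
        results.append((s1, scored[0]))            # keep the highest ratio
    return results

# Total ratio calls: 350,000 * 14,000 = 4,900,000,000

The sorting is cheap compared to the 4.9 billion ratio calls themselves.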


  • Author
  • 2 replies
  • August 9, 2018

How large are the strings that you're comparing using the FuzzyStringCompare?

Which format are you writing to?

What happens if you disable the writer?

Also, do you really need the two Sorters?

The strings are over 100 characters long because I'm combining address parts (street number, prefix, name, type, suffix) earlier in the workflow. I'm writing to Excel. If I disable the writer or just connect it to an Inspector it is just as slow. I don't need the two Sorters, but I added them after the first couple of runs thinking that sorting might make the transformer work more efficiently.
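
For a rough sense of why that turns into hours rather than minutes, here is a back-of-the-envelope timing sketch (assuming a difflib-style ratio on combined address strings is representative of the per-comparison cost; the example addresses are made up):

# Time one ratio call on combined address strings, then scale it up
# to the 350,000 x 14,000 comparisons described above.
import timeit
from difflib import SequenceMatcher

a = "1234 N MAIN ST STE 100 EXAMPLE CITY"            # hypothetical combined address
b = "1234 NORTH MAIN STREET SUITE 100 EXAMPLE CITY"  # hypothetical near-match

per_call = timeit.timeit(lambda: SequenceMatcher(None, a, b).ratio(),
                         number=10_000) / 10_000
total_seconds = per_call * 350_000 * 14_000
print(f"~{per_call * 1e6:.0f} microseconds per comparison")
print(f"~{total_seconds / 3600:.1f} hours for all 4.9 billion comparisons")

Even at a few microseconds per comparison, 4.9 billion of them lands in the hours-to-days range, regardless of the writer format or the Sorters.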


  • Author
  • 2 replies
  • August 9, 2018

FuzzyStringCompareFrom2Datasets does not look like it can handle datasets this big. For each of your 350,000 features it builds a list of the 14,000 features from the other dataset, searches through that list, and compares each string to the attribute you chose to compare, which comes to roughly 4,900,000,000 comparisons. It then sorts every list by its ratio (350,000 sorts of a 14,000-element list) and keeps the match with the highest ratio. This will of course be very time consuming at the sizes you are working with.

I'm looking for a way to compare address entries (dataset 2) against a master address table (dataset 1). The addresses in dataset 2 initially did not have an exact match with the master table, so it would be nice to see what other problems might exist with each entry, since users can enter anything they want. I have a few ways to narrow down dataset 2 (missing data, wrong city/county), but that only reduces it by about 20%. Ideally the output from this transformer would show that users are entering very dirty data that doesn't match well with our address table, but I need to prove that. I guess there are other ways to do this, but I was looking for something fast in FME that I could run multiple times a week.