
More information for my question: we are not using SQL AG in our system, though ideally we will move to it. For now, I have 75 tables I need from one database that has roughly 300 tables; I only want those 75 tables in this data copy. We are building this new database as a 'datamart' for reporting and other tools to pull data from, so as not to affect our Production system.

I am testing this today and finding the data takes longer than I expect to copy from one server to another. I am testing with Workbench for now, but once I have good results I'll push the workspace to FME Server to schedule this work. For my first test, I am pulling all data and I see that 2,500 features are pulled at a time. I am using a static reader, not a dynamic reader. For my second test, I will refine the larger tables (over 1 million records) by adding a WHERE condition to limit the number of records we pull.

I am attempting to get this to build within 5 minutes, and I am looking for Safe's fastest way to pull data from one database to another. There are no transformers in the workspace; it simply reads and writes the data. I am curious what the Safe team would do to attempt to build this within a 5-minute time frame (if it is even possible). The data is stored within our network and is all located within the same geography. It has no geometry data types either.

The data took roughly 2 1/2 hours to copy over 30 million features. For the second test, when I added a WHERE clause and reduced the features, I was able to populate the 75 tables in 1 minute 10 seconds, with only 127,000 records required. Once we start to use SQL AG, this issue of moving 30 million records will no longer be present. My next set of tests will use a FeatureMerger to update the 127,000-record set while still keeping within my 5-minute time window.
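The FeatureMerger-style update described above boils down to an upsert: overwrite matching records, append new ones. A minimal sketch of that idea in plain Python (the sample records and the `key` field are invented for illustration; in FME this logic lives inside the workspace, not in code):

```python
def merge_records(target, updates, key="id"):
    """Upsert: rows in updates replace matching target rows or are appended."""
    by_key = {row[key]: row for row in target}
    for row in updates:
        by_key[row[key]] = row  # overwrite an existing key or add a new one
    return list(by_key.values())

existing = [{"id": 1, "val": "old"}, {"id": 2, "val": "keep"}]
incoming = [{"id": 1, "val": "new"}, {"id": 3, "val": "added"}]
merged = merge_records(existing, incoming)
```

Updating only the changed 127,000 records this way, instead of reloading all 30 million, is what makes the 5-minute window plausible.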


Hi @tiewoods, which version of FME are you using? Our betas have a FeatureJoiner, which can be used to perform SQL-based joins (inner, outer, left), and it's quite a bit faster than the FeatureMerger. (See Don's and Dale's demo of it during the Unveiling 2018 webinar.)
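For readers unfamiliar with the join modes the FeatureJoiner offers, here is a rough sketch of inner- vs. left-join semantics in plain Python (the sample records and the `key` argument are made up for illustration; this is not FME's implementation):

```python
def left_join(left, right, key):
    """Left join: keep every left row; merge right attributes when keys match."""
    right_by_key = {r[key]: r for r in right}
    return [{**row, **right_by_key.get(row[key], {})} for row in left]

def inner_join(left, right, key):
    """Inner join: keep only rows whose key exists on both sides."""
    right_by_key = {r[key]: r for r in right}
    return [{**row, **right_by_key[row[key]]} for row in left
            if row[key] in right_by_key]

orders = [{"id": 1, "item": "pen"}, {"id": 2, "item": "ink"}]
customers = [{"id": 1, "name": "Ada"}]
left_join(orders, customers, "id")   # id 2 is kept, without a "name"
inner_join(orders, customers, "id")  # only id 1 survives
```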

Using a WHERE clause will definitely speed up the reading process, but of course it limits the number of features you are reading in. Depending on your database and environment (i.e., whether other users need access), you can use a WHERE clause to break the data transfer into chunks.
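As a sketch of that chunking idea: generate a series of WHERE clauses over a numeric key and run the reader once per range. The `OBJECTID` column name and the range sizes below are assumptions for illustration, not taken from the original workspace:

```python
def chunk_where_clauses(min_id, max_id, chunk_size, column="OBJECTID"):
    """Yield WHERE clauses that cover [min_id, max_id] in fixed-size ranges."""
    low = min_id
    while low <= max_id:
        high = min(low + chunk_size - 1, max_id)
        yield f"{column} BETWEEN {low} AND {high}"
        low = high + 1

# 30 million ids split into 5-million-row chunks -> 6 reader runs
clauses = list(chunk_where_clauses(1, 30_000_000, 5_000_000))
```

Each clause can be supplied as the reader's WHERE parameter (or a published workspace parameter), so separate runs each move a manageable slice of the data.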

Building a workspace in 5 minutes is definitely doable. Running it with 30 million features, on the other hand, will take some more time.

Creating a dump/export of the database might be an efficient way of handling this as well.



I have yet to upgrade to FME 2018, so it's great to know about the FeatureJoiner. We are using FME 2017.1 (20170731 - Build 17539 - WIN64). I will be running more tests today, and I agree that 30 million records in 5 minutes is something that will take much more time. Thank you for your reply.

