More information for my question: we are not using SQL AG in our system, though ideally we will move to it. For now, I have 75 tables in one database that has roughly 300 tables, and I only want those 75 tables in this data copy. We are building this new database as a 'datamart' for reporting and other tools to pull data from, so as not to affect our Production system.

I am testing this today and finding the data takes longer than I expected to copy from one server to another. I am testing with Workbench for now, but once I have good results I'll push the workspace to FME Server to schedule this work. For my first test, I am pulling all data and I see that 2,500 features are pulled at a time. I am using a static reader, not a dynamic reader. For my second test, I will refine the larger tables (over 1 million records) by adding a WHERE condition to limit the number of records we pull.

I am attempting to get this to run within 5 minutes, and I am looking for Safe's fastest way to pull data from one database to another. There are no transformers in the workspace; it simply reads and writes the data. I am curious what the Safe team would do to attempt this within a 5-minute time frame (if it is even possible). The data is stored within our network and is all located within the same geography. It has no geometry data types either.
Hi @tiewoods, which version of FME are you using? Our betas have a FeatureJoiner, which can be used to perform SQL-style joins (inner, outer, left), and it's quite a bit faster than the FeatureMerger. (See Don's and Dale's demo of it during the Unveiling 2018 webinar.)
Using a WHERE clause will definitely speed up the reading process, though of course it limits the number of features you read in. Depending on your database and environment (i.e. if other users need access), you can use a WHERE clause to break the data transfer into chunks.
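To illustrate the chunking idea outside of FME, here is a minimal sketch in Python using the stdlib sqlite3 module (the `parcels` table, its `id` key column, and the in-memory databases are hypothetical stand-ins for your source and target servers; the same WHERE-range pattern applies to any database with an indexed integer key):

```python
import sqlite3

def copy_in_chunks(src, dst, table, key, chunk_size=2500):
    """Copy rows from src to dst in WHERE-clause chunks keyed on an integer column."""
    lo, hi = src.execute(f"SELECT MIN({key}), MAX({key}) FROM {table}").fetchone()
    for start in range(lo, hi + 1, chunk_size):
        # Each pass reads one half-open key range, mirroring a WHERE clause
        # you would set on the reader.
        rows = src.execute(
            f"SELECT * FROM {table} WHERE {key} >= ? AND {key} < ?",
            (start, start + chunk_size),
        ).fetchall()
        if rows:
            placeholders = ",".join("?" * len(rows[0]))
            dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    dst.commit()

# Demo with in-memory databases standing in for the two servers.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE parcels (id INTEGER PRIMARY KEY, owner TEXT)")
src.executemany("INSERT INTO parcels VALUES (?, ?)",
                [(i, f"owner_{i}") for i in range(1, 10001)])
dst.execute("CREATE TABLE parcels (id INTEGER PRIMARY KEY, owner TEXT)")
copy_in_chunks(src, dst, "parcels", "id", chunk_size=2500)
print(dst.execute("SELECT COUNT(*) FROM parcels").fetchone()[0])  # 10000
```

Because each chunk is an independent range, the chunks can also be run as separate jobs in parallel if the servers can handle the concurrent load.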
Building a workspace in 5min is definitely do-able. Running it, with 30 million features on the other hand... will take some more time.
Creating a dump/export of the database might be an efficient way of handling this as well.
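As a small illustration of the dump/export approach, the sketch below uses Python's sqlite3 backup API as a stand-in for your database's native bulk tool (e.g. bcp or a backup/restore on SQL Server); the point is that one native bulk operation avoids the per-feature overhead of row-by-row reads:

```python
import sqlite3

# Source database with some sample data (hypothetical table name).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE parcels (id INTEGER PRIMARY KEY, owner TEXT)")
src.executemany("INSERT INTO parcels VALUES (?, ?)",
                [(i, f"owner_{i}") for i in range(100)])
src.commit()

# One native bulk copy of the whole database, no per-row round trips.
dst = sqlite3.connect(":memory:")
src.backup(dst)
print(dst.execute("SELECT COUNT(*) FROM parcels").fetchone()[0])  # 100
```

The trade-off is that a dump copies everything; to restrict the transfer to your 75 tables you would still need a per-table export step or a post-restore cleanup.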