
I am developing a workbench to "warehouse" my organization's data on a weekly basis. The data is spatial in nature and stored in a SQL Server database; it needs to be moved to an Oracle environment every week.

 

Reading the data into the workbench has been very slow, especially for one feature class that contains around 9.5 million point records. During some testing, I noticed that when I remove the writer, the read time dramatically decreases (from around 40 mins to 10 mins). Is this expected behavior? Are there any ways that I can increase the performance of reading/writing these larger datasets?

A few things to consider


Thanks for the answer. In my case, the writes actually complete relatively quickly. I'm primarily concerned with reading the data into the workbench; that's what is taking the most time.

 

Maybe I misunderstand the logs, but it seems like my features are only written once everything is read into the workspace. Are the writes happening continuously even though no log message is being written? What else would account for the reads taking 4x as long once a writer is connected in my workspace?

@spalka​ If you have multiple writers, then the data for the second writer is cached. If you are writing to the same Oracle database, ensure you have only one writer, with multiple Feature Types associated with that single writer.

 

Make sure Feature Caching is OFF

 

If you can, attach a log file and your workspace. That might help the community identify the cause of the issue.


@spalka​ As @david_r​ suggests, experiment a little with the Transaction Size and Bulk Write Size on the Oracle writer. I did a quick evaluation, and the combination of Transaction Size and Bulk Write Size definitely makes a difference:

Bulk Write Size 200 (default), Transaction Size 1000: 1' 17"

Bulk Write Size 1000, Transaction Size 1000: 42"

Bulk Write Size 5000, Transaction Size 5000: 29"

Bulk Write Size 10000, Transaction Size 10000: 25"

Bulk Write Size 20000, Transaction Size 20000: 25"

The sweet spot probably depends on the database configuration and the network latency (how close FME is to the database). In this case, for Oracle Cloud, a Bulk Write Size between 5000 and 10000 gave the best performance.
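To see why a larger bulk size helps, it can be useful to think of each bulk write as one round trip to the database. The sketch below is not FME internals; it is a minimal Python illustration of the same idea using a batched insert pattern (the table name, columns, and cursor are placeholders). With 9.5 million rows, a batch size of 200 means roughly 47,500 round trips, while 5000 means about 1,900, which is why raising the value helps until network latency stops being the bottleneck.

```python
# Minimal sketch: why a larger bulk/batch write size reduces round trips.
# The SQL statement and row shape here are hypothetical placeholders.
from itertools import islice


def batches(rows, size):
    """Yield successive lists of at most `size` rows from an iterable."""
    it = iter(rows)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk


def insert_in_batches(cursor, rows, bulk_size):
    """Insert rows using one executemany() call per batch.

    Each executemany() call is roughly one round trip to the database,
    so the number of trips is ceil(len(rows) / bulk_size).
    Returns the number of batches sent.
    """
    sql = "INSERT INTO points (id, x, y) VALUES (:1, :2, :3)"
    trips = 0
    for chunk in batches(rows, bulk_size):
        cursor.executemany(sql, chunk)  # one round trip per batch
        trips += 1
    return trips
```

The same logic applies regardless of the client library: fewer, larger batches amortize per-trip latency, which matters most when FME and the database are far apart on the network.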
