I have a gazillion tables with SDO_GEOMETRY columns in a single Oracle instance. Most of those tables require a complex join to other tables to pull in the attributes that control the display.
I have a handful of workspaces to extract this data to FFS, each focused on a specific geometry type: Points, Labels, Lines, Polygons.
Having just read about Hadoop, MapReduce, and all that functional architecture, I got it in my head that I should map each geometry table to its function (i.e., its geometry workspace) and let FME Server go wild on them.
What I decided to do was launch a workspace for each table and pass the join criteria in as parameters for use in a SQLCreator (roughly as sketched below). So I end up submitting 250+ jobs, each running one of 6 workspaces to process a single table, all spread across 8 engines.
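For context, each SQLCreator ends up issuing something like the following. The table and column names here are invented, but the shape is right: the criteria arrive as a workspace parameter and get spliced into the SQL as a string literal.

    SELECT g.feature_id,
           g.geom,
           a.symbol_code,
           a.display_scale
    FROM   gis_points g
    JOIN   point_attrs a
           ON a.feature_id = g.feature_id
    WHERE  a.wire_center = 'WC_1234'  -- literal spliced in from the job's parameter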
Well, you can imagine what Oracle does in response to this. It basically locks up.
My question is: what is the right way to scale this? I haven't tried a single workspace with 250+ readers/feature types, because all the cutting and pasting, and the redundancy of the same transformers over and over, seems silly.
So I was wondering: what is Oracle's problem with servicing 8 connections at a time, each servicing one table and its related records?
I've been told that it can't hard-parse that many SQL statements at once (8!? Seriously?!), that the queries need to use bind variables (I posted a separate question about that), etc.
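If I understand the hard-parse point correctly, the issue isn't the 8 connections so much as the literals: every one of my 250+ jobs sends Oracle a textually unique statement, so each one gets hard-parsed, and those hard parses serialize on the shared pool. With a bind variable, all the jobs would share one cached cursor. A sketch of the difference, using the same invented names as above:

    -- literal version: 250+ textually distinct statements, one hard parse each
    WHERE a.wire_center = 'WC_1234'

    -- bind-variable version: one shared statement, hard-parsed once
    WHERE a.wire_center = :wire_center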
I've been told to use views, but that is problematic because the views would have to be created on the fly: the criteria select an arbitrary subset of any given table (in my case a wire center, a feeder, or a circuit), and I can't create a view for every sub-group of data that might be requested (see the sketch below).
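To make that concrete, the views approach would have me generating throwaway DDL like this for every request, which is exactly the kind of per-request uniqueness I'm trying to get away from (again, names invented):

    -- one view per requested subset: doesn't scale past a few fixed criteria
    CREATE OR REPLACE VIEW v_points_wc_1234 AS
    SELECT g.feature_id, g.geom, a.symbol_code, a.display_scale
    FROM   gis_points g
    JOIN   point_attrs a ON a.feature_id = g.feature_id
    WHERE  a.wire_center = 'WC_1234';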
Any suggestions welcome,
Thanks
Brad