Hi,

here's an article that is relevant even though it uses an ESRI Geodatabase as the input database:
http://fmepedia.safe.com/articles/How_To/How-to-Read-All-Feature-Classes-from-Multiple-ESRI-Geodatabases

The principle is the same for Oracle Spatial.

David
I am able to read a single Oracle Spatial table using the Schema (Any Format) reader, but how do I read multiple feature classes, i.e. tables?
I am still finding it difficult to develop this workbench.

In my case, I have a table which holds the names of the tables I want to transform to SHAPE. To explain further, I have a table called "TAB_LIST" which contains a list of tables, anywhere from 1 to 20. Also, the schema for these tables is different every time.

As I said in the original post, if I pass these names as a parameter (comma-separated list) it works. So I'm thinking of putting the list in an Oracle table and getting the list of tables to transform from there. That way I also don't have to worry about writing a schema file.

-John
Hi John,

There could be several ways, but automating the process "create the table names list and pass it to the Oracle reader" could be a quick way, since you have already succeeded in the translation by passing the list to the reader manually.

How about using the WorkspaceRunner?

- Create another workspace, and add a reader to read the "TAB_LIST" table.

- Transform the features from the reader into a single feature which has the table names list as an attribute. The Aggregator (set the "Attributes to Concatenate" and "Separator Character" parameters) may be an easy way to do that.

- Add a WorkspaceRunner to call the main workspace and pass the attribute (i.e. the table names list) to the "Feature Types to Read" parameter.
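The Aggregator step above is GUI configuration, but conceptually it just joins one attribute value per feature into a single delimited string. A plain-Python sketch of what it produces (the table names are illustrative, not from the thread):

```python
# Plain-Python sketch of the Aggregator step's output (not FME API).
# Table names are illustrative placeholders for values read from TAB_LIST.
table_names = ["ROADS", "PARCELS", "RIVERS"]

# Concatenate with the "Separator Character" to get the value passed
# to the "Feature Types to Read" parameter.
feature_types_to_read = ",".join(table_names)
print(feature_types_to_read)  # -> ROADS,PARCELS,RIVERS
```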
Takashi
Thanks Takashi. But I want to avoid using the WorkspaceRunner, as I will be publishing this to FME Server.
OK, my second approach:

Applying "Destination Schema is Derived from List Attributes"
(http://fmepedia.safe.com/articles/Samples_and_Demos/Dynamic-Workflow-Tutorial-Destination-Schema-is-Derived-from-List-Attributes)

1. Read data features

Read the target table names from the "TAB_LIST" table,

create a single feature having the table names list as an attribute,

send the feature to the FeatureReader to read data features from the specified tables,

and add an AttributeExposer to expose "fme_feature_type".
2. Read schema features

Add an Oracle Spatial reader as a workspace resource, configure the connection appropriately, and check every table for "Feature Types to Read".

Add a Schema (Any Format) reader to the canvas:

FME 2015: Link its "Source Dataset" parameter to the Source Dataset parameter of the resource reader.

FME 2014: Set the dataset (database) name in its "Source Dataset" parameter, and select the format name (Oracle Spatial) explicitly for the "Input Format" parameter.

The Schema reader creates features having schema information as a list attribute called "attribute{}", for each feature type (table) in the source dataset. The list can be used to dynamically configure the destination schema for the writer.
3. Merge the schema features into the data features to add the schema information ("attribute{}") to the data features, with the FeatureMerger.

Join On:

Requestor (data feature): fme_feature_type

Supplier (schema feature): fme_feature_type_name

4. Set up the dynamic writer

Add a NULL reader as a workspace resource.

Send the merged features from the FeatureMerger to the Shape writer.

Check "Dynamic Properties" in the writer properties dialog, and select the NULL reader as its "Schema Source".

Hope this works.

Takashi
Sorry, there was an issue in the method I mentioned above. The Schema reader reads only the schema of the "TAB_LIST" table, because it refers to the configuration of the first Oracle reader.

Add the resource Oracle reader to the canvas and put it at the top in the Navigator.
Use your table list to query the user tables; you can query the column names, and even their formatting. Create schemas from that.

Then have your workspace query these schemas on demand.

...the rest is standard dynamic writing stuff... ;)

SELECT *
FROM   user_tables ut,
       user_tab_columns utc
WHERE  ut.TABLE_NAME IN (YourTableList)
       AND utc.TABLE_NAME = ut.TABLE_NAME
       AND ut.num_rows > 0

This reads the user columns (use the join to read only from utc).

The format information is in the "DATA_TYPE", "DATA_LENGTH", etc. columns.
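As a rough sketch of what "create schemas from that" could look like, assuming hypothetical rows shaped like the query result above (table, column, type, and length values are all illustrative):

```python
# Hypothetical rows as the user_tab_columns query above might return them:
# (TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_LENGTH) - all values illustrative.
rows = [
    ("ROADS", "ID", "NUMBER", 22),
    ("ROADS", "NAME", "VARCHAR2", 100),
    ("PARCELS", "APN", "VARCHAR2", 20),
]

# Group the column definitions per table to form a simple schema description.
schemas = {}
for table, column, data_type, length in rows:
    schemas.setdefault(table, []).append((column, "%s(%d)" % (data_type, length)))

print(schemas["ROADS"])  # -> [('ID', 'NUMBER(22)'), ('NAME', 'VARCHAR2(100)')]
```

Each per-table list could then be written out (or mapped to FME data types) as the schema for that table.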
I thought too much. It's not necessary to use the Schema reader.

The third approach:

Add an Oracle reader to read "TAB_LIST",

create a feature having the table names list as an attribute,

send it to the FeatureReader to read data features of the specified feature types,

and send the resulting features to the Shape writer.

Add another Oracle reader as a workspace resource to provide the schema of every table.

Check "Dynamic Properties" in the writer properties dialog, and select the resource Oracle reader as its "Schema Source".
Thanks Takashi. I added an Oracle reader to read TAB_LIST and sent it to the FeatureReader. I also added another Oracle reader as a resource, but how can I point it to the individual tables in TAB_LIST? I don't know the table names; they are not constant.
I assume that all the data tables exist when you create the workspace, though you don't know which tables will actually be read.

Set all the data tables in "Feature Types to Read" of the resource Oracle reader, so that every schema will be read.

You don't need to consider any "link" between the resource reader and the "TAB_LIST" reader. Just set the resource reader as the "Schema Source" of the dynamic writer.

Then, an appropriate schema will be applied to the features depending on the feature type name (i.e. the table name).

I think this works as expected.
No, not all tables are created when the workspace is being created. There will be new tables created by another application which I need to transform.
Oh, the tables may not exist when you create the workspace. That was the bottleneck. I misunderstood your requirement, sorry.

I need to reconsider...
That is why reading the table names and table columns from the system is suggested...

See the snippet I posted above.

You don't need to know what is in there to read and use it; that's the beauty of it.

Make a list of your tables (a string like TableList = 'Table1','table2', etc.) by concatenating the table names you extract with SQL on user_tables.
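A minimal sketch of building such a quoted, comma-separated string in Python (table names are illustrative; in the workspace they would come from TAB_LIST):

```python
# Build the 'Table1','table2',... string for the SQL IN (...) clause.
# Table names are illustrative placeholders.
tables = ["Table1", "table2", "table3"]
table_list = ",".join("'%s'" % t for t in tables)
print(table_list)  # -> 'Table1','table2','table3'

# The string can then be substituted into the user_tables query.
sql = "SELECT * FROM user_tables ut WHERE ut.TABLE_NAME IN (%s)" % table_list
```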
But for that you need to create a schema. Can we do this in a single workspace, or do I need two workspaces, one to create the schema and the other as the main one?
I would like to avoid scripting as much as possible, but Python scripting seems to be effective in this case.

This script reads the schemas of the tables specified by the comma-delimited table names stored in the input feature as the attribute "table_list", and then outputs features which have basic schema information ("fme_feature_type" and "attribute{}") for each table.
-----

# Script Example for PythonCaller
import fme
import fmeobjects

class SchemaReader(object):
    def input(self, feature):
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('POSTGRES', False, directives)
        reader.open(FME_MacroValues['SourceDataset_POSTGRES'])
        while True:
            schema = reader.readSchema()
            if schema is None:
                break

            # Set Feature Type Name.
            schema.setAttribute('fme_feature_type', schema.getFeatureType())

            # Create "attribute{}".
            for i, name in enumerate(schema.getSequencedAttributeNames()):
                type = schema.getAttribute(name)
                schema.setAttribute('attribute{%d}.name' % i, name)
                schema.setAttribute('attribute{%d}.fme_data_type' % i, type)

            # Output Schema Feature.
            self.pyoutput(schema)

        reader.close()

-----
Creating a feature containing the table list would be easy, since you already read "TAB_LIST".

After merging "attribute{}" into the data features from the FeatureReader using "fme_feature_type" as the key, the list can be used to configure the dynamic schema with this method:

"Destination Schema is Derived from List Attributes"
(http://fmepedia.safe.com/articles/Samples_and_Demos/Dynamic-Workflow-Tutorial-Destination-Schema-is-Derived-from-List-Attributes)

Although the example is for a PostgreSQL database, I think the same mechanism can be applied to any other format, including Oracle.
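A plain-Python sketch of that merge step, modeling feature attributes as dicts (the "attribute{}" names follow the convention above; table and field names are illustrative):

```python
# Schema features keyed by their feature type name; each carries an "attribute{}" list.
# All names and types here are illustrative, not from a real database.
schema_by_type = {
    "ROADS": {
        "attribute{0}.name": "ID",
        "attribute{0}.fme_data_type": "fme_decimal(10,0)",
        "attribute{1}.name": "NAME",
        "attribute{1}.fme_data_type": "fme_varchar(100)",
    },
}

# A data feature from the FeatureReader; "fme_feature_type" is the join key.
data_feature = {"fme_feature_type": "ROADS", "ID": 1, "NAME": "Main St"}

# The merge copies the schema attributes onto the matching data feature,
# so the dynamic writer can derive the destination schema from "attribute{}".
data_feature.update(schema_by_type[data_feature["fme_feature_type"]])
print(data_feature["attribute{0}.name"])  # -> ID
```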
P.S. I tested the script example with FME 2015.0.

In FME 2014, you probably have to set appropriate parameters in the second argument of the open method.
Thanks Takashi. I'll give it a try.
In FME 2015, you can select either a "Named Database Connection" or "Exposed (Embedded) Connection Parameters" for a database connection.

Using Named Database Connections
(http://docs.safe.com/fme/html/FME_Desktop_Documentation/FME_Desktop_Help.htm#../Subsystems/FME_Readers_Writers/Content/_NamedConnections/Using_Named_Database_Connections.htm)

Since I used a "Named" connection in the example above, it was not necessary to pass parameters to the "open" method.

However, if you publish your workspace to FME Server, you may have to use the "Embedded" parameters.

This is an example using "Embedded" parameters for the existing POSTGRES reader.
-----

class SchemaReader(object):
    def input(self, feature):
        # Create POSTGRES reader.
        # Assuming that the input feature contains a comma-delimited table names list
        # as an attribute called "table_list".
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('POSTGRES', False, directives)

        # Open dataset.
        # Assuming that a POSTGRES reader exists in the workspace,
        # refer to its Dataset Name and Parameters.
        # The following parameter setting is an example for the POSTGRES format.
        # Required parameters are different depending on the format.
        # To learn more about parameters, see the "Mapping File Directives" section
        # of the help documentation for the specific format.
        dataset = FME_MacroValues['SourceDataset_POSTGRES']
        parameters = [
            '_HOST', FME_MacroValues['POSTGRES_IN_HOST_POSTGRES'],
            '_PORT', FME_MacroValues['POSTGRES_IN_PORT_POSTGRES'],
            '_USER_NAME', FME_MacroValues['POSTGRES_IN_USER_NAME_POSTGRES'],
            '_PASSWORD', FME_MacroValues['POSTGRES_IN_PASSWORD_POSTGRES'],
        ]
        reader.open(dataset, parameters)

        while True:
            schema = reader.readSchema()
            if schema is None:
                break

            # Set Feature Type Name.
            schema.setAttribute('fme_feature_type', schema.getFeatureType())

            # Create "attribute{}".
            for i, name in enumerate(schema.getSequencedAttributeNames()):
                type = schema.getAttribute(name)
                schema.setAttribute('attribute{%d}.name' % i, name)
                schema.setAttribute('attribute{%d}.fme_data_type' % i, type)

            # Output Schema Feature.
            self.pyoutput(schema)

        reader.close()

-----
You could push the schema read from the Oracle user tables to a text file using an AttributeFileWriter and use the written file as a schema (and spare the use of Python scripts).
I found a way to create a SCHEMA reader with a Python script.

The "reader" method of the SCHEMA reader returns a schema feature, so it's not necessary to create the "attribute{}" list.

Just be aware that the attribute name for the real feature type name will be "fme_feature_type_name".
-----

class SchemaReader(object):
    def input(self, feature):
        # Create SCHEMA reader.
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('SCHEMA', False, directives)

        # Open dataset.
        # It's not essential to set connection parameters (_HOST, _PORT, etc.)
        # if those values are the same as in the existing POSTGRES reader.
        # Just "REAL_FORMAT" is required.
        dataset = FME_MacroValues['SourceDataset_POSTGRES']
        parameters = [
            'REAL_FORMAT', 'POSTGRES',
            '_HOST', FME_MacroValues['POSTGRES_IN_HOST_POSTGRES'],
            '_PORT', FME_MacroValues['POSTGRES_IN_PORT_POSTGRES'],
            '_USER_NAME', FME_MacroValues['POSTGRES_IN_USER_NAME_POSTGRES'],
            '_PASSWORD', FME_MacroValues['POSTGRES_IN_PASSWORD_POSTGRES'],
        ]
        reader.open(dataset, parameters)

        while True:
            schema = reader.read()
            if schema is None:
                break
            self.pyoutput(schema)

        reader.close()

-----
Correction for a typo:

the "read" method of the SCHEMA reader...
Looks like the SCHEMA reader is smarter. However, I noticed that it writes all the parameter values, including the password, into the log file.

The log might be visible over the network, so I would not recommend using the SCHEMA reader.
Good news regarding enhancements to schema reading.

See Dale's comment on this article:

Create Schema Features with Python Script
(http://fme-memorandum-takashi.blogspot.com/2015/03/create-schema-features-with-python.html)