Hello,

 

 

I have a workbench which reads feature classes (i.e. Oracle tables) from a parameter. I linked this parameter to "Feature Types to Read". An Oracle Spatial reader is used to read these feature classes and write them out to SHAPE. This works fine. I just want to know whether there is a more efficient way to achieve this, as I don't know how many tables I may need to pass in the parameter.

 

 

Cheers,

 

John
Hi,

 

 

Here's an article that is relevant even though it uses an ESRI Geodatabase as the input database: http://fmepedia.safe.com/articles/How_To/How-to-Read-All-Feature-Classes-from-Multiple-ESRI-Geodatabases

 

 

The principle is the same for Oracle Spatial.

 

 

David
I am able to read a single Oracle Spatial table using the Schema (Any Format) reader, but how do I read multiple feature classes, i.e. tables?
I am still finding it difficult to develop this workbench.

 

 

In my case, I have a table which holds the names of the tables I want to transform to SHAPE. To explain further, I have a table called "TAB_LIST" which contains the list of tables, maybe 1 or 20. Also, the schema for these tables is different every time.

 

 

As I said in the original post, if I pass these names as a parameter (comma-separated list) it works. So I'm thinking of putting the list in an Oracle table and getting the list of tables to transform from there. That way I also don't have to worry about writing a schema file.

 

 

-John
Hi John,

 

 

There could be several ways, but automating the process "create the table names list and pass it to the Oracle reader" could be a quick way, since you have already succeeded with the translation by passing the list to the reader manually.

 

How about using the WorkspaceRunner?

 

- Create another workspace, add a reader to read the "TAB_LIST" table.

 

- Transform the features from the reader into a single feature which has the table names list as an attribute. The Aggregator (with the "Attributes to Concatenate" and "Separator Character" parameters set) makes this easy.

 

- Add a WorkspaceRunner to call the main workspace and pass the attribute (i.e. table names list) to the "Feature Types to Read" parameter.
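To illustrate the steps above, here is a minimal sketch (plain Python, outside FME) of the value the Aggregator would produce: the table names read from TAB_LIST concatenated into the single comma-separated string that "Feature Types to Read" expects. The table names are made up for the example.

```python
# Hypothetical table names, as if read from the TAB_LIST table.
table_names = ["ROADS_2015", "PARCELS", "HYDRO"]

# The Aggregator with Separator Character "," effectively does this:
feature_types_to_read = ",".join(table_names)

print(feature_types_to_read)  # ROADS_2015,PARCELS,HYDRO
```

This string is then what the WorkspaceRunner passes to the main workspace's "Feature Types to Read" published parameter.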

 

 

Takashi
Thanks Takashi. But I want to avoid the use of the WorkspaceRunner as I will be publishing the workspace to FME Server.
OK, my second approach.

 

Applying "Destination Schema is Derived from List Attributes". (http://fmepedia.safe.com/articles/Samples_and_Demos/Dynamic-Workflow-Tutorial-Destination-Schema-is-Derived-from-List-Attributes)

 

 

1. Read data features

 

Read the target table names from the "TAB_LIST" table,

 

create a single feature having the table names list as an attribute,

 

send the feature to the FeatureReader to read data features from the specified tables,

 

and add an AttributeExposer to expose "fme_feature_type".

 

 

2. Read schema features

 

Add an Oracle Spatial reader as a workspace resource, configure the connection appropriately, and check every table for the "Feature Types to Read".

 

Add a Schema (Any Format) reader to the Canvas;

 

FME 2015: Link its "Source Dataset" parameter to the Source Dataset parameter of the resource reader.

 

FME 2014: Set the dataset (database) name to its "Source Dataset" parameter, and select the format name (Oracle Spatial) explicitly for the "Input Format" parameter.

 

 

The Schema reader creates features carrying schema information as a list attribute called "attribute{}", one for each feature type (table) in the source dataset. The list can be used to dynamically configure the destination schema for the writer.
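As an illustration of what such a schema feature looks like, here is a plain-dict sketch (attribute names and types are made up; a real schema feature is an FMEFeature, not a dict):

```python
# Illustrative shape of one schema feature from the Schema (Any Format)
# reader: one feature per table, with the column definitions exposed
# as the "attribute{}" list.
schema_feature = {
    "fme_feature_type_name": "ROADS",          # the real table name
    "attribute{0}.name": "id",
    "attribute{0}.fme_data_type": "fme_decimal(10,0)",
    "attribute{1}.name": "name",
    "attribute{1}.fme_data_type": "fme_varchar(50)",
}
```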

 

 

4. Merge the schema features with the data features to add the schema information ("attribute{}") to the data features, using the FeatureMerger.

 

Join On:

 

Requestor (data feature): fme_feature_type

 

Supplier (schema feature): fme_feature_type_name
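The join above can be sketched in plain Python, with dicts standing in for FME features (all names here are illustrative): each data feature looks up the schema feature whose feature type name matches, and copies its "attribute{}" entries.

```python
# Data features from the FeatureReader (requestor side).
data_features = [
    {"fme_feature_type": "ROADS", "road_name": "Main St"},
    {"fme_feature_type": "PARCELS", "parcel_id": "Lot 7"},
]

# Schema features keyed by fme_feature_type_name (supplier side).
schema_features = {
    "ROADS": {"attribute{0}.name": "road_name",
              "attribute{0}.fme_data_type": "fme_varchar(50)"},
    "PARCELS": {"attribute{0}.name": "parcel_id",
                "attribute{0}.fme_data_type": "fme_varchar(20)"},
}

# What the FeatureMerger does, in effect: copy the supplier's
# attributes onto each matching requestor.
for f in data_features:
    f.update(schema_features[f["fme_feature_type"]])
```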

 

 

5. Set up the dynamic writer

 

Add a NULL Reader as a workspace resource.

 

Send the Merged features from the FeatureMerger to the Shape writer.

 

Check "Dynamic Properties" in the writer properties dialog, and select the NULL reader as its "Schema Source".

 

 

Hope this works.

 

 

Takashi
Sorry, there was an issue in the method I mentioned above. The Schema reader reads only the schema of the "TAB_LIST" table, because it refers to the configuration of the first Oracle reader.

 

Add the resource Oracle reader to the Canvas and put it at the top in the Navigator.
Use your table list to query the user tables; you can query the column names, and even their formatting. Create schemas from that.

 

Then have your workspace query these schemas on demand.

 

 

...the rest is standard dynamic writing stuff. ;)

 

 

 
SELECT *
FROM   user_tables ut,
       user_tab_columns utc
WHERE  ut.TABLE_NAME IN (YourTableList)
  AND  utc.TABLE_NAME = ut.TABLE_NAME
  AND  ut.num_rows > 0
 

 

to read the user columns (use a join to read only from utc).

 

The format information is in the "DATA_TYPE", "DATA_LENGTH", etc. columns.
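As a sketch of how those dictionary columns could be turned into FME-style types, here is a small illustrative mapping function. The mapping below is made up for the example and is not Safe Software's official type mapping:

```python
# Hypothetical mapping from Oracle dictionary metadata
# (DATA_TYPE / DATA_LENGTH in user_tab_columns) to FME data types.
def to_fme_type(data_type, data_length):
    if data_type == "VARCHAR2":
        return "fme_varchar(%d)" % data_length
    if data_type == "NUMBER":
        # Simplified: real code would also consider DATA_PRECISION/SCALE.
        return "fme_decimal(%d,0)" % data_length
    if data_type == "DATE":
        return "fme_datetime"
    return "fme_buffer"  # fallback for unmapped types

print(to_fme_type("VARCHAR2", 50))  # fme_varchar(50)
```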
I thought too much. It's not necessary to use the Schema reader.

 

The third approach.

 

 

Add an Oracle reader to read the "TAB_LIST" table,

 

create a feature having table names list as an attribute,

 

send it to the FeatureReader to read data features of specified feature types,

 

send the resulting feature to the Shape writer.

 

 

Add another Oracle reader as a workspace resource to provide schema of every table.

 

Check "Dynamic Properties" in the writer properties dialog, and select the resource Oracle reader as its "Schema Source".
Thanks Takashi. I added an Oracle reader to read TAB_LIST and sent it to the FeatureReader. I also added another Oracle reader as a resource, but how can I point it to the individual tables in TAB_LIST? I don't know the table names; they are not constant.

 

I assume that all the data tables exist when you create the workspace, though you don't know which tables will be actually read.

 

 

Set all the data tables to "Feature Types to Read" of the resource Oracle reader, so that every schema will be read.

 

And you don't need to worry about creating any kind of "link" between the resource reader and the "table_list" reader. Just set the resource reader as the "Schema Source" of the dynamic writer.

 

Then, an appropriate schema will be applied to the features depending on the feature type name (i.e. table name).

 

 

I think this works as expected.
No, not all tables are created when the workspace is being created. There will be new tables created by other applications which I need to transform.
Oh, the tables may not exist when you create the workspace. That was the bottleneck. I misunderstood your requirement, sorry.

 

I need reconsideration...
That is why reading the table names and table columns from the system is suggested...

 

See the snippet I posted above.

 

You don't need to know what is in there to read and use it; that's the beauty of it.

 

 

Make a list of your tables (a string like TableList = 'Table1','Table2', etc.) by concatenating the table names you extract with SQL from user_tables.
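That concatenation step can be sketched as follows (table names are made up; in the real workspace this would be done with a StringConcatenator or Aggregator rather than Python):

```python
# Hypothetical table names extracted from user_tables.
tables = ["TABLE1", "TABLE2", "TABLE3"]

# Build the quoted, comma-separated list the SQL IN (...) clause expects.
in_list = ",".join("'%s'" % t for t in tables)

print(in_list)  # 'TABLE1','TABLE2','TABLE3'
```

Note that quoting each name is what makes the string usable directly inside `WHERE ut.TABLE_NAME IN (...)`.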

 

 
But for that you need to create the schema. Can we do this in a single workspace, or do I need two workspaces, one to create the schema and the other as the main one?
I would like to avoid scripting as much as possible, but Python scripting seems to be effective in this case.

 

 

This script reads the schemas of tables specified by the comma-delimited table names stored in the input feature as the attribute "table_list", and then outputs features which have basic schema information ("fme_feature_type" and "attribute{}") for each table.

 

-----

# Script Example for PythonCaller
import fme
import fmeobjects

class SchemaReader(object):
    def input(self, feature):
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('POSTGRES', False, directives)
        reader.open(FME_MacroValues['SourceDataset_POSTGRES'])
        while True:
            schema = reader.readSchema()
            if schema is None:
                break

            # Set Feature Type Name.
            schema.setAttribute('fme_feature_type', schema.getFeatureType())

            # Create "attribute{}".
            for i, name in enumerate(schema.getSequencedAttributeNames()):
                type = schema.getAttribute(name)
                schema.setAttribute('attribute{%d}.name' % i, name)
                schema.setAttribute('attribute{%d}.fme_data_type' % i, type)

            # Output Schema Feature.
            self.pyoutput(schema)

        reader.close()

-----

 

 

Creating a feature containing the table list should be easy, since you already read "TAB_LIST".

 

After merging "attribute{}" onto the data features from the FeatureReader using "fme_feature_type" as the key, the list can be used to configure the dynamic schema with this method.

 

"Destination Schema is Derived from List Attributes" (http://fmepedia.safe.com/articles/Samples_and_Demos/Dynamic-Workflow-Tutorial-Destination-Schema-is-Derived-from-List-Attributes)

 

 

Although the example is for PostgreSQL database, I think the same mechanism can be applied to any other formats including Oracle.
P.S. I tested the script example with FME 2015.0.

 

In FME 2014, appropriate parameters probably need to be set in the second argument of the open method.
Thanks Takashi. I'll give it a try.
In FME 2015, you can select either "Named Database Connection" or "Exposed (Embedded) Connection Parameters" for a database connection.

 

Using Named Database Connections (http://docs.safe.com/fme/html/FME_Desktop_Documentation/FME_Desktop_Help.htm#../Subsystems/FME_Readers_Writers/Content/_NamedConnections/Using_Named_Database_Connections.htm)

 

 

Since I used the "Named" connection in the example above, it was not necessary to pass parameters to the "open" method.

 

However, if you publish your workspace to FME Server, possibly you have to use the "Embedded" parameters.

 

This is an example using "Embedded" parameters for the existing POSTGRES reader.

 

-----

class SchemaReader(object):
    def input(self, feature):
        # Create POSTGRES reader.
        # Assuming that the input feature contains a comma-delimited
        # table names list as an attribute called "table_list".
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('POSTGRES', False, directives)

        # Open dataset.
        # Assuming that a POSTGRES reader exists in the workspace,
        # refer to its Dataset Name and Parameters.
        # The following parameter setting is an example for the POSTGRES format.
        # Required parameters are different depending on the format.
        # To learn more about parameters, see the "Mapping File Directives"
        # section of the help documentation on the specific format.
        dataset = FME_MacroValues['SourceDataset_POSTGRES']
        parameters = [
            '_HOST', FME_MacroValues['POSTGRES_IN_HOST_POSTGRES'],
            '_PORT', FME_MacroValues['POSTGRES_IN_PORT_POSTGRES'],
            '_USER_NAME', FME_MacroValues['POSTGRES_IN_USER_NAME_POSTGRES'],
            '_PASSWORD', FME_MacroValues['POSTGRES_IN_PASSWORD_POSTGRES'],
        ]
        reader.open(dataset, parameters)

        while True:
            schema = reader.readSchema()
            if schema is None:
                break

            # Set Feature Type Name.
            schema.setAttribute('fme_feature_type', schema.getFeatureType())

            # Create "attribute{}".
            for i, name in enumerate(schema.getSequencedAttributeNames()):
                type = schema.getAttribute(name)
                schema.setAttribute('attribute{%d}.name' % i, name)
                schema.setAttribute('attribute{%d}.fme_data_type' % i, type)

            # Output Schema Feature.
            self.pyoutput(schema)

        reader.close()

-----
You could push the schema read from the Oracle user tables to a text file using an AttributeFileWriter and use the written file as a schema (and spare the use of Python scripts).

 

 

 

 
I found a way to create a SCHEMA reader with Python script.

 

"reader" method of the SCHEMA reader returns a schema feature, so it's not necessary to create "attribute{}" list.

 

Just be aware that the attribute name for the real feature type name will be "fme_feature_type_name".

 

-----

class SchemaReader(object):
    def input(self, feature):
        # Create SCHEMA reader.
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('SCHEMA', False, directives)

        # Open dataset.
        # It's not essential to set connection parameters (_HOST, _PORT, etc.)
        # if those values are the same as for the existing POSTGRES reader.
        # Just "REAL_FORMAT" is required.
        dataset = FME_MacroValues['SourceDataset_POSTGRES']
        parameters = [
            'REAL_FORMAT', 'POSTGRES',
            '_HOST', FME_MacroValues['POSTGRES_IN_HOST_POSTGRES'],
            '_PORT', FME_MacroValues['POSTGRES_IN_PORT_POSTGRES'],
            '_USER_NAME', FME_MacroValues['POSTGRES_IN_USER_NAME_POSTGRES'],
            '_PASSWORD', FME_MacroValues['POSTGRES_IN_PASSWORD_POSTGRES'],
        ]
        reader.open(dataset, parameters)

        while True:
            schema = reader.read()
            if schema is None:
                break
            self.pyoutput(schema)

        reader.close()

-----
Correction for a typo:

 

"read" method of the SCHEMA reader...
Looks like the SCHEMA reader is smarter. However, I noticed that it will write all the parameter values, including the password, into the log file.

 

The log might be visible through the network, so I would not recommend using the SCHEMA reader.
Good news regarding enhancements to schema reading.

 

See Dale's comment on this article.

 

Create Schema Features with Python Script (http://fme-memorandum-takashi.blogspot.com/2015/03/create-schema-features-with-python.html)
