Solved

Oracle Spatial to SHAPE


Hello,

 

 

I have a workbench which reads feature classes (i.e. Oracle tables) from a parameter. I linked this parameter to "Feature Types To Read". An Oracle Spatial reader reads these feature classes and writes them out to SHAPE. This works fine; I just want to know whether there is a more efficient way to achieve this, as I don't know how many tables I may need to pass in the parameter.

 

 

Cheers,

 

John

Best answer by takashi (see the reply marked Best Answer, February 25, 2015, below).

27 replies

david_r
Evangelist
  • February 19, 2015
Hi,

 

 

here's an article that is relevant even though it uses an ESRI Geodatabase as the input database: http://fmepedia.safe.com/articles/How_To/How-to-Read-All-Feature-Classes-from-Multiple-ESRI-Geodatabases

 

 

The principle is the same for Oracle Spatial.

 

 

David

  • Author
  • February 19, 2015
I am able to read a single Oracle Spatial table using the Schema (Any Format) reader, but how do I read multiple feature classes, i.e. tables?

  • Author
  • February 20, 2015
I am still finding it difficult to develop this workbench.

 

 

In my case, I have a table which holds the names of the tables I want to transform to SHAPE. To explain further, I have a table called "TAB_LIST" which contains a list of tables, maybe 1 or 20. Also, the schema for these tables is different every time.

 

 

As I said in the original post, if I pass these names as a parameter (comma-separated list) it works. So I am thinking of putting the list in an Oracle table and getting the list of tables to transform from there. That way I also don't have to worry about writing a schema file.

 

 

-John

takashi
Contributor
  • Contributor
  • February 20, 2015
Hi John,

 

 

There could be several ways, but automating the process of "create the table names list and pass it to the Oracle reader" could be a quick way, since you have already succeeded in the translation by passing the list to the reader manually.

 

How about using the WorkspaceRunner?

 

- Create another workspace, add a reader to read the "TAB_LIST" table.

 

- Transform the features from the reader into a single feature which has the table names list as an attribute. The Aggregator (set the "Attributes to Concatenate" and "Separator Character" parameters) may be an easy way to do that (a scripted alternative is sketched after this reply).

 

- Add a WorkspaceRunner to call the main workspace and pass the attribute (i.e. table names list) to the "Feature Types to Read" parameter.

 

 

Takashi
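As an aside, here is a minimal PythonCaller sketch of that list-building step, for anyone who prefers scripting it rather than using an Aggregator. It assumes the incoming features carry the table name in an attribute called table_name (an illustrative name, not something from the original posts); it collects the names and emits one feature with a comma-separated table_list attribute, which is the same kind of value the later script examples in this thread expect.

-----

# Hypothetical PythonCaller sketch (not from the original thread):
# concatenate table names coming from the "TAB_LIST" reader into a single
# feature with a comma-separated "table_list" attribute.
import fmeobjects

class TableListBuilder(object):
    def __init__(self):
        self.names = []

    def input(self, feature):
        # "table_name" is an assumed attribute name; use whatever column
        # of TAB_LIST actually holds the table name.
        name = feature.getAttribute('table_name')
        if name:
            self.names.append(str(name))

    def close(self):
        # Emit one feature carrying e.g. "TABLE_A,TABLE_B,TABLE_C".
        out = fmeobjects.FMEFeature()
        out.setAttribute('table_list', ','.join(self.names))
        self.pyoutput(out)

-----

The resulting attribute can then be passed to the "Feature Types to Read" parameter via a WorkspaceRunner, or fed to the FeatureReader and PythonCaller approaches discussed below.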

  • Author
  • February 20, 2015
Thanks Takashi, but I want to avoid using the WorkspaceRunner as I will be publishing this to FME Server.

takashi
Contributor
  • Contributor
  • February 20, 2015
OK, my second approach.

 

Applying "Destination Schema is Derived from List Attributes". (http://fmepedia.safe.com/articles/Samples_and_Demos/Dynamic-Workflow-Tutorial-Destination-Schema-is-Derived-from-List-Attributes)

 

 

1. Read data features

 

Read the target table names from the "TAB_LIST" table,

 

create a single feature having the table names list as an attribute,

 

send the feature to the FeatureReader to read data features from the specified tables,

 

and add an AttributeExposer to expose "fme_feature_type".

 

 

2. Read schema features

 

Add an Oracle Spatial reader as a workspace resource, configure the connection appropriately, and check every table for the "Feature Types to Read".

 

Add a Schema (Any Format) reader to the Canvas;

 

FME 2015: Link its "Source Dataset" parameter to the Source Dataset parameter of the resource reader.

 

FME 2014: Set the dataset (database) name to its "Source Dataset" parameter, and select the format name (Oracle Spatial) explicitly for the "Input Format" parameter.

 

 

The Schema reader creates features having schema information as a list attribute called "attribute{}", for each feature type (table) in the source dataset. The list can be used to dynamically configure the destination schema for the writer.

 

 

3. Merge the schema features with the data features to add the schema information ("attribute{}") to the data features, using the FeatureMerger.

 

Join On:

 

Requestor (data feature): fme_feature_type

 

Supplier (schema feature): fme_feature_type_name

 

 

4. Set up the dynamic writer

 

Add a NULL Reader as a workspace resource.

 

Send the Merged features from the FeatureMerger to the Shape writer.

 

Check "Dynamic Properties" in the writer properties dialog, and select the NULL reader as its "Schema Source".

 

 

Hope this works.

 

 

Takashi

takashi
Contributor
  • Contributor
  • February 20, 2015
Sorry, there was an issue in the method I mentioned above. The Schema reader reads only the schema of the "TAB_LIST" table, because it refers to the configuration of the first Oracle reader.

 

Add the resource Oracle reader to the Canvas and put it to the top in the Navigator.

gio
Contributor
  • Contributor
  • February 20, 2015
Use your table list to query the user tables; you can query the column names and even their formats. Create schemas from that.

 

Then have your workspace query these schemas on demand.

 

 

...the rest is standard dynamic writing stuff. ;)

 

 

 

gio
Contributor
  • Contributor
  • February 20, 2015
SELECT *
FROM   user_tables ut,
       user_tab_columns utc
WHERE  ut.TABLE_NAME IN (YourTableList)
  AND  utc.TABLE_NAME = ut.TABLE_NAME
  AND  ut.num_rows > 0

to read the user columns (use the join to read only from utc).

The format information is in the "DATA_TYPE", "DATA_LENGTH", etc. columns.
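To make gio's suggestion a little more concrete, here is a rough, hypothetical sketch of how the DATA_TYPE, DATA_LENGTH, DATA_PRECISION and DATA_SCALE columns returned by that query could be translated into generic FME data types for an "attribute{}" schema list. The mapping is an assumption for illustration only and is far from complete; the exact type names your writer accepts should be verified against the FME documentation.

-----

# Illustrative sketch only: map a few common Oracle column definitions to
# generic FME data types for use in "attribute{}" schema entries.
# The mapping below is an assumption and incomplete; verify the type names
# against the writer you are targeting.
def oracle_to_fme_type(data_type, data_length, data_precision, data_scale):
    if data_type in ('VARCHAR2', 'NVARCHAR2'):
        return 'fme_varchar(%d)' % data_length
    if data_type == 'CHAR':
        return 'fme_char(%d)' % data_length
    if data_type == 'NUMBER':
        if not data_scale:
            return 'fme_int32'  # assumption: treat scale 0 / NULL as integer
        return 'fme_decimal(%d,%d)' % (data_precision or 38, data_scale)
    if data_type == 'DATE':
        return 'fme_datetime'
    return 'fme_varchar(200)'   # arbitrary fallback for unhandled types

# Build "attribute{}" entries on a schema feature from the query results,
# where rows = [(column_name, data_type, length, precision, scale), ...].
def set_schema_attributes(schema_feature, rows):
    for i, (name, dtype, length, precision, scale) in enumerate(rows):
        schema_feature.setAttribute('attribute{%d}.name' % i, name)
        schema_feature.setAttribute('attribute{%d}.fme_data_type' % i,
                                    oracle_to_fme_type(dtype, length, precision, scale))

-----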

takashi
Contributor
  • Contributor
  • February 20, 2015
I thought too much. It's not necessary to use the Schema reader.

 

The third approach.

 

 

Add an Oracle reader to read the "TAB_LIST" table,

 

create a feature having the table names list as an attribute,

 

send it to the FeatureReader to read data features of the specified feature types,

 

and send the resulting features to the Shape writer.

 

 

Add another Oracle reader as a workspace resource to provide the schema of every table.

 

Check "Dynamic Properties" in the writer properties dialog, and select the resouce Oracle reader as its "Schema Source".

  • Author
  • February 23, 2015
Thanks Takashi. I added an Oracle reader to read TAB_LIST and sent it to the FeatureReader. I also added another Oracle reader as a resource, but how can I point it to the individual tables in TAB_LIST? I don't know the table names; they are not constant.

takashi
Contributor
  • Contributor
  • February 23, 2015

 

I assume that all the data tables exist when you create the workspace, though you don't know which tables will be actually read.

 

 

Set all the data tables to "Feature Types to Read" of the resource Oracle reader, so that every schema will be read.

 

And you don't need to worry about any "link" between the resource reader and the "TAB_LIST" reader. Just set the resource reader as the "Schema Source" of the dynamic writer.

 

Then, an appropriate schema will be applied to the features depending on the feature type name (i.e. table name).

 

 

I think this works as expected.

  • Author
  • February 23, 2015
No, not all tables exist when the workspace is being created. There will be new tables created by another application which I need to transform.

takashi
Contributor
  • Contributor
  • February 23, 2015
Oh, the tables may not exist when you create the workspace. That was the bottleneck. I misunderstood your requirement, sorry.

 

I need to reconsider...

gio
Contributor
  • Contributor
  • February 23, 2015
That is why reading the table names and table columns from the system catalog is suggested...

 

See the snippet I posted above.

 

You don't need to know what is in there to read and use it; that's the beauty of it.

 

 

Make a list of your tables (a string like TableList = 'Table1','Table2', etc.) by concatenating the table names you extract with SQL on user_tables.

 

 

  • Author
  • February 24, 2015
But for that you need to create a schema. Can we do this in a single workspace, or do I need two workspaces, one to create the schema and the other as the main one?

takashi
Contributor
  • Contributor
  • February 24, 2015
I would like to avoid scripting as much as possible, but Python scripting seems to be effective in this case.

 

 

This script reads the schemas of the tables specified by the comma-delimited table names stored in the input feature as the attribute "table_list", and then outputs features which have basic schema information ("fme_feature_type" and "attribute{}") for each table.

 

-----

# Script Example for PythonCaller
import fme
import fmeobjects


class SchemaReader(object):
    def input(self, feature):
        # Restrict the reader to the comma-delimited table names held in
        # the "table_list" attribute of the incoming feature.
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('POSTGRES', False, directives)

        # With FME 2015 and a Named Database Connection, no extra parameters
        # were needed here; see the Best Answer below for passing embedded
        # connection parameters as the second argument of open().
        reader.open(FME_MacroValues['SourceDataset_POSTGRES'])

        while True:
            schema = reader.readSchema()
            if schema is None:
                break

            # Set Feature Type Name.
            schema.setAttribute('fme_feature_type', schema.getFeatureType())

            # Create "attribute{}".
            for i, name in enumerate(schema.getSequencedAttributeNames()):
                data_type = schema.getAttribute(name)
                schema.setAttribute('attribute{%d}.name' % i, name)
                schema.setAttribute('attribute{%d}.fme_data_type' % i, data_type)

            # Output Schema Feature.
            self.pyoutput(schema)

        reader.close()

-----

 

 

Creating a feature containing the table list should be easy, since you are already reading "TAB_LIST".

 

After merging "attribute{}" to data features from the FeatureRader using "fme_feature_type" as key, the list can be used to configure dynamic schema with this method.

 

"Destination Schema is Derived from List " (http://fmepedia.safe.com/articles/Samples_and_Demos/Dynamic-Workflow-Tutorial-Destination-Schema-is-Derived-from-List-Attributes)

 

 

Although the example is for a PostgreSQL database, I think the same mechanism can be applied to any other format, including Oracle.
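For illustration, a data feature for a hypothetical table ROADS might carry something like the following after the schema information has been merged on; the attribute names follow the "attribute{}" convention used by the script above (the table and column names here are invented examples).

-----

# Hypothetical example of the merged result (values invented for illustration).
import fmeobjects

feature = fmeobjects.FMEFeature()
feature.setAttribute('fme_feature_type', 'ROADS')               # destination feature type
feature.setAttribute('attribute{0}.name', 'ROAD_ID')
feature.setAttribute('attribute{0}.fme_data_type', 'fme_int32')
feature.setAttribute('attribute{1}.name', 'ROAD_NAME')
feature.setAttribute('attribute{1}.fme_data_type', 'fme_varchar(50)')
# ...plus the actual data attributes and geometry read by the FeatureReader.

-----

This is the list the dynamic writer then uses to build the destination schema, as described in the linked tutorial.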

takashi
Contributor
  • Contributor
  • February 24, 2015
P.S. I tested the script example with FME 2015.0.

 

In FME 2014, appropriate parameters probably need to be passed as the second argument of the open method.

  • Author
  • February 24, 2015
Thanks Takashi. I'll give it a try.

takashi
Contributor
  • Contributor
  • Best Answer
  • February 25, 2015
In FME 2015, you can select either "Named Database Connection" or "Exposed (Embedded) Connection Parameters" for a database connection.

 

Using Named Database Connections (http://docs.safe.com/fme/html/FME_Desktop_Documentation/FME_Desktop_Help.htm#../Subsystems/FME_Readers_Writers/Content/_NamedConnections/Using_Named_Database_Connections.htm)

 

 

Since I used the "Named" connection in the example above, it was not necessary to pass parameters to the "open" method.

 

However, if you publish your workspace to FME Server, you may have to use the "Embedded" parameters.

 

This is an example using "Embedded" parameters for the existing POSTGRES reader.

 

-----

import fmeobjects


class SchemaReader(object):
    def input(self, feature):
        # Create POSTGRES reader.
        # Assuming that the input feature contains a comma-delimited table names list
        # as an attribute called "table_list".
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('POSTGRES', False, directives)

        # Open dataset.
        # Assuming that a POSTGRES reader exists in the workspace,
        # refer to its Dataset Name and Parameters.
        # The following parameter setting is an example for the POSTGRES format.
        # Required parameters are different depending on the format.
        # To learn more about parameters, see the "Mapping File Directives" section
        # of the help documentation for the specific format.
        dataset = FME_MacroValues['SourceDataset_POSTGRES']
        parameters = [
            '_HOST', FME_MacroValues['POSTGRES_IN_HOST_POSTGRES'],
            '_PORT', FME_MacroValues['POSTGRES_IN_PORT_POSTGRES'],
            '_USER_NAME', FME_MacroValues['POSTGRES_IN_USER_NAME_POSTGRES'],
            '_PASSWORD', FME_MacroValues['POSTGRES_IN_PASSWORD_POSTGRES'],
        ]
        reader.open(dataset, parameters)

        while True:
            schema = reader.readSchema()
            if schema is None:
                break

            # Set Feature Type Name.
            schema.setAttribute('fme_feature_type', schema.getFeatureType())

            # Create "attribute{}".
            for i, name in enumerate(schema.getSequencedAttributeNames()):
                data_type = schema.getAttribute(name)
                schema.setAttribute('attribute{%d}.name' % i, name)
                schema.setAttribute('attribute{%d}.fme_data_type' % i, data_type)

            # Output Schema Feature.
            self.pyoutput(schema)

        reader.close()

-----

gio
Contributor
  • Contributor
  • February 26, 2015
You could push the schema read from the Oracle user tables to a text file using an AttributeFileWriter and use the written file as a schema (and spare the use of Python scripts).

 

 

 

 

takashi
Contributor
  • Contributor
  • February 27, 2015
I found a way to create a SCHEMA reader with a Python script.

 

"reader" method of the SCHEMA reader returns a schema feature, so it's not necessary to create "attribute{}" list.

 

Just be aware that the attribute name for the real feature type name will be "fme_feature_type_name".

 

-----

import fmeobjects


class SchemaReader(object):
    def input(self, feature):
        # Create SCHEMA reader.
        directives = ['IDLIST', feature.getAttribute('table_list')]
        reader = fmeobjects.FMEUniversalReader('SCHEMA', False, directives)

        # Open dataset.
        # It's not essential to set connection parameters (_HOST, _PORT, etc.)
        # if those values are the same as for the existing POSTGRES reader.
        # Just "REAL_FORMAT" is required.
        dataset = FME_MacroValues['SourceDataset_POSTGRES']
        parameters = [
            'REAL_FORMAT', 'POSTGRES',
            '_HOST', FME_MacroValues['POSTGRES_IN_HOST_POSTGRES'],
            '_PORT', FME_MacroValues['POSTGRES_IN_PORT_POSTGRES'],
            '_USER_NAME', FME_MacroValues['POSTGRES_IN_USER_NAME_POSTGRES'],
            '_PASSWORD', FME_MacroValues['POSTGRES_IN_PASSWORD_POSTGRES'],
        ]
        reader.open(dataset, parameters)

        while True:
            # The SCHEMA reader's read() method returns schema features directly.
            schema = reader.read()
            if schema is None:
                break
            self.pyoutput(schema)

        reader.close()

-----

takashi
Contributor
  • Contributor
  • February 27, 2015
Correction for a typo:

 

the "read" method of the SCHEMA reader...

takashi
Contributor
  • Contributor
  • February 27, 2015
Looks like the SCHEMA reader is smarter. However, I noticed that it will write all the parameter values, including the password, into the log file.

 

The log might be visible over the network, so I would not recommend using the SCHEMA reader.

takashi
Contributor
  • Contributor
  • March 1, 2015
Good news regarding enhancements to schema reading.

 

See Dale's comment on this article:

 

Create Schema Features with Python Script (http://fme-memorandum-takashi.blogspot.com/2015/03/create-schema-features-with-python.html)
