
Hi

I have an Excel file with a list of feature classes to read, the attribute fields to keep (comma-delimited), and a WHERE clause (e.g. myfield = 'Yes' AND MyField2 = 'abc').

Reading the Excel file and passing the WHERE clause to the FeatureReader works fine.

My issue is how to:

> dynamically write the selected features from the FeatureReader to a writer,

> keeping only the required attribute fields (coming from the Excel file as comma-delimited text);

[If that is not possible, at least write all of the read attributes; again, this should be dynamic, as each input feature class and its attribute fields will be different.]

> and pass feature class names built prior to the FeatureReader.

A workaround and an example would really help solve my problem. I tried to search for a similar example but couldn't figure it out.

I really appreciate your help.

Thank you all

Which writer do you use?

MSSQL Spatial.

I don't think the writer is the issue; I could get as far as a dynamic writer with a spatial index and bounding box.

But for every input feature class I want to create a dynamic schema and supply it to the writer via the Schema Source: <Schema From Schema Feature> parameter.

Currently I'm stuck at the FeatureReader schema.
Do you use a schema reader or a SchemaMapper, or do you want to create the schema feature on the fly?

Did you try the SchemaSetter custom transformer?

https://hub.safe.com/transformers/schemasetter

Example:

https://knowledge.safe.com/articles/43899/dynamic-workflows-advanced-example-modifying-the-s.html

Thanks for the reply.

I already checked that, but I couldn't set the schema dynamically for all my incoming feature classes; each one has a different schema.


Assuming that each record from the Excel table contains these three attributes for each feature class to be read:

  • Feature Class: name of a feature class.
  • Required Fields: attribute names (comma-separated) to be written into the destination dataset.
  • Where Clause: the value to be set in the WHERE Clause parameter of the FeatureReader.

Based on the Required Fields (comma-separated attribute names), this Python script updates the schema definition output from the Schema port of the FeatureReader.

# PythonCaller Script Example
def updateSchemaDefinition(feature):
    # Original Schema Definition (attribute names and data types)
    # name: attribute name
    # native_data_type: data type name read from the source dataset
    # fme_data_type: FME generic data type name mapped to the native_data_type
    names = feature.getAttribute('attribute{}.name')
    ntypes = feature.getAttribute('attribute{}.native_data_type')
    ftypes = feature.getAttribute('attribute{}.fme_data_type')
    
    # Collect required attribute names and data types
    attrsToKeep = set(feature.getAttribute('Required Fields').split(','))
    newNames, newNativeTypes, newFmeTypes = [], [], []
    for name, ntype, ftype in zip(names, ntypes, ftypes):
        if name in attrsToKeep:
            newNames.append(name)
            newNativeTypes.append(ntype)
            newFmeTypes.append(ftype)
            
    # Update Schema Definition (remove and re-create the "attribute{}" list)
    feature.removeAttrsWithPrefix('attribute{')    
    feature.setAttribute('attribute{}.name', newNames)
    feature.setAttribute('attribute{}.native_data_type', newNativeTypes)
    feature.setAttribute('attribute{}.fme_data_type', newFmeTypes) 


Please vote up this Idea: Introduce New Transformer to Modify Schema Easily
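The script above can also be sanity-checked outside FME with a minimal mock of the few FMEFeature methods it uses. The MockFeature class and the sample schema values below are made up purely for illustration; inside Workbench the feature comes from the FeatureReader's Schema port instead:

```python
# Minimal stand-in for the parts of fmeobjects.FMEFeature the script uses
# (hypothetical class, for testing the filtering logic outside FME).
class MockFeature:
    def __init__(self, attrs):
        self._attrs = dict(attrs)

    def getAttribute(self, name):
        return self._attrs.get(name)

    def setAttribute(self, name, value):
        self._attrs[name] = value

    def removeAttrsWithPrefix(self, prefix):
        for key in [k for k in self._attrs if k.startswith(prefix)]:
            del self._attrs[key]


def updateSchemaDefinition(feature):
    # Read the original schema definition lists
    names = feature.getAttribute('attribute{}.name')
    ntypes = feature.getAttribute('attribute{}.native_data_type')
    ftypes = feature.getAttribute('attribute{}.fme_data_type')

    # Keep only attributes named in the comma-separated "Required Fields"
    attrsToKeep = set(feature.getAttribute('Required Fields').split(','))
    newNames, newNativeTypes, newFmeTypes = [], [], []
    for name, ntype, ftype in zip(names, ntypes, ftypes):
        if name in attrsToKeep:
            newNames.append(name)
            newNativeTypes.append(ntype)
            newFmeTypes.append(ftype)

    # Remove and re-create the "attribute{}" list
    feature.removeAttrsWithPrefix('attribute{')
    feature.setAttribute('attribute{}.name', newNames)
    feature.setAttribute('attribute{}.native_data_type', newNativeTypes)
    feature.setAttribute('attribute{}.fme_data_type', newFmeTypes)


# Invented sample schema feature with four source attributes
schema = MockFeature({
    'attribute{}.name': ['id', 'myfield', 'MyField2', 'notes'],
    'attribute{}.native_data_type': ['int', 'nvarchar(10)', 'nvarchar(50)', 'text'],
    'attribute{}.fme_data_type': ['fme_int32', 'fme_varchar(10)', 'fme_varchar(50)', 'fme_buffer'],
    'Required Fields': 'myfield,MyField2',
})
updateSchemaDefinition(schema)
print(schema.getAttribute('attribute{}.name'))  # ['myfield', 'MyField2']
```

Only the two listed fields survive in the schema definition; the data types stay paired with their names because all three lists are filtered in the same loop.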



Thanks for your reply.

I couldn't use the above example (PythonCaller), as I didn't know how to define the data type for each field dynamically.

As mentioned, I'm reading an Excel file to get my feature classes, and I have the list of attributes to keep as comma-delimited text.

However, I could update my ETL with the following, and it's working:

read the Schema from the FeatureReader, then ListExploder, Aggregator (with the feature class as the aggregate field), and AttributeKeeper.

However, there are still some bugs; for example, it's not writing the first feature class. I also still need to set a dynamic bounding box for each feature type, which is required in order to auto-create the spatial index.
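On the dynamic bounding box: per feature type it is just a running min/max over all coordinates of that type's features, which is roughly what a BoundsExtractor followed by an Aggregator would give you. A plain-Python sketch of that aggregation (the feature class names and coordinates below are invented for illustration):

```python
def bounding_boxes(features):
    """Running min/max per feature class over (x, y) coordinate lists.

    `features` is an iterable of (feature_class, [(x, y), ...]) pairs;
    returns {feature_class: (xmin, ymin, xmax, ymax)}.
    """
    boxes = {}
    for fclass, coords in features:
        for x, y in coords:
            if fclass not in boxes:
                # First coordinate seen for this class seeds the box
                boxes[fclass] = (x, y, x, y)
            else:
                xmin, ymin, xmax, ymax = boxes[fclass]
                boxes[fclass] = (min(xmin, x), min(ymin, y),
                                 max(xmax, x), max(ymax, y))
    return boxes


# Invented sample data, one entry per feature
demo = [
    ('Roads',   [(0.0, 0.0), (10.0, 5.0)]),
    ('Roads',   [(-2.0, 3.0)]),
    ('Parcels', [(100.0, 200.0), (110.0, 190.0)]),
]
print(bounding_boxes(demo))
# {'Roads': (-2.0, 0.0, 10.0, 5.0), 'Parcels': (100.0, 190.0, 110.0, 200.0)}
```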

Also, the attribute fields to keep from each input feature class are in my list (from Excel); how do I keep ONLY these attribute fields? I do not want any attribute fields in my output other than the ones I need.

Is there no option to keep only these incoming attributes in the FeatureReader or a merger?

You don't need to know the data types in the source dataset, since the FeatureReader will detect them at run-time and output a schema feature with an "attribute{}" list storing the data type names read from the source dataset.

The Python script just reconstructs the "attribute{}" list based on the required attribute names specified in the Excel table.

The Python script updates the "attribute{}" list (i.e. the schema definition) to keep only the attributes specified through the Excel table (comma-separated field names). The dynamic writer feature type then configures the destination schema according to that "attribute{}" list, so the destination table ends up with only the specified attributes.
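One caveat with the comma-delimited cell: a plain split(',') keeps any whitespace around the names (e.g. "myfield, MyField2" yields " MyField2"), so nothing would match the schema names. A slightly more forgiving parse, as a sketch:

```python
def parse_required_fields(cell):
    # Split a comma-delimited Excel cell into clean attribute names,
    # dropping surrounding whitespace and empty entries.
    return [name.strip() for name in cell.split(',') if name.strip()]


print(parse_required_fields('myfield, MyField2 ,'))  # ['myfield', 'MyField2']
```

Feeding `set(parse_required_fields(...))` into the script in place of the raw `split(',')` makes the lookup tolerant of spacing in the spreadsheet.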

Find the attachment: b17725-update-schema-definition-with-python-demo.zip

 

  • "b17725-update-schema-definition-with-python-demo.fmw" (FME 2017.1.2.1)
  • "FeatureClassList.xlsx": Query, Attribute to be retained, Feature Class
  • "source.db": Sample source dataset containing two tables (SQLite non-spatial DB)
