
Hi,

I'm trying to create a workspace that reads any feature class of a certain geometry type (e.g. line or point) from an Oracle Spatial database and writes it into an SDE geodatabase. Since around 200 feature classes need to be processed, a manual setup (i.e. creating a reader and a writer for each feature type individually) is not feasible. I have tried different approaches that did not work:

 

- Generic reader: Expects a "source dataset" as input. What I have, however, is a database (I need to provide its connection parameters via published parameters instead of providing a dataset).

 

- Dynamic reader (merged feature types): An Oracle Spatial reader together with "Merge Feature Types" and a filter on "Features to Read" -> "Feature Types to Read" (provided by a published parameter) did not work, because its schema does not change dynamically with the feature classes.

 

 

I finally used a FeatureReader in combination with a dynamic writer, which would work perfectly if no attribute transformation were required (or for the initial writing process). However, I want to write only incremental updates to the SDE geodatabase and thus need to check the attribute values for changes. But because the FeatureReader outputs the data and its schema separately, this does not seem to be (easily) feasible.

Is there a possibility to merge the data and its schema, so that subsequent transformers can work with the attributes? A method without a PythonCaller would be preferred, but I am grateful for hints regarding the PythonCaller as well. Or are there approaches other than using the FeatureReader that would fulfill my conditions?

Thanks in advance!

Best regards,

André

Just thinking about this now, but the first observation is that you definitely don't need a Generic reader. The Generic reader reads data regardless of format. Firstly, you already know the data format, so that's not an issue. Secondly, you can't use it on databases, only on file formats: how could FME connect to a database without knowing what type it is?!

 


Hi @andr_, the schema of each destination feature type has to be configured before the first data feature arrives at the dynamic writer feature type. The FeatureReader therefore outputs the schema feature for a feature type first, and then outputs the data features belonging to that feature type. Usually it is not necessary to merge the schema feature into the data features unless you need to change the schema, even if attribute values are changed throughout the workflow. I think this workflow would be enough to perform a translation that does not change the schema.

However, if you need to change the schema (number of attributes, attribute names, data types), you will have to modify the contents of the schema feature or adopt another approach to configure the destination schema. Merging the modified schema feature into the data features may be one way, but it is not the only way. Perhaps this article (and related articles) might help you: Tutorial: Dynamic Workflows
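If you do go the merging route with a PythonCaller, a minimal sketch could look like the following. It assumes the schema features carry the feature type name in fme_feature_type_name and the attribute list in attribute{}.name / attribute{}.fme_data_type (verify against the Schema port output in your FME version), and that each schema feature arrives before the data features of the same type:

```python
import fmeobjects

class SchemaMerger(object):
    """Caches each schema feature and copies its attribute{} list
    onto the matching data features."""

    def __init__(self):
        self.schemas = {}  # feature type name -> [(attr name, data type), ...]

    def input(self, feature):
        # Schema features are assumed to carry 'fme_feature_type_name'.
        schema_type = feature.getAttribute('fme_feature_type_name')
        if schema_type is not None:
            attrs = []
            i = 0
            while True:
                name = feature.getAttribute('attribute{%d}.name' % i)
                if name is None:
                    break
                attrs.append((name, feature.getAttribute('attribute{%d}.fme_data_type' % i)))
                i += 1
            self.schemas[schema_type] = attrs
            self.pyoutput(feature)  # pass the schema feature through unchanged
            return
        # Data feature: attach the cached schema of its feature type so
        # downstream transformers can see which attributes belong to it.
        for i, (name, data_type) in enumerate(
                self.schemas.get(feature.getAttribute('fme_feature_type'), [])):
            feature.setAttribute('attribute{%d}.name' % i, name)
            feature.setAttribute('attribute{%d}.fme_data_type' % i, data_type)
        self.pyoutput(feature)
```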

Hi @takashi,

Thank you for your answer! I have also realised in the meantime that I can work with the attribute values even if they are not exposed. However, there is still a problem: I need to compare two data sources (an Oracle Spatial database containing the new data and an SDE geodatabase containing the current data) for updates by using the ChangeDetector. For that, some of the attributes should be ignored in the comparison (e.g. SDE-typical attributes such as OBJECTID or FME format attributes like fme_type), which can be done by using a parameter or a PythonCaller; a sketch of the PythonCaller variant is below.
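For the PythonCaller route, stripping such attributes before the comparison could look like this minimal sketch (the ignored names and prefixes are only examples to adapt):

```python
import fmeobjects

IGNORED_NAMES = {'OBJECTID'}   # example SDE-typical attributes
IGNORED_PREFIXES = ('fme_',)   # example FME format attributes

class ComparisonAttributeStripper(object):
    """Removes attributes that should not take part in change detection."""

    def input(self, feature):
        for name in feature.getAllAttributeNames():
            if name in IGNORED_NAMES or name.startswith(IGNORED_PREFIXES):
                feature.removeAttribute(name)
        self.pyoutput(feature)
```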

A problem, however, is that the data types of the attributes are not recognised correctly and are handled differently depending on the source database. Example: according to SQL Developer, an attribute X is of type NUMBER(10,0) in both the Oracle Spatial database (source) and the SDE geodatabase (target). Between the FeatureReader and the writer, however, attribute X from Oracle Spatial is handled as a 64-bit integer (1111), whereas attribute X from SDE is handled as a 64-bit real (1111.0). If I cannot guarantee that the same attributes from different sources are handled in the same way, I hardly see a way to detect changes based on the attribute values. Have I missed something?
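In case the integer/real mismatch does turn out to matter, one workaround would be to normalise integer-valued reals on both sides before the ChangeDetector; a sketch under that assumption:

```python
import fmeobjects

class NumericNormalizer(object):
    """Casts integer-valued reals (e.g. 1111.0) to plain integers so the
    same NUMBER(10,0) column compares equal regardless of the source."""

    def input(self, feature):
        for name in feature.getAllAttributeNames():
            value = feature.getAttribute(name)
            if isinstance(value, float) and value.is_integer():
                feature.setAttribute(name, int(value))
        self.pyoutput(feature)
```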

Best regards,

 

André


If the attribute names that you don't want to compare are known, a possible way is:

  1. Expose all the attribute names (e.g. OBJECTID, fme_***, etc.) with the AttributeExposer.
  2. In the ChangeDetector parameters, select "Match All Except Selected Attributes" in the "Attribute Matching Strategy" parameter, and select the exposed attributes in the "Selected Attributes" parameter.

I expect the ChangeDetector to assume that the 64-bit real 1111.0 matches the 64-bit integer 1111, regardless of the difference in internal data storage type. What was the result from your workflow?


You are right, FME seems to notice that 1111 is the same as 1111.0. The problem is other attributes, such as dates, certain numbers and the geometry coordinates. The issues with the precision of the numbers and the format of the dates may be solved by defining the corresponding attribute types more precisely in the Oracle Spatial source database. Regarding the coordinates, I need a CoordinateRounder, because SDE stores them differently according to this post. However, I need to reduce the precision to one digit after the decimal point, which is quite risky.

Maybe another way to avoid at least some of these data type problems would be to first write the Oracle Spatial feature classes into another SDE schema and perform the change detection afterwards between the two SDE databases. What do you think?



I don't have an SDE environment, so I cannot test that, but I think your reasoning is sound and it's worth trying.

 


Hey, just wanted to say thanks, as I just ran into this problem and your solution (although simple) was exactly what we were after!

 

 

Cheers!!

 


@egaru, I'm looking for a solution to the same problem. If you have one, could you share it with me?

 

Thanks,

Krish

