One way is AttributePivoter.
It looks like "_count" is effectively the row number so:
- Set "Group Rows By" to _count
- Set "Group Columns By" to gsa_attribute_name
- Set "Attribute to Analyse" as field_values
As it is a 1 vs 1 intersection of Row vs Column in the Pivot, just use something like "Max" as the statistic type.
However, as AttributePivoter can't tell what attributes it is going to end up with, it will need to be followed by an AttributeExposer to statically expose the Attribute Names to any other downstream transformers.
Finally, to create the Shape vs row-number lookup table to join this up to, send the raw table down a separate path to a DuplicateFilter, grouping by _count and gsa_shape_set_id. This will generate the row number vs shape identifier lookup table on the "Unique" port. The result can be joined to the geometries via the "gsa_shape_set_id" attribute, and in turn joined to the AttributePivoter results table via the "_count" attribute.
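The pivot-and-lookup logic above can be sketched in plain Python (not FME code) with a few hypothetical sample rows using the _count / gsa_attribute_name / field_values / gsa_shape_set_id columns:

```python
# Hypothetical sample rows in the "one row per attribute" shape described above.
rows = [
    {"_count": 1, "gsa_attribute_name": "NAME", "field_values": "Chile", "gsa_shape_set_id": "S1"},
    {"_count": 1, "gsa_attribute_name": "TYPE", "field_values": "Coastline", "gsa_shape_set_id": "S1"},
    {"_count": 2, "gsa_attribute_name": "NAME", "field_values": "Peru", "gsa_shape_set_id": "S2"},
]

# AttributePivoter equivalent: one output row per _count,
# one column per gsa_attribute_name.
pivot = {}
for r in rows:
    pivot.setdefault(r["_count"], {})[r["gsa_attribute_name"]] = r["field_values"]

# DuplicateFilter equivalent: keep the first (unique) _count -> gsa_shape_set_id pair.
lookup = {}
for r in rows:
    lookup.setdefault(r["_count"], r["gsa_shape_set_id"])

# Join the two tables on _count, as the final step describes.
joined = {c: dict(attrs, gsa_shape_set_id=lookup[c]) for c, attrs in pivot.items()}
```

This is only meant to show the data reshaping; in the workspace the same steps are performed by the transformers named above.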
On a broader note, I don't know why your life wasn't made easier by just giving you the original ArcGIS feature class as the source! It looks like it was already arranged the way you needed it before something else pulled it apart.
Thanks, I tried that but got a fatal error:
2019-12-11 14:05:27| 1.4| 0.7|INFORM|AttributeKeeper_5_OUTPUT_-1_154_Player (RecorderFactory): Played back 7056 feature(s) from file `C:\Users\oliverm\AppData\Local\Temp\wb-cache-oracle_i3-nHTlPp\Main_AttributeKeeper_5 -1 77 fo 0 OUTPUT 2 ec8526d21294a8418158130273ce14b4a040bbe3.ffs'
2019-12-11 14:05:28| 1.5| 0.2|WARN |Python Exception <TypeError>: '<' not supported between instances of 'float' and 'str'
2019-12-11 14:05:28| 1.5| 0.0|WARN |Traceback (most recent call last):
File "__init__.py", line 163, in close
File "__init__.py", line 261, in generate
File "__init__.py", line 281, in summaryData
File "__init__.py", line 281, in summaryData
File "__init__.py", line 373, in summaryData
File "__init__.py", line 417, in summarize
TypeError: '<' not supported between instances of 'float' and 'str'
2019-12-11 14:05:28| 1.5| 0.0|FATAL |AttributePivoter (PythonFactory): PythonFactory failed to close properly
2019-12-11 14:05:28| 1.6| 0.0|ERROR |AttributePivoter (PythonFactory): A fatal error has occurred. Check the logfile above for details
2019-12-11 14:05:28| 1.6| 0.0|ERROR |A fatal error has occurred. Check the logfile above for details
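For what it's worth, the traceback above is Python 3 refusing to order values of mixed types, which the AttributePivoter's summary step appears to hit when a pivoted column holds both numbers and strings. A minimal plain-Python reproduction (not FME code):

```python
# A column containing both floats and strings cannot be ordered in Python 3.
values = [3.2, "abc", 1.5]

try:
    sorted(values)  # raises the same TypeError as in the log above
    failed = False
except TypeError:
    failed = True

# One workaround is to make the comparison key uniform,
# e.g. compare everything by its string representation.
uniform = sorted(values, key=str)
```

In the workspace this would translate to making field_values a single type (e.g. casting everything to string with an AttributeManager) before the AttributePivoter, though that is an assumption about where the mixed types originate.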
Instead I have set up a ListBuilder, then merged the result with the geometry so the attributes are held in a list on each feature.
I am also working on creating the schema dynamically for each feature:
Let me know if this looks like the right way to go.
Thank you
This workflow should work:
Use an Aggregator transformer to create the feature:
- Group by gsa_name and _count.
- Set the accumulation mode to "Merge Incoming Attributes".
- Set a name for the list.
- Set Selected Attributes to contain gsa_attribute_name and field_values.
Then use the ListKeyValuePairExtractor to create the attributes:
- Set the Attribute Name List to: _list{0}.gsa_attribute_name
- Set the Attribute Value List to: _list{0}.field_values
The result should be the feature with the attributes.
The attributes will be in the feature, but will not be exposed.
You can view the results using the Data Inspector (connect the Inspector transformer).
To expose the attributes, use the AttributeExposer transformer.
Hope this helps.
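The Aggregator + ListKeyValuePairExtractor combination can be sketched in plain Python (not FME code), with a hypothetical feature whose list was built by the Aggregator:

```python
# Hypothetical feature after the Aggregator: a list of
# {gsa_attribute_name, field_values} pairs on the feature.
feature = {
    "gsa_name": "SAM_International_Boundaries",
    "_list": [
        {"gsa_attribute_name": "NAME", "field_values": "Chile/Peru"},
        {"gsa_attribute_name": "STATUS", "field_values": "Defined"},
    ],
}

# ListKeyValuePairExtractor equivalent: each name/value pair in the
# list becomes a named attribute on the feature itself.
for pair in feature["_list"]:
    feature[pair["gsa_attribute_name"]] = pair["field_values"]
```

As noted above, the resulting attributes exist on the feature but are not exposed to downstream transformers until an AttributeExposer names them.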
Thank you for the help. How does this differ from the ListBuilder?
Once I merge with the FeatureMerger to get the geometry, I have the feature, the geometry, and a list with the right attributes.
The problem I have is that the AttributeExposer needs the fields defined in advance, but I have many different features coming through with different attributes, e.g. international boundaries, coastline, etc. In this case, how can I dynamically output the attributes for each feature?
Thank you
I have built the schema and populated fme_geometry{0} dynamically, but now I get an error that there is no schema information when it writes.
Can anyone help me identify what is missing here? Thank you very much.
About exposing all attributes.
This Idea has been posted here and is what you are looking for: BulkAttributeExposer
Looks like it is not yet available, but voting for it might help.
Thank you for the help. I saw this blog post by @brianatsafe: https://knowledge.safe.com/articles/21787/dynamic-workflows-20150-and-below-destination-sche.html
If the writer at the end is dynamic, then I would have a fighting chance. As it stands, I get this error at the moment.
It does create the feature correctly but for some reason won't populate:
Tricky! I think the features are not finding the correct Feature Type Name as defined in the writer for some reason. It is complaining that it has a feature named "SAM_International_Boundaries" but then doesn't seem to find the equivalent name in the dynamic writer settings. I'd try dropping the table and re-running this. Also, if you can send your entire log, that might help.
The schema feature has to arrive in the writer before any data features, if you intend to configure the destination schema dynamically and use the attribute{} list as the schema definition.
In your workflow, there are blocking transformers (FeatureMerger and ListBuilder) on the data flow for creating the schema feature, so the schema feature would arrive after the data features. Also, a schema feature should have two special attributes: fme_feature_type_name, containing the schema definition name, and fme_schema_handling, containing "schema_only".
A possible workaround is to unconditionally merge the schema definition (i.e. the attribute{} list and the fme_geometry{} list) onto every data feature using a FeatureMerger before writing. I think it would be easier than configuring a schema feature properly and controlling the order of features.
In fact, it's an application of the traditional method "Destination Schema is Derived from a List in the First Feature", which is still available even though it's not documented anywhere now. The modern method "Destination Schema is Derived from a Schema Feature" is a variant of the traditional one.
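The "schema on every feature" workaround can be sketched in plain Python (not FME code). The attribute{} list element names below follow FME's dynamic-schema convention, but the column names and types are hypothetical:

```python
# Hypothetical destination schema: one entry per column,
# with an FME data type for each.
schema_list = [
    {"name": "NAME", "fme_data_type": "fme_varchar(100)"},
    {"name": "STATUS", "fme_data_type": "fme_varchar(50)"},
]

# Hypothetical data features heading to the dynamic writer.
data_features = [
    {"fme_feature_type": "SAM_International_Boundaries",
     "NAME": "Chile/Peru", "STATUS": "Defined"},
]

# FeatureMerger equivalent: unconditionally copy the schema definition
# onto every data feature as an attribute{} list, so the writer can
# derive the destination schema from the first feature it receives.
for f in data_features:
    for i, col in enumerate(schema_list):
        f[f"attribute{{{i}}}.name"] = col["name"]
        f[f"attribute{{{i}}}.fme_data_type"] = col["fme_data_type"]
```

This mirrors the "Destination Schema is Derived from a List in the First Feature" method described above; since every feature carries the list, feature order no longer matters.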
Thanks @takashi, I appreciate the reply. I think the schema is being written out correctly, which is great; the problem I have is that the actual data coming in is list-based: each record has its own list containing the attributes the schema is creating correctly. The issue is that I need to dynamically take that list from each record and populate it into the newly created feature. Do you have any suggestions for flipping the data in this way without defining the fields beforehand?
Thank you
A possible (and perhaps the easiest) workaround is to merge the attribute{} list that defines the destination schema onto every data feature. It's an application of the traditional method "Destination Schema is Derived from a List in the First Feature" I mentioned in the answer above.
@takashi thank you so much, this works like a dream. Thousands of features created dynamically, all with their associated attributes. I really appreciate your help on this, and also thanks to @erik_jan for helping me along the way.