Hi @venu, a well-known method is:
- AttributeCreator: create a new attribute whose name = the value of Bldg_Type and whose value = the value of Count (see the screenshot below),
- Aggregator (Accumulation Mode: Merge Incoming Attributes): aggregate the features, grouping by Parcel_No,
- AttributeExposer: optionally, expose the new attribute names (Villa, Residential Apartment, etc.).
[Addition] The BulkAttributeRenamer (Action: Regular Expression Replace) can also be used instead of the AttributeCreator above.
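For reference, the same pivot could also be scripted with a PythonCaller. This is only a minimal, untested sketch that assumes the attribute names from the question (Parcel_No, Bldg_Type, Count); the resulting attributes would still need to be exposed (or written via a dynamic schema) to appear on the writer.
```python
# Rough PythonCaller sketch mimicking AttributeCreator + Aggregator:
# buffer features per Parcel_No, turn each Bldg_Type value into an
# attribute name holding Count, and emit one feature per parcel on close().
# Attribute names are assumed from the question; adjust them to match your data.
import fmeobjects

class PivotByParcel(object):
    def __init__(self):
        self.parcels = {}  # Parcel_No -> {Bldg_Type: Count}

    def input(self, feature):
        parcel = feature.getAttribute('Parcel_No')
        bldg_type = feature.getAttribute('Bldg_Type')
        count = feature.getAttribute('Count')
        if parcel is not None and bldg_type:
            self.parcels.setdefault(parcel, {})[bldg_type] = count

    def close(self):
        # One output feature per parcel, one attribute per building type.
        for parcel, counts in self.parcels.items():
            feature = fmeobjects.FMEFeature()
            feature.setAttribute('Parcel_No', parcel)
            for bldg_type, count in counts.items():
                feature.setAttribute(bldg_type, count)
            self.pyoutput(feature)
```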
Is there any other solution, or any Python code that would do this?
In this case, AttributeCreator (or BulkAttributeRenamer) + Aggregator (+ AttributeExposer) is a well-known solution. Is there any reason you are asking for other solutions? Just curious.
@takashi, thanks for your answer and reply. I am not able to achieve what I want because those bldg_type fields have to be added to the writer manually, and the bldg_type values will change; they are not standard.
Maybe I am confused by your solution, which is why I asked for Python code. Can you suggest the easiest way to achieve the task?
Hi @venu, if the building types can change and you want to configure the destination schema automatically depending on the contents of the source dataset for each run, there is no easy way, unfortunately.
A possible approach would be to create a schema feature based on the source data with a Python script and configure Dynamic Schema from the schema feature (a rough sketch of that idea follows below).
However, it's hard to fully explain all the details here. If you could post some sample source data, I would provide a demo workspace using that data.
Are you using FME 2017.0 or FME 2017.1?
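[Addition] A very rough, untested sketch of the schema-feature idea mentioned above: collect the distinct Bldg_Type values from the source data and emit one schema feature whose attribute{}.name / attribute{}.fme_data_type lists describe Parcel_No plus one field per building type. The feature type name ('Parcels'), the data types, and the routing of the schema feature to the dynamic writer are assumptions here; see the tutorial linked below for the definitive schema-feature format.
```python
# Rough PythonCaller sketch that passes the data features through and,
# on close(), emits a schema feature describing the pivoted layout.
# Attribute names, data types and the 'Parcels' feature type are assumptions.
import fmeobjects

class SchemaFeatureBuilder(object):
    def __init__(self):
        self.bldg_types = []  # distinct building types, in first-seen order

    def input(self, feature):
        bldg_type = feature.getAttribute('Bldg_Type')
        if bldg_type and bldg_type not in self.bldg_types:
            self.bldg_types.append(bldg_type)
        self.pyoutput(feature)  # pass the data feature through unchanged

    def close(self):
        # Build one schema feature: Parcel_No plus one numeric field
        # per building type seen in the source data.
        schema = fmeobjects.FMEFeature()
        schema.setAttribute('fme_feature_type_name', 'Parcels')  # assumed name
        names = ['Parcel_No'] + self.bldg_types
        types = ['fme_varchar(50)'] + ['fme_int32'] * len(self.bldg_types)
        schema.setAttribute('attribute{}.name', names)
        schema.setAttribute('attribute{}.fme_data_type', types)
        self.pyoutput(schema)
```
In a real workspace the schema feature and the data features would need to be separated (e.g. with a TestFilter) so the schema feature reaches the dynamic writer appropriately.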
If I understand your requirement correctly, the attached workspaces should work as expected. Both were created with FME 2017.1.
See also here to learn about the fundamentals:
Dynamic Workflows: Destination Schema is Derived from a Schema Feature