That's an excellent question, because whenever you duplicate transformers like that there is normally a better way to do things.
However, I think this case is the exception to that. A different where clause would effectively give a different lookup table per feature, and that isn't something the transformer is designed to handle.
Interestingly, the SchemaMapper appears to be feature-based: a feature that enters emerges again immediately, without waiting for other features. So it might be possible to create a workaround.
I would suggest trying:
- Create a second output connection from the Input port to a FeatureReader
- Use the FeatureReader to read the data from the database table using the required where clause (which can be defined as an attribute).
- Use a FeatureWriter to write that data to a plain lookup file (like CSV or text); a sketch of what that file might contain follows this list
- Then have the SchemaMapper point to that CSV file as the lookup source
- Finally set the connection runtime order so the FeatureReader/Writer are triggered first
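For illustration only, here is a minimal Python sketch of the kind of two-column lookup file that writer step would keep regenerating for the SchemaMapper to read. The column names SourceAttribute and DestinationAttribute, the file name lookup.csv, and the rows themselves are all assumptions; in the real workspace the writer transformer produces this file, not a script:

import csv

# Hypothetical rows returned by the FeatureReader for one feature's where clause
lookup_rows = [
    {"SourceAttribute": "OLD_NAME", "DestinationAttribute": "NewName"},
    {"SourceAttribute": "OLD_CODE", "DestinationAttribute": "NewCode"},
]

# The SchemaMapper is pointed at this file, which gets rewritten before each feature arrives
with open("lookup.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["SourceAttribute", "DestinationAttribute"])
    writer.writeheader()
    writer.writerows(lookup_rows)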

What I hope happens is that each incoming feature goes to the SchemaMapper, but by the time it arrives the CSV data has been overwritten. That way it gets the unique lookup table you want.
I don't guarantee that will work (it might just cache the lookup of the first feature), and you might not consider it worth the effort of setting it up (it's a fair effort for little payoff), but if you're inquisitive then that's what I would try first.
Hope this helps
Mark
In fact, thinking about it, the FeatureWriter might hold the output connection open longer than we'd want, so the AttributeFileWriter might be a better choice there.
I'm struggling with this same problem, and while I appreciate this solution (I'd considered something similar), I think this approach is too unreliable to take a punt on. It's disappointing, as a real solution to this could be so powerful.
Edit: I think I have a working solution below.
I've managed to implement this using a FeatureMerger and a PythonCaller - basically:
1. Assume you have a list of dynamic mapping fields (source and destination attribute names) based on a Type or Category or other condition
2. FeatureMerge your feature with the list of fields using this Type/Category/Condition - ensure you choose "Generate List" so you still only have one row per feature.
3. You may need to pre-declare the list of possible destination Attributes and expose them in the PythonCaller (I'm never really sure when this is needed)
4. Configure the following Python to suit your needs:
# 'Mapping{}' is the list built by the FeatureMerger's "Generate List" option
Source = feature.getAttribute('Mapping{}.Source')
Destination = feature.getAttribute('Mapping{}.Destination')
for a, b in zip(Source, Destination):
    sourceValue = feature.getAttribute(a)
    if sourceValue:
        feature.setAttribute(b, sourceValue)
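For anyone unsure where that snippet lives, here is a minimal sketch of how it might sit inside the standard PythonCaller class template; the class name FeatureProcessor is the default, and the Mapping{} list name comes from the FeatureMerger step above, so adjust both to your own workspace:

import fmeobjects  # part of the default PythonCaller template, not strictly needed here

class FeatureProcessor(object):
    def input(self, feature):
        # Source/destination pairs built by the FeatureMerger's "Generate List" option
        source = feature.getAttribute('Mapping{}.Source') or []
        destination = feature.getAttribute('Mapping{}.Destination') or []
        for a, b in zip(source, destination):
            value = feature.getAttribute(a)
            if value:
                feature.setAttribute(b, value)
        self.pyoutput(feature)

    def close(self):
        pass

The "or []" guards simply stop the loop from failing when a feature arrives without a Mapping{} list.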
@peterx I really like the PythonCaller solution. My data sits in a cloud-based database that I can extract into FME using an HTTPCaller and I wanted a way to map the attributes without having to write the mappings into a file just for the SchemaMapper.