Question

Custom Transformer - Input attribute values can't be exposed nor used on the prompt


javierml
Contributor

Hello everyone

 

I've been designing a new custom transformer to encapsulate part of my workflow (in FME Form 2025.0). The issue is that I can't add any of my attributes in the text editor, even though all attributes are selected to be exposed in the input parameters.

Moreover, if I edit the custom transformer on its own, no attributes appear (as expected). But if I edit it inside a workbench session and choose specific attributes to expose, they are still not available in the text editor; instead, they get automatically added as user parameters of the custom transformer. Can anyone explain to me how this works? I don't get a clear answer from https://docs.safe.com/fme/2025.0/html/FME-Form-Documentation/FME-Form/Workbench/Using_Custom_Transformers.htm#Editing_Input_Output_Ports

It also irritates me that this transformer is essentially a copy of another one with only a couple of changes (adding transformers or changing their parameters, none of them touching the inputs), and that other transformer lets me add feature attribute values to any prompt.

2 replies

desiree_at_safe
Safer

Hello! Based on your description, I suspect this might be related to how the custom transformer is configured for that specific parameter ("Prompt" in your example). When "Attribute Value" doesn't appear as an option, it can indicate that the parameter ("Prompt") in the custom transformer is expecting something else (e.g. it is set up as a user parameter rather than an attribute in the custom transformer).

If you can share your workspace, I could take a look to see how the “Prompt” parameter is set up in your custom transformer. It could give us a better understanding of what you are experiencing.

Looking forward to helping you solve this! 🙂

 


javierml
Contributor
  • Author
  • April 14, 2025

Hi @d_mars,

 

I couldn't exactly find what's wrong with the prompt setup, so I have attached the custom transformer's workspace.

 

As you can see, it's just an adaptation of the GoogleGeminiConnector: instead of using a language model, I'm trying to use an image generation model. That's what I find strange, because the GoogleGeminiConnector doesn't have this issue at all.



