
Hi, I've created some published parameters for some transformers. I used Choice with Alias for all of them, and they all have the same set of aliases. For example, the choice "2000" represents 576 rows and columns in the Tiler, and represents the attribute value E in the AttributeCreator. Every time I run the workbench I have to set every published parameter to 2000.

I'm wondering if I can create one universal published parameter that drives all of these published parameters, so I only need to make the choice once each time I run the workbench.

@takashi @DaveAtSafe @david_r @jdh @Mark2AtSafe

I would consider using just a single published parameter and then several AttributeValueMappers in the workspace to translate that parameter value to the different transformer settings.

If you're comfortable with Python you could also create several private scripted parameters that return the different mappings.

Let's assume you have a published parameter called PRODUCT. You could then have a private scripted parameter called PRODUCT_TILER, which could contain:

mapping = {
  '2000': '576',  # PRODUCT 2000 corresponds to 576 pixels...
  '3000': '800',
  ...etc...
}
product_id = FME_MacroValues['PRODUCT']
return mapping.get(product_id, 'Unknown product')

You then link the Tiler setting to $(PRODUCT_TILER). Then repeat as necessary for each mapping needed.
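To illustrate "repeat as necessary", a second private scripted parameter for the AttributeCreator value might look like this (a sketch; the name PRODUCT_ATTR is hypothetical, and the pairings follow the example values in this thread):

mapping = {
  '2000': 'E',  # PRODUCT 2000 corresponds to attribute value E
  '3000': 'A',
}
product_id = FME_MacroValues['PRODUCT']
return mapping.get(product_id, 'Unknown product')

and then link the AttributeCreator value to $(PRODUCT_ATTR).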


I would second david_r's suggestions. A third alternative is to use a Choice with Alias parameter, with the value being a delimiter-separated list of the values (I prefer | but , is possible, depending on your data).

Ex. alias 2000, value 2000|576|E

alias 3000, value 3000|256|A

and then use an AttributeSplitter on the parameter in the workspace.
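To illustrate what the AttributeSplitter does with such a value, here's the same idea as a plain Python sketch (just for illustration, not an actual transformer configuration):

value = '2000|576|E'   # what the parameter would contain for alias 2000
product, tiler_size, attr_value = value.split('|')
# product -> '2000', tiler_size -> '576', attr_value -> 'E'

The AttributeSplitter performs the equivalent split on each feature, writing the parts into a list attribute that you can then reference.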

I'll just elaborate a bit on some key differences:

If you use AttributeSplitters or AttributeValueMappers, they will each be executed once for every feature that passes through. This is not a big deal if you haven't got many features, but if you have millions it could potentially slow things down a little bit. It will also "burden" all the features with some additional attributes that will take up a bit of memory. Again, this is very rarely an issue, until suddenly it is :-)

Using the private scripted parameters, you trade complexity for efficiency, since the scripted parameter is only evaluated once, before your translation starts, regardless of how many features it processes. There are also no temporary attributes added to your features.


@bobo

I kinda do what bobo is asking (methinks).

I have sets of choice parameters, with or without aliases.

Then I use scripted parameters to operate on those, and use the scripted parameters as parameters in the workbench.

Actually, this is in the spirit of what @david_r is suggesting. (I mostly do these in Tcl.)



Yes, that's what I want. Can you share your method, please?



@david_r Yeah, I get your point about the per-feature overhead. But I'm not a programmer and not very familiar with Python. Fortunately, I'll only be working with at most 100,000 features, so I think I'll go with the AttributeValueMapper or AttributeSplitter method.


Thank you, I'll try the choice-with-alias method tomorrow.


Sounds like a plan. There's much to be said for readability and ease of maintenance! Somewhere around 100,000 features should be no issue at all.



@david_r Is it possible to just copy your scripted parameter code into a PythonCaller and modify it? Will that work? I'm new to Python.



Thank you very much.



Yes, that should work (except for the "...etc..." line :-)

If you get errors, post your script here and I'll try to help you along.
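In case it helps, here's a minimal sketch of how the mapping could look inside a PythonCaller (the output attribute name tiler_size is just an illustration; fme.macroValues is how a PythonCaller reads published parameter values):

import fme  # exposes the workspace's published parameter values

mapping = {
    '2000': '576',
    '3000': '800',
}

def processFeature(feature):
    # Called once per feature; writes the mapped Tiler size as an attribute.
    product_id = fme.macroValues['PRODUCT']
    feature.setAttribute('tiler_size', mapping.get(product_id, 'Unknown product'))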

