Hi @takashi
Thank you for bringing this to our attention. I have filed an enhancement request (PR81520) regarding the Creator Runtime Order dialog's inconsistency with the PythonCreator methods, and a defect report (PR81523) for the PythonCreator runtime order not being honoured when the source dataset is empty.
I'll update you when they have been addressed.
Kind regards,
Debbi
Hi @DebbiAtSafe, thanks for filing the PRs.
Just an idea. In the Creator Runtime Order dialog, it would be ideal if a suffix indicating the method ('input' or 'close') could be added to the PythonCreator names shown in the tables, e.g.
- "PythonCreator #input" in the Creators Run Before Readers table
- "PythonCreator #close" in the Creators Run After Readers table
Hi @takashi
You're very welcome. I will add your idea to the relevant PR.
Kind regards,
Debbi
Hi @takashi
Regarding PR81523 (incorrect runtime order), our developer has looked into it and says this is expected behaviour. I'll try to summarize his reasoning.
Pipeline: CreationFactory (CF_PC1) -> PythonFactory (PF_PC1) -> CreationFactory (CF_PC2) -> PythonFactory (PF_PC2) -> AttributeExposer/Inspector factories
From my understanding, the feature being read acts as a trigger. As it flows through the pipeline, it triggers CF_PC1 to create its 'input' feature, which branches off to the AttributeExposer/Inspector factories (the end of the pipeline). The trigger then continues on and is noticed by the second PythonCreator's CreationFactory: CF_PC2 creates its 'input' feature, which also goes to the end of the pipeline. Finally the trigger feature itself reaches the end of the pipeline. As there are no more features, the pipeline begins to shut down.
Since CF_PC1 has already created its feature, nothing is created when it shuts down. PF_PC1 creates its 'close' feature on shutdown and sends it to the end of the pipeline. CF_PC2 then shuts down (again nothing is created, because its feature was created earlier). PF_PC2 creates its 'close' feature on shutdown and sends it to the end of the pipeline. At that point the pipeline has shut down completely.
However, when no features are read there is no trigger, so the CreationFactories and PythonFactories simply act in sequence as they shut down.
In summary:
- With some reader input: input, input, <FeaturesRead>, close, close
- Without reader input: input, close, input, close
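For illustration, each PythonCreator in such a test can be set up roughly like the sketch below (the class name is a placeholder; the '_method' attribute mirrors the snippet further down), with the 'input' feature emitted while the pipeline is running and the 'close' feature emitted during shutdown:
import fmeobjects

# Illustrative PythonCreator class; the class name is a placeholder.
class FeatureCreator(object):
    def input(self, feature):
        # Called while the pipeline is running; emits the 'input' feature.
        newFeature = fmeobjects.FMEFeature()
        newFeature.setAttribute('_method', 'input')
        self.pyoutput(newFeature)

    def close(self):
        # Called while the pipeline shuts down; emits the 'close' feature.
        newFeature = fmeobjects.FMEFeature()
        newFeature.setAttribute('_method', 'close')
        self.pyoutput(newFeature)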
Our developer also suggested the following:
Indeed, if the features from the data were to be emitted after a couple of PythonCreators' initial features, I would run the reader input through the PythonCreators sequentially, with code something like:
import fmeobjects

# Class name is illustrative; it must match the class configured in the transformer.
class FeatureProcessor(object):
    def __init__(self):
        # True once the initial 'input' feature has been emitted.
        self.doneInitialFeature = False

    def input(self, feature):
        # Emit this transformer's own 'input' feature before the first reader feature.
        if not self.doneInitialFeature:
            newFeature = fmeobjects.FMEFeature()
            newFeature.setAttribute('_method', 'input')
            self.pyoutput(newFeature)
            self.doneInitialFeature = True
        # Pass the incoming reader feature through.
        self.pyoutput(feature)
Kind regards,
Debbi
Hi @DebbiAtSafe, thanks for your explanation. I can understand how the script in a PythonCreator is executed at runtime, but that is not my point.
The "Creator Runtime Order" functionality was introduced in FME 2016.1. It was a great enhancement, but it was not perfect at first: the runtime order of some Creator transformers was not consistent with the settings in the "Creator Runtime Order" dialog. The inconsistencies have since been improved, and the work seems to be almost complete in FME 2017.1.
Here is a history:
Control the Execution Order of "Starter" Transformers
However, in FME 2017.1 I noticed that the runtime order of the features created in the 'close' method of a PythonCreator cannot be controlled through the "Creator Runtime Order" dialog. That is the issue I pointed out.
It would be ideal if the runtime order of the features from both the 'input' and 'close' methods could be controlled completely and separately through the "Creator Runtime Order" dialog. Even if it is hard to control them completely because of implementation limitations, it should at least be documented how the runtime order for the PythonCreator is determined.
It would be ideal if the user could control the runtime order of all the Creator transformers with the Creator Runtime Order dialog shown like this.
Looking at this, I still think that the PythonCreators should output features in the order "input-input-close-close", according to the order set in the "Creator Runtime Order" dialog, even if there were no readers. What do you think?
Yes, I agree, this would be ideal. As more and more transformers support notions such as "suppliers first" etc it becomes increasingly important to be able to have fine-grained control over the feature order.
Hi @takashi
You're welcome.
I do agree with you and @david_r that being able to control the 'close' method through the Creator Runtime Order dialog would be ideal. The enhancement request is still active.
My explanation above was an attempt to pass on one of our developer's thoughts regarding the input-input-close-close situation. After reading his explanation, I think I understand why it is happening.
Yes, I can understand why it is happening, but I cannot agree that it conforms to the concept of the "Creator Runtime Order" functionality.
Thanks for your consideration.