Can you perhaps just send all the small jobs to a single WorkspaceRunner, and then after that transformer have your loop check for the last DWG in the sequence? Once it is found to exist, you need some way to confirm that the file has actually finished being written; a Decelerator might do here if you know roughly how long that last job takes. Then a second WorkspaceRunner inline to do the combination of all the DWGs.
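If you go that route, one way to avoid guessing a Decelerator time is a small PythonCaller that polls for the last DWG and only releases its feature once the file exists and its size has stopped changing. This is just a sketch; the attribute name 'last_dwg_path' and the polling intervals are assumptions, not anything taken from the workspace above.
import os, time

class WaitForLastDwg(object):
    def input(self, feature):
        # Hypothetical attribute holding the path of the last DWG in the sequence.
        path = feature.getAttribute('last_dwg_path')
        # Wait for the file to appear.
        while not os.path.exists(path):
            time.sleep(1.0)
        # Treat the file as finished once its size is unchanged between two polls.
        size = -1
        while os.path.getsize(path) != size:
            size = os.path.getsize(path)
            time.sleep(5.0)
        self.pyoutput(feature)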
My current workaround is to wrap the WorkspaceRunner in a custom transformer. Set the WorkspaceRunner to Wait for Job to Complete: Yes, but set up parallel processing on the custom transformer.

That way your features only leave the WorkspaceRunner when the child process is complete, but you can run up to 7 child processes at the same time (depending on your license and number of cores).

Note there will be issues if you have more than 500 groups in the parallel process.
Nice option. On this basis I've created this idea. https://knowledge.safe.com/content/idea/22570/workspacerunner-identify-when-all-child-processes.html
At the FME User Conference, while on stage, I asked about a WorkspaceRunner option for Wait for Job to Complete: Yes with multiple concurrent processes, and Don promised it was in the works.
You could, if you knew approximately how long the last set of files was going to run, use the feature from the summary point, decelerate it by that length of time, and then feed it into the FeatureHolder. That way you could at least remove the custom transformer/Decelerator combo.
I think @jdh's suggestion is a good solution. As another option, Python scripting might also be possible.
Assume that the initiator features for the WorkspaceRunner contain an attribute storing the destination DWG file path (e.g. "destdataset_acad"), and that the DWG path will be passed to the child workspace through a published parameter (e.g. "DestDataset_ACAD") by the WorkspaceRunner.
In the child workspace, save the translation status into a text file with a Shutdown Python Script.
# Shutdown Python Script example (child workspace).
# Save translation status into a text file.
# Status text file path = destination DWG file path + ".txt", for example.
import fme
with open('%s.txt' % fme.macroValues['DestDataset_ACAD'], 'w') as f:
    f.write('success' if fme.status else 'failed')
In the main workspace, wait for job completion with a PythonCaller that watches for the creation of the status files.
# PythonCaller script example (main workspace): Wait for job completion.
import fmeobjects, os, time
class FeatureProcessor(object):
    def __init__(self):
        # List of destination DWG file paths.
        self.dpaths = []

    def input(self, feature):
        # Save a destination DWG file path.
        self.dpaths.append(feature.getAttribute('destdataset_acad'))

    def close(self):
        # List of status file paths.
        spaths = ['%s.txt' % path for path in self.dpaths]

        # Wait until all status files are created.
        while len([path for path in spaths if os.path.exists(path)]) < len(self.dpaths):
            time.sleep(1.0) # suspend execution, e.g. for 1.0 seconds

        # Finally create and output a feature.
        # Optionally add attributes describing translation status to the feature.
        feature = fmeobjects.FMEFeature()
        for i, path in enumerate(spaths):
            with open(path) as f:
                feature.setAttribute('result{%d}.status' % i, f.read())
                feature.setAttribute('result{%d}.dwg' % i, self.dpaths[i])
            os.remove(path) # remove the status file.
        self.pyoutput(feature)
If you need to preserve initiator features, this script does that.
# PythonCaller script example 2: Wait for job completion.
import fmeobjects, os, time
class FeatureProcessor2(object):
    def __init__(self):
        # List of input features.
        self.features = []

    def input(self, feature):
        # Save the input feature.
        self.features.append(feature)

    def close(self):
        # Create list of status file paths.
        spaths = []
        for feature in self.features:
            spaths.append('%s.txt' % feature.getAttribute('destdataset_acad'))

        # Wait until all status files are created.
        while len([path for path in spaths if os.path.exists(path)]) < len(self.features):
            time.sleep(1.0) # suspend execution, e.g. for 1.0 seconds

        # Finally output features.
        for feature, path in zip(self.features, spaths):
            with open(path) as f:
                feature.setAttribute('status', f.read())
            os.remove(path) # remove the status file.
            self.pyoutput(feature)
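One caveat with both wait loops above: if a child process is killed before its shutdown script runs, the status file is never written and close() will block forever. A hedged variation is to give up after a timeout; the helper below is only a sketch (the function name and the 600-second default are my own, not part of the original scripts).
import os, time

def wait_for_status_files(spaths, timeout=600.0, interval=1.0):
    # Return True if every status file appears before the timeout, False otherwise.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if all(os.path.exists(path) for path in spaths):
            return True
        time.sleep(interval)
    return False
In close() you could call wait_for_status_files(spaths) and, if it returns False, flag the missing jobs instead of hanging.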
I would like to go for this option, but I can't get a parallel setup around the custom WorkspaceRunner. In my main workspace there's only a flow of 70 entities (one for each child workspace to be launched). All the data computing is done in the child workspace.
I wonder if it is even possible in my case. (I've gone through this article: https://knowledge.safe.com/articles/1211/parallel-processing.html)
The only thing in the custom transformer is a WorkspaceRunner. Is there only one child workspace fmw?

If so, publish the parameters of the child workspace so that you can set them on the main canvas in the custom transformer, just as if you were setting them directly in the WorkspaceRunner.

If you don't have an attribute to parallel process on, you can use a ModuloCounter; a sketch of the same idea follows.
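The ModuloCounter simply spreads features across a fixed number of groups so that the parallel processing on the custom transformer has something to group by. If you would rather roll your own, a minimal PythonCaller sketch of the same idea (the attribute name '_parallel_group' and the group count of 7 are only example values) is:
class ModuloGrouper(object):
    def __init__(self):
        self.count = 0
        self.num_groups = 7  # e.g. match the number of concurrent processes you want

    def input(self, feature):
        # Assign each feature to one of num_groups groups, round-robin.
        feature.setAttribute('_parallel_group', self.count % self.num_groups)
        self.count += 1
        self.pyoutput(feature)
Then use _parallel_group as the group-by attribute when enabling parallel processing on the custom transformer.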
Every child workspace gets created in the main workbench (with different parameters, of course), so there are multiple (all unique).
Parameters are published to the custom transformer (containing the runner).
I did create something similar to the ModuloCounter (didn't know of that one).
Sorry, to clarify: are you calling the same workspace multiple times with different parameters each time, or are you calling multiple different workspaces?

If the latter, then you need one custom transformer per workspace being called.
As I reached a bit of a dead end with the other options, I've tried this with a couple of child workspaces and it seems to be working. Thanks!