Hi Everyone,

I tried to use the SchemaMapper transformer to create a new output Excel spreadsheet, taking a customised schema CSV file as input. The problem is that the schema file sits in a randomised folder (a date-time-stamped temp folder name, e.g. 20190903100000) each time the workbench runs. (PS: the randomised folder is created at the beginning of the workflow and its path is passed along as an attribute value.)

Because of the randomised temp folder name, I am not able to pass the schema file into the SchemaMapper as the dataset. I then tried to create a custom transformer so that I could pass the temp folder name in via attribute value > published parameter of the custom transformer. The new problem is that the custom transformer does not appear to pick up the attribute value (now a published parameter) in the Dataset field of the SchemaMapper.

It always throws the error: "ERROR |CSV reader: Failed to open file '@Value(working_folder)Schema.csv' for reading. Please ensure that the file exists and you have sufficient privileges to read it"

Can anyone advise what would be the best method to approach this?

Note that I am aware that I could kick off a second workbench, which works perfectly, but I would like to keep everything in a single workbench.

Please find attached a sample workbench.

Thank you everyone for your assistance.

Hi! Whenever I use paths within custom transformers, I try to add them as attributes, for instance _fullfilename. Also use the "Single Output Port" option if the schema is not important. Make sure the fullfilename is available by double-clicking the green Input box.

Hope this helps @doni.tan


Hi @sigtill,

Thank you for your quick response. Sorry for not being up front: the issue is specific to the SchemaMapper transformer rather than other transformers. I tried what you mentioned above with a FeatureReader and it works perfectly, but if I try it on the SchemaMapper, it throws the same error.

I have now created a sample workbench and attached it to the initial question.

But thanks for your assistance.


The SchemaMapper transformer can only take a hardcoded value or a published parameter as the Dataset parameter. Wrapping it in a custom transformer doesn't change this behaviour; it just allows you to type the hardcoded value rather than using the file browser GUI.

This means that the SchemaMapper is not resolving the value of the attribute; it's looking for a file literally called '@Value...'.

The only way I see this working is if you used a scripted private parameter to work out the Dataset based on the TempFolder parameter, and used that in the SchemaMapper.

You would probably also want to move the directory-creation portion of the workspace into this parameter rather than using the SystemCaller.

I would also suggest creating an Idea to allow the SchemaMapper to use attributes as input parameters, though I don't know how technically feasible that is.
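
To make that concrete, here is a minimal sketch of what such a Python scripted private parameter could look like. It assumes a published parameter named TEMP_FOLDER (a hypothetical name; substitute whatever your workspace uses for the temp folder), that scripted parameters can read other parameters through the FME_MacroValues dictionary, and that the value returned by the script becomes the parameter value:

import os

# Read the run-specific folder from the (hypothetical) TEMP_FOLDER parameter,
# e.g. C:\Temp\20190903100000
folder = FME_MacroValues['TEMP_FOLDER']

# Create the folder here instead of with a SystemCaller, as suggested above.
if not os.path.isdir(folder):
    os.makedirs(folder)

# The returned value becomes this parameter's value.
return os.path.join(folder, 'Schema.csv')

The SchemaMapper's Dataset can then reference this private parameter, which resolves before the transformer tries to open the file.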

Hi @jdh, thank you for the input. I am hoping to find out whether anyone has an alternative solution, such as building an alternate flow that mimics how the SchemaMapper works. That way it would not require scripting, which calls for in-depth interpretation of the workflow.


I've been pondering the same problem for a while, and I have something that works for my use case.

You have to give the SchemaMapper a hardcoded file location to read, but you can edit that file. So I use a FeatureReader followed by a FeatureWriter to dynamically read my true source file and then write it to the location that the SchemaMapper expects. In other words, I pass the location of my true source file in a parameter and point the SchemaMapper at a sort of default file location.

For my use case, I have tested this and it works.

An image of my flow
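
For readers who would rather do the copy step in a single transformer, here is a hedged sketch of the same idea using a PythonCaller instead of the FeatureReader/FeatureWriter pair. The attribute name working_folder is taken from the error message earlier in the thread; the file name Schema.csv and the hardcoded target path are illustrative assumptions, so adjust them to your workspace:

import os
import shutil

# Hypothetical hardcoded location that the SchemaMapper's Dataset points at.
FIXED_SCHEMA_PATH = r'C:\FMEWork\default\Schema.csv'

class SchemaCopier(object):
    def input(self, feature):
        # Build the real path from the run-specific folder carried on the feature.
        source = os.path.join(feature.getAttribute('working_folder'), 'Schema.csv')
        # Make sure the fixed target folder exists, then copy the schema file there.
        os.makedirs(os.path.dirname(FIXED_SCHEMA_PATH), exist_ok=True)
        shutil.copy(source, FIXED_SCHEMA_PATH)
        self.pyoutput(feature)

Placed upstream of the SchemaMapper, this keeps the transformer's hardcoded Dataset valid while the actual schema content still comes from the randomised temp folder.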