
I want to run a batch process that loads a series of file-based datasets from a folder, applies some simple transformations, and exports each one to a database table. The workspace in the image below is run by a workspace runner. The complication is that each source has some standard columns plus an unknown number of additional columns, and each file needs to be loaded into the database as a new table. If I configure the workspace against one source file, the schema configuration won't be correct for the next. How can I make the schema definition dynamic? I've been researching the subject but haven't found an answer.

Hi @theandroidbell,

Have you worked through the "Tutorial: Dynamic Workflows" article?

There are several ways to configure a dynamic schema, but I think the "Destination Schema as a Mirror Image of the Source Dataset" approach could be applied to your case: the writer takes its schema from whatever attributes arrive from the reader, so each file defines its own table structure.
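Outside of FME, the same "destination schema mirrors the source dataset" idea can be sketched in plain Python: read each file's header, and let those columns (standard ones plus any extras) define a new database table named after the file. This is only a conceptual illustration, not FME's implementation; the CSV inputs, SQLite target, and function name are assumptions for the sketch.

```python
# Conceptual sketch (not FME): the destination table's schema is taken
# directly from each source file, so extra columns are handled automatically.
# Assumes CSV sources and a SQLite database; names here are hypothetical.
import csv
import sqlite3
from pathlib import Path

def load_file_as_table(conn: sqlite3.Connection, path: str) -> None:
    """Create a table named after the file, with columns read from its header."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # the schema comes from the file itself
        table = Path(path).stem
        cols = ", ".join(f'"{c}" TEXT' for c in header)
        conn.execute(f'CREATE TABLE "{table}" ({cols})')
        placeholders = ", ".join("?" for _ in header)
        conn.executemany(
            f'INSERT INTO "{table}" VALUES ({placeholders})', reader
        )
    conn.commit()
```

In FME terms, the reader plays the role of `next(reader)` (discovering the schema at run time) and a dynamic writer plays the role of the generated `CREATE TABLE`: neither has the column list hard-coded at design time.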


...