Heya,
I'm having some trouble writing two CSV tables (they share a relationship via a common ID in both tables) to an Esri feature class and related table (linked via a relationship class - GlobalID / parent GlobalID), both of which have a pre-configured schema.
Here are the steps I have taken and some screenshots of my process for loading the CSV data into the feature class schema (part 1/2).
Part 1/2 - load geocoded main CSV table data into the Esri feature class
The CSV reader defaults to reading every field as text, so the Esri Reprojector is trying to pass all these text fields into a set schema. Do I need to include a SchemaMapper? If so, what does that lookup table look like and what format does it need to be in? Or do I need to define all the fields (and their data types) in the CSV reader?
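To show what I mean by the text-versus-typed mismatch, here's a rough Python sketch of the kind of conversion I imagine has to happen somewhere - the field names, types, and file name are made up for illustration and are not the real Awhina schema:

import csv

# Hypothetical field names and types - NOT the real Awhina schema,
# just to illustrate the text -> typed conversion I think is missing.
TARGET_TYPES = {
    "event_id": int,
    "easting": float,
    "northing": float,
    "status": str,
}

def cast_row(row):
    """Cast the all-text values from csv.DictReader to the target types."""
    typed = {}
    for field, value in row.items():
        caster = TARGET_TYPES.get(field, str)  # unknown fields stay as text
        typed[field] = caster(value) if value != "" else None
    return typed

with open("main_table.csv", newline="") as f:  # hypothetical file name
    for row in csv.DictReader(f):
        print(cast_row(row))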
The CSV is being created correctly, no problem, and the data are all in the same projection and region when I view them in the preview with feature caching. The writer that writes to the Awhina schema cannot be run - it's greyed out. I did manage to run it once after some modifications, but the log file had a lot of errors, including a warning that the coordinate system does not exist, and a date error:
"2020-05-11 15:27:29| 0.9| 0.0|WARN |Coordinate system named _LL-WGS84_0 does not exist."
The Esri date fields are of type date, but the CSV is passing datetime - not sure if that is an issue? (There's a rough sketch of what I mean below the screenshots.)
(screenshot above: example of the CSV date format)
(screenshot above: the CSV writer)
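To make the date point concrete, here's roughly the trimming I have in mind - purely illustrative Python with a guessed input format (not taken from the actual workspace), and assuming the target just wants a plain YYYYMMDD date:

from datetime import datetime

def to_date_only(value, in_fmt="%d/%m/%Y %H:%M"):
    # in_fmt is a guess at the CSV's datetime format; adjust to match the real data.
    # The output drops the time portion, keeping only the date (YYYYMMDD).
    return datetime.strptime(value, in_fmt).strftime("%Y%m%d")

print(to_date_only("11/05/2020 15:27"))  # -> 20200511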
Part 2/2 - read the CSV related table and load it into the Esri related table
In the above model, the CSV writes to a GDB with no set schema and writes all the features fine, but it cannot run against the related table that has the set schema.
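In case it helps explain the relationship I'm trying to preserve, here's a rough Python sketch of how the two CSVs relate via the common ID - the column and file names are placeholders, not the real ones:

import csv
from collections import defaultdict

# Hypothetical column name - the real files use a different common ID field.
PARENT_ID = "record_id"

def load_related(parent_csv, child_csv):
    """Group child rows under their parent row via the shared ID,
    mirroring the GlobalID / parent GlobalID relationship class."""
    children = defaultdict(list)
    with open(child_csv, newline="") as f:
        for row in csv.DictReader(f):
            children[row[PARENT_ID]].append(row)

    with open(parent_csv, newline="") as f:
        for parent in csv.DictReader(f):
            yield parent, children.get(parent[PARENT_ID], [])

# Example usage (hypothetical file names):
# for parent, kids in load_related("main_table.csv", "related_table.csv"):
#     print(parent[PARENT_ID], len(kids))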
Thanks for any help you can provide; I'm a bit new to FME, so I may not be using the best approach here.
Cheers,
Simon