I am using the SchemaScanner with a dynamic writer in my workspace, and it is awesome except that it often incorrectly identifies some of my integer fields as date/time fields. If I turn off date/time detection, my actual date/time fields get written as strings or integers. Right now I run it with date/time detection on, put an AttributeManager on the schema, and manually adjust the fields it has misidentified, but I feel this kind of takes away from the dynamic nature of the SchemaScanner. Is there a better way to do this? I wish I could tell it to ignore certain fields, or to specify which fields it should look for date/times in, but again that takes away from the dynamic nature of the workbench.
A couple of things:
- Is the data source itself something like a database table or geodatabase? If so, you don’t need SchemaScanner at all: you can read the schema directly from the source field definitions with a FeatureReader set to read the schema only, which is also faster to run.
- What is the raw date/time string format in the source data? You can encode that pattern in SchemaScanner’s “Convert Input Date Format to FME Date” parameter. Per the Help, this option is available when “Output Schema Before Data Features” is set to Yes. See the sketch below for what that conversion does.
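For illustration only, here is a minimal Python sketch of what that conversion amounts to, assuming a hypothetical source value of 31/01/2024 14:30:00 and strptime-style format codes (check the parameter help for the exact codes SchemaScanner accepts). FME’s canonical date/time form is yyyymmddHHMMSS.

```python
from datetime import datetime

# Hypothetical example: a raw source value and the pattern you would supply to
# SchemaScanner's "Convert Input Date Format to FME Date" parameter.
raw_value = "31/01/2024 14:30:00"    # assumed source format: dd/mm/yyyy HH:MM:SS
input_format = "%d/%m/%Y %H:%M:%S"   # strptime-style codes; the exact codes FME accepts may differ

# Parse the raw string, then emit FME's canonical date/time form (yyyymmddHHMMSS).
parsed = datetime.strptime(raw_value, input_format)
fme_datetime = parsed.strftime("%Y%m%d%H%M%S")

print(fme_datetime)  # -> 20240131143000
```

In principle, values that don’t match the supplied pattern (e.g. plain integers) shouldn’t be flagged as date/times, which should help with the misidentified fields without turning detection off entirely.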