The following people have been involved in conversations of similar topics, and I'm hoping you guys might be able to help me. Thanks!
@mark_1spatial @Mark2AtSafe @takashi @david_r
Timestamps are still, as far as I know, treated as strings by FME, meaning that you'd need the reader schema feature to tell the native format (see e.g. the FeatureReader's "schema" output port).
Depending on your use case, I'd consider using the following algorithm:
If you get all the way to 3 without any exceptions, you could perhaps assume a timestamp?
Not perfect, but it might be enough for a lot of use cases.
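A minimal sketch of that kind of try-in-order detection, in Python. The exact steps of the algorithm aren't spelled out here, so the choice of int, then float, then timestamp is an assumption, as is the ISO-style timestamp format; you'd tailor both to your data:

```python
from datetime import datetime

def guess_type(value):
    """Guess a column type by attempting stricter parses first.

    Step order (1: int, 2: float, 3: timestamp) is an assumption;
    if all three parses raise, fall back to plain string.
    """
    try:
        int(value)
        return "int"
    except ValueError:
        pass
    try:
        float(value)
        return "float"
    except ValueError:
        pass
    try:
        # Assumes ISO-8601-style timestamps, e.g. "2024-01-31 12:00:00"
        datetime.fromisoformat(value)
        return "timestamp"
    except ValueError:
        return "string"
```

Reaching the timestamp branch only after int and float have both failed mirrors the "if you get all the way to 3" idea above.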
The reader schema feature that you mention, is there any way to access that from the writer? I have a local FMEFeature object created from the FMESession, and that class does not seem to have any function related to native formats. Am I confusing two completely different things?
There's a possibility that your writer can access the reader schema IF your writer is set to dynamic, but I've got no experience with that strategy. You may want to contact Safe support about this.
I'd suggest that your writer should already "know" what columns it expects will be dates. Its "DEF lines" (to use old terminology) should say what the types of each column are going to be, and then your writer should be able to know which ones are going to be dates. And so those columns would get the conversion treatment that @david_r suggests.
If you're reading from Oracle, the dates will be formatted in the FME internal format, and Oracle will have told the FME infrastructure that the column was a date, so if your format also says it supports dates in its metafile, then automatically it should be a date type when a workspace is generated.
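The "writer already knows its column types" idea might look something like this in writer-side Python. The schema dictionary and the type names below are stand-ins for whatever your DEF lines actually declare, and the internal layout assumed here is FME's YYYYMMDDHHMMSS datetime form:

```python
from datetime import datetime

# Hypothetical schema derived from the writer's DEF lines:
# column name -> declared type (names are illustrative only).
SCHEMA = {"id": "fme_int32", "name": "fme_varchar", "created": "fme_datetime"}

def format_for_output(column, value):
    """Give only schema-declared date columns the conversion treatment.

    Assumes date values arrive in FME's internal YYYYMMDDHHMMSS form;
    everything else passes through untouched.
    """
    if SCHEMA.get(column) in ("fme_date", "fme_datetime"):
        dt = datetime.strptime(value, "%Y%m%d%H%M%S")
        return dt.isoformat()  # whatever format your writer emits
    return value
```

The point is that the decision comes from the declared schema, not from inspecting each value.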
Thanks, @daleatsafe! I was worried that not all readers would necessarily convert date-type values to the fme_date etc. formats. In one test workspace, I had a CSV reader and my custom writer. While my writer's attribute was set to date, the CSV reader's was not, and I was able to connect the two without FME complaining. If I only expected/allowed the fme_date format, the only way that test case would have worked is by sticking a DateTimeConverter transformer in between. But if I want the user to have the flexibility of using fme_date from their reader (either via a transformer or automatically, like you said Oracle does), as well as using our custom writer's special format (like the CSV file content did), then I need some logic in the writer that can catch both cases.
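That "catch both cases" logic might be sketched as a small fallback parser. FME's internal datetime form (YYYYMMDDHHMMSS, or YYYYMMDD for dates) is tried first; the other formats are placeholders for whatever your source data actually uses:

```python
from datetime import datetime

def parse_flexible_date(value):
    """Accept either FME's internal date/datetime form or a source format.

    The list of fallback formats is an assumption; extend it to match
    the formats your writer's users are likely to feed it.
    """
    formats = (
        "%Y%m%d%H%M%S",        # FME internal datetime
        "%Y%m%d",              # FME internal date
        "%Y-%m-%dT%H:%M:%S",   # hypothetical source format (ISO 8601)
        "%Y-%m-%d %H:%M:%S",   # hypothetical source format (space-separated)
    )
    for fmt in formats:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date value: {value!r}")
```

Trying the strict internal forms first keeps the common path cheap, which matters if conversion speed is a concern.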
I'm trying to be as user friendly as possible, perhaps a little at the cost of the data conversion speed. I will have to weigh the pros and cons of each choice.
Thanks again for your feedback!