I have a workspace which reads rows from an Excel file and writes the same rows to another Excel file using a dynamic schema definition. It gets the schema from the Excel file that is read. My problem is that the first row that is read is skipped by the writer. In the attached example the output only contains the second row of TestExcelWriter.xlsx.
The dynamic writer (Schema Sources: Schema from Schema Feature) will not write the first feature into the destination dataset if that feature carries an attribute called "fme_schema_handling" with the value "schema_only", which is exactly what the schema feature output from the <Schema> port contains.
I don't think you need to use the FeatureMerger. You can just send the schema feature and data features to the writer feature type separately.
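If you want to double-check which features carry that flag, here is a minimal PythonCaller sketch (purely illustrative, using the standard FeatureProcessor template) that logs schema-only features as they pass through:

```python
# Minimal sketch: flag features that carry fme_schema_handling = "schema_only".
# Such features configure the destination schema but are not written as data
# rows by a dynamic writer.
class FeatureProcessor(object):
    def input(self, feature):
        if feature.getAttribute('fme_schema_handling') == 'schema_only':
            # Schema-only feature: it defines the schema, no row is written for it.
            print('Schema feature detected (fme_schema_handling = schema_only)')
        self.pyoutput(feature)

    def close(self):
        pass
```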
Problem solved. Thanks a lot.
Hello,
I just ran into the same error (with FME 2021.2); removing the FeatureMerger doesn't work (nothing is written). I had to sample the first row of each file (I was writing several files with a fanout), then re-sort the features to make sure the first row of each file is the duplicate I created, so that all rows get written.
Can you show us screenshots, including the entire workflow and the dynamic writer feature type parameters dialog?
I can't show you the entire workflow (too big), but I can show you the part of interest:
In the upper-left part is the dynamic reader; in the lower-left part is the data I want to write.
The red box is the patch I had to create.
My data is sorted beforehand, so when I use the Sampler, its parameters make sure it is the first row of each file that I duplicate:
Then I sort them again (because I don't know whether the duplicates end up at the end or exactly where I want them to be).
I remove an attribute that I only use for the ordering process.
And the dynamic settings for the writer are pretty basic:
I just start writing at the 10th row of the Excel file; the rest is unchanged.
I hope this helps you!
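If it helps, the idea behind my Sampler patch can also be sketched as a PythonCaller (this is only an illustration of the logic, not what I actually used; the group attribute name "fanout_key" is made up and would be whatever drives your fanout):

```python
# Sketch of the patch: emit every feature, plus an extra copy of the first
# feature of each fanout group, so the dynamic writer can drop one copy
# without losing any data. Assumes features arrive already sorted by group.
class FeatureProcessor(object):
    def __init__(self):
        self.seen_groups = set()

    def input(self, feature):
        group = feature.getAttribute('fanout_key')  # hypothetical fanout attribute
        if group not in self.seen_groups:
            self.seen_groups.add(group)
            self.pyoutput(feature.clone())  # duplicate of the first row of this file
        self.pyoutput(feature)

    def close(self):
        pass
```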
Thank you for your clarification.
I suppose that you intend to configure the destination schema based on the schema definition (i.e. the attribute{} list) contained in the schema feature read by the FeatureReader (<Schema> port).
If I understand your intention correctly, you can do that simply by removing the "fme_schema_handling" attribute from the schema feature. Then it's no longer necessary to sample the first feature.
The workflow should look like this.
See here to learn more.
Dynamic Workflows: Destination Schema is Derived from a Schema Feature
I think this method can also be applied in your case, since you retrieve the schema definition from an external dataset with the FeatureReader.
Dynamic Workflows: Destination Schema is Derived from an External Dataset
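One way to drop the attribute is an AttributeRemover (or AttributeManager) between the <Schema> port and the writer feature type. If you prefer to do it in a PythonCaller instead, a minimal sketch (again using the standard FeatureProcessor template) could look like this:

```python
# Minimal sketch: strip fme_schema_handling from the schema feature so the
# writer takes the destination schema from its attribute{} list without
# suppressing the first data row.
class FeatureProcessor(object):
    def input(self, feature):
        if feature.getAttribute('fme_schema_handling') == 'schema_only':
            feature.removeAttribute('fme_schema_handling')
        self.pyoutput(feature)

    def close(self):
        pass
```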
Thank you so much for your answer; I had several FME scripts to modify because of this issue. Your solution is way faster and easier (and it works, of course).
You made my day!