Does it work when re-adding the Reader?
Did you migrate the workspace from an earlier FME version?
I've encountered similar issues in 2018.1, where changes to the Excel file aren't picked up. Updating the feature types or deleting and re-adding the reader doesn't help. My workaround so far is to give the file a new name, which fixes the issue (not ideal, but fine for the work I'm doing at the moment).
Try the following:
- Delete the existing reader
- Save the workspace and exit FME Workbench
- Save the updated Excel file
- Start FME Workbench and open the workspace file
- Add a new Excel reader from scratch (see the quick check sketched below to confirm the sheet names and headers the new reader should pick up)
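Not part of FME itself, just a hedged aside: before re-adding the reader, a short pandas script can confirm what a freshly added Excel reader ought to detect. The file name below is a placeholder.

```python
# Minimal sketch (run outside FME): list the sheets in the updated workbook and
# the header row of each one, i.e. what a newly added Excel reader should see.
# "updated_data.xlsx" is a placeholder path.
import pandas as pd

xlsx = pd.ExcelFile("updated_data.xlsx")
for sheet in xlsx.sheet_names:
    # nrows=0 loads only the header row, which is what the reader schema is built from
    headers = pd.read_excel(xlsx, sheet_name=sheet, nrows=0).columns.tolist()
    print(f"{sheet}: {headers}")
```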
Agree, I've observed the same thing. I had the impression there was some caching going on in FME Workbench. Simply restarting FME and re-creating the reader fixed it for me.
Sounds like a bug to me. We'll see if we can reproduce.
This is what I have seen regarding Excel readers. If you add a reader for a workbook with multiple tabs and later delete a tab, FME will still read that tab as if it were there. I don't know why. The same goes for column headers: if you rename a column header, or even spell it slightly differently (headers are case-sensitive), FME will still read it as if the header were the way it was originally. And if you delete a column, it will still be read as if the column were there.
The only way to fix it is to delete the reader and reintroduce it into your workspace.
When dealing with Excel readers I follow this rule of thumb: set up your data structure the way you always want to use it. If you change the data structure at all, you will have to delete the reader and bring it back in to pick up your updates. When you bring it in, always check the parameters first to verify your data is coming in correctly. And always remember, column headers are case-sensitive.
I use Excel readers a lot, so I have done a lot of research on this; hopefully what I have learned will help. Good luck!
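To make the case-sensitivity point concrete, here is a small, hypothetical check (file name, sheet name, and expected headers are made up for illustration) that compares the headers a reader was originally configured with against what is actually in the sheet, using an exact match the way the Excel reader appears to.

```python
# Hypothetical check: compare the headers the reader was built from against the
# headers currently in the sheet, using an exact, case-sensitive match.
# File name, sheet name, and expected headers are illustrative only.
from openpyxl import load_workbook

EXPECTED = ["City", "State", "Population"]  # schema as it was when the reader was added

wb = load_workbook("source.xlsx", read_only=True)
ws = wb["Sheet1"]
actual = [cell.value for cell in next(ws.iter_rows(min_row=1, max_row=1))]

for name in EXPECTED:
    if name not in actual:  # "city" or "CITY" would not match "City"
        print(f"Header '{name}' not found exactly as configured; it will come in as missing")
```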
@gmbutler2
Thanks for your comment.
This is exactly what I have observed using Excel readers, basically in every FME version.
Normally the workaround of deleting the reader and adding it again fixes the problem of the deleted data still showing. However, this time the problem was a bit more persistent.
I ended up copying the sheet I needed into a new Excel file and having the reader read this new file.
Cheers,
Ed
I've always seen this, in many versions, and in transformers too, including SchemaMapper, Schema from Table, etc. It has caught me out on more than one occasion.
I just had this happen to me with a FeatureReader reading Excel files (I use FME Desktop 2021.1.0.1 Build 21614). I changed the column header (in multiple tabs) and it only read the old column header. I noticed that even if you remove the FeatureReader and replace it, sometimes the old value still remains. Even after changing the Excel file name it will still read the old header, which is super weird to me. I also tried closing the workspace and reopening it; sometimes that fixed it, sometimes it didn't.
The best way I have found to fix this issue is to close the workspace, wait a few minutes, then reopen the workspace and re-add the FeatureReader.
Yeah, I have seen that happen as well. The only pattern I have found is that if you change anything in the structure of the original Excel file, the reader does not see that change. I think this is because when you load the file originally you set all the parameters to read the file a certain way, so if you later change anything that affects that, the reader can't recognize it, because you set the parameters for the way the file was originally. For example, say your parameters state that column H holds data with the attribute heading City and column I holds data with the heading State. The reader will read those columns that way. But if you delete column H, so that the State data shifts into column H, the reader will still read column H as City and column I as State. The data will still be read, but all of your State values will now come in under City, and the values in column I, which are now null or missing, will come in as State.
The only clear fix I have found is to add a replacement reader, so that you can set the parameters to reflect any changes to the file, and then delete the old reader that still has the old parameters.
When working with Excel files, I always make sure the structure of my source files stays the same, so that I don't need a new reader for every change.
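A hedged illustration of that shift (file name, sheet name, and column letters are invented, and this is not FME internals, just the same idea): a schema that maps fixed column positions to attribute names keeps pointing at those positions after a column is deleted, so the values land under the wrong attribute.

```python
# Illustration only: a "stale" schema captured when the reader was added maps
# fixed column positions to attribute names. Delete a column in the source and
# those positions now point at shifted data. Names and paths are invented.
from openpyxl import load_workbook

stale_schema = {"H": "City", "I": "State"}  # positions as they were originally

wb = load_workbook("addresses.xlsx")        # the workbook after column H was deleted
ws = wb["Sheet1"]

for row_num in range(2, 5):                 # a few data rows
    feature = {attr: ws[f"{col}{row_num}"].value for col, attr in stale_schema.items()}
    # 'City' now holds what used to be the State values, and 'State' is mostly
    # None because everything shifted one column to the left.
    print(feature)
```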
Does anyone know if this is being looked into as a bug?
I use Excel readers quite a bit, and I'm faced with a choice of:
- not deleting the readers and risking that the output may be incorrect because the Excel reader is reading data incorrectly
- deleting all Excel readers and creating new ones every time I want to run the workspace
The lack of confidence in the Excel reader means I normally go with the second option, but this is obviously not an ideal solution.
I have done extensive research on this issue for my own personal edification. Here's what I have personally uncovered.
When you read an Excel spreadsheet, the structure of the workbook is analyzed.
If you then modify the workbook in any way, the Excel reader will still read the old data, because that structure has been saved in the metadata of the Excel workbook. FME is not intuitive about the structure: when you add the reader, FME assumes the structure of the document you are reading will not change. There are two ways to address this.
- You can copy the tab you want to use into a new workbook. This writes the structure of the copied data into a fresh Excel document, and you can then point your reader at the new workbook (see the sketch after this list for one scripted way to do this). Bear in mind that when you use updated versions of the same file, FME will only read the structure that was specified when the reader was added; even if an updated document has multiple tabs, it will only read the structure of the tab that was specified when the reader was added.
-or-
- Save a copy of the file containing only what you want, then add your reader again. Every time you get an update of the file, you will have to repeat the process.
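Here is a minimal sketch of the copy-the-tab workaround, scripted outside FME so it can be repeated whenever an updated file arrives. The file and sheet names are placeholders, not from this thread.

```python
# Minimal sketch of the copy-the-tab workaround, run outside FME so it can be
# repeated for every updated delivery. File and sheet names are placeholders.
from openpyxl import Workbook, load_workbook

SOURCE = "weekly_delivery.xlsx"   # the updated file you receive
SHEET = "Data"                    # the one tab you actually want to read
TARGET = "reader_input.xlsx"      # the file your FME Excel reader points at

src_ws = load_workbook(SOURCE, data_only=True)[SHEET]

new_wb = Workbook()
new_ws = new_wb.active
new_ws.title = SHEET

# Copy values only; formulas and formatting are deliberately dropped.
for row in src_ws.iter_rows(values_only=True):
    new_ws.append(row)

new_wb.save(TARGET)
```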
When FME reads an Excel file, it analyzes not only the structure of the workbook but also that of the tabs, including the column headings. If the field names row changes, or if the attribute names are different (they are case-sensitive), FME will fail to read any columns where these changes have occurred.
The only way to resolve this issue is to add a new reader that reads the structure with the modified column headings and field names rows.
I have a document I get weekly that I have to re-add every time, because the individual who sends me the data may rename a column header or a sheet name or make some other small, seemingly insignificant change, and the FME Excel reader is not intuitive in its process of reading the documents that are provided. I don't think it is a bug; I think it is a matter of making sure that the format of the document you are using with FME is standardized as much as possible, and if your file does not end up in the standardized format then you just need to pivot.
How I address this: when I add a document, I expose all format attributes using the Advanced section and Schema Attributes to Expose. I then immediately modify these in my workspace to whatever I want my output format to be and remove the unnecessary junk data. That way, when I add the next document back in, I just have to click [Select All] on the schema attributes to expose and the workspace will clean it up.
I hope this helps someone, even though it doesn't really fix the issue; hopefully it gives some ways to make it less stressful for future work. Good luck everybody!
One workaround I’ve just implemented is to convert the Excel file to a CSV. I'm not seeing the “phantom fields” issue when reading in changes to my CSV file. Hopefully this helps!
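For completeness, a minimal sketch of that conversion (file and sheet names are assumptions); rerun it whenever the Excel file changes so the CSV reader always sees the current headers.

```python
# Minimal sketch of the Excel-to-CSV workaround. Re-export the sheet whenever
# the Excel file changes so the CSV reader always sees the current headers.
# File and sheet names are placeholders.
import pandas as pd

df = pd.read_excel("source_data.xlsx", sheet_name="Data")
df.to_csv("source_data.csv", index=False)
```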