If the workspace is not too complex, I would try the following:
- Start a new workspace.
- Insert a new GDB reader.
Sometimes workspaces become corrupted, and starting a new one can be the solution.
I tried creating a new workspace and I am still having the same problem. Any other ideas?
Could it be that the feature type is being passed twice from the scripted parameter? Any chance of uploading the workspace?
I have two file geodatabase readers pointed at two different file geodatabases, which are in different folders but otherwise have the same name and feature classes. One of the file geodatabases has the original feature class with 50 features, and the other has the edited feature class with 49 features. This is just for testing purposes. What I'm discovering is that if I delete the reader that points to the original feature class (with 50 features), the first reader reads only 49 features. So it appears that when it reads 99 features, it is somehow also reading the features that the other reader is supposed to be reading. Does my explanation make sense, and have you seen this behavior before?
I tried changing the file geodatabase name and feature class name of the 50-feature data, and now the two readers behave as expected. However, in the full implementation of my workflow, making sure the two file geodatabases and feature classes have different names would be a lot of extra upfront work, and I would prefer a solution within the FME settings rather than modifying my data.
I can try to upload the workspace, but since I am using FME through Esri's Data Interop, I have fewer options for exporting my workspace.
Hi @crutan
Based on the workspace you shared on live chat, it appears you are trying to dynamically pass the feature class and geodatabase names/paths to read, by reading them from a text file.
I recommend confirming the issue is not caused by incorrect parameter values passed from your scripted parameters. You can do this by using the Text Line reader and the FileNamePartExtractor, along with a TestFilter or AttributeFilter to determine which text line feature should pass to which FeatureReader.
You can use the attributes _dirpath and _rootname (or _filename) in the FeatureReader. An example of this method is attached: TextLineReaderwithFileNamePartExtractor.fmw
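If it helps to see what those attributes hold, here is a small plain-Python sketch (outside of FME) that derives the same parts from an example path. The path below is made up, and the exact part-to-attribute mapping is best double-checked in the FileNamePartExtractor documentation:

```python
import os

# Made-up example path; in the workspace this would come from a text line feature.
fullpath = r"C:\data\edited\urban_service_boundary.gdb"

dirpath = os.path.dirname(fullpath)        # directory part, e.g. C:\data\edited
filename = os.path.basename(fullpath)      # name with extension, e.g. urban_service_boundary.gdb
rootname = os.path.splitext(filename)[0]   # name without extension, e.g. urban_service_boundary

print(dirpath, rootname, filename)
```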
@debbiatsafe This was so helpful; thank you! I'm having trouble figuring out how to set up a writer with a dynamic name. I got the schema working, but I want the name to be the value of _filename from the schema port of the TestFilter (I want the output file to have the same name as the schema feature I'm comparing my new data against). Can you advise me on this? Everything I've tried has resulted in values like "rootname" or "_filename" being assigned as the writer's name, instead of the string that those attributes are supposed to hold (like "urban_service_boundary").
Hi @crutan,
You are on the right track; the only thing missing is making the attributes available (exposing them) after the FeatureReaders.
I did have that setting applied already, and I was still having the issue of output file names not getting set appropriately. I ended up returning to scripting some parameters through Python, and got the results I needed. Thanks for your help, and thank you @debbiatsafe
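In case it helps anyone else, the scripted parameter approach boils down to something like the simplified sketch below. The parameter name CONTROL_FILE and the one-path-per-line file layout are placeholders for illustration, not my actual setup; in an FME Python scripted parameter, the value of the final return statement becomes the parameter value.

```python
# Simplified Python scripted parameter (placeholder names, illustrative only).
import fme

# Another published parameter pointing at the control text file (assumed name).
control_file = fme.macroValues['CONTROL_FILE']

gdb_path = ''
with open(control_file) as f:
    for line in f:
        line = line.strip()
        if line:            # take the first non-empty line as the .gdb path
            gdb_path = line
            break

return gdb_path
```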