
Hi all,

 

So I have successfully read a shapefile into FME and written it to our ArcGIS Enterprise Portal. What I want to do next is send an updated version of the shapefile into Portal, overwriting the existing version (as this feature will be added into maps, dashboards etc.). However, I have tried to do this in several ways and it isn't working.

The shapefile is very simple: a 'provider' field, a 'total' field, and 'easting' and 'northing' fields. The provider field is likely the only attribute which will change, with the values changing regularly.

What I tried first was setting the Portal layer in the writer to INSERT with Truncate enabled, but this resulted in all attribute values being deleted from 2 of the 4 fields (provider and total)!

So I then tried setting the writer to UPDATE, but this failed as the shapefile doesn't contain an 'objectid' field. When I view the shapefile in ArcGIS it does have an ID field, but if I view it in FME, it doesn't show. I tried using the AttributeManager transformer to add a manual objectid field, but it still failed. I also tried manually adding an objectid field in ArcGIS, but it wouldn't let me as one already existed.

I feel like this should be quite a trivial thing to do, basically overwriting an existing ArcGIS Portal feature, but I am having no luck!

Any help is appreciated :)

Thanks

Dan

A Table Truncate command depends on the DB API, but the usual behaviour is not to drop the table but to delete all rows (it may or may not trigger some form of automatic vacuuming to compact the table space).

So using this method, you need to INSERT all 4 fields back with the Writer. From the situation you've described, this is probably the best and easiest way to do it, as the schema isn't changing so there is no need to delete and re-create the entire table.

The reason there is no OID or FID field on a Shape File is that they physically don't have one (yet another reason to try to avoid Shape Files; it is one more limitation in a long list). What you see in ArcGIS is added as a virtual, in-memory "field", but it is just a row counter, and worse still, it is dynamic: delete a row or sort the records and it recalculates all the numbers to match the new row order.

In your case, unless the row count and row order of the data are EXACTLY the same each time, it is a very bad idea to use an ObjectID/OID/FID for an Update operation. Updates only work if the Primary Key is unique and, importantly, a static value. ObjectIDs in ArcGIS are definitely not this, and the ESRI documentation explicitly warns against using them this way for that reason.
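The dynamic row-counter behaviour described above can be illustrated in a few lines of plain Python. This is just an analogy, not anything from FME or ArcGIS: the `fid()` helper simulates the virtual FID a shapefile shows in ArcGIS, which is nothing more than the row's current position.

```python
# Hypothetical sketch: why a dynamic row-counter "FID" fails as an update key.
# fid() simulates the virtual FID a shapefile shows in ArcGIS: it is just the
# row's current position, recomputed after every delete or sort.

def fid(rows):
    """Return (virtual_fid, row) pairs - a plain row counter, nothing stored."""
    return list(enumerate(rows))

rows = ["Provider A", "Provider B", "Provider C", "Provider D"]
before = dict(fid(rows))          # {0: 'Provider A', 1: 'Provider B', ...}

del rows[1]                       # drop "Provider B" and the counter shifts
after = dict(fid(rows))

# "Provider C" was FID 2 before the delete but FID 1 afterwards, so an
# UPDATE keyed on FID 2 would now overwrite the wrong record.
print(before[2], "->", after[1])  # Provider C -> Provider C
print(after[2])                   # Provider D (formerly FID 3)
```

This is exactly why the row count and order would have to match perfectly every run for an FID-keyed update to be safe.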


Thanks @bwn.

So it sounds like INSERT is the best option to use, and not dropping the table. I feel like I have already tried this (did quite a bit of trial and error) but will try again to make sure and see what the result is.

 

Thanks for the info about shapefiles. I am using a shapefile because the Excel table gets converted to a spatial dataset using the ArcGIS 'Create using X and Y' geoprocessing tool, which outputs a shapefile. I could perhaps add another step to convert it into another spatial format (one which doesn't create a virtual in-memory OID).

 

I will let you know how I get on.

Thanks

Dan


Hi again,

 

So I started from scratch with just 4 records in the shapefile and published it to Portal. It looked fine, so I then updated the shapefile in ArcGIS Pro (not Portal) and used FME to read in the updated shapefile and write it into Portal, overwriting the Portal version. I used the INSERT option and didn't tick Truncate.

 

This ran without errors, but the result is that the shapefile in Portal now has 8 records, and it hasn't brought across all attributes (snippet attached). Any ideas? Could this be to do with the shapefile OID? If so, I can try using a different format if need be.

 

Thanks

Dan


The fields that were successfully written have names in lowercase while the other fields without values are in sentence case. Perhaps it is an issue with the case of the attribute names?

Is there an alias defined on the Provider and Total fields?


Thanks for the response @debbiatsafe.

 

There were 4 records originally in the dataset; the following 4 were created when running the update writer in FME. So I don't think those fields with lowercase names actually ran successfully. It looks like it inserted the 4 records again, but for some reason didn't like the Provider and Total fields!

That being said, perhaps INSERT isn't the function to use? I don't want the dataset to insert 4 new rows every time; I want it to just overwrite the existing records (although realistically the only changes to the data will be in the hours field).

Thanks, Dan


@dan_mbc, almost all ArcGIS tool dialogs that create data give an option to output to a Shape File, Personal Geodatabase (PGDB) Feature Class or File Geodatabase (FGDB) Feature Class, or, for non-spatial tables, similarly dBase (dbf), PGDB Table or FGDB Table.

Out of these, FGDBs are the newest and best format, the one ESRI recommends users default to, with ESRI keeping an eye on when they can deprecate SHP and PGDB (chances are, PGDBs will be the first to go).

GDBs generate an actual ObjectID field. It is still dynamic to a degree, and still frowned upon as a Primary Key: it generally doesn't alter much except during various table re-build operations, but it still shouldn't be used as the key row identifier when a record needs a unique identifier maintained throughout its lifecycle.

I haven't got ArcGIS Pro installed, but for example, here are the drop-down options in ArcGIS 10.x.

What you should try to do in ArcGIS is use Geodatabases where possible. SHP is OK for things like sending data to other applications/environments that can't read FGDBs, but otherwise it is a generally poorer format.


The option to use in the Writer is "Truncate Table" = "Yes". An Insert is just like an Append; it adds new records. Using a Truncate + Insert operation together deletes all the existing rows and appends new rows, which is basically a full table refresh.

 

In the absence of a true Primary Key (i.e. a unique identifier field), this is usually the best approach, providing it does not cause too much overhead.

The alternative applies if "Provider" is a unique value and guaranteed to stay unique. It could then be used as the key field in the Writer's Update mode instead.

As to why they are blank, that isn't possible to diagnose without seeing the workspace, but if you send the output of the final Transformer to an Inspector Transformer just before it goes into the Writer, what does it look like? Do you see all attributes and values in the resulting Data Inspector?
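The two refresh strategies can be sketched in plain Python (this is an analogy on an in-memory "table", not the FME writer API; the field names echo Dan's data):

```python
# Illustrative sketch of the two refresh strategies on a toy in-memory table.

def truncate_and_insert(table, incoming):
    """Full refresh: delete every existing row, then append the new ones."""
    table.clear()
    table.extend(incoming)

def update_by_key(table, incoming, key):
    """Keyed update: overwrite matching rows in place. Needs a stable,
    unique key ('provider' here), which an ObjectID/FID is not."""
    index = {row[key]: i for i, row in enumerate(table)}
    for row in incoming:
        table[index[row[key]]] = row

table = [{"provider": "A", "total": 10.0}, {"provider": "B", "total": 7.5}]
new   = [{"provider": "A", "total": 12.0}, {"provider": "B", "total": 9.0}]

update_by_key(table, new, key="provider")
print(table[0]["total"])   # 12.0 - row count unchanged, values refreshed
```

Either approach avoids the doubled-up records seen with a plain INSERT, which simply appends.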


Thanks @bwn. I agree shapefiles are not ideal, and for 95% of our data we use SQL Server and file geodatabases. I only used a shapefile for this because when adding a new dataset in ArcGIS Enterprise (Portal), it accepts shapefiles if they are in a zip file, so it was an easy way to get the data into Portal and then, in theory, just overwrite it. I am not sure if it accepts geodatabases in this way; I will take a look.

What I might try to do is read the Excel spreadsheet into FME, create the geodatabase in there and publish to Portal, then try the process of overwriting/inserting the new values again.


Thanks @bwn.

 

Setting the options to truncate and insert works, in that it is a full table refresh with only 4 records now, but the provider and total fields are still blank.

I think I know what the problem is, there is a warning which says:

ArcGIS Portal Feature Service Writer: Attribute 'Provider' on feature type '4May_Dom_hours' does not match any field on the corresponding layer/table. The attribute will be ignored.

The same message also exists for the total field.

Looking into the fields, it seems the field types have changed due to how they are stored within each system. As a shapefile, the provider is a char field and the total is a double field, but once converted into Portal it stores the provider as esriFieldTypeString and the total as esriFieldTypeDouble.

I am surprised the total field hasn't worked, as both are defined as 'double', the same as the x and y fields, which come across without any problems.

I have tried manually joining the field lines together, but it didn't work. Although the field types are named differently in each system, the actual types seem sensible, so I am not sure what else to do. I may try using a file geodatabase rather than a shapefile.


Hi again,

 

So I have followed the advice, and when creating the X and Y table I used a file geodatabase instead of a shapefile. I then published this geodatabase to Portal via FME and it loads fine, but I have the same field-matching problem as before when I run the FME model again after some changes have been made to the geodatabase data (new values entered for the total hours).

 

ArcGIS Portal Feature Service Writer: Attribute 'OBJECTID' on feature type 'May4_Dom_Hours' does not match any field on the corresponding layer/table. The attribute will be ignored

ArcGIS Portal Feature Service Writer: Attribute 'Provider' on feature type 'May4_Dom_Hours' does not match any field on the corresponding layer/table. The attribute will be ignored

ArcGIS Portal Feature Service Writer: Attribute 'Total' on feature type 'May4_Dom_Hours' does not match any field on the corresponding layer/table. The attribute will be ignored

 

I have the writer set to truncate and INSERT.

Thanks


Should work; I've uploaded FGDBs directly to AGOL before, which I believe is architecturally identical to Portal, but I agree the prompts on the ArcGIS web form side can be a little confusing as to which formats are acceptable and in what kind of file container. The trick is to zip the FGDB into a single container ZIP file; the Feature Classes within it should then become available as Hosted Layers.
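The zip step can be sketched with the standard library alone. The geodatabase name below is a stand-in (a file geodatabase is just a directory ending in .gdb), and the key point is that the whole .gdb folder, not its contents loose, goes into the archive:

```python
# Sketch of the "zip the whole FGDB into one container" step, stdlib only.
# The .gdb folder name and dummy content are hypothetical stand-ins.

import os
import shutil
import tempfile
import zipfile

work = tempfile.mkdtemp()
gdb = os.path.join(work, "May4_Dom_Hours.gdb")   # stand-in FGDB folder
os.makedirs(gdb)
open(os.path.join(gdb, "gdb"), "w").close()      # dummy content file

# base_dir keeps the .gdb folder itself inside the archive, which is the
# layout Portal expects when it unpacks the upload.
archive = shutil.make_archive(os.path.join(work, "upload"), "zip",
                              root_dir=work, base_dir="May4_Dom_Hours.gdb")

with zipfile.ZipFile(archive) as z:
    print(any(n.startswith("May4_Dom_Hours.gdb") for n in z.namelist()))  # True
```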


For that, it could be back to the potential problems suggested by @debbiatsafe.

I don't think it would be an alias problem, as you had the same issue when you first created the Portal table using SHP format, and SHPs don't have field aliases. So it is likely the real names of the fields are indeed "Provider" and "Total", but to be sure, I would check the FGDB you are using for the initial upload in the ArcGIS Catalog view, inspecting whether an alias has been set with labels different from the real field names.

Otherwise, I'm thinking perhaps a Writer bug? I've noticed on a couple of other Writers that FME mistakenly thinks the destination format only accepts lowercase syntax, when the database actually supports upper and lower case.

For that, I would try renaming the Provider and Total fields to the lowercase variations "provider" and "total", both in the seed FGDB and within the FME update/insert workflow (you can use an AttributeManager transformer to rename the fields inside the workflow). If that solves it, it points to the Writer not handling Portal table field names with uppercase characters.


Thanks, I realise I could have uploaded the GDB directly into Portal as a zip file, but instead I used FME to publish the GDB into Portal. The outcome, I would imagine, is the same, but I was in FME at the time.

However, the issue with field types still seems to exist.


Thank you @bwn, you were right; the issue seems to be with the FME writer not handling Portal field names with uppercase characters in them.

I amended the raw data (Excel spreadsheet) and changed provider and total to be lowercase. I then ran the X and Y tool and recreated the geodatabase feature. Finally, I went into FME, read in the GDB and published to Portal (using Truncate and INSERT). I then amended the values in the GDB and re-ran the FME workbench, and it updated without any issues! Thank you!

The only issue I now have is that the 'raw' data won't actually be in lowercase. So if this problem is FME not handling Portal field names correctly, then when we get sent the Excel spreadsheet (I think it's daily), I may have to amend the field names to be lowercase... I might have to try to incorporate this into ArcGIS ModelBuilder.


Glad to hear you were able to get the workspace working for you. I was curious so I tested reading from Shapefile and writing against a Portal with a PostGIS datastore.

I found that FME created an Alias automatically when the attribute name contained uppercase characters. The Alias contained the original uppercase or mixed case attribute name while the actual field name is the attribute converted to lowercase (see screenshot).

My colleagues more familiar with Portal tell me this is a function of the database underlying the datastore. In my case, PostgreSQL (the underlying database) uses lowercase field names. It may be a similar situation for the Portal you are writing to.

So when updating an existing layer, take care to match the case of the attributes as seen in the Portal layer definition. I would recommend using the BulkAttributeRenamer, which can convert multiple attribute names to lowercase within the workspace.
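For anyone without the transformer to hand, the rename step amounts to this (a plain-Python equivalent of what a lowercase BulkAttributeRenamer pass does to each feature's attributes; the field names mirror Dan's data):

```python
# Plain-Python sketch of a lowercase attribute rename: fold every attribute
# name to lowercase so the feature matches the field names the Portal
# datastore actually stored.

def lowercase_attributes(feature):
    """Return the feature with every attribute name folded to lowercase."""
    return {name.lower(): value for name, value in feature.items()}

feature = {"Provider": "Provider A", "Total": 12.5,
           "easting": 430123.0, "northing": 411456.0}

print(lowercase_attributes(feature))
# {'provider': 'Provider A', 'total': 12.5, 'easting': 430123.0, 'northing': 411456.0}
```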


That's interesting. PostgreSQL will actually accept both upper and lowercase field names, although it is generally recommended to use lowercase names, as that makes its implementation of SQL more reliable: it auto-resolves any unquoted field names inside SQL statements to lowercase.

E.g. 'SELECT Provider FROM ...' in PostgreSQL resolves to 'SELECT provider FROM ...', and this can raise an error, since the statement needed to be written as 'SELECT "Provider" FROM ...' to properly resolve to a field named "Provider".

But apart from that, it isn't a limitation of PostgreSQL, just a best-practice guide, so strictly speaking FME does not need to enforce lowercase attribute names in the Writer. I'm not sure if the Portal API sitting on top of it can deal with it, but @dan_mbc's initial manual upload of mixed-case attribute names suggests it can?
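The folding rule above can be captured in a toy model (this is an illustration of PostgreSQL's documented behaviour, not a parser):

```python
# Toy model of PostgreSQL's identifier rule: unquoted identifiers fold to
# lowercase; double-quoted identifiers keep their case exactly.

def resolve_identifier(ident):
    """Fold an SQL identifier the way PostgreSQL does."""
    if ident.startswith('"') and ident.endswith('"'):
        return ident[1:-1]      # quoted: case preserved
    return ident.lower()        # unquoted: folded to lowercase

print(resolve_identifier("Provider"))     # provider  (matches a column named provider)
print(resolve_identifier('"Provider"'))   # Provider  (only matches "Provider")
```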


Thanks both @bwn and @debbiatsafe.

 

It does make sense about the underlying datastore using lowercase field names; the strange thing is, as @bwn pointed out, the initial upload to Portal works fine. I have uploaded this data to Portal in a couple of different ways: firstly I just uploaded the shapefile (in a zip file) to Portal directly, and secondly I did it within FME, and it uploaded fine without any problems. It is only when I then go to 'overwrite' it that it gives me problems.

I guess this could be because the initial upload (with mixed case) results in the underlying database changing the field names to lowercase. Then, when I come to overwrite it, it sees the source shapefile/geodatabase field names as being different, so it 'ignores' them. It's almost as if it is clever enough to handle the case when first uploading, changing it as required, but if it is overwriting and detects a change in field case, it simply ignores it?


@bwn and @dan_mbc It is true PostgreSQL will accept both upper and lowercase field names. However, as mentioned here and here, field names are auto-converted to lowercase when publishing to Portal for ArcGIS with an ArcGIS Data Store hosting server.

I realize I actually misspoke (mistyped?) in my last comment based on further tests. FME does not modify the field names or create an alias if an attribute name contains uppercase characters. This step is actually done by Portal during the publishing process.

FME's Portal writer creates a temporary, zipped geodatabase and pushes it to the portal. If you download this temporary geodatabase from the portal and inspect the feature class, you will find the field names are as shown in the writer feature type and there are no aliases defined.

Yes, the attribute names have to be exact when writing to a table with an existing schema. There should have been some warnings in the log file similar to the ones mentioned in this Q&A.


Thanks @debbiatsafe, I just went into some of the hosted feature layers in Portal and looked at their service REST endpoints, and you are right: although the fields in Portal look like they are in proper case (for example 'Provider'), this is just the alias; the actual field names have been converted to lowercase. So it is Portal which does this.

 

Now I know, so I can amend my work processes to work around this. Thank you both so much for your help with this, much appreciated @debbiatsafe @bwn

