Hi community! I've been trying to build an efficient workspace that would:

  1. Extract feature service URLs from an Excel sheet (many have the item ID number at the end)
  2. Download the GIS data from each feature service
  3. Transform the data with typical cleanup
  4. Write out to a file GDB on a local server drive

 

For steps 1 and 2, I've been experimenting with the HTTPCaller and FeatureReader but haven't managed a successful extraction yet. I think steps 3 and 4 will be straightforward, but any suggestions (specifically for getting from step 1 to step 2) are welcome. Thanks in advance for any feedback!
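
In case it helps to see the moving parts outside of FME, here's a minimal Python sketch of what steps 1 and 2 amount to at the REST level: read the URLs from the spreadsheet and request each service's metadata as JSON. The file name and column name are made up for illustration.

```python
import pandas as pd
import requests

# Hypothetical spreadsheet and column name, just for illustration
urls = pd.read_excel("feature_services.xlsx")["service_url"]

for url in urls:
    # ArcGIS REST endpoints return their metadata as JSON when f=json is passed
    resp = requests.get(url, params={"f": "json"}, timeout=60)
    resp.raise_for_status()
    meta = resp.json()
    # A FeatureServer root lists its layers; a URL that ends in a layer id
    # returns that single layer's details instead
    print(url, "->", [lyr["name"] for lyr in meta.get("layers", [])])
```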

This is a good example of a dynamic workflow, as you're dealing with multiple input schemas that all need to go through the same process. Additionally, you don't know the schemas before the process starts, so you can't manually expose attributes and set the output schemas.

 

I've attached an example process (built in 2023.1) that uses two of Esri's sample layers. The process broadly follows these steps (a rough Python sketch of steps 1-3 is included after the list):

  1. Get the details of the service using an HTTPCaller to fetch the JSON
  2. Extract the names of the layers
  3. Strip the layer id from the end of the URL
  4. Use the URL and name in a FeatureReader to read the data. Note that the features come out of the Generic port, because we don't know what the feature classes will be before the process runs.
  5. Use a SchemaScanner (with group-by on the feature type) to build a schema, and remove 'objectid' because it's a reserved field in a GDB and will cause issues
  6. With a GDB writer set to dynamic, write out the features with the correct schema(s)
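
For anyone who wants to see what steps 1-3 look like at the REST level, here's a rough Python sketch under the same assumptions; the service URL is a placeholder, and real services may differ in which JSON keys they expose.

```python
import re
import requests

# Hypothetical URL ending in a layer id, like many of the spreadsheet entries
url = "https://example.com/arcgis/rest/services/Sample/FeatureServer/3"

# Step 3: strip a trailing layer id so we query the service root
service_url = re.sub(r"/\d+/?$", "", url)

# Step 1: get the details of the service as JSON
meta = requests.get(service_url, params={"f": "json"}, timeout=60).json()

# Step 2: extract the layer names (and ids) advertised by the service
for layer in meta.get("layers", []):
    print(layer["id"], layer["name"])
```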

 


This is so very helpful @hkingsbury, and thank you for walking through the process! I have quite a lot of services that are pretty large, so it's been running for the last few hours. I'll report back once it's complete, but so far so good!



I can confirm that this is exactly what I needed, and it downloaded the data as expected! However, one issue: because some of the feature services are quite large, I'm letting it run for several hours (8-13 hours) and FME eventually freezes. @hkingsbury (or anyone else), any advice on the best way to make each feature service URL run completely through before moving on to the next service?

 

Much appreciated!

