Solved

FME errors with GeoJSON writer and reader

  • August 16, 2019
  • 2 replies
  • 203 views

aguan
Supporter

I have a workspace (FME 2018.1) that reads an ArcSDE feature class into a GeoJSON file and then reads the GeoJSON into MongoDB. The GeoJSON writer writes the file without complaint, but the run always ends with the following error, regardless of the geometry type or the size of the feature class:

-------------------------------------------------------------------------------------------------------

Opening the GEOJSON reader with source dataset 'W:\\gis_data_admin\\Admin\\AllenG\\FME\\MongoDB\\Geojson\\IHS_INT.json'

A JSON syntax error was found at line 1, column 0

The JSON data is incomplete: Unexpectedly encountered the end of JSON data

A JSON syntax error was found at line 1, column 0

The data does not contain any JSON text

-------------------------------------------------------------------------------------------------------

If I change the writer to shapefiles instead, the problem above goes away, so it seems to be related to the ArcSDE feature class.

 

On the other hand, the GeoJSON reader works fine for a small feature class, or when the GeoJSON file is small (up to a couple of GB), but it throws the error "Insufficient memory available -- error code was 2. Offending plug-in is 'JSON'" when the file size exceeds 5 GB or the feature count exceeds about 500,000. At first I thought it was a MongoDB writer issue, but the error persists after I disable that writer.
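The "JSON data is incomplete" and "does not contain any JSON text" messages in the log suggest the reader is opening an empty or truncated file. As a quick sanity check outside FME, a short Python sketch (the function name and wording are illustrative, not FME's actual implementation) can tell those cases apart:

```python
import json

def check_geojson(path):
    """Report whether a file contains complete, parseable JSON.

    Distinguishes the three situations seen in the FME log:
    no JSON text at all, truncated/incomplete JSON, and valid JSON.
    """
    with open(path, encoding="utf-8") as f:
        text = f.read()
    if not text.strip():
        return "empty: the data does not contain any JSON text"
    try:
        json.loads(text)
    except json.JSONDecodeError as e:
        # e.lineno/e.colno mirror the "line 1, column 0" style report
        return f"syntax error at line {e.lineno}, column {e.colno}: {e.msg}"
    return "ok"
```

If this reports a syntax error near the end of the file, the writer likely died mid-write (for example, from the memory error discussed below) and left a truncated FeatureCollection behind.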

 



2 replies

markatsafe
  • Best Answer
  • August 23, 2019

@ag JSON export: try to isolate the offending feature class. In the Workbench Navigator, under the Geodb reader's Features to Read node, use the Feature Types to Read parameter to process only one feature class (or a group of related feature classes) at a time. It's possible that this JSON error is the result of an earlier error, such as a memory error, so limiting the number of feature classes read might eliminate it as well.

JSON to MongoDB: FME should be able to handle larger JSON files, but this depends on your computer's resources as well as the degree of hierarchy/nesting in your JSON file. Again, splitting the JSON out by feature class would reduce the size of the individual JSON files and probably improve the performance of the MongoDB import.
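The splitting idea can also be sketched outside FME. This is a hedged example, not FME functionality: it splits one FeatureCollection by feature count (splitting by feature class, as suggested above, would group on a property instead), and it still loads the source file fully once, so it suits files that fit in memory on the machine doing the split:

```python
import json

def split_feature_collection(src_path, out_prefix, chunk_size=100_000):
    """Split one large GeoJSON FeatureCollection into smaller files.

    Writes up to chunk_size features per output file, named
    out_prefix_0.json, out_prefix_1.json, ... and returns the paths.
    """
    with open(src_path, encoding="utf-8") as f:
        fc = json.load(f)
    features = fc.get("features", [])
    paths = []
    for i in range(0, len(features), chunk_size):
        part = {"type": "FeatureCollection",
                "features": features[i:i + chunk_size]}
        path = f"{out_prefix}_{i // chunk_size}.json"
        with open(path, "w", encoding="utf-8") as out:
            json.dump(part, out)
        paths.append(path)
    return paths
```

Each output file is a complete, valid FeatureCollection, so a downstream reader (FME's GeoJSON reader included) can process them one at a time with a much smaller peak memory footprint.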

 


aguan
Supporter
  • Author
  • Supporter
  • August 27, 2019


 

@markatsafe, I successfully ran the workspace on FME Server. It can load a large GeoJSON file (15 GB) without the memory error.

However, I got data errors on a few SDE features when loading into MongoDB spatial. Those errors are skipped, though. Overall, it is a success for the POC.
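The "data errors on a few SDE features" are consistent with geometries that fail MongoDB's 2dsphere validation (self-intersecting polygons are a common culprit). As an illustration of the shape MongoDB expects, here is a hedged sketch of mapping one GeoJSON feature to a document, keeping the geometry intact so a 2dsphere index can cover it; the field layout is an assumption, not necessarily what FME's MongoDB writer produces:

```python
def feature_to_document(feature):
    """Map one GeoJSON feature to a MongoDB-style document (dict).

    MongoDB's 2dsphere index expects a GeoJSON geometry object in a
    single field, so feature['geometry'] is kept whole and the
    feature's properties are flattened alongside it.
    """
    doc = dict(feature.get("properties") or {})
    doc["geometry"] = feature.get("geometry")
    return doc
```

When inserting documents shaped like this, invalid geometries are typically rejected per document, which matches the behavior above: a few features error out and are skipped while the rest of the load succeeds.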