I'm trying to convert a CSV into separate JSON files, one JSON file for each row. The CSV has about 15 columns and is plain text.

I'm hoping for each file to end up with JSON looking like this:

{ "type" : "kb", "id" : "1", "data" : "xyz" }

But instead, the JSON data is wrapped in an array. In my case, each file must not contain the array wrapper or the files won't import successfully into Elasticsearch.

Here is how the JSON is coming across:

[ { "type" : "kb", "id" : "1", "data" : "xyz" } ]

My Configuration

I've tried this multiple ways, but the latest attempt is very simple. A CSV Reader reads the CSV file and then connects to a JSON Writer. The JSON Writer has "Fanout Dataset" checked, and the Fanout Expression is "@Value(id)", where id is a column in my CSV that I want the files named after. One thing to note: the files were not getting written until I set the "Feature Type Name" on the JSON Writer to 'id'. Before that, one consolidated file was being generated with a NULL name.

Any thoughts on how to remove the array? I've spent all day on this and see no other examples in your KB. I've also tried the JSONFragmenter and JSONTemplater but got the same result.

Hi @jlivet,

I believe the JSON Writer will always write an array, since it expects to be writing multiple features to the file.

Please try creating your JSON in the JSONTemplater, then using the Text File writer to write it out. This will write the JSON attribute verbatim, with no further formatting.
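
For illustration, here is a minimal ROOT template sketch for the JSONTemplater, assuming your CSV exposes attributes named type, id, and data (adjust these to your actual column names); fme:get-attribute() reads the value of a feature attribute:

    {
      "type" : fme:get-attribute("type"),
      "id" : fme:get-attribute("id"),
      "data" : fme:get-attribute("data")
    }

The templater stores its output in a result attribute (_result by default), which the Text File writer can then write out verbatim, one file per feature if you keep the fanout on that writer.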


Thank you very much for the quick reply. I'll have some time this week to test this and will get back to you.

