Question

How to insert extra semicolons into a CSV file that uses semicolons as delimiters


Badge +1

Hi.

 

I am trying to create a workspace that transforms a certain CSV file into one that my drawing program can understand. It uses semicolons as delimiters, but it also needs every row to have two semicolons in a certain place.

 

This is an example of how it should be:

Auto0076;490496.620;6088527.736;0.733;65;;

 

This is an example of what I can make it write:

Auto0076;490496.620;6088527.736;0.733;65

 

How can I insert the extra two semicolons at the end?

 

Regards Rasmus


9 replies

Userlevel 4
Badge +30

Hi @rasmusraun​ 

 

I suggest you use the StringConcatenator transformer, as in the image below:

 

[screenshot: StringConcatenator in a workspace]

Thanks in Advance,

Danilo

 

Badge +1

Thanks for the answer 😃

 

I tried, but it gives me this result in the CSV:

Auto0076;490496.620;6088527.736;0.733;";;"

 

Also, I wasn't quite sure what I should select in the attribute value field. The CSV file doesn't have any headers, and if I just select the last column as the attribute value, it seems to overwrite the data of that column.

 

Edit: I missed the AttributeCreator. But I am unsure how you selected the entire row like that.

 

Edit 2: Okay, now my StringConcatenator output looks like this: [screenshot: Udklip1]

But my written data looks like this: [screenshot: Udklip2]. If possible I would like to avoid creating the "att" header, as it ruins the first row.

Userlevel 3
Badge +18

A CSV file usually has a header; your problems are the result of not having one.

The easiest way to do what you want is to read (and write) it as a text file. If you set your reader to read every row as a feature (not the whole file at once), you can use an AttributeCreator to add the ;;

 

New Attribute = text_line_data

Attribute value = @Value(text_line_data);;
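If you prefer a scripted version of the same step, a PythonCaller could append the semicolons instead of the AttributeCreator. A minimal sketch, assuming the Text File reader's text_line_data attribute and FME's standard PythonCaller class template:

import fme
import fmeobjects

class FeatureProcessor(object):
    # Appends the two trailing semicolons the drawing program expects
    def input(self, feature):
        # text_line_data holds the full row read by the Text File reader
        line = feature.getAttribute('text_line_data') or ''
        feature.setAttribute('text_line_data', line + ';;')
        self.pyoutput(feature)

    def close(self):
        pass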

 

Another way would be adding a header before you read the file.
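If you go that route, the header can be added with a tiny script before FME reads the file; a minimal sketch, with hypothetical file and column names:

# Prepend a header line so the file can be read with the regular CSV reader
with open('points.csv', encoding='utf-8') as src:
    rows = src.readlines()

with open('points_with_header.csv', 'w', encoding='utf-8') as dst:
    dst.write('name;x;y;z;code;extra1;extra2\n')  # assumed column names
    dst.writelines(rows)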

 

 

 

 

Badge +1

Thanks for the answer.

 

I see. I was hoping to have one workspace do everything, but I prefer the .txt file method then :)

Userlevel 4
Badge +25


You could use the FeatureWriter and FeatureReader to write the data as CSV and then read it back as text. Then you make the substitution and write the data back to a text file. That way it's all in the same workspace.
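For reference, the fix-up step in the middle is just an append on every line; outside FME it would look roughly like this in plain Python (file names are hypothetical):

# Append ';;' to every row of the CSV the FeatureWriter produced
with open('fme_output.csv', encoding='utf-8') as src, \
        open('drawing_input.csv', 'w', encoding='utf-8') as dst:
    for row in src:
        dst.write(row.rstrip('\r\n') + ';;\n')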

Badge +1

Perfect, thanks.

Userlevel 3
Badge +18


It is possible to do everything in one workspace, but the best solution depends on what you want to do.

 

Because you don't have a header, you cannot use the CSV reader; it will always lose your first row. But you can use the Text File reader.

 


 

But your data looks like a point cloud. Are you sure you don't want to read the file as PointCloudXYZ?

Badge +1


I see. In the end I managed to do everything in one workspace with the Text File reader.

 

It is indeed a point cloud, but I didn't realise I could read the file as one. How would that be an advantage?

Userlevel 3
Badge +13


Hello @rasmusraun​, thanks for asking! The Point Cloud reader used to be a lot faster, but the new CSV reader is very fast now. Reading a point cloud via CSV could potentially give you more flexibility, but there are a lot more tools for point clouds. The point cloud format also makes use of bulk mode, which optimizes how FME stores and works with features by processing multiple similar features that share the same schema more efficiently (in bulk). Hope this helps, Kailin.
