Hi @rasmusraun
I suggest you use the StringConcatenator transformer, as in the image below:
Thanks in advance,
Danilo
Thanks for the answer 😃
I tried but it gives me this result in the csv:
Auto0076;490496.620;6088527.736;0.733;";;"
Also, I wasn't quite sure what I should select in the attribute value field. The CSV file doesn't have any headers, and if I just select the last column as the attribute value, it seems to overwrite that column's data.
Edit: I missed the AttributeCreator. But I am unsure how you selected the entire row like that.
Edit 2: Okay, now my StringConcatenator output looks like this:
But my written data looks like this: If possible, I would like to avoid creating the "att" header, as it ruins the first row.
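As a side note, the quoted ";;" at the end of the CSV line above can be reproduced outside FME: a CSV writer quotes any value that contains the delimiter. A minimal Python sketch (plain csv module, not FME-specific) that produces exactly that output:

import csv, io

buf = io.StringIO()
writer = csv.writer(buf, delimiter=";")
# The concatenated value ";;" contains the delimiter, so the writer quotes it.
writer.writerow(["Auto0076", "490496.620", "6088527.736", "0.733", ";;"])
print(buf.getvalue())  # Auto0076;490496.620;6088527.736;0.733;";;"

Appending the ";;" after the CSV has been written, as suggested below, avoids this quoting.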
A CSV file usually has a header. Your problems are the result of not having a header.
The easiest way to do what you want is to read (and write) it as a text file. If you set your reader to read every row as a feature (not the whole file at once), you can then use an AttributeCreator to add the ;;
New Attribute = text_line_data
Attribute value = @Value(text_line_data);;
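If you prefer scripting over transformers, roughly the same edit could be done in a PythonCaller instead of the AttributeCreator; a minimal sketch, assuming the Text File reader's default text_line_data attribute:

import fmeobjects

class FeatureProcessor(object):
    def input(self, feature):
        # Append ";;" to the raw text line carried by each feature.
        line = feature.getAttribute("text_line_data") or ""
        feature.setAttribute("text_line_data", line + ";;")
        self.pyoutput(feature)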
Another way would be adding a header before you read the file.
Thanks for the answer.
I see. I was hoping to have one workspace do everything, but I prefer the .txt file method then :)
You could use the FeatureWriter and FeatureReader to write the data as CSV and then read it back as text. Then you make the substitution and write the data back to a text file. That way it's all in the same workspace.
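For illustration, that round trip looks roughly like this in plain Python (hypothetical file names, outside FME): write the rows as CSV, read the file back as text, append ;; to each line, and write the result out again.

import csv

rows = [
    ["Auto0076", "490496.620", "6088527.736", "0.733"],  # sample row from this thread
]

# Step 1: write the data as semicolon-delimited CSV.
with open("points.csv", "w", newline="") as f:
    csv.writer(f, delimiter=";").writerows(rows)

# Steps 2-4: read it back as plain text, append ";;", and write it out again.
with open("points.csv") as src, open("points_fixed.csv", "w") as dst:
    for line in src:
        dst.write(line.rstrip("\r\n") + ";;\n")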
It is possible to do everything in one workspace, but the best solution depends on what you want to do.
Because you don't have a header, you cannot use a CSV reader; it will always lose your first row. You can use the Text File reader instead.
But your data looks like a point cloud. Are you sure you don't want to read the file with the PointCloudXYZ reader?
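The "loses your first row" point can be shown outside FME too; a small sketch with Python's standard csv module and a hypothetical headerless points.csv:

import csv

with open("points.csv") as f:
    every_row = list(csv.reader(f, delimiter=";"))           # every line kept as data
with open("points.csv") as f:
    assumed_header = list(csv.DictReader(f, delimiter=";"))  # first line consumed as field names
print(len(every_row), len(assumed_header))  # the second list is one row shorter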
I see. In the end I managed to get everything into one workspace with the Text File reader.
It is indeed a point cloud, but I didn't realise I could read the file as one. How would that be an advantage?
Hello @rasmusraun, thanks for asking! The Point Cloud reader used to be a lot faster, but the new CSV reader is very fast now. Reading a point cloud in via CSV could potentially give you more flexibility, but there are a lot more FME tools for point clouds. The point cloud format also makes use of bulk mode, which optimizes how FME stores and works with features by processing multiple similar features that share the same schema more efficiently (in bulk). Hope this helps, Kailin.