Hi,
you can use the Text File writer; just disable "Overwrite Existing File".
David
Hey, thank you for the reply, I will try it out... (I just realised the values of the 'output expected' in my post have shifted, sorry about that).
Hi Sani,
If you need to append File_m.csv (A B C ...) to the tail of File_n.txt (... 0 1 3), the way suggested by David would be easiest.
File_n.txt after appending File_m.csv:
3 5 6 2 7 7
8 7 5 0 1 3
A B C D E F
M N O P Q R
But I'm worried that you need to insert File_m.csv to the head of File_n.txt.
File_n.txt after inserting File_m.csv:
A B C D E F
M N O P Q R
3 5 6 2 7 7
8 7 5 0 1 3
Takashi
Hello,
Either way it gets appended will do!
That isn't a problem...
3 5 6 2 7 7
8 7 5 0 1 3
A B C D E F
M N O P Q R
but there is one issue: if each file has a header:
No. Value
1. 3 5 6 2 7 7
2. 8 7 5 0 1 3
No. Value
1. A B C D E F
2. M N O P Q R
How do I eliminate the second 'No. Value' to get:
No. Value
1. 3 5 6 2 7 7
2. 8 7 5 0 1 3
1. A B C D E F
2. M N O P Q R
Thank You
Hi,
one option could be to use the "Directory and File Pathnames" reader on the output file. If the attribute "path_filesize" is > 0, you can skip the header.
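Just to illustrate the same idea outside of FME, a rough Python sketch of the size check (the file name is only an example):
-----
# Rough sketch of the size check, assuming the output file is "File_n.txt".
# Skip the header of the incoming file only when the output already has content.
import os

out_path = "File_n.txt"  # example path; use the real output dataset here
has_content = os.path.exists(out_path) and os.path.getsize(out_path) > 0
lines_to_skip = 1 if has_content else 0  # skip the header only when appending
-----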
David
Hi,
In mixed cases (header / no header), and to make it generic, I think just setting the parameters on the reader won't be enough anymore.
As those are serial readers, I'd suggest using StringSearchers in combination with VariableSetters and VariableRetrievers.
Gio
Assume that you are reading File_m.csv with a TEXTLINE Reader and appending every line to File_n.txt with a TEXTLINE Writer (Overwrite Existing File: NO).
If File_m.csv always has a header line, you can set the "Number of Lines to Skip" parameter of the reader to 1, so that the reader always skips the first line, i.e. the header.
Otherwise (unknown whether there is a header), one possible way is:
1) Select the first line with a Sampler.
Sampling Type: First N Features
Sampling Amount: 1
2) Determine whether the first line is a header by testing the format. If the first line is a header, just discard it.
How to make this determination depends on the actual schema definition of File_m.csv.
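For example, a rough sketch of such a test in Python; the pattern is only an assumption based on the sample data and would have to match the real schema:
-----
# Rough sketch: decide whether a text line is a header or a data line.
# The pattern assumes data lines start with a row number like "1." (sample data).
import re

def is_data_line(line):
    return re.match(r'\s*\d+\.\s', line) is not None

print(is_data_line("No. Value"))       # False -> header, discard it
print(is_data_line("1. 3 5 6 2 7 7"))  # True  -> data, keep it
-----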
Hello,
Thank You for the replies! :)
When I set "Number of Lines to Skip" = 1, the output is:
No. Value
1. 3 5 6 2 7 7
2. 8 7 5 0 1 3No. Value
1. A B C D E F
2. M N O P Q R
Now I did try the Sampler and it does help remove the second 'No. Value'.
But when I try to append from the Non Sampled port to the Text Line writer (File_n.txt), the output is:
No. Value
1. 3 5 6 2 7 7
2. 8 7 5 0 1 3 1. A B C D E F
2. M N O P Q R
One thing that works is when the values from the Non Sampled port are stored in a separate Text Line writer, which can then be used as the input of a Text Line reader to append to File_n.txt.
But how do I integrate them within a single workbench?
And maybe I'm simply stretching the process unnecessarily (frankly, I'm not well versed with schemas, I only know them a little).
Regards.
Above all, we need to clarify the actual conditions. From the result tables, I guess the following.
File_m.csv
1st line is empty (only a new-line character).
2nd line is the header.
File_n.txt
There isn't a new-line character at the end of the file.
Is that right?
Is there any additional information about the conditions?
Yes, perhaps File_n.txt does not have a new-line character at the end, hence the first line following the 'No. Value' of File_m.csv gets appended right beside its last value without a space.
I just checked again!
For File_m.csv ... setting "Number of Lines to Skip" = 1,
the 'No. Value' is eliminated and the next line gets appended to File_n.txt without a space.
File_m.csv
1st line is the header (i.e. 'No. Value').
File_n.txt
There isn't a new-line character at the end of the file.
If it is guaranteed that there is no new-line character (NL) at the end of "File_n.txt", consider adding an NL to the head of the first line to be appended.
Otherwise, I think it's difficult (almost impossible) to achieve the goal unless you read "File_n.txt", because FME cannot determine whether the file ends with an NL without reading it.
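Just to illustrate that point, a rough Python sketch of the check (the file name is only an example); determining this really does require opening the file:
-----
# Rough sketch: check whether a file ends with a new-line character.
def ends_with_newline(path):
    with open(path, "rb") as f:
        f.seek(0, 2)              # jump to the end of the file
        if f.tell() == 0:
            return True           # empty file: nothing to separate
        f.seek(-1, 2)             # step back to the last byte
        return f.read(1) == b"\n"

# Prepend a new-line only when "File_n.txt" does not already end with one.
prefix = "" if ends_with_newline("File_n.txt") else "\n"
-----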
If it's unknown whether "File_n.txt" ends with a new-line character, a steady way would be:
1) Add a TEXTLINE reader to read "File_n.txt" line by line.
2) Add another TEXTLINE reader to read "File_m.csv" line by line, and discard the header line.
3) Add a TEXTLINE writer to write a new text file (e.g. "File_x.txt").
4) Send every text line coming from the readers to the writer.
Make sure that the 1st reader (File_n.txt) is located above the 2nd reader (File_m.csv) in the Navigator, so that File_n.txt will be read first.
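For reference, the merging part of steps 1) to 4) is roughly equivalent to this plain-Python sketch (file names are just the examples from this thread; in the workspace it is of course done with the TEXTLINE readers and writer):
-----
# Rough plain-Python equivalent of the merge, just to show the logic.
with open("File_x.txt", "w") as out:
    with open("File_n.txt") as f_n:              # 1) read File_n.txt first
        for line in f_n:
            out.write(line.rstrip("\n") + "\n")  # normalise the line ending
    with open("File_m.csv") as f_m:              # 2) read File_m.csv
        for line in f_m.readlines()[1:]:         #    discard the header line
            out.write(line.rstrip("\n") + "\n")  # 3)-4) write every line
# Afterwards "File_x.txt" can replace "File_n.txt" (see the renaming below).
-----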
After running, rename "File_x.txt" to "File_n.txt". The result would be equivalent to appending "File_m.csv" (except the header) to "File_n.txt".
The renaming can be automated with a Shutdown Script. But considering the risk that the original data could be lost due to an unexpected error, it might be better to do that manually after confirming the result.
-----
# Shutdown Python Script Example for Renaming File
# 'SourceDataset_...' and 'DestDataset_...' have to match
# the actual parameter names in the workspace.
import os, shutil

if FME_Status == 1:  # only if the translation finished successfully
    file_n = FME_MacroValues['SourceDataset_TEXTLINE']  # original "File_n.txt"
    file_x = FME_MacroValues['DestDataset_TEXTLINE']    # merged "File_x.txt"
    os.remove(file_n)            # remove original "File_n.txt"
    shutil.move(file_x, file_n)  # rename "File_x.txt" to "File_n.txt"
-----
Hello,
That seems to do the work! :)
One doubt: since the appending is to be done to File_n.txt, by writing to a new file in the above way and renaming it, wouldn't it affect the file location, as the next process would need File_n.txt as its input?
Thank You
Renaming with the script above will not affect the file location, since the reader parameter 'SourceDataset_...' refers to the full path of "File_n.txt".
But I don't think that it's necessary to rename the file if the purpose is to use the merged file (File_n.txt + File_m.csv) as the input to the next process. The next process can simply use the new file (File_x.txt), can't it?
Furthermore, maybe the next process could also read "File_n.txt" and "File_m.csv" directly in a similar way to the one I mentioned in the previous post. Possibly there is no need to merge them at all.
Everything depends on the purpose of merging the files.
Okay great, that should do :)
Thank You so much.