Question

batch multiple txt files and append to postgis

  • February 18, 2014
  • 9 replies
  • 36 views

Dear community members,

I'm facing a problem with FME Desktop.

I have many XYZ files, and they must all be imported into a single PostGIS table. They all have the same structure: x, y, z columns.

I must query each file so that the first/last row/column of the point grid is discarded due to artifacts. The result for each file must then be appended to the table.

Right now I use a CSV reader, an InlineQuerier, and a 3DPointReplacer in sequence, and for a single file it works, with a query set in the InlineQuerier.

If I add more than a single XYZ file to the reader, only the first/last row/column of the whole set of points is removed.

So, how can I make FME process the query on each single file before it is added to the output?

Thanks

Pietro


gio
Contributor
  • February 18, 2014

Hi,

You can use a WorkspaceCaller to accomplish that.

This transformer runs the specified workspace for each feature that enters through the INPUT port. Any published parameters of the specified workspace will be given values as specified in the transformer, or taken from attributes of the feature that enters it.
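The per-feature behaviour described above can also be emulated outside FME by invoking the `fme` command line once per file. A minimal Python sketch, assuming the child workspace exposes published parameters named `SourceDataset` and `DestTable` (both names are illustrative, not from this thread):

```python
import subprocess
from pathlib import Path

def build_fme_commands(workspace, source_dir, table="points"):
    """Build one 'fme' command line per input file, mirroring what the
    transformer does: run the child workspace once per dataset, with the
    file path passed in as a published parameter.

    SourceDataset and DestTable are assumed parameter names; substitute
    whatever published parameters your workspace actually exposes.
    """
    cmds = []
    for txt in sorted(Path(source_dir).glob("*.xyz")):
        cmds.append([
            "fme", workspace,
            "--SourceDataset", str(txt),  # one file per run
            "--DestTable", table,
        ])
    return cmds

# To actually run them:
# for cmd in build_fme_commands("child.fmw", "data"):
#     subprocess.run(cmd, check=True)
```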

 


  • Author
  • February 18, 2014

Thanks Gio,

Do you mean the WorkspaceRunner?

Where do I insert this transformer? Between the CSV reader and the InlineQuerier?

p

gio
Contributor
  • February 18, 2014

Alternatively, if the CSVs have unique names which you can expose, you could use a VariableSetter and VariableRetriever structure.

When the variable changes, you can identify the "next" first and last row. Then you would not need a WorkspaceRunner.

That's a structure I often use when I get CSV or TXT files as input, to identify headers etc.
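As a rough illustration of the idea (plain Python, not FME transformers): the "variable" is just the last file name seen, and a change in it marks a file boundary. This sketch assumes features arrive grouped by file and that each file has at least two rows:

```python
def flag_file_boundaries(features):
    """Emulate the VariableSetter/VariableRetriever idea: remember the
    last seen file name; when it changes, the previous feature was the
    last row of its file and the current one is the first row of the
    next file.

    features: list of (basename, row) tuples in reading order.
    Returns a parallel list of flags: 'first', 'last', or ''.
    """
    flags = [""] * len(features)
    current = None  # the stored "variable"
    for i, (name, _row) in enumerate(features):
        if name != current:          # variable changed: file boundary
            flags[i] = "first"
            if i > 0:
                flags[i - 1] = "last"
            current = name
    if features:
        flags[-1] = "last"           # very last feature closes a file
    return flags
```

Features flagged 'first' or 'last' would then be routed to a Tester (or similar) and discarded.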

  • Author
  • February 18, 2014

So, with the WorkspaceRunner I must create two workspaces: one with the reader, transformer(s), and writer, and a second with a path, the WorkspaceRunner that calls the first one, and some loggers to see if something went wrong, according to this guide (http://fmepedia.safe.com/articles/Samples_and_Demos/WorkspaceRunner).

Right?

For the VariableSetter method, do you have any guide around?

Thanks

Pietro

  • Author
  • February 18, 2014

The WorkspaceRunner method works!

For the other one I must study a little more.

Thanks!

p

takashi
Celebrity
  • February 18, 2014

Hi Pietro,

If you just need to discard the first and last row of each CSV file, I think specifying the number of lines to skip is a quick way.

Have a look at the reader parameters "Number of Lines to Skip" and "Number of Footer Lines to Skip" in the Navigator window.

Takashi

  • Author
  • February 18, 2014

Thank you Takashi,

It's not a matter of rows in the TXT file, but of rows/columns of the point cloud generated by the x, y, z values from each TXT file.

Each TXT file is an ASCII XYZ file, and it's a square point cloud. I must eliminate the series of values with max/min x, y.

How can I use the VariableSetter/VariableRetriever as indicated by Gio?

Thanks again

Pietro

takashi
Celebrity
  • February 18, 2014

I see.

Since you already have a workspace which can process one file, I think using the WorkspaceRunner is a relatively easy solution, as Gio mentioned.

Another approach:

1) Expose fme_basename (file name without extension) or fme_dataset (full file path).

2) Branch the flow of features into two streams.

3) On one stream, calculate min/max x/y for each file with a StatisticsCalculator, grouping by fme_basename (or fme_dataset).

4) Merge them into the other stream with a FeatureMerger, joining on fme_basename (or fme_dataset).

5) Filter out features where x = x._min or x = x._max or y = y._min or y = y._max with a Tester.
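The steps above can be sketched in plain Python (illustrative only; this is not FME code, and `basename` stands in for the exposed fme_basename attribute):

```python
from collections import defaultdict

def drop_border_points(points):
    """Emulate steps 3-5 outside FME: per file, compute min/max x and y
    (the StatisticsCalculator step), then discard every point whose x
    or y equals one of those extremes (the Tester step).

    points: list of (basename, x, y, z) tuples.
    """
    groups = defaultdict(list)
    for name, x, y, z in points:
        groups[name].append((x, y))
    extremes = {}
    for name, xy in groups.items():        # step 3: stats per file
        xs = [p[0] for p in xy]
        ys = [p[1] for p in xy]
        extremes[name] = (min(xs), max(xs), min(ys), max(ys))
    kept = []
    for name, x, y, z in points:           # steps 4-5: merge + test
        xmin, xmax, ymin, ymax = extremes[name]
        if x in (xmin, xmax) or y in (ymin, ymax):
            continue                       # border row/column: discard
        kept.append((name, x, y, z))
    return kept
```

For a 3x3 grid per file, only the centre point of each file survives.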

takashi
Celebrity
  • February 18, 2014

One more:

1) Create a bounding box for each file with a BoundingBoxAccumulator, grouping by fme_basename (or fme_dataset).

2) Select the points within the box using a SpatialFilter, grouping by fme_basename (or fme_dataset).
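In plain Python terms (a sketch, not FME code), this bounding-box variant looks like the following; it assumes the SpatialFilter is configured so that points lying exactly on the box boundary fail the test:

```python
from collections import defaultdict

def keep_points_strictly_inside(points):
    """Per file, build the bounding box of all points (the
    BoundingBoxAccumulator step), then keep only points strictly inside
    it (the SpatialFilter step). The outermost row/column defines the
    box edges and is therefore dropped.

    points: list of (basename, x, y, z) tuples.
    """
    groups = defaultdict(list)
    for name, x, y, z in points:
        groups[name].append((x, y))
    boxes = {}
    for name, xy in groups.items():
        xs = [p[0] for p in xy]
        ys = [p[1] for p in xy]
        boxes[name] = (min(xs), min(ys), max(xs), max(ys))
    kept = []
    for name, x, y, z in points:
        xmin, ymin, xmax, ymax = boxes[name]
        if xmin < x < xmax and ymin < y < ymax:  # strictly inside
            kept.append((name, x, y, z))
    return kept
```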