Use a search envelope in the reader parameters:
You can also choose to use a FeatureReader instead of a "classic" reader, so that the extents of the envelope can be set from attributes created beforehand.
The search envelope in a classic reader is the way to go indeed.
This functionality (the search envelope) is not available in the FeatureReader as far as I know.
The Spatial Filter in the FeatureReader also does not work for LAS.
This is one of the very few differences between classic readers and FeatureReaders, so I submitted a case for this.
@james_c_452 You should be able to use the FeatureReader if the Initiator feature is a bounding box. You can then set the Spatial Filter parameter.
Sorry, I should have been more specific. I am aware of how to globally clip, as everyone suggested (thanks); what I'd like is a more piecemeal approach. Say I have 10 tiles in a line and, for a particular task, I only need data for 3 randomly distributed spots. I haven't been able to work out how to do this in a way that only loads the relevant parts of the group of tiles. I haven't yet tried building a custom transformer around the FeatureReader, running it in parallel by area of interest and using the attribute values of each area's bounding box.
Also, has anyone actually tried clipping a point cloud using the FeatureReader? If so, does it first read the whole point cloud and then clip it, or does it only read what's in the bounding box (which is what I'm after, to avoid reading a lot of unnecessary data)?
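The piecemeal approach described above can be sketched outside FME as a simple tile pre-filter: test each tile's extent against the areas of interest and only read the tiles that overlap. This is a minimal illustration, not FME functionality; the tile names and extents below are made up.

```python
# Hypothetical sketch: select only the tiles whose extents overlap an
# area of interest, so the other tiles are never read at all.
# Tile extents would come from each file's header (or a tile index).

def overlaps(a, b):
    """True if two (xmin, ymin, xmax, ymax) envelopes intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

# Made-up tiles in a line, as in the question's example.
tiles = {
    "tile_01.las": (0, 0, 100, 100),
    "tile_02.las": (100, 0, 200, 100),
    "tile_03.las": (200, 0, 300, 100),
}

# Two small, randomly placed areas of interest.
areas_of_interest = [(150, 40, 160, 50), (250, 10, 260, 20)]

to_read = [name for name, extent in tiles.items()
           if any(overlaps(extent, aoi) for aoi in areas_of_interest)]
print(to_read)  # tile_02 and tile_03 overlap; tile_01 is skipped
```

The same envelope test is essentially what a Search Envelope (or a spatial index) does for you: tiles that cannot intersect are skipped before any points are read.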
From what I can tell, the FeatureReader first reads the point cloud and then applies the envelope (more precisely, it filters WHILE reading, but it still has to read the whole file).
If you need filtering before reading, I recommend storing your data (any type of data) in database formats. These are the only formats that can filter before reading/loading. I recommend this article as a starting point: Tutorial: Let the Database Do the Work (safe.com)
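The "let the database do the work" idea is that the filter is pushed into the query, so only matching rows ever leave storage. A minimal sketch using Python's built-in sqlite3, with plain x/y columns standing in for a real spatial database (e.g. PostGIS with a spatial index); the table and values are made up:

```python
import sqlite3

# In-memory database with made-up point data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE points (x REAL, y REAL, z REAL)")
con.executemany("INSERT INTO points VALUES (?, ?, ?)",
                [(1, 1, 5), (50, 50, 6), (99, 99, 7)])

# The bounding-box filter runs inside the database, before loading.
xmin, ymin, xmax, ymax = 40, 40, 60, 60
rows = con.execute(
    "SELECT x, y, z FROM points "
    "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
    (xmin, xmax, ymin, ymax),
).fetchall()
print(rows)  # only the point at (50, 50) falls inside the envelope
```

A proper spatial database would use a spatial index and a predicate such as ST_Intersects instead of BETWEEN, but the principle is the same: the reader only ever receives the filtered result.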
From what I can see, the LAS file is read completely when using a FeatureReader with a Spatial Filter; it is not clipped at all (using FME 2021.2).
Unfortunately this seems not to work, checked in 2021.2.
In build 21784, setting the Search Envelope via the _xmin, _xmax, _ymin and _ymax attributes from a BoundsExtractor clips the .las file in the FeatureReader.
Can you please show me how, with a screenshot?
I can't find a Search Envelope option in the FeatureReader parameters when using LAS, and I would like to learn how to reproduce this.
Knowing what I do about FME, I wonder if the data is even read until some sort of action occurs. We'd probably just read the header, which includes the min and max x/y.
For example, if you read one of your 10 GB point clouds into FME - say just a reader connected to a Junction - with caching turned off, how long does it take? Admittedly, my point clouds are tiny compared to yours (only 8 million points), but for me it takes only 0.3 seconds.
If I then clip that with an entirely unrelated Clipper, it still takes 0.3 seconds. Only when the Clipper actually overlaps does FME have to "read" the data to clip it (2.9 seconds).
What I'm saying is, if you can turn off caching, it may be quicker to just read all of the point clouds and cut them with a Clipper. Or perhaps the Search Envelope tool uses the same trick, so that it too performs quickly and doesn't actually read data it knows won't overlap.
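The "just read the header" point above can be made concrete: the ASPRS LAS specification stores the Max/Min X/Y/Z of the whole file as little-endian doubles at a fixed offset (byte 179) in the public header block, so the extents are available without touching a single point record. A hedged sketch in plain Python (no FME, no laspy), based on the LAS 1.2 header layout:

```python
import struct

def las_header_bounds(path):
    """Read only the LAS public header and return (xmin, xmax, ymin, ymax).

    Per the ASPRS LAS 1.2 spec, Max/Min X/Y/Z are little-endian doubles
    starting at byte offset 179 of the public header block, in the order
    max_x, min_x, max_y, min_y, max_z, min_z.
    """
    with open(path, "rb") as f:
        header = f.read(227)  # LAS 1.2 public header block is 227 bytes
    if header[:4] != b"LASF":
        raise ValueError("not a LAS file")
    max_x, min_x, max_y, min_y = struct.unpack_from("<4d", header, 179)
    return min_x, max_x, min_y, max_y
```

With those bounds in hand, any tile whose envelope cannot overlap the search area can be skipped without reading its point data, which is presumably the trick a fast Search Envelope exploits.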
@nielsgerrits
The real reason the result from the Spatial Filter looks wrong is that the spatial filter falls inside what is a single, entire point cloud feature. The default spatial intersection therefore tests the whole feature (the entire point cloud), and the whole feature is returned.
The difference between this FeatureReader case and the regular reader is the Clip to Envelope option, which does not exist on the FeatureReader. At the moment we are looking to implement this functionality for the FeatureReader.
In case it helps, I picked this as one of our questions of the week and covered it here: https://www.youtube.com/watch?v=ZvSaHQ7gouI&t=750s
Ah, thank you; I did indeed expect it to clip as well. I work a lot with vector data and expected it to filter a point cloud the way it does points / lines / polygons. I just wanted to make sure I hadn't missed something.
@nielsgerrits I mistook the FeatureReader for the LAS reader. The normal reader has a Search Envelope and an option to clip, while the FeatureReader has a Spatial Filter; but the point cloud is a single feature, so it is not clipped.
Sorry about the misunderstanding.
I believe Safe should fix the inconsistencies between the different readers.
No problem, I just wanted to make sure I understood correctly and wasn't missing something.
Thanks for the reply, Mark; sorry I took so long to get back to you. It seems that, as in your video, the large clouds I use load reasonably quickly. I think the problem might be that I mostly do operations straight away in my workspaces, so the entire cloud needs to be loaded. In future I'll try using a Clipper straight away to make things more efficient, at least when I can. I guess the problem is that a lot of the time I need to do something on most of my data and preserve it as a point cloud!