If the attribute schemas match then you can simply connect several incoming feature classes (feature types in FME terms) to one output feature type.
One concept to get used to in FME is that, within a Workspace, features are treated (mostly) on an individual, feature-by-feature basis. Once they have been read into the Workspace, they no longer belong to any "Table" or "Feature Class"; they exist as single features that can be manipulated independently of each other.
So, if you are reading multiple feature classes, you can simply pipe them all to one downstream Junction, Writer or Transformer. Each individual feature is passed through regardless of its original schema in the data source, and exits the Output Port(s) (or is written, in the case of a Writer) as part of a single stream of "merged" features. This is roughly equivalent to an "Append" operation in ArcGIS with the "No Schema Test" option. Strictly speaking, though, because features are handled one by one, the incoming schemas don't have to match at all: within the Workspace each feature is held individually and carries its own schema.
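Outside FME, the same idea can be sketched in plain Python (a conceptual illustration only, not actual fmeobjects code): each feature is an independent record carrying its own attributes, and merging streams is just concatenation, with no schema test at this stage.

```python
# Conceptual sketch: FME-style feature streams as plain Python dicts.
# Each "feature" carries its own attributes; merging two streams is just
# concatenation -- no schema check happens here.
roads = [
    {"name": "Main St", "lanes": 2},
    {"name": "High St", "lanes": 4},
]
rivers = [
    {"name": "Avon", "flow_m3s": 12.5},  # different attributes -- still fine
]

# Equivalent of piping both feature types into one Junction/Transformer:
merged = roads + rivers

for feature in merged:
    print(feature)  # each feature keeps its own individual "schema"
```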
The only caveat is at the Writer, which has to decide what schema definition to create in the target dataset. Here you can either define the schema manually within the Writer (e.g. define each field of the GDB feature class(es) to be written), or you can manipulate the final schema as a "Schema Feature" within the Workspace and use it with the Writer in Dynamic Schema mode. By default, most Readers also emit a single "special" Schema Feature; these can be merged together from multiple feature-class Readers and manipulated further to build the final combined, single-feature-class schema. (If you send a Reader's Schema Feature output port to the Data Inspector, it will make more sense once you see it!)
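The schema-resolution step the Writer performs can be illustrated in plain Python (again a hypothetical sketch, not FME's actual schema-feature mechanism) as taking the union of attribute names seen across all incoming features:

```python
# Hypothetical sketch of what a dynamic-schema writer must resolve:
# one combined schema covering every attribute seen on any feature.
features = [
    {"name": "Main St", "lanes": 2},
    {"name": "Avon", "flow_m3s": 12.5},
]

combined_schema = {}
for feature in features:
    for attr, value in feature.items():
        # Record each attribute with the type of its first occurrence.
        combined_schema.setdefault(attr, type(value).__name__)

print(combined_schema)
# {'name': 'str', 'lanes': 'int', 'flow_m3s': 'float'}
```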
Hello, yes the attribute schema is the same for the 13 feature classes, so I connected them to a single output feature type, but the process could not finish: there was not enough memory to write the data. So in this case, is it better to use a FeatureJoiner or a FeatureMerger? Thank you for your info.
@claudialarrota The other approach is to use Merge Feature Types on your reader. This will create a much cleaner workspace. Using this along with the Feature Types to Read parameter allows you to select which feature types to read at any one time.
However, in one of your comments you mention you're using FeatureJoiner or FeatureMerger. This is an entirely different process. Perhaps have a look at the article Merging or Joining Spreadsheet or Database Data.
What @redgeographics and @bwn have described is appending rows of data from different feature classes that share the same attributes/columns.
Merging or joining is combining together rows based on a common identifier.
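The distinction can be sketched in plain Python (illustrative only; in FME, FeatureMerger does the join based on its Join On attributes):

```python
# Appending: stacking rows from sources with the same columns.
parcels_a = [{"id": 1, "area": 100}]
parcels_b = [{"id": 2, "area": 250}]
appended = parcels_a + parcels_b          # 2 rows, same columns

# Joining: combining columns from two sources on a common identifier,
# analogous to FeatureMerger joining on "id".
owners = {1: "Alice", 2: "Bob"}           # keyed by the common identifier
joined = [
    {**parcel, "owner": owners[parcel["id"]]}
    for parcel in appended
]
print(joined)
# [{'id': 1, 'area': 100, 'owner': 'Alice'},
#  {'id': 2, 'area': 250, 'owner': 'Bob'}]
```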
Some possible causes of that:
- Feature Caching has been left on, and the feature data is so large that it uses all the disk space.
- FME is running in 32-bit mode instead of 64-bit mode, and the operating system can't assign any more RAM to a workspace trying to build a large data model.
I've run into similar RAM issues in 32-bit mode because certain Readers only run in 32-bit, e.g. GEODATABASE_SDE (on PCs that don't have ArcGIS 64-bit background geoprocessing installed). The solution that seems to work is to restrict such workspaces to piping the 32-bit Reader straight to a Writer that writes to an intermediate format, then process the data properly in a separate 64-bit workspace with a 64-bit Reader. Reasonable 64-bit-capable intermediate formats are FFS, FGDB, SpatiaLite, etc. FGDB is the slowest to write and re-read, but has the advantage that it can be opened natively in ArcGIS Desktop.