Shape the future of FME with your ideas
Open ideas have been reviewed by our Product Management and are open for commenting and voting.
Hi,

It would be a nice feature to add support for SLD files. They're XML files, so it ought to be fairly straightforward.

Some sort of cross-format support for inputting/outputting styles handled by the various style transformers would also be very neat, so I could e.g. read an SLD and designate it for output as MapInfo styles directly, or output an SLD file based on MapInfo styles. This is nice-to-have though, and may be a little more complicated.

Note that there may be version issues to consider. If this can be achieved by packaging some templaters etc., that's great too.

Cheers.
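To illustrate how approachable the format is, here is a minimal sketch (plain Python, not FME code) that pulls a polygon fill colour out of a hand-written SLD 1.0 fragment using only the standard library; the layer and colour values are made up for the example:

```python
import xml.etree.ElementTree as ET

# Hand-written minimal SLD 1.0 fragment, for illustration only.
SLD = """<StyledLayerDescriptor xmlns="http://www.opengis.net/sld" version="1.0.0">
  <NamedLayer>
    <Name>parcels</Name>
    <UserStyle>
      <FeatureTypeStyle>
        <Rule>
          <PolygonSymbolizer>
            <Fill>
              <CssParameter name="fill">#ff0000</CssParameter>
            </Fill>
          </PolygonSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>"""

NS = {"sld": "http://www.opengis.net/sld"}
root = ET.fromstring(SLD)

# Collect every fill colour defined by a PolygonSymbolizer rule.
fills = [
    p.text
    for p in root.iterfind(
        ".//sld:PolygonSymbolizer/sld:Fill/sld:CssParameter[@name='fill']", NS
    )
]
print(fills)  # ['#ff0000']
```

A real implementation would of course need to cover the full symbolizer vocabulary and SLD/SE version differences, which is where the version issues mentioned above come in.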
Currently FME Server routes jobs to available engines using round-robin circulation. One of the main benefits of dynamic engines is that jobs can use them only when there is peak demand; at present, however, jobs can also be assigned to dynamic engines while a standard engine is available, if the circulation system decides to use the dynamic engine.

My suggestion is to have FME Server 'see' dynamic engines differently from standard engines, so that jobs are assigned to dynamic engines only if no standard engine is available.

With this we wouldn't have to pay for standard engines that are idle most of the time, could use dynamic engines only when needed, and wouldn't lose credits on dynamic engines while a standard engine is available. See also the discussion here.
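The proposed routing rule is simple to express. A sketch of the preference logic follows; the engine/queue model and names here are invented for illustration and are not FME Flow internals:

```python
def pick_engine(standard, dynamic):
    """Prefer an idle standard engine; fall back to a dynamic engine only
    when every standard engine is busy (the behaviour proposed above)."""
    for engine in standard:
        if engine["idle"]:
            return engine["name"]
    for engine in dynamic:
        if engine["idle"]:
            return engine["name"]
    return None  # nothing free: the job stays queued

# Hypothetical engine pools.
engines_std = [{"name": "std1", "idle": False}, {"name": "std2", "idle": True}]
engines_dyn = [{"name": "dyn1", "idle": True}]

print(pick_engine(engines_std, engines_dyn))  # std2 — dynamic stays untouched
```

The key difference from round-robin is that the dynamic pool is only consulted after the standard pool is exhausted, so credits are spent only at peak demand.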
Enable a feature that allows users to view a workspace, either as an image preview before opening it (like Windows) or by opening the workspace in a view-only mode. When editing is required, a license is checked out. Not a money spinner for Safe (!), but it would allow users to check logic and undertake basic QA without checking out a production license.
Hi there,

Sometimes in a large workspace I need to zoom out and select a large area of transformers (to create a bookmark, move or delete them, etc.). At a certain zoom level, however, I don't have the granularity to control exactly what I'm selecting: I might select things I don't want, and then I need to zoom in and deselect them. It would be helpful if, beyond a certain zoom-out level, a little pop-up window appeared when you start clicking and dragging the cursor to select things, so that you can see where in the workspace the cursor is and control what is being selected.

Cheers
Our customers are asking for this when creating APIs with FME Server's REST API. At the moment, the REST API Service only uses tokens for authentication, not OAuth.
FME currently does not provide a native way to control the compression of raster overviews (pyramids) when generating outputs such as GeoTIFF or COG, limiting the ability to optimize file size and performance.

In real-world workflows, especially when handling large raster datasets, overview compression is essential to ensure efficient data access and reduced storage requirements. At present, achieving this requires the use of external tools, which breaks the native FME workflow and adds unnecessary complexity.

Request: add native support for overview compression (method, quality, predictor, etc.) in raster writers or transformers. This would enable fully native workflows within FME and significantly improve efficiency in production environments.
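For reference, the external-tool workaround today typically means shelling out to GDAL's gdaladdo, which exposes exactly the overview compression knobs requested above. The sketch below only builds the command line (the input file name is hypothetical, and actually running it requires GDAL on the PATH):

```python
# Build a gdaladdo invocation that compresses overviews as they are built.
src = "ortho.tif"  # hypothetical input raster

cmd = [
    "gdaladdo",
    "--config", "COMPRESS_OVERVIEW", "JPEG",    # overview compression method
    "--config", "JPEG_QUALITY_OVERVIEW", "85",  # compression quality
    "-r", "average",                            # resampling for the pyramid
    src,
    "2", "4", "8", "16",                        # overview levels
]

# subprocess.run(cmd, check=True)  # uncomment when GDAL is installed
print(" ".join(cmd))
```

Native support in FME would replace this kind of out-of-process step with writer parameters.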
Currently you have to go directly to a remote engine's job logs and log in to see them. It would be great to be able to see all logs from one place, i.e. the parent flow instance: if a job runs on a remote engine, I would still like to see its logs in the parent flow instance.
When inspecting features, particularly when identifying point geometries, I find myself wanting to visualise an identifier attribute for a lot of points, and sometimes along lines and at the centre of polygons. Obviously desktop GIS packages have this functionality, but I'm asking to build on the Mark Location option, which already has a Label option. Could this same code be added under the Display Control, where users edit the styling of each dataset/layer?

Data Inspection currently lacks the functionality to add simple labels to a point dataset/layer. A workaround in Workbench is to use the LabelPointReplacer transformer to create text geometry.

The updated UI would need a Label section in the drawing style, as pictured below:
- the ability to select the Attribute Value column and make this the label text
- the font and colour would be nice
- label size (replacing font size?) may also be of assistance, though projected coordinates are likely needed to display in metres

In short: Display Control drawing styles should include a Label option.
Can FME add support for Vulcan files (by Maptek): .00t, .isis, .isix?

If anyone else is interested, vote for this idea and add a comment about what transformations you envision performing in FME with the Vulcan files.
I'm curious if there's a straightforward way to make FME lineage information accessible for data catalog systems. Have you all thought about enabling FME with OpenLineage? I’m not sure how that could work, but I’ve noticed that data catalog products are beginning to adopt this standard. Imagine how great it would be if FME could effortlessly share lineage info with popular data catalog tools like Microsoft Purview, Collibra, and Alation! This could be a game-changer for organizations that truly value solid data governance and tracking. What do you all think? Is this feasible, or are there other existing alternatives?
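For anyone unfamiliar with the standard: an OpenLineage run event is just a JSON document POSTed to a lineage backend. Below is a hand-written sketch of what an emitted event for a workspace run might look like; the job and dataset names are made up, and this is not an existing FME capability:

```python
import json
import uuid
from datetime import datetime, timezone

# Sketch of an OpenLineage "run event"; all names here are hypothetical.
event = {
    "eventType": "COMPLETE",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "producer": "https://example.com/fme-openlineage-sketch",  # made up
    "run": {"runId": str(uuid.uuid4())},
    "job": {"namespace": "fme-flow", "name": "load_parcels.fmw"},
    "inputs": [{"namespace": "postgres://gisdb", "name": "public.parcels_raw"}],
    "outputs": [{"namespace": "s3://lake", "name": "parcels/clean"}],
}

payload = json.dumps(event)  # this is what a catalog backend would receive
print(payload[:60])
```

Since FME already knows every reader and writer in a workspace, the inputs/outputs arrays could in principle be populated automatically, which is what would make this valuable to catalogs like Purview, Collibra, or Alation.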
The following improvements to Role-Based Security would be useful:

1. Increased granularity for job viewing and job management permissions

I need the ability to allow users to see specific other users' jobs and logs rather than all jobs and logs. We use different service accounts for different enterprise projects, and currently I am having to share all jobs/logs with users so they can monitor their processes. Since not all groups need to see each other's jobs, this is not the best solution. (Thank you, Safe, for the new search options in 2019; they slightly help with this issue.)

Likewise, in the current Role system, being able to view all jobs requires job management permissions, which let users cancel jobs in the queue or terminate running jobs. This puts me in a difficult position regarding how we implement SOX compliance: it is preferred that users be locked out from managing running production processes. If the ability to view all logs but not manage jobs were made available, that would be a move in a positive direction. The option to, say, kill queued jobs but not running jobs might also be useful.

2. Additional options on database connections

Having a Read option alongside the Full Control option for database connections would be excellent. I am seeing issues when I grant users access to connections without giving them full control over a connection. Since we have many processes using the same named database connection, it is not ideal to grant users management access, because one incorrect change by any user with access could disrupt jobs for everyone using the connection. Something similar to how the Notification items are broken out would be amazing.

3. Automatic content sharing with administrator accounts

It would be helpful if FME Server automatically shared user-created content with members of the Super User role. If this could be an option for the FME Admin role, that would also be appreciated. I have a plethora of users creating content, and when they ask for help it is cumbersome to have to share their content with myself and my fellow Admins before I can help troubleshoot.
I'm downloading bulk CSV data from an open data site, and each download is gigabytes. I would like to be able to set the output filename to a .zip file, to save space.
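A minimal sketch of the post-processing step this idea would make unnecessary: writing CSV content straight into a compressed zip with Python's standard library (the data and file names here are made up):

```python
import io
import zipfile

# Fake bulk CSV content standing in for a multi-gigabyte download.
csv_text = "id,name\n" + "\n".join(f"{i},row{i}" for i in range(1000))

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("download.csv", csv_text)  # hypothetical output name

zipped = buf.getvalue()
print(len(csv_text.encode()), "->", len(zipped), "bytes")
```

Repetitive tabular text compresses well with DEFLATE, which is why zipping the writer output directly would be such a space win.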
Hi,

Some formats (such as the Text File reader) have fixed schemas that will never change. In the case of the Text File reader, the only attribute in the schema is the text_line_data attribute.

For these fixed-schema formats, I would like to request an enhancement to the FeatureReader to have the schema automatically exposed, even when the Generic output port is configured. Currently, the schema attributes are only exposed for named output ports.

For context, I refer to the following Community question: And to my own support question (to which James Cheng provided a very comprehensive reply): https://support.safe.com/hc/en-us/requests/64419

I have reproduced James' reply below, as it so clearly explains the situation:

Thank you for raising this. I understand the frustration, as I share the same thoughts and can see why you'd expect the text_line_data attribute to just be available when reading a Text File through the FeatureReader.

The behavior you're seeing comes down to how the FeatureReader's Generic output port works. This port is a catch-all that merges everything into a single stream, designed to handle any format and feature types with varying schemas. Because of this, it doesn't commit to any schema at authoring time and doesn't make assumptions about which feature types or attributes will be present, so all attributes come through unexposed by default, even for predictable formats. By contrast, when you use a named output port (or a standalone Text File reader), the schema is known at authoring time, so attributes like text_line_data are automatically exposed. In essence, by using the Generic port, the trade-off is flexibility over schema awareness: it can handle anything, but at the cost of not pre-committing to any specific attributes.

However, as you've rightly pointed out, the Text File reader always produces a predictable fixed schema with a single attribute, text_line_data, no matter which files/feature types are being read. So there's a reasonable argument that the FeatureReader could recognize this and auto-expose it on the Generic port, since there's no ambiguity about what the output will be. Currently, the Generic port doesn't differentiate between fixed-schema and variable-schema formats; it applies the same "unexposed by default" behavior uniformly across the board.

That said, I think this is a genuinely good candidate for a product enhancement for the Generic port. I'd encourage you to post this as an Idea on the FME Community, specifically requesting that the FeatureReader auto-expose known attributes for fixed-schema formats like Text Files on the Generic port, and describing your use case. Community support helps the development team gauge interest and prioritize ideas for future releases, and we can link this ticket to the Idea to keep it on their radar.

In the meantime, it's worth noting that for this specific format the feature types are all fixed to text_line, so using the default One per Feature Type output port setting will also route all text file features through a single text_line named port, similarly to the Generic port, and this will expose the text_line_data attribute automatically. If your workflow requires the Generic port, then the only available options are manually setting the Attributes to Expose parameter on the FeatureReader or placing an AttributeExposer downstream.

Regards, Nic
The current customization options for FME Flow Apps are quite limited. FME Flow Apps have the potential to be a centralised focal point/application for organisational/enterprise-level ETL; however, they currently allow minimal flexibility and design when it comes to styling.

The lack of flexibility and options for design improvement means that even with the greatest graphics/design eye in the world, FME Flow Apps are often left looking like nothing more than a data dropzone, workspace runner or 'Click to Download' link.

Expanding the options for image customization and editing, offering greater flexibility in text design, or allowing HTML to be incorporated would greatly enhance the visual appeal of FME Flow Apps. This would increase visibility and improve perceptions among non-FME users who are given access to a Flow App.

Documentation could also be created, in the form of a technical article on 'Designing and Styling an FME Flow App for your Organization', to support this process.
Is there a way to have the FeatureWriter (PNG format) generate a .pgw instead of a .wld file? The FeatureWriter (JPEG format) has an option to change .wld to .jgw; just wondering if it would be possible to do the same with PNG. Currently I change the .wld to .pgw manually. I know there's a way to use a SystemCaller to change the file extension, but it would be helpful if we could skip that entirely, because it may not work as expected when we publish the workspace to FME Flow. (Screenshots attached: JPEG format and PNG format writer parameters.)
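Until the option exists, the rename can be scripted with the standard library instead of a SystemCaller, e.g. in a shutdown script. A sketch with hypothetical file names (the world-file contents below are fabricated placeholder values):

```python
import tempfile
from pathlib import Path

# Simulate a PNG writer output folder containing a generic world file.
out_dir = Path(tempfile.mkdtemp())
(out_dir / "tile.wld").write_text("1.0\n0.0\n0.0\n-1.0\n1000.0\n2000.0\n")

# Rename every .wld beside the PNG output to the PNG-specific .pgw extension.
for wld in out_dir.glob("*.wld"):
    wld.rename(wld.with_suffix(".pgw"))

print(sorted(p.name for p in out_dir.iterdir()))  # ['tile.pgw']
```

Being pure Python, this also behaves the same on FME Flow, unlike an OS-dependent SystemCaller command.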
When you add a "file" manual key to an automation which gets run by an automation app, the only option is to choose a file already in the FME Flow resources. It would be nice to have an option for the user to upload a file, like what's already available in workspace apps: for a "File" user parameter, the user can upload a file which then gets used by the workspace that the workspace app runs. The same would be ideal for automation apps.
Idea: cancelling a job should roll back the whole transaction of all features for the Snowflake writer.

Observed behavior: cancelling a job in FME Flow will stop additional features from being written, but features that have already been written to Snowflake are committed without any rollback of the transaction. There is an inconsistency between "Features Written" as recorded in the job log and the actual features written according to Snowflake's logs: the job log records zero features written, while many features have been written to Snowflake despite the job being cancelled.

Desired behavior: cancelling a job in FME Flow will stop additional features from being written and roll back any partial changes the job has made.

Outcome: improved management of jobs.
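The desired semantics correspond to wrapping the whole job's writes in a single transaction and rolling back on cancel. A toy illustration with SQLite (standing in for the Snowflake writer; the table and cancel point are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (id INTEGER)")

try:
    with conn:  # one transaction for the whole "job"
        for i in range(100):
            conn.execute("INSERT INTO features VALUES (?)", (i,))
            if i == 57:
                raise KeyboardInterrupt("job cancelled mid-write")
except KeyboardInterrupt:
    pass  # the `with conn:` block rolled the partial inserts back

count = conn.execute("SELECT COUNT(*) FROM features").fetchone()[0]
print(count)  # 0 — no partial features remain, matching the job log
```

With this pattern, "Features Written: 0" in the log and the actual table contents agree, which is exactly the consistency the idea asks for.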
When downloading data using the Data Download service (e.g. from an FME Workspace App), a zip file is created containing the output files. The zip file is automatically given a name that looks like "FME_4B425D3A_1772473610895_13172.zip". It would be nice if an option were implemented to allow customizing that name to something more user-friendly: for example, allow authors to specify the exact name the zip file should have, or allow the use of parameters to build a string to use as the file name. Even if the final filename had to keep some extra characters at the end (like 4B425D3A_1772473610895_13172), giving authors some customization capability would be beneficial.
Hello,

To continue with data platform integration: is Safe in discussions with Databricks about being added to its Marketplace, as with Snowflake? @irenemunoz maybe.
It should be possible to define default pagination parameters at the Data Virtualization API level, while still allowing bespoke pagination configurations for individual endpoints.

Currently, pagination must be configured within each workspace, which can complicate workflows and, in some cases, negatively impact processing performance. This configuration also needs to be repeated for every workspace endpoint, leading to unnecessary duplication.

A more efficient approach would be to manage pagination at the parent API level, with the following capabilities:
- A configuration option at the API level to enable or disable pagination globally
- A configuration option at the endpoint level to determine whether the endpoint inherits the global API pagination settings
- A parameter to define the default pagination size (i.e., the number of results returned per page)
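The inheritance rule described above is simple to express. A sketch with made-up setting names (these are not existing FME Flow parameters):

```python
# Hypothetical API-level defaults.
API_DEFAULTS = {"pagination_enabled": True, "page_size": 100}

def effective_pagination(endpoint_cfg, api_defaults=API_DEFAULTS):
    """Resolve pagination settings for one endpoint: inherit the API-level
    defaults unless the endpoint explicitly opts out and supplies its own."""
    if endpoint_cfg.get("inherit", True):
        return dict(api_defaults)  # the global API settings apply
    return {
        "pagination_enabled": endpoint_cfg.get("pagination_enabled", False),
        "page_size": endpoint_cfg.get("page_size", 0),
    }

print(effective_pagination({}))  # endpoint with no config inherits the defaults
print(effective_pagination({"inherit": False, "pagination_enabled": True, "page_size": 25}))
```

Configured once at the API level, every endpoint then gets sensible pagination for free, while bespoke endpoints can still opt out.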
***Note from Migration:*** Original title was: Add support for Integrated Windows Authentication (IWA) / Single Sign-On (SSO) for REST services (fmedatastreaming)

If you configure FME Server for Integrated Windows Authentication (IWA) / Single Sign-On (SSO), it affects the web UI only, not the REST services (as far as I know). This enhancement would make it easier to access fmedatastreaming with published parameters etc. in a secure environment.
The problem

The documentation for FeatureMerger mentions a "Suppliers First" mode that can reportedly be very beneficial to performance (and, I would imagine, crucial for Streams), but comes with the constraint that all suppliers must have arrived before the first requestor comes in. To my knowledge, there is currently no way in FME of upholding that guarantee reliably. The timing between Readers, Creators, FeatureReaders and SQLExecutors is not something I can claim to understand, and the completion order can change based on whether caching is turned on or not. This is already troublesome when editing workbenches, but it can be especially problematic inside custom transformers, where you don't have control over the delivery order of features on your input ports.

This isn't merely an issue with FeatureMerger; it's going to be a problem any time you rely on input ordering for a transformer to work properly, with specific attention given to Streams and feature tables. If I have a PythonCaller that configures itself based on data coming from TransformerA before it's ready to accept data from TransformerB, there aren't a lot of options for reliably dealing with out-of-order input. Holding onto features is illegal when bulk mode support is advertised, so the PythonCaller must opt out of it (which hurts downstream performance) in order to accumulate any features it's not ready to process until the configuration features have come in. Even then, it can't know when TransformerA has closed, so unless it only expects one configuration feature, it's dangerous to start processing before close() has been called.

This might be clearer when considering the attached screenshot: Python_MapAttributes needs the output line from JsonTemplater in order to work with the data coming from MappedInputLines. If MappedInputLines starts sending features first, the PythonCaller can't do anything yet, and can only crash (undesirable) or start buffering features (illegal in bulk mode).

The solution

The idea would be to have some sort of Semaphore transformer, something like a FeatureHolder but with (at least?) two input ports: a "Priority" port, which lets features through normally, and a "Held" port, which buffers features until the other port has closed and no more features can go through it. This would ensure that no feature from the "Held" side can ever arrive before a "Priority" feature, thus allowing workflow and transformer designers to guarantee feature ordering downstream without breaking bulk mode.

Other relevant use cases

One might also consider a Terminator that should stop the translation when an assertion or a join fails in some unexpected way, but should wait until every faulty feature has arrived, to give proper context instead of immediately stopping at the first one. Bad features could be sent through the Priority port, and the Priority port routed to a Terminator, so that no feature is passed to the next step until it has been verified that none exist that would trip the Terminator. This would also allow the Terminator to be changed to wait for all features to have arrived before stopping the translation, instead of aborting at the first one (which currently makes sense: the longer you wait, the greater the risk that downstream writers will have already started writing incomplete data).
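The proposed gate behaviour can be modelled outside FME. A plain-Python sketch follows (the class, port and method names are invented; this is not fmeobjects code) of a gate that buffers "Held" features until the "Priority" port is known to be closed:

```python
class SemaphoreGate:
    """Emit Priority features immediately; buffer Held features and flush
    them only once the Priority port has closed, so no Held feature can
    ever overtake a Priority feature downstream."""

    def __init__(self):
        self.priority_closed = False
        self.held = []
        self.emitted = []  # stand-in for an output port

    def input_priority(self, feature):
        self.emitted.append(feature)  # passes straight through

    def input_held(self, feature):
        if self.priority_closed:
            self.emitted.append(feature)
        else:
            self.held.append(feature)  # buffer until Priority closes

    def close_priority(self):
        self.priority_closed = True
        self.emitted.extend(self.held)  # safe: nothing can overtake these now
        self.held.clear()

gate = SemaphoreGate()
gate.input_held("data-1")      # arrives too early, gets buffered
gate.input_priority("config")  # passes straight through
gate.close_priority()
gate.input_held("data-2")      # Priority already closed, passes through
print(gate.emitted)  # ['config', 'data-1', 'data-2']
```

Because the buffering lives in the gate rather than in downstream transformers, a PythonCaller behind it could keep bulk mode enabled and still trust that its configuration features arrive first.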