Shape the future of FME with your ideas
Open ideas have been reviewed by our Customer Success team and are open for commenting and voting.
Currently, FME Flow Apps and Gallery Apps are either always enabled or always disabled. There are times when it would be useful to manage this in the app settings, for example when access needs to be restricted to specific timeframes, such as business hours. Having to manually enable and disable apps is frustrating. For apps with a defined start and end date for their availability, it would be much easier if we could automate switching them on and off. It would be epic to have a scheduling feature for FME Flow Apps that lets us define periods of availability, similar to Schedules for Workspaces, where we can set when and how often an app is available. This could include:
- Configuring an app to be automatically enabled or disabled based on a recurring schedule (e.g. weekdays from 8am to 6pm).
- Setting a specific start/end date and time after which the app becomes available/unavailable.
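In the meantime, a cron-driven script could approximate this, assuming a token with app-management permissions; note that the endpoint path and payload below are assumptions for illustration, not a documented FME Flow API:

```python
# Hypothetical sketch: toggle a Flow App on a schedule via the REST API.
# The /fmerest/v3/apps/... path and the {"enabled": ...} payload are
# assumptions, not a confirmed endpoint.
import sys
import requests

FLOW_HOST = "https://fmeflow.example.com"  # assumption: your FME Flow URL
TOKEN = "my-fme-flow-token"                # assumption: token with app permissions
APP_NAME = "my-server-app"                 # assumption: the app to toggle

def set_app_enabled(enabled: bool) -> None:
    """Enable or disable a Flow App (hypothetical endpoint)."""
    resp = requests.put(
        f"{FLOW_HOST}/fmerest/v3/apps/{APP_NAME}",  # assumed endpoint
        json={"enabled": enabled},
        headers={"Authorization": f"fmetoken token={TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Run from cron, e.g. weekdays at 8am with "on" and 6pm with "off":
    #   0 8 * * 1-5  python toggle_app.py on
    #   0 18 * * 1-5 python toggle_app.py off
    set_app_enabled(sys.argv[1] == "on")
```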
It would be great to have an 'undo' function for automations that are deleted.
Could you please add an "Export to CSV" option (download button) on the Jobs page in FME Flow (Completed / Running / Queued tables)? Admins frequently need to analyze job metadata in dashboards. While the REST API supports retrieving job history, a one-click CSV export from the UI would make this far easier for day-to-day administration and reporting.

Why this matters:
1. Reduces time and friction compared to building API queries/workspaces for each export.
2. Aligns with existing Jobs table filters/column customizations (export what is visible).
3. Common admin use case: ad-hoc analysis, audits, troubleshooting, capacity/performance reporting.

Notes:
1. The current Jobs page allows filtering and viewing details/logs, but there's no built-in "export/download" for the visible rows.
2. Adding a native export would complement the REST API V4.
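For reference, a minimal sketch of the API route the idea mentions, pulling completed jobs and writing them to CSV; the host, token, and exact response shape are assumptions to verify against the REST API docs for your Flow version:

```python
# Sketch: export completed jobs to CSV via the v3 job-history endpoint.
import csv
import requests

FLOW_HOST = "https://fmeflow.example.com"  # assumption
TOKEN = "my-fme-flow-token"                # assumption

resp = requests.get(
    f"{FLOW_HOST}/fmerest/v3/transformations/jobs/completed",
    params={"limit": 1000, "offset": 0},
    headers={"Authorization": f"fmetoken token={TOKEN}", "Accept": "application/json"},
    timeout=60,
)
resp.raise_for_status()
jobs = resp.json().get("items", [])        # assumption: jobs listed under "items"

with open("completed_jobs.csv", "w", newline="") as f:
    # Use the union of all keys as columns; missing values are left blank.
    writer = csv.DictWriter(f, fieldnames=sorted({k for job in jobs for k in job}))
    writer.writeheader()
    writer.writerows(jobs)
```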
FME courses and examples for the new Data Virtualization feature include lessons on how to create GET and POST endpoints in FME Flow and Form, but there are no examples of how to create a PUT (update) request in Flow and build a workspace that updates a data source. I'm working through this myself, but I'm surprised that PUT and DELETE endpoints (especially PUT) are not covered in the Data Virtualization course, which I took a few weeks ago. The POST example in the course has been useful but is not enough in my opinion.
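Lacking a course example, here is roughly what the client side of such an endpoint would look like; the URL and payload are placeholders for whatever update workspace gets built behind the endpoint:

```python
# Hypothetical client call to a Data Virtualization PUT endpoint that a
# Flow workspace would handle by updating the matching record in the source.
import requests

resp = requests.put(
    "https://fmeflow.example.com/data/v1/parcels/12345",  # assumed endpoint URL
    json={"owner": "Jane Doe", "status": "active"},       # fields to update
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code)  # 200/204 would indicate the update succeeded
```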
Hi, please add automatic pre-loading of parameters before launching an automation app, similar to workspace apps. Even before launching an automation app, it is necessary to know the current values of dynamically changing domains in the background, for example from company databases. In any case, the solution for workspace apps is brilliant! https://support.safe.com/hc/en-us/articles/40509829569805-Dynamic-Parameter-Configurations-in-FME-Flow-Apps
I use the Mark Location tool (via right mouse click) all the time in FME Data Inspector when building processes, and I am then constantly zooming to the marked location via the mouse wheel. It would be great if the 'Locate' button in the Marked Location dialog box could be added to the standard toolbar in FME Data Inspector, so that as soon as it loads I could click the Locate button and zoom to what I am interested in. Thanks!
Currently, when a Database Connection is created from the FME Flow interface, there is no visual check that confirms the connection has been successfully established. It is possible to enter incorrect credentials, and FME Flow will still allow the database connection to be saved without showing any error.

As an improvement, it would be very helpful to include a visual indicator, for example an icon, that immediately confirms whether the database connection was successful. This would help avoid configuration mistakes and make it easier to detect access issues early.
It would be useful if there was a way to validate all database and web connections on an FME Flow instance. This could be used to check whether an install or network change has introduced errors. If this could be done via the API, it would also allow automated, periodic checking.
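A sketch of what automated, periodic checking could look like; the listing path and especially the "/validate" action are assumptions, since no such validation endpoint exists today (which is this idea):

```python
# Hypothetical sketch: list all connections, then ask a (not yet existing)
# validation endpoint to test each one.
import requests

FLOW_HOST = "https://fmeflow.example.com"                        # assumption
HEADERS = {"Authorization": "fmetoken token=my-fme-flow-token"}  # assumption

conns = requests.get(
    f"{FLOW_HOST}/fmerest/v3/resources/connections",  # assumed listing endpoint
    headers=HEADERS, timeout=30,
).json().get("items", [])

for conn in conns:
    name = conn.get("name")
    check = requests.post(
        f"{FLOW_HOST}/fmerest/v3/resources/connections/{name}/validate",  # hypothetical
        headers=HEADERS, timeout=30,
    )
    print(name, "OK" if check.ok else f"FAILED ({check.status_code})")
```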
When you publish a workspace from FME Form, you can specify Flow topics to be notified on success or failure, and you can view this setting via Workspaces > [Repo] > [Workspace] > Advanced. Unfortunately, it is read-only. We should be able to change the setting here. That way, one can have a different topic per environment. Re-publishing via FME Form is not an option when using Projects for repeatable deployments.
Good morning. As an FME Flow admin, I have to review the job logs manually for several projects because users have not added notification topics when publishing from FME Form. This is a problem, as most of the time workspace authors don't even know that jobs have failed unless they review the logs themselves. I am trying to anticipate this scenario and inform the workspace authors as soon as jobs fail. Can we make the advanced parameters editable for a selected workspace in FME Flow (see attached picture)? Otherwise, users will need to republish the workspaces that don't have a topic. I know that I need to educate the workspace authors to add the topics when publishing instead of fixing things on the back end; this would just make the job much easier.
When inspecting feature attributes in the Visual Preview, there is an underappreciated option to select Columns in the Table view. Large schemas, such as Esri's UN model, have so many columns that selecting only a few can help you quickly identify issues in the values. For example with dates: by filtering for columns whose names include "date", you can more easily inspect and find dates that may be in different/inconsistent formats. I'm asking for a Preset/Save button that remembers a preset or the last selection of columns. Attribute Table > preset: save as my attribute selection list.
The problem

The documentation for FeatureMerger mentions a "Suppliers First" mode that can reportedly be very beneficial to performance (and, I would imagine, crucial for Streams), but comes with the constraint that all suppliers must have arrived before the first requestor comes in. To my knowledge, there is currently no way in FME of upholding that guarantee in a reliable manner. The timing between Readers, Creators, FeatureReaders and SQLExecutors is not something I can claim to understand, and the completion order can change based on whether caching is turned on or not. This is already troublesome when editing workbenches, but it can be especially problematic inside custom transformers, where you don't have control over the delivery order of features on your input ports.

This isn't merely an issue with FeatureMerger; it's a problem any time you rely on input ordering for a transformer to work properly, with specific attention given to Streams and feature tables. If I have a PythonCaller that configures itself based on data coming from TransformerA before it's ready to accept data from TransformerB, there aren't a lot of options for reliably dealing with out-of-order input. Holding onto features is illegal when bulk mode support is advertised, so the PythonCaller must opt out of it (which hurts downstream performance) in order to accumulate any features it's not ready to process until the configuration features have come in. Even then, it can't know when TransformerA has closed, so unless it only expects one configuration feature, it's dangerous to start processing before close() has been called. This might be clearer when considering the attached screenshot: Python_MapAttributes needs the output line from JsonTemplater in order to work with the data coming from MappedInputLines. If MappedInputLines starts sending features first, the PythonCaller can't do anything yet, and can only crash (undesirable) or start buffering features (illegal in bulk mode).

The solution

The idea would be to have some sort of semaphore transformer, something like a FeatureHolder, but with (at least?) two input ports: a "Priority" port, which lets features through normally, and a "Held" port, which buffers features until the other port has closed and no more features can go through it. This would ensure that no feature from the "Held" side can ever arrive before a feature from the "Priority" side, thus allowing workflow and transformer designers to guarantee feature ordering downstream without breaking bulk mode.

Other relevant use cases

One might also consider a Terminator node which should stop the translation when an assertion or a join fails in some unexpected way, but should wait until every faulty feature has arrived to give proper context instead of immediately stopping at the first one. Bad features could be sent through the Priority port, with that port routed to a Terminator, so that no feature can be passed to the next step until it has been verified that none exist that would trip the Terminator. This would also allow the Terminator to be changed to wait for all features to have arrived before stopping the translation, instead of aborting at the first one (which currently makes sense, as the longer you wait, the more you risk that downstream writers will have already started to write incomplete data).
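For illustration, here is the buffering workaround described above, written as a PythonCaller that opts out of bulk mode; the "is_config" marker attribute is an assumption, standing in for however the configuration features are tagged upstream:

```python
# Sketch of the workaround: buffer data features until the single
# configuration feature has arrived. Buffering requires declining bulk
# (feature table) mode, which is exactly the performance cost the idea
# wants to avoid.
import fmeobjects

class FeatureProcessor(object):
    def __init__(self):
        self.config = None
        self.buffered = []

    def has_support_for(self, support_type):
        # Decline bulk mode so that holding features is legal.
        return False

    def input(self, feature):
        if feature.getAttribute("is_config"):      # assumed marker attribute
            self.config = feature.getAttribute("mapping_json")
            # Config has arrived: flush everything we were forced to hold.
            for held in self.buffered:
                self.process(held)
            self.buffered = []
        elif self.config is None:
            self.buffered.append(feature)          # out-of-order: must buffer
        else:
            self.process(feature)

    def process(self, feature):
        # ... apply the configuration to the feature ...
        self.pyoutput(feature)

    def close(self):
        # If the config never arrived, the held features cannot be processed.
        if self.buffered:
            raise fmeobjects.FMEException("Configuration feature never arrived")
```

The proposed semaphore transformer would make this class unnecessary: the configuration feature would go through the "Priority" port and the data through "Held", and the PythonCaller could keep bulk mode on.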
Introduce the ability to add Dynamic input ports to the PythonCaller.
I believe this was mentioned in a webinar, but being able to import an endpoint’s schema - from the dataset it is going to be accessing of course - without having to manually create every property would be a great thing to have. This could be from a local file, cloud source, anything. Beyond saving time, it should also ensure accuracy.
In FME Flow, Flow App users can trigger a workspace with a Run button, but they can also submit hundreds or thousands of jobs, intentionally or by mistake. This can easily overload the system. Job Queues and Routing Rules help organize jobs, but they don't limit how many jobs a user can submit.

Feature request: add a configurable per-user job limit (e.g., max 5 active/queued jobs) to prevent users from creating an unlimited job queue and to protect system stability.
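A sketch of an interim guard: count the caller's queued and running jobs via the REST API and refuse past a threshold. The endpoint paths and the "userName" field are assumptions to check against your Flow version:

```python
# Sketch: refuse submission when a user already has too many active jobs.
import requests

FLOW_HOST = "https://fmeflow.example.com"                        # assumption
HEADERS = {"Authorization": "fmetoken token=my-fme-flow-token"}  # assumption
MAX_ACTIVE_JOBS = 5

def active_job_count(user: str) -> int:
    total = 0
    for state in ("queued", "running"):
        items = requests.get(
            f"{FLOW_HOST}/fmerest/v3/transformations/jobs/{state}",
            headers=HEADERS, timeout=30,
        ).json().get("items", [])
        total += sum(1 for job in items if job.get("userName") == user)
    return total

if active_job_count("appuser") >= MAX_ACTIVE_JOBS:
    raise RuntimeError("Per-user job limit reached; try again later")
```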
It would be great to create reports through an MS Word Writer. Attributes could be populated in a Word document based on tags (e.g. ${attribute}), similar to the Survey123 reporting concept.
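To illustrate the requested behaviour, a minimal sketch using python-docx to substitute ${attribute} tags in a template document; "template.docx" and the attribute values are placeholders:

```python
# Sketch: fill ${attribute} placeholders in a Word template with values.
# Caveat: Word may split a tag across runs; a robust version would merge
# runs before substituting.
import re
from docx import Document  # pip install python-docx

attributes = {"parcel_id": "12345", "owner": "Jane Doe"}  # example values
doc = Document("template.docx")

for paragraph in doc.paragraphs:
    for run in paragraph.runs:
        run.text = re.sub(
            r"\$\{(\w+)\}",
            lambda m: str(attributes.get(m.group(1), m.group(0))),
            run.text,
        )

doc.save("report.docx")
```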
It should be possible to define default pagination parameters at the Data Virtualization API level, while still allowing bespoke pagination configurations for individual endpoints.

Currently, pagination must be configured within each workspace, which can complicate workflows and, in some cases, negatively impact processing performance. This configuration also needs to be repeated for every workspace endpoint, leading to unnecessary duplication.

A more efficient approach would be to manage pagination at the parent API level, with the following capabilities:
- A configuration option at the API level to enable or disable pagination globally
- A configuration option at the endpoint level to determine whether the endpoint inherits the global API pagination settings
- A parameter to define the default pagination size (i.e., the number of results returned per page)
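A rough picture of the proposed configuration, expressed as a hypothetical settings structure; nothing like this exists in FME Flow today, it only illustrates the three capabilities above:

```python
# Hypothetical shape of API-level pagination settings (illustrative only).
api_config = {
    "name": "parcels-api",
    "pagination": {
        "enabled": True,       # global on/off switch at the API level
        "default_limit": 100,  # default number of results per page
    },
    "endpoints": [
        {"path": "/parcels", "pagination": "inherit"},               # use API defaults
        {"path": "/inspections", "pagination": {"enabled": False}},  # bespoke override
    ],
}
```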
I would like to see SAML and Office 365 authentication added to the HTTPCaller so that it can access information from a wider range of online sources.
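Today's workaround is to fetch an OAuth 2.0 token yourself (e.g. in a scripted step or a first HTTPCaller) and pass it as a Bearer header to the actual request. A sketch against the Microsoft identity platform, where the tenant, client credentials, and the example Graph call are placeholders for your own Azure app registration:

```python
# Sketch: client-credentials token fetch, then an app-only Graph request.
import requests

TENANT = "your-tenant-id"
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "your-client-id",
        "client_secret": "your-client-secret",
        "scope": "https://graph.microsoft.com/.default",
    },
    timeout=30,
)
token = token_resp.json()["access_token"]

data = requests.get(
    "https://graph.microsoft.com/v1.0/users",  # example app-only Graph call
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
```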
Hello FME Community 👋

We at Safe Software are busy working on some exciting FME Platform enhancements, many focused on product security. We would like your thoughts on one of the ideas that is currently up for consideration. We've received previous requests to add OpenID Connect authentication support to FME Flow, and we think that OpenID Connect (OIDC) authentication could be supported broadly across both FME Form and FME Flow.

So, before we dive right into development efforts on this idea, we'd like to know what you think about it! Would you benefit from the FME Platform supporting OpenID Connect (OIDC) authentication? If so, can you provide a brief description of how FME supporting OpenID Connect authentication would enhance your experience with the FME Platform? We are also looking for anyone who might be interested in testing out our implementation of OpenID Connect authentication, once available. If you would like to be included as an early tester, please indicate that interest in your response!

Here's a bit of background on OpenID Connect (OIDC) authentication. If you've ever tried to create an account with a new app you've downloaded, you might be presented with options to use another account (like Google or Facebook) to log in to the new app. In this way, you can use an account you already have, instead of creating a new account. This is OIDC authentication at work, and it can be considered an extension of the OAuth 2.0 protocol already supported across the FME Platform. More information on OpenID Connect (OIDC) authentication can be found by visiting the OpenID Foundation's How OpenID Connect Works page.

We look forward to hearing from you on this exciting idea!
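For readers curious about the plumbing: an OIDC provider publishes a discovery document listing the endpoints a relying party (such as FME Flow) would use. A quick look, using Google's public issuer only as a well-known example:

```python
# Fetch an OIDC discovery document and show the endpoints a relying
# party needs.
import requests

discovery = requests.get(
    "https://accounts.google.com/.well-known/openid-configuration",
    timeout=30,
).json()

print(discovery["authorization_endpoint"])  # where users are sent to log in
print(discovery["token_endpoint"])          # where codes are exchanged for tokens
print(discovery["jwks_uri"])                # keys for verifying ID token signatures
```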
To get to know your current amount of Dynamic Engine / CPU Usage Credits, you need to navigate to the Licensing menu within the FME Server GUI. To see this, you need a permission that should not be granted to many people. A possible way to make this information more accessible would be via the REST API; right now, there is no such endpoint. Accessing the credit amount via the REST API would make it possible to retrieve it automatically and, for example, to monitor the usage and depletion of your credits. This monitoring could be done via FME Server itself or by implementing the REST API request within a third-party application (like Grafana).
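What the requested capability could look like, polled periodically and fed to a monitoring system; the endpoint path and response field are assumptions, since no such endpoint exists today (which is this idea):

```python
# Hypothetical sketch: read remaining credits and emit a metric line that a
# scraper (e.g. Prometheus/Grafana) could collect.
import requests

FLOW_HOST = "https://fmeflow.example.com"                        # assumption
HEADERS = {"Authorization": "fmetoken token=my-fme-flow-token"}  # assumption

resp = requests.get(
    f"{FLOW_HOST}/fmerest/v3/licensing/credits",  # hypothetical endpoint
    headers=HEADERS, timeout=30,
)
remaining = resp.json()["remainingCredits"]       # hypothetical field
print(f"fme_engine_credits_remaining {remaining}")
```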