Shape the future of FME with your ideas
Open ideas have been reviewed by our Customer Success team and are open for commenting and voting.
When inspecting features, particularly when identifying point geometries, I often want to visualise the identifier attribute for a lot of points, and sometimes along lines or at the centre of polygons. Desktop GIS obviously has this functionality, but I'm asking to build on the Mark Location option, which already has a Label option. Could that same code be added under the Display Control, where users edit the styling of each dataset/layer?

Data Inspection currently lacks the functionality to add simple labels to a point dataset/layer. A workaround in Workbench is to use the LabelPointReplacer transformer to create text geometry.

The updated UI would need a Label section in the drawing style (as pictured below), with:
- the ability to select the attribute value column and use it as the label text
- font and colour options would be nice
- a label size option (replacing font size?) may also be of assistance, though projected coordinates are likely needed to display in metres

In short: Display Control drawing styles should include a Label.
User-configurable syntax highlighting colors for FME's syntax highlighting (Python, SQL, Expression Editor), similar to how Visual Studio Code handles token-based color customization, would be a nice addition. I find some of the default colors for JSON syntax highlighting difficult to read, so it would be great if they could be customized.

Text Editor
https://code.visualstudio.com/api/language-extensions/syntax-highlight-guide
With the upcoming release where a Python IDE will be integrated with FME, the next and most important logical step is to pass cached data into the IDE for the purposes of debugging calculations and logic flow. I don't know how the Python IDE integration will be released, but being able, at design time, to open the IDE in debug mode is a powerful way of seeing how cached data flows into the PythonCaller, through the Python logic, and then back out again.
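For context on what would be inspected in such a debug session, the standard PythonCaller class template looks roughly like the sketch below; the attribute names are made up for illustration.

```python
import fme          # available inside FME's Python runtime
import fmeobjects   # FMEFeature and related classes

class FeatureProcessor(object):
    """Typical PythonCaller interface: input() is called once per feature."""

    def input(self, feature):
        # A breakpoint here is where cached features would become inspectable
        # once the IDE integration supports opening the caller in debug mode.
        # breakpoint()
        length = feature.getAttribute('pipe_length')      # hypothetical attribute
        if length is not None:
            feature.setAttribute('pipe_length_m', float(length))
        self.pyoutput(feature)

    def close(self):
        pass
```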
PDF and Word documents can contain form fields for users to fill in. I'm looking for the ability to auto-populate some or all of these fields from a database record. Ideally, the writer would accept a template .docx or .pdf and allow a field from the database to be written to a field in the form.
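Outside FME, the same idea can be sketched with a PDF library; a rough, hypothetical example of mapping one database record onto a form template (the file, field, and record names are invented, and pypdf is just one possible library):

```python
from pypdf import PdfReader, PdfWriter

# Hypothetical template whose form field names match database column names.
reader = PdfReader("permit_template.pdf")
writer = PdfWriter()
writer.append(reader)                     # copy pages, keeping the form fields

record = {"applicant_name": "Jane Doe", "permit_no": "2024-0042"}   # e.g. one DB row
writer.update_page_form_field_values(writer.pages[0], record)

with open("permit_filled.pdf", "wb") as out:
    writer.write(out)
```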
I can't see an obvious way to do this in FME 2024.2, but it would be nice to be able to set the "Out Fields" parameter on ESRI service readers. Often I only need a handful of fields, or even just one, from feature classes with enormous schemas. It would be nice if that could be configured in the reader itself, seeing as the REST endpoint handles this parameter.
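A rough illustration of how the REST endpoint's outFields parameter behaves, which is what the reader would be passing through (the service URL and field names are hypothetical):

```python
import requests

# Hypothetical feature service layer; outFields trims the returned schema
# down to just the attributes that are actually needed.
url = "https://example.com/arcgis/rest/services/Assets/FeatureServer/0/query"
params = {
    "where": "1=1",
    "outFields": "OBJECTID,ASSET_ID",   # only the fields we care about
    "returnGeometry": "true",
    "f": "json",
}
features = requests.get(url, params=params).json()["features"]
print(len(features))
```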
Currently, we can only set the default layer color in the DWG writer using a color index. Various CAD standards also define layer colors based on RGB values. I would like to see this expanded so that colors can be set using either a color index or RGB values. This would avoid having to manually create templates with thousands of layers (for extensive CAD standards).
The DateTimeConverter is missing the ISO-8601 week-number implementation (weeks 01-53, where week 01 is the one containing the first Thursday of the year). This is referred to as %V in other scripting languages. Can this be added as a DateTime function?
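For reference, Python already exposes this ISO-8601 week definition, which is essentially the behaviour being requested:

```python
from datetime import date

d = date(2021, 1, 1)        # a Friday, so it falls in week 53 of ISO year 2020
print(d.strftime("%V"))     # '53' - documented since Python 3.6; can be platform-dependent
print(d.isocalendar()[1])   # 53  - same week via (ISO year, ISO week, ISO weekday)
```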
Custom transformers should be able to automatically output a Rejected port that returns the features from all Rejected ports within the transformer. This would allow the user to gracefully handle failures.
I believe this was mentioned in a webinar, but being able to import an endpoint's schema - from the dataset it is going to be accessing, of course - without having to manually create every property would be a great thing to have. The schema could come from a local file, a cloud source, anything. Beyond saving time, it should also ensure accuracy.
Hi FME Server Team,

I would like to request an enhancement to the FME Server REST API.

API Endpoint: /transformations/jobs/completed

Enhancement Request: Currently, it is not possible to filter completed jobs by their finish time using this endpoint. I would like to request support for querying completed jobs by finishTime, in addition to filtering by repository and workspace.

Use Case: For monitoring and automation purposes, I need to programmatically retrieve jobs that have successfully finished within the past 5 minutes, filtered by a specific repository and workspace name.

Example Query:
- repository: <repository_name>
- workspace: <workspace_name>
- completed successfully (completedState: success)
- finished within the past 5 minutes (finishTime >= <timestamp>)

Having the ability to query by finishTime (ideally with support for both a start and end range, or a relative time based on the current time, e.g. the last 5 minutes) would greatly streamline our integration workflows.

Thank you for considering this request.
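A hedged sketch of what the requested call might look like against the V3 REST API; the host, repository, workspace, and token are placeholders, and the finishTime parameters are the proposed addition, not an existing option:

```python
import requests
from datetime import datetime, timedelta, timezone

host = "https://fmeflow.example.com"    # placeholder FME Flow host
token = "MY_API_TOKEN"                  # placeholder fmetoken
since = (datetime.now(timezone.utc) - timedelta(minutes=5)).strftime("%Y-%m-%dT%H:%M:%SZ")

resp = requests.get(
    f"{host}/fmerest/v3/transformations/jobs/completed",
    headers={"Authorization": f"fmetoken token={token}"},
    params={
        "repository": "MyRepository",   # existing filter
        "workspace": "MyWorkspace.fmw", # existing filter
        # Proposed enhancement - not currently supported by the endpoint:
        "finishTimeFrom": since,
        # "finishTimeTo": "<end of range>",
    },
)
jobs = resp.json()   # then keep only the jobs that completed successfully
```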
Currently, FME Workbench allows users to define parameters in custom transformers or published workflows, but the ability to provide detailed guidance for each parameter is limited. I propose adding a new feature to User Parameters that enables authors to attach rich descriptions to each parameter. This would include:
- Formatted text (Markdown or HTML)
- Web links to documentation, tutorials, or external tools
- Images or diagrams to illustrate usage

When a user runs the workspace, they would see an information icon next to each parameter (shown only if it has content). Clicking this icon would open a dedicated window or pane displaying the enhanced description, helping users understand the context, expected input, and any dependencies or external resources required.

Benefits:
- Improves usability and user experience, especially for complex parameters.
- Reduces support requests by providing self-service guidance.
- Enables better documentation and onboarding for new users.
FME Flow currently has columns in the Jobs Completed screen for the workspace, source, date/time, engine, etc.

For bulk dynamic data replication, you need to run the same .fmw multiple times with differing parameters, e.g. a SQL Server to SHP downloader that runs on a single table at a time. Yes, you could run the workspace only once (reading hundreds of thousands of features), but that unnecessarily strains server resources and means a single error affects all the data. Instead, it's best to run the workspace multiple times (once per SQL Server table), say via an FMEFlowJobSubmitter.

In FME Flow this means the Jobs screen fills up with hundreds of entries for the same workspace, differing only by date. There's no way to tell at a glance which parameters caused the 10 out of 200 jobs of that workspace to fail.

I'd like to see a parameter exposed in the Jobs screen so that I can see at a glance that tables ABC and XYZ failed, not that Workspace123 failed 10 times with unknown parameters. Yes, I'm aware you can click into each job manually, that we could set up a log-reader workspace to do this, or that we could set up a post-processing task. I have set these up before, but it would be nice to have this out of the box in the Jobs screen.

This could be a standard parameter or an FME Flow parameter that can be set by the author (e.g. linked to a Published Parameter).

Thanks.
The 'NoFeaturesTester' custom transformer currently on FME Hub is powerful. However, we're getting pushback from IT about using it in our production workspaces on FME Server. We've benchmarked their workarounds, and they slow the workspace down significantly (2 seconds with NFT vs. 2.5 minutes with their suggested alternatives). My idea: harden this transformer and turn it into a standard transformer.
This enhancement would allow full automation of attachment backup workflows using ESRI ArcGIS feature services, or geodatabase attachments being backed up to ArcGIS feature services. Currently, even with the ESRI ArcGIS package, you need to manually configure the feature service URL on a FeatureReader so that the features' feature IDs can be passed to the ArcGISAttachment connector. Implementation could be similar to the Publish Action item of the ArcGISOnlineConnector, which exposes the _webservice_url.
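For context, the attachments the workflow needs are addressed per object ID in the ArcGIS REST API; a rough sketch of that pattern, assuming attachments are enabled on the layer (the layer URL is a placeholder):

```python
import requests

layer_url = "https://example.com/arcgis/rest/services/Inspections/FeatureServer/0"  # placeholder

# 1. Fetch the object IDs - the step a FeatureReader performs against the layer.
oids = requests.get(f"{layer_url}/query",
                    params={"where": "1=1", "returnIdsOnly": "true", "f": "json"}
                    ).json()["objectIds"]

# 2. List each feature's attachments via the standard attachments resource.
for oid in oids:
    info = requests.get(f"{layer_url}/{oid}/attachments", params={"f": "json"}).json()
    for att in info.get("attachmentInfos", []):
        print(oid, att["name"])   # each attachment can then be downloaded and backed up
```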
Hi there,

In this article (Configure user attribute mapping with Azure AD SAML Provider - FME Support Center), a group claim can be set up to pass an AD group name which aligns to a role in FME Flow. Since a group claim can be based on a search criterion, many AD groups could be returned, which is a common method of enterprise group membership (for example in ArcGIS Enterprise software). I was wondering whether this could be used to grant several roles to a user?

I'm looking at controlling access to repositories, who can view a workspace, who can run a job, etc. via AD group assignment.

Thanks
If you have ArcGIS Pro installed on your machine, then you have access to the arcpy.geocoding module with its Locator class, which can be serverless. This puts geocoding in the hands of anyone with (say) a file-based locator.
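A minimal sketch of how that might be used from Python (e.g. inside a PythonCaller), assuming ArcGIS Pro's arcpy is on the interpreter path; the locator path and address are made up, and the exact shape of the returned candidates can vary by arcpy release:

```python
import arcpy.geocoding as geocoding

# Hypothetical file-based locator built in ArcGIS Pro - no service required.
locator = geocoding.Locator(r"C:\data\locators\Streets.loc")

# Ask for plain candidate records rather than a feature set.
candidates = locator.geocode("123 Main St, Springfield", False)
for candidate in candidates:
    print(candidate)   # inspect the score, matched address and x/y of each candidate
```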
The Tester and TestFilter transformers have five distinct Comparison Modes:
- Automatic
- Numeric
- Case Sensitive
- Case Insensitive
- Date/Time
as well as the option to Specify Per Test.

The FeatureJoiner and FeatureMerger transformers have three Comparison Modes:
- Automatic
- String (presumably the same as Case Sensitive, above, but I have not validated that and don't see it specified in the documentation)
- Numeric
And the way these transformers are designed, they are inherently "Per Test", or per "Join On" condition, as it were.

The general idea here is to make the Comparison Modes more consistent across transformers - there are likely other transformers that could be included beyond the four mentioned above. Case Sensitive, Case Insensitive, and Date/Time could be useful in the Feature-type transformers, in addition to the Test-type transformers. There are other ways to achieve these, of course, but there may be value in having them readily available. The order in which Comparison Modes are listed could also be made consistent - this is a minor point.

If the various Comparison Modes are already tailored to best match each class of transformer, then the idea may not have merit, but I'm posing it in case others see value in the proposed change or something similar.
This idea came up in the 2016 thread (now released) about adding output ports to the FeatureWriter, but the use case for Rejected features remains - e.g. some web-based format has a transient HTTP error and the failed features need to be retried after a wee delay, or a field overflows and can't be written. @markatsafe @rylanatsafe you guys were on that thread. The FeatureReader has a Rejected port, very handy for retry logic in a looping custom transformer - let's see it on the FeatureWriter! There might need to be a Rejected port for each output port if you're going to loop it, or you could filter on feature type before looping back to an input.
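To illustrate the kind of retry-after-a-delay logic a Rejected port would make possible, a generic sketch (the endpoint and payload are placeholders, not an FME API):

```python
import time
import requests

def write_with_retry(url, payload, attempts=3, delay=10):
    """Re-send a feature a few times, pausing between attempts,
    so a transient HTTP error does not silently drop it."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.post(url, json=payload, timeout=30)
            resp.raise_for_status()
            return resp
        except requests.RequestException:
            if attempt == attempts:
                raise                 # give up and route to a real failure path
            time.sleep(delay)         # the "wee delay" before trying again
```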
In our standard FME Flow training course we have one exercise where Workspace A does some checks on user parameters and, if they are successful, runs Workspace B through an FMEFlowJobSubmitter.

One of our trainees accidentally set up the FMEFlowJobSubmitter to run Workspace A instead, and when he published it to Flow he created a recursive loop: every time Workspace A ran, it queued Workspace A again. I was busy helping somebody else at the time, so when I got around to his question I noticed it had run 17,000+ jobs in a matter of minutes. These were low-impact jobs, so it wasn't an immediate problem, but it's safe to say this has the potential to completely overwhelm an FME Flow setup.

After talking to Safe we decided to post it as an idea here to see if others have come across this as well or have opinions about it. The big question: should FME be able to detect these recursion risks? And in a broader sense: has this happened to you? Or something similar that you think FME should be able to detect and warn you about?
Adding row numbers to the interface would help avoid errors or forgetting something. For example, a row number makes it easy to check how many conditions a Tester has. It would also be useful to highlight the row number when an attribute name already exists.
When I have a job that runs for 5-15 minutes (and this is with a subset of the data), it would be nice to be able to sneak a look at the cached features at any point while the workbench is running. I could click on the green loading box and see what features have loaded so far. That way I can be QC-ing the output before the workbench has even finished running.