Shape the future of FME with your ideas
Hi FME Innovators,

I was wondering: are there any plans to implement FME-owned transformers and tools for the AI era? Object detection is evolving fast, and instead of relying on external API connectors, FME could offer its own transformers for detection and labelling (training a model within FME). I've kept a few notes from my exploration of object detection, shared below.

What is Object Detection?
Object detection is a computer vision method that detects and identifies objects in an image or video. While image classification predicts a single label for an entire image, object detection finds several objects in a single image, giving each a bounding box and a class label. It covers two main functions:
- Localization: where is the object?
- Classification: what is the object?

Traditional Machine Learning for Object Detection
Before the emergence of deep learning, object detection relied on handcrafted features and classical ML algorithms. These techniques require manual feature extraction and struggle with variation such as lighting changes, scale changes, and background changes.
- Haar Cascades: introduced by Viola and Jones (2001); used for early face detection (e.g. OpenCV's face detector); based on Haar-like features and a cascade of classifiers.
- Histogram of Oriented Gradients (HOG) + SVM: detects objects using gradient orientations; popularised by Dalal and Triggs for pedestrian detection; more compact and robust than Haar, but computationally expensive.
- Selective Search + SVM: generates region proposals that are then classified; helped bridge the gap between traditional machine learning and deep learning.
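To make the localization half concrete: the standard way to score how well a predicted bounding box matches a ground-truth box is Intersection-over-Union (IoU). Here is a minimal sketch in plain Python; the function name and the (x1, y1, x2, y2) box convention are my own choices, not from any particular framework:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IoU of 1.0 means a perfect match, 0.0 means no overlap; detection benchmarks typically count a prediction as correct when IoU with the ground truth exceeds some threshold (0.5 is a common choice).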
While these machine learning methods laid the groundwork, they could not match the accuracy or scale of today's deep learning models.

Deep Learning for Object Detection
Deep learning has transformed object detection by automating feature extraction with Convolutional Neural Networks (CNNs). These models learn progressively more abstract features from the data, improving both speed and accuracy.

Two-Stage Detectors
Two-stage detectors separate region proposal from classification.
- R-CNN (Regions with CNN Features): uses Selective Search to generate region proposals, then a CNN to extract features from and classify each region. Very accurate, but slow (each region is processed independently).
- Fast R-CNN: shares convolutional computation across the whole image and adds an ROI pooling layer to extract per-region features from the shared feature maps. Faster than R-CNN, but still not quite real-time.
- Faster R-CNN: introduces a Region Proposal Network for end-to-end training and prediction. Approaches real-time performance with high accuracy.

Single-Stage Detectors
Single-stage detectors skip the region proposal step and predict bounding boxes and class labels directly.
- YOLO (You Only Look Once): targeted at real-time detection. YOLO divides the image into a grid and predicts bounding boxes for each grid cell. The family has progressed from the original YOLO through YOLOv3, YOLOv4, and YOLOv5 to the latest YOLOv8 (recent versions incorporate Transformer-based modifications).
- SSD (Single Shot MultiBox Detector): performs detection on feature maps from multiple convolutional layers; offers a good trade-off between speed and accuracy.
- RetinaNet: introduced Focal Loss, a re-weighted loss that addresses the class imbalance problem during training.
RetinaNet shows good results across a range of benchmarks.

Innovative Architectures and Trends (2025)
Modern architectures combine CNNs, Transformers, and self-supervised learning techniques for better generalisation.
- DETR (Detection Transformer): an end-to-end object detection pipeline built on Transformers; removes the need for anchor boxes and Non-Max Suppression (NMS). Very accurate, but less computationally efficient than YOLO.
- Vision Transformers (ViT): use an attention mechanism for global feature extraction, often paired with a hybrid CNN backbone for efficiency.
- Self-Supervised Learning (SSL): models pretrained on unlabelled data (e.g. MAE, SimCLR) transfer better to limited labelled datasets.

Tools and Frameworks
Some popular frameworks for implementing object detection:
- TensorFlow Object Detection API
- PyTorch + TorchVision
- Ultralytics YOLOv8
- Detectron2 (by Meta AI)
- MMDetection

Thanks
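As a concrete illustration of the NMS post-processing step that the single-stage detectors above rely on (and that DETR removes), here is a minimal greedy non-max suppression sketch in plain Python. The function names, the (x1, y1, x2, y2) box convention, and the 0.5 default threshold are my own illustrative choices, not taken from any specific framework:

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(detections, iou_threshold=0.5):
    """Greedy NMS: detections is a list of (box, score) pairs.

    Keeps the highest-scoring box and drops any later box that
    overlaps a kept box by more than the IoU threshold.
    """
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(box_iou(box, k) < iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept
```

For example, two heavily overlapping detections of the same object collapse to the single higher-scoring one, while a distant detection survives. Real frameworks run this per class and on GPU, but the logic is the same.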
Often I'm dealing with large datasets and want to do a quick comparison of the schemas, so I only want one record from each. I appreciate that you can set SchemaScanner or Sampler to sample only one record, but the remaining features are still processed through to an output port, which wastes resources in many cases. How about a setting in both that reads only the first (specified) number of features and then stops reading from the source?
If you select a view called “banana” in the Revit reader but the view doesn't exist in the file (for instance, in a parameterised or generic workspace), the reader defaults to the first available view. This can lead to unwanted data being read, and the user cannot control it without digging into the log file.

Please improve the Revit reader by:
- Not reading any data if the selected view is not available
- Adding the view that was actually used to read the data as a feature attribute
When I have a job that runs for 5-15 minutes (and that's with a subset of the data), it would be nice to be able to sneak a look at the cached features at any point while the workspace is running. I could click on the green loading box and see which features have loaded so far. That way I could be QC-ing output before the workspace has even finished running.
When a workspace is stopped by clicking the "Stop" button, no feature caches are preserved. It would be great to have an option to preserve these partially built feature caches when a workspace is manually stopped.

Usage scenario: I'm running a workspace that takes a long time to run and I notice an issue (error or warning) that I'd like to investigate, but I don't want to have to wait for the workspace run to complete just to take a look at the problem.
Currently the FME Flow log file cleanup is a one-size-fits-all setting: all job logs older than the configured interval get deleted. I would like to propose more fine-grained control, i.e. the ability to override that setting per workspace. Say, for example, we have it set to 7 days but have some jobs that run monthly; it could be beneficial to see the log from the previous run if the current run fails. On the other hand, for jobs that run many times a day (say hundreds), I see limited value in keeping a log from a week ago, so I may want to shorten that retention period to 1-2 days.
When you add a "file" manual key to an automation that gets run by an automation app, the only option is to choose a file already in the FME Flow Resources. It would be nice to have an option for the user to upload a file, like what's already available in workspace apps: for a "File" user parameter, the user can upload a file that is then used by the workspace the workspace app runs. The same would be ideal for automation apps.
It would be nice to be able to give each FME Flow instance an individual name (and possibly an icon) to distinguish different instances without checking the sometimes cryptic or long URL. Handling different stages in the same browser sometimes leads to tab change after tab change to find the right instance. It would be possible to pick the correct instance directly if an admin-defined name could be shown as the page title instead of "FME Flow", possibly together with a selected favicon. Changing the favicon is already possible by going into the web app files, but it is not documented and the path may vary between versions.
Lots of organisations have several FME Flow instances, e.g. Dev, Test and Prod. It would be helpful to be able to select a different colour palette for each environment to quickly differentiate between them. For example, you could change the sidebar colour to blue for Dev, green for Test and red for Prod.
Esri only makes ArcSDE connections available via an "sde file", which is a proprietary file stored on disk, sometimes on the C drive, other times on network storage. We often struggle with access to these files: some are open with read-only access, others have higher privileges to write to the SDE geodatabase. So it's confusing to me to have *.sde files presented as "Database Connections".

In 2025 we now have the option to store database connections in FME Flow, and we can change the dataset path from a local path to the Shared Resources on FME Flow, in either the Engine or the Data folder. This enables further reuse of the connection within Flow. However, the problem then becomes: how can FME authors manage that connection in FME Form? FME Flow connection storage is great, but not necessarily for SDE file database connections. In practice, when you try to reuse the Flow connection for SDE, despite the path pointing to Resources (engine, shared to roles), the error is repeatable: errors connecting to feature types in the ArcSDE geodatabase, encountered in 2025.1.

The workaround for now is to follow Option 2 in the article https://support.safe.com/hc/en-us/articles/30212601575693-How-to-Create-and-Manage-Esri-Geodatabase-ArcSDE-Connections-in-FME, storing a single SDE file in a network share location that is accessible to both Form and Flow.

Can Safe Software please add an enhancement to find a better solution for SDE connections in FME? Maybe Esri has ideas, or the community wishes to move away from *.sde files as the single means of connecting to the Spatial Database Engine. What's needed is another means/protocol to properly "direct connect" to the DBMS, including the SDE registry. A requirement for an FME Flow instance is to have ArcGIS Server installed for the licensing of SDE/FGDB; perhaps connecting to the DBMS registered for SDE, with a new option to get a licence and work as SDE, opens up something new.
In the older Transformer Guides we sadly lost FME Lizard, but they had an overview, the components of a simple workspace, basic transformer placement, basic running tips and inspection. The latest Transformer Guide includes the new transformers, but only the transformer description text. I'm asking for the graphics for each transformer to be brought back. They still exist in the Help documentation, but the latest Transformer Guide lacks that nice visual, simple guide to what a transformer does. Please rewrite the PDF to include the graphics/illustrations. Surely an FME workspace can transform the help from the web into a PDF, and I believe 2025.1 has a nice PDF styler transformer to assist.
Please add support for colors/appearances to the I3S writer. Currently, colors that are set on the feature in ArcSDE are lost, and all features in the published scene become white.
I'd like to request a more prominent notification flag for when FME packages need updates, rather than the current manual review process. Unless a person checks manually, you only become aware of an issue when the workspace generates an error. Thanks!
A minor bug I've possibly just discovered, though something in my memory says I've heard about it before. If the FeatureMerger transformer is not connected to any data and you move the Requestor port down below Supplier, then drag it to a connection, the connection goes to the uppermost port, in this case the Supplier port. I would say the default most people want is that it looks for the Requestor port, regardless of whether it is the uppermost port or not. So this is a bit of a bug report and an idea at the same time. A screen recording GIF is attached to show the current behaviour in FME(R) 2025.1.1.0 (20250730 - Build 25615 - WIN64).
I would propose a switch in the backend to show or hide the geometry definition input field. My users are all over the place: some of them understand that they have to set a point on the map, but are confused afterwards when a whole GeoJSON syntax shows up instead of the coordinates they expect; others think they should write a street address in the input field (mistaking it for a geolocator), etc. I can leave a message for them, but hey, how many will read it...
It would be very helpful if the number of neighbors found were returned as an additional attribute (e.g. _count). This would save the effort of creating a list and subsequently using a ListElementCounter.
It would be nice to be able to open the folder containing the workspace with one click. This is very simple but could be very useful. Currently the shortest way I've found to do it is:
1. Ctrl + Shift + S
2. Copy the suggested path
3. Windows + E
4. Paste the path and press Enter
The path is visible in the header but cannot be copied from there. This is closely related to https://community.safe.com/s/idea/0874Q000000TlPhQAK/detail, but in a more general way (not only log oriented).
The bookmark name is currently rendered in the bookmark's title/handle section at a fixed left position, which means the label is often not visible. Placing the name within the currently visible portion of the bookmark would be useful, as the label would then be visible more often when working on sections within bookmarks, so long as the 'jumping around' of text while navigating a workspace doesn't feel too distracting in testing.
Esri has been adding more options for handling dates. In geodatabases, they now offer new data types for "Date Only", "Time Only" and "Datetime with timezone offset". On feature services in ArcGIS Online and ArcGIS Enterprise, publishers can now define the timezone of the data underlying the service, and the timezone for display to clients. This lets service publishers define how dates from the service should be displayed, and how dates in data being written to the service should be translated for storage in the underlying dataset. However, from my rough testing, the REST services are still sending dates as Unix values, so the timezone definition on the service is just there so clients (like ArcGIS Pro, ArcGIS Enterprise, etc.) know what to do with the Unix values on the client side before displaying or writing back. Plus, from what I saw at the 2025 user conference, they are adding more datetime configuration options to Pro in future releases.

It would be helpful if the new feature service reader could tap into these settings and control how data from date attributes gets pulled into the workspace initially. It's just one less thing to have to translate as data comes into the workspace, when the creator of the service has already defined how they would prefer users to interact with the dates in that service. On the reader, I could see this as a "DateTime Output Format" parameter with options like:
- Unix: values from date columns are brought in as the Unix values from Esri.
- FME UTC: values from date columns are brought in in FME datetime format, with timezones not translated.
- FME with Timezone from Service: values from date columns are brought in in FME datetime format, with the timezone added.
On the writer, I could see a parameter to control how FME date attributes are translated on writing to the service, so they honor the "Time zone of the data" setting. I personally just started looking at migrating from the old ArcGIS Portal reader to the new ArcGIS Feature Service reader. With the old reader, date info was automatically translated into FME datetime format. With the new reader, date columns initially load in Unix format. That means I now have to do a datetime conversion on data from any such reader if I want to work with dates in the workspace. But maybe I'm missing something on how best to work with this kind of data. I haven't found anything about this change in the documentation or blog articles; I only found out about it after submitting a ticket. So if anyone has more detail on whether the change is intentional, or how to deal with it now that it's in the new version, please let me know.
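In the meantime, the conversion itself is straightforward to do in a PythonCaller or similar. Below is a minimal sketch, assuming the values arrive as Unix epoch milliseconds (which is how Esri REST services encode dates) and targeting FME's %Y%m%d%H%M%S datetime string; the function name and the timezone-offset parameter are my own illustrative choices:

```python
from datetime import datetime, timedelta, timezone

def esri_millis_to_fme(millis, tz_offset_hours=0):
    """Convert an Esri Unix-epoch-milliseconds date value to an
    FME-style datetime string (YYYYMMDDHHMMSS).

    tz_offset_hours lets you shift into the service's display
    timezone; 0 keeps the value in UTC.
    """
    tz = timezone(timedelta(hours=tz_offset_hours))
    dt = datetime.fromtimestamp(millis / 1000.0, tz=tz)
    return dt.strftime("%Y%m%d%H%M%S")
```

For example, `esri_millis_to_fme(0)` gives `"19700101000000"` (the Unix epoch in UTC). A built-in reader parameter would of course be preferable to sprinkling this into every workspace.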
When inspecting features, particularly when identifying point geometries, I find myself wanting to visualise an identifier attribute for a lot of points, and sometimes along lines and at the centre of polygons. Obviously desktop GIS packages have this functionality, but I'm asking to build on the "Mark Location" option, which already has a Label option. Could this same code be added under the Display Control, where users edit the styling of each dataset/layer? Data Inspection currently lacks the functionality to add simple labels to a point dataset/layer; a workaround in Workbench is to use the LabelPointReplacer transformer to create text geometry. So the updated UI would need a Label section in the drawing style, with:
- the ability to select the attribute value column and make it the label text
- font and colour options would be nice
- label size (replacing font size?) may also be of assistance, though projected coordinates are likely needed to display in metres
Please add "Import from Feature Cache" (similar to the AttributeFilter), where you can populate the right side or the left side from values. It would also be great to have multi-row copy/paste (and drag and drop) to be able to easily adjust the mappings within the mapping editor.
Hi, the birds are chirping that Bentley is soon to phase out ProjectWise login via logical accounts. It would be nice if there were an option to connect to ProjectWise via IMS login, either as a transformer option or as an option when configuring the connection in the settings.
Currently, in FME Flow, when a user edits and saves an existing schedule, the scheduled job appears to run under the username of the last person who saved it, even though the ownership of the schedule has not changed. This behavior can cause issues when the last editor does not have the necessary permissions to access specific data sources or resources, leading to unexpected job failures.We request that schedules retain the original owner's execution context, regardless of who last modified or saved the schedule settings. Saving or editing a schedule should not modify the account under which the schedule executes unless the schedule's ownership is explicitly changed.
We would like to request support for Azure DevOps as a remote Git provider in the Version Control functionality of FME Flow. One of our clients recently upgraded from FME Flow 2024.2.1 to FME Flow 2025.1.2. In the previous version, they were successfully using Azure DevOps Git repositories to manage workspace versioning. After the upgrade, they are no longer able to push changes to their remote repository. The UI reports: “There was a problem communicating with the REST API.” And the backend logs show HTTP 500 errors when attempting to push. According to the documentation, only GitHub.com is officially supported. Azure DevOps is not listed, although it previously worked without issue. This limitation significantly impacts their ability to maintain version history and collaborate effectively. Could you please consider:
- Adding official support for Azure DevOps Git repositories in FME Flow Version Control.
- Providing documentation or configuration guidance for Azure DevOps integration.
- Ensuring compatibility with common enterprise Git platforms beyond GitHub.com.
This feature would be highly valuable for organizations using Microsoft and would align FME Flow with broader enterprise DevOps practices. Please let us know if this request will be considered for a future release and, if so, in which upcoming release. Thank you for your support!

Kind regards,
Joëlle Jansen-Soepenberg
FME Consultant