So I’ve been experimenting with using FME to build specific AI agents/agentic workflows.
I’m using OpenAI’s API and leveraging function calling. It’s been quite fun and a really good learning experience.
You can let the agent make decisions: if you give it tools, you can use looping transformers to let it call the various tools you give it, constantly updating its context.
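The loop described above can be sketched outside of FME as plain Python. This is a minimal, illustrative sketch, not the actual workspace: the model is stubbed out and `get_coordinate_system` is a hypothetical tool standing in for a CustomTransformer or workspace.

```python
import json

# Hypothetical local "tool" standing in for an FME workspace/CustomTransformer.
def get_coordinate_system(path):
    return "EPSG:25832"  # stub result for illustration

TOOLS = {"get_coordinate_system": get_coordinate_system}

def run_agent(call_model, user_prompt, max_steps=5):
    """Generic tool-use loop: call the model, run any requested tool,
    append the result to the context, and repeat until the model answers."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)      # returns a dict shaped like an LLM message
        messages.append(reply)
        if "tool_call" not in reply:      # model gave a final answer
            return reply["content"], messages
        call = reply["tool_call"]
        result = TOOLS[call["name"]](**json.loads(call["arguments"]))
        messages.append({"role": "tool", "name": call["name"], "content": result})
    return None, messages

# Stubbed model: first asks for a tool, then echoes the tool result as its answer.
def fake_model(messages):
    if messages[-1]["role"] == "tool":
        return {"role": "assistant", "content": messages[-1]["content"]}
    return {"role": "assistant", "content": "",
            "tool_call": {"name": "get_coordinate_system",
                          "arguments": '{"path": "city.tif"}'}}

answer, history = run_agent(fake_model, "What CRS is city.tif in?")
```

In a workspace, the same loop is what the looping custom transformer has to implement: each pass adds one model reply and one tool result to the growing context.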
Looping transformers are a bit of a pain to work with, and hopefully something can be done in this space to make building these tools in FME easier.
I’m still not sure what the best way to do this is, or even if it makes sense to use FME for this, but I made some progress anyway.
For example: should the tools be CustomTransformers, whole workspaces, or maybe both? Should this somehow be handled in Automations rather than a workspace? So many options.
So far the agent isn’t interactive and I’m just writing all the context to the log file, but in the end it would be really nice to have this linked to a Slack/Teams channel, for example.
And for anyone wanting to experiment with OpenAI’s API it can be super cheap. Just top up $10 and it’ll last you ages (unless you start uploading images/files).
Currently, the OpenAIConnector in FME 2025 doesn’t let you leverage tools/function calling, so I made some tweaks to allow this. I also haven’t taken advantage of MCP yet; I’m still not 100% sure how it could tie in with FME.
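For anyone curious what those tweaks amount to: enabling function calling is mostly about adding a `tools` array to the request body the connector sends to the Chat Completions endpoint. The shape below follows OpenAI's documented tool schema; the tool name and parameter are made up for illustration.

```python
# A "tools" definition in the shape the OpenAI Chat Completions API expects.
# The function name and its parameter are illustrative, not from a real connector.
tools = [{
    "type": "function",
    "function": {
        "name": "extract_coordinate_system",
        "description": "Read a dataset and return its coordinate system.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string",
                         "description": "Path to the dataset to inspect"},
            },
            "required": ["path"],
        },
    },
}]
```

This list is passed alongside `model` and `messages` in the request; when the model decides to use a tool, the response carries a `tool_calls` entry instead of plain text.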
Anyway, just looking to see if others have played around with this or have any thoughts on an AI agent in an FME workspace.
“Just top up $10 and it’ll last you ages.”
Credits expire after one year. I’ve just lost $14 that way :’)
A small thing, but it would be nice to have an FME Python MCP server that I could add to my GitHub Copilot in VS Code.
Hi
Why not use local models? They are getting better and better. It’s clear that there are many paths ahead. I am very curious how Safe will react to tools like the QGIS/Blender MCPs. Will it be “just” workbench creation (as an XML network)? Calling external tools as you do? Live fine-tuning of FME parameters with live data? Calling FME functions (for this, the fact that a lot of parameters are “hardcoded”, as in AttributeValidator, makes things harder)? I am very curious to see how things will evolve, but a bit doubtful about a fully “remote” AI like OpenAI (unless you can run the models in your own infrastructure), as you would need to push more and more data to it to improve the process.
Sorry I got carried away…
I used OpenAI’s API because it was just easier for me to test/experiment with an existing model. In the end it would be great to have some kind of process where a user could pick their own model to use.
QGIS and Blender both have an underlying Python API, as I understand it. I’m not 100% sure of the details of how their UIs integrate with the Python API, but I’m pretty sure UI elements in Blender, for example, can be built entirely with Python. So having an AI write a Blender plugin or process just works out of the box.
FME of course also has a Python API; however, there is no (exposed) API to control FME Workbench itself: mostly it’s limited to the PythonCaller or FME Packages. I struggle to see how Safe will put together something that lets an AI write its own workspace. I would love it, though, if this AI topic somehow motivated Safe to do something along these lines; for example, it would be pretty nice to go back and forth between a workspace and a Python script.
Perhaps this will force Safe to expand their Python API even more and make it easier to use Python outside of Workbench completely. But I think the FME UI is a huge part of what makes FME great for its users.
I have no projects to share myself; it’s mostly just me experimenting to see how easy it is to use FME with AI to make the simple decisions that usually require human input.
My process was essentially to create an “Agent” that was tasked with running a series of jobs. It needed to extract information from the input data, pick the right data for each job, and choose the job settings (e.g., input path, coordinate system, etc.). The input was just a folder / hard drive of data delivered by a customer. We don’t get that many deliveries, so we don’t have strict delivery requirements on structure or naming; it’s usually pretty clear, but sometimes we do have to take a look at the data.
The “Agent” was given system instructions like “You are a GIS expert and are responsible for making decisions and processing data blah blah blah”. The prompt was something like:
“Here is a bunch of folder paths; you need to figure out what to do with the data blah blah.”
I then defined several “tools” it was able to use: things like “Coordinate System Extractor”, “Metadata Extractor”, etc. The AI would then decide which tools to use.
After the AI call I could parse the JSON output, set up a TestFilter on the AI’s tool decision, and then use FME functions to do what the AI wanted/needed. E.g., for the Coordinate System Extractor I would read the file the AI wanted to find out about and extract the coordinate system. I would then update the prompt/context with which tool the AI chose and the result, so it could go on to use another “tool”.
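The parsing step above can be shown concretely. A Chat Completions response that requests a tool carries a `tool_calls` list; the snippet below flattens the first call into a flat dict so a TestFilter-style router can branch on `tool_name`. The response values here are made up for illustration, but the field names (`tool_calls`, `function`, `name`, `arguments`) are the ones the API uses.

```python
import json

# Example assistant message requesting a tool (values are illustrative).
response_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "coordinate_system_extractor",
                     "arguments": '{"path": "delivery/city_a/ortho.tif"}'},
    }],
}

def route_tool_call(message):
    """Flatten the first tool call into attributes a TestFilter could branch on.
    Note: "arguments" is a JSON string, not a dict, so it has to be parsed."""
    call = message["tool_calls"][0]["function"]
    return {"tool_name": call["name"], **json.loads(call["arguments"])}

decision = route_tool_call(response_message)
```

After running the chosen tool, the result goes back into the conversation as a `"role": "tool"` message (with the matching `tool_call_id`), which is the context update described above.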
The idea was to have the AI gather all the information it needed to fill in the job settings, something our team just knows from experience and context. For example, given a particular city, we know which EPSG code / coordinate system to use in the jobs. The AI often knew this too, which was nice: if it had enough context it would skip the coordinate system tool entirely, and if the filename had enough information it was able to start the jobs directly.
What I realised while working on it, though, was that it would probably just be easier to cut out the AI and put delivery requirements in place. There was just a LOT of overhead in the whole process: keeping track of the context, giving feedback to the user (Slack, FME log file, Teams), and building and defining all the tools.
In addition, the jobs it would be triggering could potentially run for days, so getting the inputs wrong in a real-world setting would be costly.
Nevertheless, I saw the potential and came to understand the workflow, which kinds of business processes this would work well with, and where the overhead would just not be worth it. It also helped me really understand how other agents use their tools.
With agents like GitHub Copilot, the agent can write its own tools (e.g., in Python) to extract the needed information and then trigger the job. Having an agent that is able to create its own tools for these kinds of things is pretty helpful, and it’s where FME falls down.
FME is great for non-coders with a good understanding of data and processes. However, now that we non-coders (or slow coders) can work with an AI to write code, a significant benefit of FME is suddenly not as big as it once was.
AI is definitely shaking things up, so let’s see what falls out! I’m also very curious to see where Safe goes with AI here.
It’s true that AI lowers the barrier to coding, but it also increases the risk of code being executed without proper oversight of its logic. People often judge AI based on whether the output looks reasonable, without examining the steps that led to it. That can easily mislead us. This is even more the case for non-coders, who can’t assess the process if it is written in Python or complex SQL.
Without fine-grained control over the rights of agents (or AI-generated “users”), it also becomes dangerously easy to corrupt a database or a repository (I have seen this in production).
In contrast, FME enables non-coders to review the steps before execution and to clarify their thought process by “writing,” even if that writing takes the form of connecting boxes. This transparency, and the deterministic output, are very valuable.
Some AI tools (FME or others) will offer graph-like descriptions of steps as a proposal. Some will also allow parameters to be more “context-aware.” Let’s see whether Safe can leverage all the transformers/formats to be the platform for this. Thanks to AI, it is now much faster for competitors to build those blocks.
Old discussion as a reminder: