I spent a few hours trying to understand MCP and, like many things, it didn’t really click until I started building.
First, I played with MCPCaller using tools created by others. That part was interesting, but not very insightful. Those tools were already answering questions directly, so I didn’t really see the need for anything more.
It only clicked when I pointed AI at my own data.
I built a few simple MCP tools (workspaces) in FME (discover, profile, values, filter…), and instead of wiring a fixed workflow, I let an AI planner decide what to do next. That’s where the loop comes in:
“Let’s see what’s here → what’s relevant → get values → compare → accumulate knowledge → answer.”
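The loop above can be sketched in a few lines of Python. This is a hypothetical stand-in, not the actual FME/MCPCaller implementation: the tool names (`discover`, `profile`, `values`) come from the post, but their bodies here are stubs with made-up sample data, and the "planner" is reduced to a plain function so the control flow is visible.

```python
# Minimal sketch of the planner loop over MCP-style tools.
# The stubs below stand in for MCP tools backed by FME workspaces.

def discover():
    """List available tables (stand-in for a 'discover' tool)."""
    return ["buildings", "roads"]

def profile(table):
    """Describe a table's fields (stand-in for a 'profile' tool)."""
    return {"buildings": ["id", "height"], "roads": ["id", "length"]}[table]

def values(table, field):
    """Fetch values for a field (stand-in for a 'values' tool)."""
    data = {("buildings", "height"): [12, 45, 30]}
    return data.get((table, field), [])

def answer_question(question):
    """The loop: what's here -> what's relevant -> get values -> answer."""
    knowledge = {}
    for table in discover():                      # what's here?
        relevant = [f for f in profile(table)     # what's relevant?
                    if f in question]
        for field in relevant:
            knowledge[field] = values(table, field)  # accumulate knowledge
    heights = knowledge.get("height", [])
    return max(heights) if heights else None      # compare -> answer

print(answer_question("What is the maximum building height?"))
```

In the real pattern an AI planner decides which tool to call next based on the accumulated results, rather than following this fixed order.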
That’s when MCP started to make sense.
MCP is not really about AI. It’s about exposing tools in a way that lets an AI use them efficiently.
The important part is not MCPCaller itself, but the pattern:
AI + loop + MCP tools → from a question to an answer.
I recorded a short demo with a mix of datasets (SQLite, DWG, Revit, Excel) — different questions, but just one workspace.
Now I’m curious:
What do you think about this pattern?
And what MCP tools would you build?
Dmitri