MCP Adoption in Your Organization

  • April 17, 2026
  • 2 replies
  • 191 views

stewartatsafe
Safer

I am interested in discussing what MCP adoption currently looks like in your organizations. Do you have a corporate strategy for connecting your applications to your AI through MCP? Or are you running shadow AI operations and configuring MCP yourself on the side?

At Safe, in Product and Engineering, we are predominantly using Claude. We now have MCP servers on top of most systems. For systems that do not have MCP servers, we have used FME Flow MCP to connect them. The biggest issue seems to be reliability: both Claude's ability to pick and use the correct MCP tool, and the tool actually working when it tries to connect.

2 replies

david_r
Celebrity
  • April 30, 2026

Hi @stewartatsafe,

Interesting question. We are in dialogue with our clients about how they approach and adopt AI, but as you can imagine, security when using cloud-based LLMs is a recurring issue that is holding most of them back. This is particularly true for MCP, which isn't a broadly understood technology.

We have done some internal work on two fronts:

  • Using the MCP server provided by Flow to work with datasets
  • Writing our own MCP server to talk to a Flow instance for administration and analytics

Both produced very interesting and promising outcomes, but they again highlighted the security concerns around cloud-based LLMs. However, we believe that as local LLMs become more mainstream, the subject will get more traction.

Let me know if this is something you'd like to discuss further; I'd be glad to give you a demo of what we've done so far.

Cheers, David

 


stewartatsafe
Safer

Thanks David. Security is going to be a big issue here, but I actually think FME has a strong story — maybe one we should be marketing more. A few angles:

  • PII scrubbing. Most MCP Servers (e.g. Salesforce) return everything by default — names, emails, the works. FME Flow sitting in the middle lets you strip or mask sensitive data before it ever reaches the agent or the LLM provider's context window.
  • Data minimization. Beyond PII, you can shape responses so the agent only sees the fields it needs — no full schemas, no internal pricing, no proprietary data leaking into an LLM. Least-privilege access to your systems, enforced at the integration layer.
  • Credential isolation. The agent never touches your Salesforce credentials, database connection strings, etc. FME Flow holds those secrets in its existing credential management. With direct MCP Servers, the agent framework often needs those credentials configured directly, which widens the blast radius if anything is compromised.
  • Trusted, audited infrastructure. FME Flow is a known entity — already part of your architecture, already audited, already ISO compliant. Extending it to AI agents doesn't introduce a new trust boundary. Every direct MCP Server does.
  • Centralized governance. One place to set data access policies, one place to audit, one place to revoke. Without FME in the middle, every MCP Server connection is a separate security surface your InfoSec team has to govern independently.
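As a rough illustration of the scrubbing and minimization points (plain Python rather than an FME Flow workspace, and all field names are hypothetical), the integration layer boils down to a whitelist-plus-mask pass over each record before anything reaches the agent:

```python
# Fields the agent is allowed to see (data minimization / least privilege).
ALLOWED_FIELDS = {"account_id", "status", "region", "contact_email"}

# Fields that should be masked rather than dropped (PII scrubbing).
MASKED_FIELDS = {"contact_email"}

def mask_email(value: str) -> str:
    """Mask the local part of an email, keeping the domain for context."""
    local, _, domain = value.partition("@")
    return f"{local[0]}***@{domain}" if domain else "***"

def sanitize_record(record: dict) -> dict:
    """Drop non-whitelisted fields and mask PII before the agent sees it."""
    out = {}
    for key, value in record.items():
        if key not in ALLOWED_FIELDS:
            continue  # internal fields never leave the integration layer
        if key in MASKED_FIELDS:
            value = mask_email(value)
        out[key] = value
    return out

raw = {
    "account_id": "001A",
    "status": "active",
    "region": "EMEA",
    "contact_email": "jane.doe@example.com",
    "internal_margin": 0.42,   # stripped: proprietary
    "ssn": "123-45-6789",      # stripped: PII
}
print(sanitize_record(raw))
```

In FME terms this is just an AttributeKeeper plus a masking transformer in the middle of the flow, but the principle is the same: the policy lives in one governed place instead of in every MCP server configuration.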

Be great to chat; I'll set up a call.