FME Hub user antoine just uploaded a new template.

ExprAgent_Ollama: Generate & Run FME Expressions with an LLM

Type: Template workspace (Form)

Purpose: Convert a plain-English task into an Arithmetic Editor expression using a local LLM (Ollama), then evaluate that expression on every feature and write the numeric result to _result.

What it does

Scans the input schema and builds a compact JSON description of attributes.

Calls an Ollama tools-enabled model via /api/chat with a single tool (eval_expression) so the model returns only an expression.

Cleans the expression text returned by the model.

Evaluates the expression for each feature and stores the value in _result.
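The steps above can be sketched in Python with just the standard library. This is an illustrative mock-up, not the template's actual internals: the system prompt wording, the `eval_expression` parameter schema, and the helper names are all assumptions.

```python
import json

def describe_schema(feature: dict) -> str:
    """Build a compact JSON description of the feature's attributes and types."""
    return json.dumps({name: type(value).__name__ for name, value in feature.items()})

def build_chat_payload(task: str, schema_json: str, model: str = "qwen3:8b") -> dict:
    """Assemble an Ollama /api/chat request exposing a single tool,
    eval_expression, so the model replies with an expression only."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "Translate the user's task into one FME Arithmetic "
                        "Editor expression and return it via eval_expression. "
                        f"Available attributes: {schema_json}"},
            {"role": "user", "content": task},
        ],
        "tools": [{
            "type": "function",
            "function": {
                "name": "eval_expression",
                "description": "Evaluate an FME Arithmetic Editor expression.",
                "parameters": {
                    "type": "object",
                    "properties": {"expression": {"type": "string"}},
                    "required": ["expression"],
                },
            },
        }],
    }

# Build the request for a sample task against a one-attribute schema.
payload = build_chat_payload("add 5 to _creation_instance and multiply by 3",
                             describe_schema({"_creation_instance": 7}))
```

Constraining the model to a single tool is what keeps the reply machine-readable: the expression arrives as a structured tool-call argument rather than free-form chat text.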

Why it’s useful

Turn requests like:

add 5 to _creation_instance and multiply by 3

check if the id of my instance is even

into valid expressions, e.g.:

(@Value(_creation_instance) + 5) * 3

…then apply it across your data—no manual expression authoring needed.
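Conceptually, applying such an expression per feature means substituting each @Value(attr) reference with the feature's attribute value and evaluating the remaining arithmetic. A minimal Python stand-in for what the template delegates to FME's expression evaluation (not the template's actual code):

```python
import re

def evaluate_expression(expr: str, feature: dict):
    """Replace each @Value(attr) reference with the feature's attribute
    value, then evaluate the resulting arithmetic with no builtins exposed."""
    substituted = re.sub(
        r"@Value\(([^)]+)\)",
        lambda m: repr(feature[m.group(1)]),
        expr,
    )
    return eval(substituted, {"__builtins__": {}}, {})

# "(7 + 5) * 3" after substitution
result = evaluate_expression("(@Value(_creation_instance) + 5) * 3",
                             {"_creation_instance": 7})  # → 36
```

In the template itself this happens once per feature, with the value landing in _result.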

Inputs & outputs

Input: Any feature type with attributes referenced by your task.

Outputs:

_expr — expression returned by the model (cleaned).

_result — numeric value computed per feature.

Requirements

Ollama running locally with a tools-enabled model. Recommended:

qwen3:8b, llama3.1:8b, granite3.3:8b, qwen3:1.7b, qwen2.5:1.5b

FME Form 2025.1+.

Extending

Add other tools (run_sql, run_python) and loop the chat if multi-step reasoning is needed.

Feed the result back to the LLM so it can answer the original question, turning the single-pass call into a loop.
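With Ollama's chat API, that loop amounts to appending the model's tool call and the computed result (as a role "tool" message) to the history before calling /api/chat again. A sketch of the message bookkeeping, with illustrative content (the tool name mirrors the single-pass description above):

```python
def append_tool_result(messages: list, assistant_msg: dict, result) -> list:
    """Record the model's tool call and the locally computed result so a
    follow-up /api/chat call can reason about it or request another tool."""
    messages.append(assistant_msg)  # assistant message carrying tool_calls
    messages.append({"role": "tool", "content": str(result)})
    return messages

history = [{"role": "user", "content": "check if the id of my instance is even"}]
assistant = {"role": "assistant", "content": "",
             "tool_calls": [{"function": {"name": "eval_expression",
                                          "arguments": {"expression": "@Value(id) % 2"}}}]}
# Suppose evaluating the expression on the feature produced 0 (even).
history = append_tool_result(history, assistant, 0)
# Sending `history` back via /api/chat lets the model conclude from the
# result or issue another tool call, continuing the loop.
```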

Version & compatibility

v1.0 — initial release (single-pass tool call, single _result).

Tested with FME Form 2025.1 and Ollama ≥ 0.1.40.

Tags

Ollama · LLM · Function Calling · Arithmetic Editor · ExpressionEvaluator · Automation · FME AI


