MistralAIConnector
The MistralAIConnector transformer allows FME users to interact with Mistral AI's chat completion service directly from within FME Workbench.
Mistral AI is an AI research and deployment company focused on delivering cutting-edge open-weight models through simple APIs. This transformer wraps Mistral's API and exposes key parameters to help automate interactions with large language models.
🛠️ An API Key is required
You can sign up for a Mistral AI account and generate an API key from your account on the Mistral platform.
Parameters
User Parameters
API Key : The API key for authenticating requests to the Mistral API.
Chat Completion
System Prompt (optional) : An optional system-level instruction that sets the behavior or role of the assistant.
User Prompt (required) : The user query or message to send to the AI model.
Model : The Mistral model to use. The default is mistral-large-latest; you may enter any other supported model name manually.
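Behind the scenes, these values are sent to Mistral's chat completions endpoint. The sketch below shows, in rough terms, how the API Key, System Prompt, User Prompt, and Model parameters map onto such a request. It is a minimal illustration against the public REST endpoint, not the transformer's actual implementation, and the variable names are assumptions made for the example.

```python
import requests

# Minimal sketch of a chat completion request, assuming the standard
# Mistral REST endpoint; the variable names below are illustrative,
# not the transformer's internal names.
API_KEY = "your-mistral-api-key"                       # API Key parameter
SYSTEM_PROMPT = "You are a helpful GIS assistant."     # System Prompt (optional)
USER_PROMPT = "Summarize this feature's attributes."   # User Prompt (required)
MODEL = "mistral-large-latest"                         # Model parameter (default)

messages = []
if SYSTEM_PROMPT:
    messages.append({"role": "system", "content": SYSTEM_PROMPT})
messages.append({"role": "user", "content": USER_PROMPT})

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": MODEL, "messages": messages},
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```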
Advanced Settings
These parameters allow you to fine-tune the behavior of the AI model. For full documentation on their effects, see the Mistral Chat API reference.
Temperature : Controls the randomness of the output. Higher values like 1.0 make output more creative, while lower values like 0.0 make it more focused and deterministic.
Top P : An alternative to temperature sampling (nucleus sampling). Limits token choices to the smallest set whose cumulative probability reaches the given threshold.
n : Number of completions to generate.
Stop : A string or list of strings to stop generation when encountered.
Max Tokens : Maximum number of tokens in the output.
Presence Penalty : Positive values penalize tokens that have already appeared, encouraging the model to introduce new topics.
Frequency Penalty : Reduces repetition by penalizing repeated tokens.
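For reference, the sketch below shows how these advanced settings might appear in the JSON body of a chat completion request. The values are placeholders chosen for illustration; the Mistral Chat API reference is the authority on supported ranges and defaults.

```python
# Illustrative request body with the advanced settings filled in;
# the values are placeholders, not recommendations.
payload = {
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,        # Temperature: lower = more deterministic, higher = more creative
    "top_p": 0.9,              # Top P: nucleus-sampling cutoff
    "n": 1,                    # n: number of completions to generate
    "stop": ["\n\n"],          # Stop: string or list of stop sequences
    "max_tokens": 512,         # Max Tokens: upper bound on output length
    "presence_penalty": 0.0,   # Presence Penalty: > 0 encourages new topics
    "frequency_penalty": 0.0,  # Frequency Penalty: > 0 discourages repeated tokens
}
```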
Output Attributes
Each feature output by the transformer includes:
MessageContent : The main AI-generated response. This is the most relevant and usable output.
_response_body : The full raw JSON response returned by Mistral’s API. Useful for logging, debugging, or extracting other fields manually.
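As an illustration, the snippet below shows how MessageContent typically relates to the raw response body: it corresponds to the message content of the first choice in the returned JSON. The sample response and the extraction logic are assumptions based on Mistral's documented chat completion response shape, not the transformer's internal code.

```python
import json

# Hypothetical raw response body, following Mistral's documented
# chat completion response structure.
raw_response_body = """
{
  "id": "cmpl-123",
  "object": "chat.completion",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello from Mistral!"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 5, "total_tokens": 14}
}
"""

body = json.loads(raw_response_body)                          # full _response_body
message_content = body["choices"][0]["message"]["content"]    # MessageContent
print(message_content)
```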
For more details, see the Mistral AI documentation.