
Calling all FME Community Members!

 

With the official release of FME 2023.1 comes a brand new AI-powered tool: AI Assist. This new AI feature can be found as a component of our Regex, SQL, and Python editors, which accompany a variety of our transformers and formats. AI Assist is ready and available to help you generate code for your queries or custom functionality in FME.

 


We encourage you to make use of this Megathread to provide your feedback and discuss your experiences with FME's new AI Assist. Let us and your fellow FME Community members know how your testing goes, including your use case, any struggles or aha moments, and of course your tips for success. 

 

You can also give your fellow Community members a heads-up about any bugs you discover. Please also report these bugs directly to Safe Software Technical Support by creating a support case.

 

Check out our on-demand FME 2023.1 release webinar for a demo of AI Assist, and our AI Assist FAQ page for more details on this capability! 

 

Just keep in mind that AI Assist is in Tech Preview status, and should not be considered production-ready at this time.

 

We look forward to hearing about your experiences with FME's new AI Assist!

 

Happy testing!

The AI Assist dialog is modal, and thus blocks access to the rest of the FME Workbench application. As a consequence you can't, for example, copy an error message from the Translation Log to feed back to the AI when its generated code caused the error.

It also means you have to close the dialog in order to run any code it produces.


A big missing feature, which I'm confident is in your roadmap already but I'll say it here to add emphasis:

AI HISTORY.

Currently I need to copy the question, generated code, and explanation to another location such as OneNote or Word in order to refer back to and (attempt to) understand why/how code X came to be. As such it's actually easier to use a third-party LLM front end and paste its results into FME instead.

 

So with that in mind, what are the system prompts being fed to the LLM to prepare it for "all of the following are about FME"?


When using a personal API key, what are the valid model names? (Where do we look these up?)

When I try to use `GPT-4`, I'm told it's an invalid request. I have no problem using GPT-4 with other integrations (chatbotui.com and flux.paradigm.xyz, for instance).

 


Hi @mattw1ilkie​, thanks for your question. When I bring my own key, I usually type the model name in lowercase (e.g. gpt-4) and that works well for me. If you have a look at this OpenAI Models document, under the Continuous Model Upgrades and GPT-4 sections, you'll notice that OpenAI references their model names in lowercase. For the Model parameter of the web connection, I would suggest trying one of the models listed in the GPT-4 section of the document linked above, being sure to enter the value exactly as it's shown in the document. Just be sure that your API key has permissions for the model you choose to try.
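The advice above (lowercase the model name, and verify it against OpenAI's published list) can be sketched as a small pre-flight check. This is not Safe Software's code; the model set below is an illustrative snapshot only, and the authoritative list comes from OpenAI's Models documentation or the `GET /v1/models` endpoint:

```python
# Hypothetical snapshot of model identifiers for illustration; look up the
# real list in OpenAI's Models documentation or via GET /v1/models.
KNOWN_MODELS = {"gpt-4", "gpt-4-32k", "gpt-3.5-turbo"}

def resolve_model_name(name: str) -> str:
    """Lowercase and strip a user-entered model name, then verify it is known.

    OpenAI model identifiers are lowercase, so "GPT-4" fails as entered
    but succeeds once normalized to "gpt-4".
    """
    candidate = name.strip().lower()
    if candidate not in KNOWN_MODELS:
        raise ValueError(f"unknown model {name!r} (tried {candidate!r})")
    return candidate

print(resolve_model_name("GPT-4"))  # -> gpt-4
```

A check like this would surface the case mismatch before the request is sent, instead of the generic "invalid request" error.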



@mattw1ilkie​ That's a great suggestion, and I've created FMEFORM-29591 to request this AI History functionality.

 

At this time, we are not publishing our system prompts for the AI Assist tool. This tool is still in technical preview status, and so its various components are subject to change as we test and refine them.



An example session of using an external chat agent instead of the embedded AI Assist.

https://htmlpreview.github.io/?https://gist.githubusercontent.com/maphew/aa958ad2f2116e292b4e0e3ad2ee771d/raw/ecbc23a936383c5ef00e84dde2d477083bc5a4a8/a-session-output.html

 

The goal was to write an FME PythonCaller function to report on the Python environment. Elapsed time was ~30-45 minutes. The output HTML is a bit garbled since ChatbotUI doesn't have a share feature yet. The session export JSON and the conversion script are also attached if you're curious, and of course the working PythonCaller code too.

File results at https://gist.github.com/maphew/aa958ad2f2116e292b4e0e3ad2ee771d
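For readers who don't want to open the gist, here is a minimal sketch of what such an environment-report helper might look like. This is not the author's actual code; the `fmeobjects` feature-handling boilerplate is omitted so the helper stays standalone, but the report function could be called from a PythonCaller's `input()` method and its result logged or attached to a feature:

```python
import platform
import sys

def python_env_report() -> str:
    """Return a multi-line summary of the active Python environment."""
    lines = [
        f"Python version  : {sys.version.split()[0]}",
        f"Executable      : {sys.executable}",
        f"Platform        : {platform.platform()}",
        f"sys.path entries: {len(sys.path)}",
    ]
    return "\n".join(lines)

print(python_env_report())
```

Inside FME, the string could be written to the Translation Log or set as an attribute on the passing feature.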

 

Comparing the time spent and the resulting progress between AI Assist and the chatbot, using the chatbot is 20 to 30% more efficient for me (though it would be challenging if I only had a single display).

