Use chat completion powered by OpenAI to its full potential, including token control and tool usage.

Documentation Index
Fetch the complete documentation index at: https://messenger-docs.cogfy.com/llms.txt
Use this file to discover all available pages before exploring further.
Input
The user’s prompt — the actual input or question from the user. In most scenarios, you will want to interpolate this field with messages coming from the user (e.g., when using a Message Received trigger).
The model is the AI “brain” your users will interact with. Currently, Cogfy Workflows supports the following OpenAI models:
GPT-5.4, GPT-5.4 mini, GPT-5.4 nano, GPT-5.2, GPT-5.1, GPT-5, GPT-5 mini, GPT-5 nano, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano, GPT-4o, GPT-4o mini, GPT-4.

For more information about these models, please refer to the official OpenAI documentation.
Also known as the system prompt, this field is used to guide the AI’s overall behavior. Use it to define personality, boundaries, verbosity, and other behavioral aspects. This field is very important because it affects everything the AI generates afterward, so use it carefully.
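As a sketch of how the system prompt and the user’s prompt fit together, the snippet below builds the message list in the shape the OpenAI Chat Completions API expects. The function name and example strings are illustrative, not part of Cogfy Workflows:

```python
# Illustrative only: how the Instructions field (system prompt) and the
# interpolated user prompt map onto chat message roles.
def build_messages(instructions: str, user_prompt: str) -> list[dict]:
    """Combine the system prompt with the user's message."""
    return [
        {"role": "system", "content": instructions},  # Instructions field
        {"role": "user", "content": user_prompt},     # Input field
    ]

messages = build_messages(
    "You are a concise support assistant.",
    "How do I reset my password?",
)
```

The system message comes first so that it frames everything the model generates afterward.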
The upper limit on how many tokens the model can generate in its response.
- If the limit is low → shorter, more concise responses
- If the limit is high → longer, more detailed responses
Example of a response cut off mid-word by a low limit:
The main reasons for this are economic, political, and histor—
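The truncation effect can be approximated locally. This is a hedged simulation using whitespace-separated words as stand-in “tokens” (real models use subword tokens, so actual cutoffs differ):

```python
# Rough simulation of a max-token limit: keep only the first N "tokens".
# Whitespace splitting is an approximation, not OpenAI's tokenizer.
def truncate_to_limit(text: str, max_tokens: int) -> str:
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

full = "The main reasons for this are economic, political, and historical."
print(truncate_to_limit(full, 9))
# → The main reasons for this are economic, political, and
```

A higher limit leaves the response intact; a lower one cuts it short, as in the example above.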
A token is a fundamental unit of data — such as a word, part of a word, or character — that AI models use to process language. Large Language Models (LLMs) break text into tokens instead of reading whole words. Tokens serve as both input (for understanding) and output (for generating responses), and are also used as the basis for pricing and usage limits.

For pricing, please refer to the official OpenAI documentation.
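For ballpark budgeting, a common rule of thumb (an assumption, not OpenAI’s tokenizer) is that English text averages roughly four characters per token:

```python
# Rough heuristic: ~4 characters per token for English text.
# Use OpenAI's actual tokenizer for exact counts; this is only an estimate.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

print(estimate_tokens("Tokens serve as both input and output."))
# → 9
```

This is useful only for quick estimates of prompt size or cost before sending a request.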
Output Schema
The output schema defines the properties available to subsequent nodes.

An array of messages generated by the model, including assistant messages and tool calls. Does not include the user message.
The last message generated by the model, including assistant or tool call messages. Does not include the user message.
message_object
Represents a single message.
The role of the message sender. Can be:
- user — sent by the user
- assistant — generated by the AI model
- tool — generated by a tool call
The content of the message. Can be null if no content is present.
A list of tool calls made by the assistant in this message, if any.
The ID of the tool call associated with this message, if applicable (e.g., when the role is tool).