Use chat completion powered by OpenAI to its full potential, including token control and tool usage.

Input

Input
string
required
The user's prompt: the actual input or question from the user. In most scenarios, you will want to interpolate this field with messages coming from the user (e.g., when using a Message Received trigger).
Model
string
required
The model is the AI "brain" your users will interact with. Currently, Cogfy Workflows supports the following OpenAI models: GPT-5.4, GPT-5.4 mini, GPT-5.4 nano, GPT-5.2, GPT-5.1, GPT-5, GPT-5 mini, GPT-5 nano, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano, GPT-4o, GPT-4o mini, GPT-4.
For more information about these models, please refer to the official OpenAI documentation.
Instructions
string
Also known as the system prompt, this field is used to guide the AI's overall behavior. Use it to define personality, boundaries, verbosity, and other behavioral aspects. This field is very important because it affects everything the AI generates afterward, so use it carefully.
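In raw OpenAI Chat Completions terms, the Instructions field typically corresponds to a system message sent before the user's input. The sketch below is an assumption about how the node's fields might map onto such a request body (the field names on the right are OpenAI's, not confirmed Cogfy internals):

```typescript
// Hypothetical mapping of node fields onto an OpenAI-style chat request.
const requestBody = {
  model: "gpt-4.1-mini",        // Model
  max_tokens: 500,              // Max Output Tokens
  messages: [
    { role: "system", content: "You are a concise support agent." }, // Instructions
    { role: "user", content: "How do I reset my password?" },        // Input
  ],
};

console.log(requestBody.messages[0].role); // "system"
```

Because the system message is sent first, it frames how the model interprets every user message that follows.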
Max Output Tokens
number
required
The upper limit on how many tokens the model can generate in its response.
  • If the limit is low → shorter, more concise responses
  • If the limit is high → longer, more detailed responses
If the AI reaches this limit, the response stops immediately and may be cut off mid-sentence or appear incomplete.

Example:

The main reasons for this are economic, political, and histor—

A token is a fundamental unit of data, such as a word, part of a word, or character, that AI models use to process language. Large Language Models (LLMs) break text into tokens instead of reading whole words. Tokens serve as both input (for understanding) and output (for generating responses), and are also used as the basis for pricing and usage limits. For pricing, please refer to the official OpenAI documentation.
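To get an intuition for how text length relates to token count, a common rule of thumb for English text is roughly four characters per token. The heuristic below is only an estimate of my own; exact counts come from the model's tokenizer:

```typescript
// Rough token estimate using the ~4 characters-per-token rule of thumb.
// This is a heuristic, not the model's real tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const prompt = "Why did the Roman Empire fall?";
console.log(estimateTokens(prompt)); // 8 with this heuristic
```

An estimate like this is useful for sanity-checking a Max Output Tokens setting, but billing and limits are always computed from real tokenizer counts.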

Output Schema

The output schema defines the properties available to subsequent nodes.
messages
message_object[]
An array of messages generated by the model, including assistant messages and tool calls. Does not include the user message.
lastMessage
message_object
The last message generated by the model, including assistant or tool call messages. Does not include the user message.

message_object

Represents a single message.
role
string
The role of the message sender. Can be one of:
  • user — sent by the user
  • assistant — generated by the AI model
  • tool — generated by a tool call
content
string | null
The content of the message. Can be null if no content is present.
toolCalls
tool_call[]
A list of tool calls made by the assistant in this message, if any.
toolCallId
string | null
The ID of the tool call associated with this message, if applicable (e.g., when the role is tool).
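The schema above can be sketched as TypeScript types. The `ToolCall` shape is an assumption (the docs only name the `tool_call` type); the rest mirrors the fields described in this section:

```typescript
// Hypothetical shape for tool_call; the docs name the type but not its fields.
interface ToolCall {
  id: string;
  name: string;
  arguments: string; // typically JSON-encoded
}

// Mirrors the message_object fields documented above.
interface MessageObject {
  role: "user" | "assistant" | "tool";
  content: string | null;
  toolCalls: ToolCall[];
  toolCallId: string | null;
}

// Mirrors the node's output schema: messages plus lastMessage.
interface ChatCompletionOutput {
  messages: MessageObject[];
  lastMessage: MessageObject;
}

// Example: reading the final assistant reply from a node's output.
const reply: MessageObject = {
  role: "assistant",
  content: "Hello!",
  toolCalls: [],
  toolCallId: null,
};
const output: ChatCompletionOutput = { messages: [reply], lastMessage: reply };

console.log(output.lastMessage.content); // "Hello!"
```

In subsequent nodes, `lastMessage` is the convenient accessor for the final reply, while `messages` lets you inspect intermediate assistant and tool-call messages.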