Node Inputs
Required Fields
Instructions: Custom instructions provided to the LLM for task execution. Use dynamic variables (e.g., {{myVariable}}) to personalize the input.
Example:
"Write a story about the F1 driver {{driver}} losing the championship with the team {{team}}"
Optional Fields
Tools: Defines the tools or functions the agent can call during its execution.
Example:
"Scrapper"
Datasources: Specifies the data sources the agent can access or query.
Example:
"Internal company knowledge"
Instructions block: Allows configuration of advanced LLM parameters such as temperature, token limits, and output formats (see "Node AI Settings" for more information).
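Taken together, the inputs above might be assembled as in the following sketch. The key names and structure are illustrative assumptions, not the platform's exact configuration schema.

```python
# Illustrative only: key names are assumptions, not the platform's exact schema.
agent_node_inputs = {
    "instructions": (
        "Write a story about the F1 driver {{driver}} "
        "losing the championship with the team {{team}}"
    ),
    "tools": ["Scrapper"],                          # tools the agent may call
    "datasources": ["Internal company knowledge"],  # data sources it may query
}
```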
Node AI Settings
Model: Select the LLM model to be used.
Example:
GOOGLE_VERTEXAI - gemini-1.5-pro-002
Temperature: Controls creativity and randomness in responses.
- Lower values (e.g., 0.2) make outputs more deterministic.
- Higher values (e.g., 0.8) increase creativity.
Max Tokens: Sets the maximum number of tokens in the response.
Example:
1000
Top K: Restricts the response to the top K most likely tokens.
Example:
40
Top P: Adjusts the probability distribution for token selection.
Example:
1 (uses the full distribution)
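The sampling settings above map onto parameter names that are common across LLM APIs. The sketch below is illustrative only; the platform's exact field names may differ.

```python
# Illustrative only: parameter names follow common LLM API conventions
# and may differ from the platform's exact field names.
generation_settings = {
    "model": "GOOGLE_VERTEXAI - gemini-1.5-pro-002",
    "temperature": 0.2,  # lower = more deterministic, higher = more creative
    "max_tokens": 1000,  # cap on response length
    "top_k": 40,         # sample only from the 40 most likely tokens
    "top_p": 1,          # 1 uses the full probability distribution
}
```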
Output JSON Schema: Defines the format of the response as a JSON schema when structured output is required.
Example:
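An illustrative schema is shown below; the field names are hypothetical and should be replaced with whatever structure the response needs.

```json
{
  "type": "object",
  "properties": {
    "driver": { "type": "string" },
    "team": { "type": "string" },
    "story": { "type": "string" }
  },
  "required": ["driver", "team", "story"]
}
```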
Node Output
Output: The response generated by the LLM based on the provided instructions, tools, and datasources.
Example Output:
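An illustrative output matching the hypothetical schema above might look like the following; the values are made up.

```json
{
  "driver": "Lando Norris",
  "team": "McLaren",
  "story": "In the final race of the season, the title slipped away on the last lap..."
}
```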
Node Functionality
The Agent node:
- Leverages LLMs for dynamic task processing and natural language interactions.
- Integrates tools and datasources to expand functionality and enrich outputs.
- Offers customization through advanced LLM settings for tailored responses.
- Supports structured outputs via JSON schema for compatibility with other systems.
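As a minimal sketch of how a structured output could be checked against its schema downstream, the snippet below uses the third-party `jsonschema` package; the schema, the payload, and the use of `jsonschema` itself are assumptions for illustration, and the platform may handle validation internally.

```python
# Minimal sketch: validating a structured response against a JSON schema
# using the third-party `jsonschema` package. Schema and payload are illustrative.
import json
from jsonschema import validate

schema = {
    "type": "object",
    "properties": {
        "driver": {"type": "string"},
        "team": {"type": "string"},
        "story": {"type": "string"},
    },
    "required": ["driver", "team", "story"],
}

raw_response = '{"driver": "Lando Norris", "team": "McLaren", "story": "..."}'
validate(instance=json.loads(raw_response), schema=schema)  # raises ValidationError on mismatch
```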