This document explains the LLM node, a versatile tool for interacting with a Large Language Model (LLM) across diverse natural language processing tasks.
The LLM node allows users to input specific instructions, which are processed by a language model to generate text, provide analysis, or execute defined tasks.
Instructions
Provide specific instructions to the model. Use dynamic variables ({{myVariable}}) to personalize the input, as illustrated in the sketch after the examples.
Examples:
"Write a story about F1 driver loosing the championship for this driver {{driver}} with the team {{team}}"
"Summarize the provided text in 100 words."
System Message (Optional)
Set the context for the AI model to operate under.
Example: "You are a data analyst specializing in market trends."
Model
Choose the LLM to use.
Example: "OPENAI - gpt-4o-2024-05-13"
Temperature
Controls the randomness of the output:
0.2: Deterministic and focused.
0.6: Balanced between creativity and precision (default).
0.8: More creative and exploratory.
Max Tokens
Defines the length of the response.
Example: 1000 tokens.
Top K
Limits the selection to the K most likely tokens.
Example: 40
Top P
Samples from the smallest set of tokens whose cumulative probability reaches P.
Default: 1 (uses the full probability distribution).
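To make these parameters concrete, the toy routine below samples one token from a small logit vector with NumPy, applying temperature scaling, then top K filtering, then top P (nucleus) filtering. It is only an illustration of how the settings interact; the provider's actual sampling implementation may differ.

```python
import numpy as np

def sample_token(logits, temperature=0.6, top_k=40, top_p=1.0, rng=None):
    rng = rng or np.random.default_rng()

    # Temperature: lower values sharpen the distribution (more deterministic).
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    # Top K: keep only the K most likely tokens.
    if top_k and top_k < len(probs):
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)

    # Top P: keep the smallest set of tokens whose cumulative probability
    # reaches top_p. With top_p = 1 the full distribution is used.
    if top_p < 1.0:
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cumulative, top_p) + 1]
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        probs = filtered

    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary of 5 tokens: with temperature 0.2 the highest-logit token
# is chosen almost every time.
print(sample_token([2.0, 1.0, 0.5, 0.1, -1.0], temperature=0.2, top_k=3))
```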
Output
The LLM’s response based on the provided instructions.
Reflection Agent
The Reflection Agent works as a guide to refine AI-generated content by providing constructive feedback. Typical review instructions include the following, with an orchestration sketch after the examples.
Content Analysis
Instruction: "Analyze if the text aligns with the user's search intent."
SEO Suggestions
Instruction: "Recommend additional keywords for optimizing this content for search engines."
Structural Review
Instruction: "Check if the content is well-structured with clear headings and subheadings."
The LLM node is adaptable for a wide range of use cases, making it an essential tool for generating, refining, and analyzing content.