Node Description

The LLM node allows users to input specific instructions, which are processed by a language model to generate text, provide analysis, or execute defined tasks.


Key Features

  • Customizable Instructions: Tailor the model’s response with detailed instructions and dynamic variables (e.g., {{myVariable}}).
  • Adjustable AI Parameters: Fine-tune model behavior by configuring temperature, token limits, and more.
  • Versatile Applications: Use for text generation, analysis, summarization, and other NLP tasks.

Node Inputs

Required Fields

Instructions
Provide specific instructions to the model. Use dynamic variables ({{myVariable}}) to personalize input.
Examples:

  • "Write a story about the F1 driver {{driver}} losing the championship with the team {{team}}."
  • "Summarize the provided text in 100 words."
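Placeholder substitution like the examples above can be sketched in a few lines of Python; `render_instructions` is an illustrative helper, not part of the node's API:

```python
import re

def render_instructions(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`."""
    def substitute(match):
        name = match.group(1).strip()
        # Leave unknown placeholders intact rather than erasing them.
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

prompt = render_instructions(
    "Write a story about the F1 driver {{driver}} losing the championship "
    "with the team {{team}}.",
    {"driver": "Lando", "team": "McLaren"},
)
```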

Optional Fields

System Message
Set the context for the AI model to operate under.
Example: "You are a data analyst specializing in market trends."
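Under the hood, the system message and the instructions are typically sent to the model as separate chat messages. A minimal sketch of that structure (the exact wire format depends on the model provider; the node assembles this for you):

```python
# Illustrative chat payload showing how the system message frames the
# instructions; not the node's actual request format.
messages = [
    {"role": "system", "content": "You are a data analyst specializing in market trends."},
    {"role": "user", "content": "Summarize the provided text in 100 words."},
]
```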


AI Settings

Model

Choose the LLM to use.
Example: "OPENAI - gpt-4o-2024-05-13"

Temperature

Controls the randomness of the output:

  • 0.2: Deterministic and focused.
  • 0.6: Balanced between creativity and precision (default).
  • 0.8: More creative and exploratory.
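Conceptually, temperature rescales the model's logits before sampling: low values sharpen the distribution toward the most likely token, high values flatten it. A minimal softmax sketch (illustrative only, not the node's implementation):

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.
    Low temperature sharpens the distribution; high temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]
cold = apply_temperature(logits, 0.2)  # nearly deterministic
warm = apply_temperature(logits, 0.8)  # more exploratory
```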

Max Output Tokens

Sets the maximum number of tokens the model may generate in its response.
Example: 1000 tokens.

Top K

Limits the selection to the top K most likely tokens.
Example: 40

Top P

Restricts sampling to the smallest set of tokens whose cumulative probability reaches P (nucleus sampling).
Default: 1 (uses full probability distribution).
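Top K and Top P both narrow the candidate pool before a token is sampled. A rough sketch of how the two filters combine; `filter_tokens` is an illustrative function, not the node's implementation:

```python
def filter_tokens(probs, top_k=40, top_p=1.0):
    """Keep the K most likely tokens, then the smallest prefix of them whose
    cumulative probability reaches top_p; renormalize what remains."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}
```

With `top_p=1` (the default), only the Top K cutoff has any effect, which matches the "full probability distribution" behavior described above.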


Node Output

Output

The LLM’s response based on the provided instructions.

Example Output:

{
  "summary": "AI tools streamline workflows and enhance productivity in various industries.",
  "keywords": ["AI tools", "productivity", "workflows"]
}
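When the instructions ask for JSON, the output still arrives as a string and may occasionally be malformed, so downstream steps should parse it defensively. An illustrative sketch:

```python
import json

raw_output = """{
  "summary": "AI tools streamline workflows and enhance productivity in various industries.",
  "keywords": ["AI tools", "productivity", "workflows"]
}"""

try:
    parsed = json.loads(raw_output)
except json.JSONDecodeError:
    # Fall back to treating the response as plain text.
    parsed = {"summary": raw_output, "keywords": []}
```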

Reflection Agent Integration

Purpose

The Reflection Agent reviews AI-generated content and provides constructive feedback so the content can be refined.

Best Practices

  • Start with broad guidance and refine with focused instructions.
  • Ensure instructions are clear and do not contradict one another.
  • Use feedback iteratively for improved content quality.
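The iterative feedback loop can be sketched as two alternating calls; `generate` and `reflect` below stand in for the LLM node and the Reflection Agent and are hypothetical placeholders:

```python
def refine(generate, reflect, prompt, rounds=2):
    """Draft, critique, and redraft.
    `generate` and `reflect` are placeholders for the LLM node
    and Reflection Agent calls respectively."""
    draft = generate(prompt)
    for _ in range(rounds):
        feedback = reflect(draft)
        draft = generate(f"{prompt}\n\nRevise using this feedback:\n{feedback}")
    return draft
```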

Example Usage

  1. Content Analysis
    Instruction: "Analyze if the text aligns with the user's search intent."

  2. SEO Suggestions
    Instruction: "Recommend additional keywords for optimizing this content for search engines."

  3. Structural Review
    Instruction: "Check if the content is well-structured with clear headings and subheadings."

The LLM node is adaptable for a wide range of use cases, making it an essential tool for generating, refining, and analyzing content.
