What does this node do?
The LLM (Large Language Model) node lets you generate, transform, and analyze text using AI models like GPT, Claude, or Gemini. It’s the foundation for AI-powered workflows.

Common uses:
- Generate content (articles, summaries, emails)
- Analyze and extract information
- Transform text formats
- Answer questions about data
Quick setup

1. Add the LLM node: find it in the node library under AI Nodes → LLM.
2. Write your instructions: tell the AI what to do. Be specific and clear.
3. Choose a model: select from GPT, Claude, Gemini, etc.
4. Connect and run: connect inputs and run your workflow.
Configuration

Required fields
Instructions: the prompt that tells the AI what to do. This is the most important field.

Tips for good instructions:
- Be specific about what you want
- Provide context and examples
- Specify the output format
- Use variables to include dynamic data
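For example, an instruction that applies these tips (the {{input.text}} variable is hypothetical; use whatever your upstream node actually provides):

```
Summarize the following article in exactly 3 bullet points for a non-technical audience:

{{input.text}}
```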
Optional fields
System message: sets the AI’s persona and behavior context. Example:
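An illustrative system message (not from the original docs; adapt to your use case):

```
You are a senior technical editor. You write concise, plain-English explanations and never use marketing language.
```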
AI settings
Model: the AI model to use.
| Model | Best for | Speed | Cost |
|---|---|---|---|
| GPT | Complex tasks, reasoning | Medium | High |
| Claude | Long content, nuance | Medium | Medium |
| Gemini | Google integration | Fast | Medium |
Temperature: controls randomness in the output.
| Value | Behavior |
|---|---|
| 0.0-0.3 | Focused, consistent |
| 0.4-0.6 | Balanced (default) |
| 0.7-1.0 | Creative, varied |
max_output_tokens: the maximum length of the response, in tokens. As a rough rule, one token is about 0.75 English words:
- 500 tokens ≈ 375 words
- 1000 tokens ≈ 750 words
- 4000 tokens ≈ 3000 words
JSON schema: define a JSON schema to get structured output. Example:
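A minimal illustrative schema (the field names are hypothetical):

```json
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] }
  },
  "required": ["summary", "sentiment"]
}
```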
Output
The node returns the AI’s response, which later nodes can reference as:

{{LLM_0.response}}
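For example, a downstream node’s text field could embed it directly (the LLM_0 name comes from this node; yours may differ):

```
Write a one-line tweet announcing this summary: {{LLM_0.response}}
```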
Examples
Content summarization
Instructions:
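An illustrative prompt:

```
Summarize the following text in 3-5 sentences. Keep key facts and figures; drop opinions and filler.
```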
Data extraction

Instructions:
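An illustrative prompt, which pairs well with a JSON schema:

```
Extract the sender name, company, and requested meeting date from this email. Return only valid JSON with the keys "name", "company", and "date".
```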
Content generation

Instructions:
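An illustrative prompt:

```
Write a friendly 100-word product update email announcing our new dashboard. Audience: existing customers. Tone: helpful, not salesy.
```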
Best practices

Write effective prompts
A good prompt is specific about the task, the audience, and the output format; a bad prompt leaves the AI to guess at all three.
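An illustrative comparison:

```
Good: List the 5 most common complaints in these reviews as short bullet points, most frequent first.

Bad:  Look at these reviews and tell me what you think.
```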
Use system messages
Set context for better results:
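For example (illustrative):

```
System message: You are a data analyst. Answer with numbers and short justifications only.
```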
Request structured output

For processing in subsequent nodes, request JSON:
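An illustrative instruction:

```
Return only valid JSON in this exact shape, with no extra text: {"title": "...", "tags": ["..."], "summary": "..."}
```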
Handle long content

For content that might exceed token limits:
- Chunk content before sending
- Process chunks separately
- Merge results
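The pattern is tool-agnostic; if you pre-process text in code before it reaches the node, a minimal Python sketch might look like this (the file name and the summarize() stand-in are hypothetical):

```python
# Sketch of the chunk -> process -> merge pattern (illustrative only).

def summarize(text: str) -> str:
    # Stand-in for a single LLM node call; replace with a real request.
    return text[:200]

def chunk(text: str, size: int = 3000) -> list[str]:
    # Split on paragraph boundaries, packing paragraphs up to `size` characters
    # so each chunk stays coherent and under the token limit.
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > size:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks

long_text = open("article.txt").read()              # any long document
partial = [summarize(c) for c in chunk(long_text)]  # process chunks separately
merged = summarize("\n\n".join(partial))            # merge results in a final pass
print(merged)
```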
Common issues
Response is cut off
Increase max_output_tokens. The default may be too low for long outputs.
JSON output is invalid
- Use a JSON schema in settings
- Add “Return only valid JSON” to instructions
- Lower temperature for more consistent formatting
Results are too random
Lower the temperature (0.2-0.4) for more consistent outputs.
Response doesn't follow instructions
- Be more specific in your instructions
- Add examples of expected output
- Use a system message to set context

