Overview

Draft & Goal supports multiple AI model providers, giving you flexibility to choose the best models for your workflows. You can enable built-in providers or add custom providers by configuring API keys. Available providers:
  • Built-in providers: Google Vertex AI and AWS Bedrock (pre-configured)
  • Custom providers: OpenAI, Anthropic, Google Gemini, and OVHcloud (require an API key)
[Screenshot: AI Models configuration page]

Accessing AI Models

Navigate to Settings → Models in the studio to view and manage your AI model providers. The Models page displays:
  • All available model providers
  • Individual models within each provider
  • Activation status for each model
  • Configuration options for custom providers

Built-in providers

Google Vertex AI

Google Vertex AI models are pre-configured and available by default:
  • claude-opus-4-1@20250805
  • claude-sonnet-4-5@20250929
  • claude-sonnet-4@20250514
  • gemini-2.5-flash
  • gemini-2.5-flash-image
  • gemini-2.5-pro
  • gemini-3-pro-preview
  • imagen-4.0-generate-001
  • veo-3.0-generate-001
[Screenshot: Google Vertex AI models list with toggle switches]
Vertex AI models are available immediately and don’t require additional configuration. Simply toggle the models you want to use.

AWS Bedrock

AWS Bedrock models are also pre-configured and ready to use. Enable the models your team needs from the list.

Enabling and disabling models

Each model can be individually enabled or disabled:
  1. Navigate to the provider section
  2. Click the toggle switch next to each model
  3. Enabled models appear in blue
  4. Changes are saved automatically
Disable unused models to keep your model selection list clean in workflow nodes.

Adding custom providers

Add additional model providers by configuring their API keys.

OpenAI

To use OpenAI models (GPT-4, GPT-3.5, etc.):
  1. Click Configure on the OpenAI card
  2. Enter your OpenAI API key
  3. Click Save
Getting an OpenAI API key:
  1. Go to platform.openai.com
  2. Navigate to API Keys
  3. Create a new key
  4. Copy and paste into Draft & Goal
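If the key is rejected later, it helps to confirm it works outside the studio first. The following is an optional, minimal sketch (not part of Draft & Goal), assuming Python with the requests package installed and the key exported as an OPENAI_API_KEY environment variable; it calls OpenAI's model-listing endpoint, which only succeeds with a valid key.

```python
# Optional check: list models with the key before pasting it into Draft & Goal.
# Assumes `requests` is installed and the key is exported as OPENAI_API_KEY.
import os
import requests

api_key = os.environ["OPENAI_API_KEY"]
resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=10,
)

if resp.ok:
    print("Key accepted. Models visible to this key:", len(resp.json()["data"]))
else:
    print("Key rejected:", resp.status_code, resp.text)
```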

Anthropic

To use Anthropic models (Claude):
  1. Click Configure on the Anthropic card
  2. Enter your Anthropic API key
  3. Click Save
[Screenshot: Anthropic API key configuration dialog]
Getting an Anthropic API key:
  1. Go to console.anthropic.com
  2. Navigate to API Keys
  3. Create a new key
  4. Copy and paste into Draft & Goal
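As with OpenAI, you can optionally verify the key outside the studio before saving it. The sketch below assumes Python with requests and the key exported as ANTHROPIC_API_KEY (an environment variable name chosen here for illustration); it calls Anthropic's model-listing endpoint.

```python
# Optional check: list the models visible to an Anthropic API key.
# Assumes `requests` is installed and the key is exported as ANTHROPIC_API_KEY.
import os
import requests

api_key = os.environ["ANTHROPIC_API_KEY"]
resp = requests.get(
    "https://api.anthropic.com/v1/models",
    headers={
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
    },
    timeout=10,
)

if resp.ok:
    print("Key accepted. Models visible to this key:", len(resp.json()["data"]))
else:
    print("Key rejected:", resp.status_code, resp.text)
```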

Google Gemini

To use Google Gemini models via Google AI Studio:
  1. Click Configure on the Gemini card
  2. Enter your Google AI API key
  3. Click Save
[Screenshot: Google Gemini API key configuration dialog]
Getting a Google AI API key:
  1. Go to Google AI Studio (aistudio.google.com)
  2. Navigate to Get API Key
  3. Create a new key
  4. Copy and paste into Draft & Goal
This is different from Google Vertex AI. Vertex AI uses your Google Cloud credentials, while Gemini uses Google AI Studio keys.
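To confirm a Google AI Studio key before saving it, you can optionally query the Gemini model list directly. This sketch assumes Python with requests and the key exported as GOOGLE_AI_API_KEY (a name used here for illustration).

```python
# Optional check: list Gemini models available to a Google AI Studio key.
# Assumes `requests` is installed and the key is exported as GOOGLE_AI_API_KEY.
import os
import requests

api_key = os.environ["GOOGLE_AI_API_KEY"]
resp = requests.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    params={"key": api_key},
    timeout=10,
)

if resp.ok:
    print("Key accepted. Gemini models available:", len(resp.json()["models"]))
else:
    print("Key rejected:", resp.status_code, resp.text)
```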

OVHcloud

To use OVHcloud AI models:
  1. Click Configure on the OVH card
  2. Enter your OVHcloud API key
  3. Click Save
[Screenshot: OVHcloud API key configuration dialog]
Getting an OVHcloud API key:
  1. Go to your OVHcloud console
  2. Navigate to AI Services
  3. Generate an API key
  4. Copy and paste into Draft & Goal
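OVHcloud AI Endpoints are typically exposed through an OpenAI-compatible API, so a similar key check is possible. The sketch below is hypothetical: OVH_AI_API_KEY and OVH_AI_ENDPOINT_URL are placeholder environment variables, and the base URL must be the one shown for your endpoint in the OVHcloud console.

```python
# Hypothetical check for an OVHcloud AI Endpoints key against an
# OpenAI-compatible endpoint. OVH_AI_ENDPOINT_URL is a placeholder:
# copy the real base URL for your endpoint from the OVHcloud console.
import os
import requests

api_key = os.environ["OVH_AI_API_KEY"]
base_url = os.environ["OVH_AI_ENDPOINT_URL"]  # OpenAI-compatible base URL from the console

resp = requests.get(
    f"{base_url}/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=10,
)

print("Status:", resp.status_code)
print(resp.text[:500])
```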

Using models in workflows

Once models are enabled, they become available in AI nodes:
  1. Add an LLM or AI Agent node to your workflow
  2. Open the node settings
  3. In the Model dropdown, select from available models
  4. Models are organized by provider
The model selector shows:
  • Provider name (e.g., OPENAI, GOOGLE_VERTEXAI, ANTHROPIC)
  • Model name (e.g., gpt-4o, claude-sonnet-4-5)
Only enabled models appear in the list.

Best practices

Security

  • Protect API keys: Never share API keys in workflow descriptions or public documentation
  • Use workspace keys: Configure keys at the workspace level so individual users don’t need their own
  • Rotate regularly: Update API keys periodically for security
  • Monitor usage: Track API usage to detect unauthorized access

Cost management

  • Start small: Begin with faster, cheaper models for testing
  • Enable selectively: Only enable models your team actually uses
  • Monitor spend: Most providers offer usage dashboards
  • Set limits: Configure spending limits with your provider

Model selection

  • Match task to model: Use simpler models for simple tasks
  • Test different models: Compare results across providers
  • Consider latency: Faster models improve workflow execution time
  • Check availability: Ensure models are available in your region

Organization

Best practice | Why it matters
Document which models to use | Team consistency
Disable unused models | Cleaner selection lists
Monitor deprecations | Stay updated on model changes
Test before deploying | Ensure compatibility

Troubleshooting

API key rejected

Issue | Solution
Invalid API key | Verify the key is copied correctly, without extra spaces
Key expired | Generate a new key from the provider
Insufficient permissions | Ensure the key has the correct scopes/permissions
Billing issue | Check that your provider account has an active payment method

Model not appearing

If a model doesn’t appear in workflow nodes:
  1. Verify the model is enabled (toggle is blue)
  2. Check that the provider is configured correctly
  3. Refresh the workflow editor
  4. Clear browser cache if needed

Rate limits

If you encounter rate limit errors:
  1. Check your provider’s rate limits
  2. Upgrade your provider plan if needed
  3. Implement retry logic in workflows (see the sketch below)
  4. Distribute requests across multiple models
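Retry logic usually means backing off and retrying when the provider returns HTTP 429. Below is a minimal illustrative sketch in Python, not a Draft & Goal feature; how you actually add retries depends on where the provider is called from.

```python
# Minimal exponential-backoff sketch for handling HTTP 429 (rate limit) responses.
import time
import requests

def call_with_retries(url, headers, max_attempts=5):
    """Retry a GET request on 429 responses, doubling the wait each attempt."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code != 429:
            return resp
        print(f"Rate limited (attempt {attempt}), retrying in {delay:.0f}s")
        time.sleep(delay)
        delay *= 2
    return resp  # last response, still rate limited
```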

Connection errors

If you can’t connect to a provider:
  1. Verify your API key is correct
  2. Check your internet connection
  3. Ensure the provider service is operational
  4. Contact the provider’s support if issues persist
API keys provide full access to your provider account. If a key is compromised, delete it immediately and generate a new one.

Provider comparison

Different providers have different strengths:
Provider | Best for | Strengths | Considerations
OpenAI | General purpose, GPT models | Broad capabilities, well documented | Higher cost for advanced models
Anthropic | Long content, nuanced tasks | Large context windows, safety features | Limited model selection
Google Vertex AI | Enterprise, Google Cloud | Integrated with GCP, diverse models | Requires GCP setup
Google Gemini | Quick prototyping | Easy setup, multimodal | Rate limits on free tier
AWS Bedrock | Enterprise, AWS users | Multiple models, AWS integration | Requires AWS setup
OVHcloud | European users, GDPR | European hosting, data residency | Smaller model selection

Next steps