Overview
Draft & Goal helps data teams automate reporting, synchronize data between systems, and build data pipelines without code.

Common data workflows
Automated reporting
Generate reports on a schedule (a sketch of the equivalent logic follows the list):
- Pull data from multiple sources
- Transform and aggregate
- Generate insights with AI
- Deliver via email or Slack
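If you want to prototype the reporting logic outside the visual builder, here is a minimal Python sketch of the same flow. The sample rows, the rule-based summary, and the `SLACK_WEBHOOK_URL` environment variable are placeholders for illustration, not Draft & Goal APIs.

```python
"""Weekly report sketch: pull, aggregate, summarise, deliver.

Placeholder data stands in for the real connectors; only the overall
shape of the pipeline mirrors the workflow described above.
"""
import os
from statistics import mean


def pull_sources():
    # Stand-in rows; a real pipeline would query Analytics / BigQuery here.
    sessions = [{"day": "Mon", "sessions": 1200}, {"day": "Tue", "sessions": 1350}]
    revenue = [{"day": "Mon", "revenue": 8400.0}, {"day": "Tue", "revenue": 9100.0}]
    return sessions, revenue


def aggregate(sessions, revenue):
    return {
        "avg_sessions": mean(r["sessions"] for r in sessions),
        "total_revenue": sum(r["revenue"] for r in revenue),
    }


def summarise(metrics):
    # Rule-based stand-in for the AI insight step.
    return (
        f"Weekly KPIs: avg sessions {metrics['avg_sessions']:.0f}, "
        f"revenue {metrics['total_revenue']:.0f}."
    )


def deliver(text):
    webhook = os.environ.get("SLACK_WEBHOOK_URL")  # hypothetical config value
    if webhook:
        import requests  # third-party; only needed when actually posting
        requests.post(webhook, json={"text": text}, timeout=10)
    else:
        print(text)  # fall back to stdout when no webhook is configured


if __name__ == "__main__":
    deliver(summarise(aggregate(*pull_sources())))
```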
Data synchronization
Keep systems in sync (sketched after the list):
- Export from source system
- Transform data format
- Load into destination
- Verify and log results
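As a rough illustration of the sync pattern, here is a stdlib-only Python sketch. The exported rows and the in-memory SQLite destination stand in for your real source and destination systems.

```python
"""Sync sketch: export, transform, load, verify, using stdlib stand-ins."""
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sync")


def export_source():
    # Stand-in for the source system (e.g. a CRM export).
    return [{"Email": "a@example.com", "Full Name": "Ada"},
            {"Email": "b@example.com", "Full Name": "Bob"}]


def transform(rows):
    # Map the source schema onto the destination schema.
    return [(r["Email"].lower(), r["Full Name"]) for r in rows]


def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS contacts (email TEXT PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT OR REPLACE INTO contacts VALUES (?, ?)", rows)
    conn.commit()


def verify(rows, conn):
    count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
    log.info("synced %d rows, destination now has %d", len(rows), count)


if __name__ == "__main__":
    db = sqlite3.connect(":memory:")  # destination stand-in
    exported = export_source()
    load(transform(exported), db)
    verify(exported, db)
```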
Data enrichment
Add value to raw data (see the sketch after this list):
- Fetch records from database
- Enrich with external data
- Apply AI analysis
- Write back enriched data
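A minimal sketch of the enrichment loop, with a local lookup table standing in for an external company-data API and a simple rule standing in for the AI analysis step:

```python
"""Enrichment sketch: fetch, enrich, analyse, write back (all stand-ins)."""

# Fetched records, standing in for a database query.
records = [{"id": 1, "domain": "acme.io"}, {"id": 2, "domain": "globex.com"}]

# External enrichment data, standing in for a company-data API.
company_info = {"acme.io": {"employees": 40}, "globex.com": {"employees": 900}}


def enrich(record):
    extra = company_info.get(record["domain"], {})
    record.update(extra)
    # Rule-based stand-in for the AI analysis step.
    record["segment"] = "enterprise" if extra.get("employees", 0) > 500 else "smb"
    return record


enriched = [enrich(r) for r in records]
print(enriched)  # a real pipeline would write these rows back to the database
```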
Monitoring and alerts
Track metrics and alert on anomalies (sketched below):
- Query metrics regularly
- Compare to thresholds
- Detect anomalies
- Send alerts
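The checks behind this workflow can be prototyped in a few lines. The sketch below assumes a daily metric series and uses a fixed threshold plus a 3-sigma rule as a stand-in for more sophisticated anomaly detection.

```python
"""Alerting sketch: threshold and simple z-score anomaly check on a metric."""
from statistics import mean, pstdev

THRESHOLD = 500  # alert if the latest value drops below this (example value)


def check(series):
    latest = series[-1]
    alerts = []
    if latest < THRESHOLD:
        alerts.append(f"latest value {latest} is below threshold {THRESHOLD}")
    mu, sigma = mean(series[:-1]), pstdev(series[:-1])
    if sigma and abs(latest - mu) > 3 * sigma:
        alerts.append(f"latest value {latest} is >3 sigma from recent mean {mu:.0f}")
    return alerts


if __name__ == "__main__":
    daily_signups = [610, 590, 640, 600, 180]  # sample metric history
    for message in check(daily_signups):
        print("ALERT:", message)  # swap for Slack or email delivery in production
```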
Available data nodes
Google Workspace
| Node | Description |
|---|---|
| Google Sheets | Read/write spreadsheets |
| Google BigQuery Reader | Query data warehouse |
| Google BigQuery Writer | Insert data to BigQuery |
| Google Analytics | Web analytics data |
| Google Slides | Create presentations |
Transformation
| Node | Description |
|---|---|
| JSON Path Extractor | Extract from JSON |
| Merge | Populate templates with data |
| Find & Replace | Transform text |
| Loop | Process arrays |
External APIs
| Node | Description |
|---|---|
| API Connector | Pre-built API integrations |
Example: Weekly KPI dashboard
Automatically generate a weekly KPI dashboard.

What it does
- Fetch data from Analytics, BigQuery, and HubSpot
- Merge into a unified dataset
- Analyze with AI to generate insights
- Create slides with charts and commentary
- Email to stakeholders automatically
Time saved
| Task | Manual | Automated |
|---|---|---|
| Data collection | 2 hours | 0 |
| Analysis | 3 hours | 5 minutes |
| Report creation | 2 hours | 0 |
| Distribution | 30 minutes | 0 |
| Total | 7.5 hours | 5 minutes |
Data pipeline patterns
- ETL (Extract, Transform, Load): extract data from a source, transform its shape or values, then load it into a destination.
- Event-driven pipelines: run when a trigger fires, such as a webhook, a new row, or a file upload.
- Scheduled batch processing: run on a fixed schedule and process everything that accumulated since the last run.
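These patterns share one skeleton. The sketch below shows an ETL `run()` that could be invoked from a scheduler (batch) or a webhook handler (event-driven); the inline CSV and in-memory SQLite database are stand-ins for real source and destination systems.

```python
"""ETL skeleton: the same run() works for scheduled batch or event-driven use."""
import csv
import io
import sqlite3

SOURCE_CSV = "order_id,amount\n1,19.99\n2,5.00\n"  # stand-in for the extract step


def extract():
    return list(csv.DictReader(io.StringIO(SOURCE_CSV)))


def transform(rows):
    # Normalise types and drop obviously bad rows.
    return [(int(r["order_id"]), float(r["amount"])) for r in rows if float(r["amount"]) > 0]


def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)
    conn.commit()


def run(conn):
    load(transform(extract()), conn)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    run(conn)  # call this from a scheduler (batch) or a webhook handler (event-driven)
    print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0], "rows loaded")
```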
Best practices
Error handling
Always handle data pipeline failures (a retry-and-fallback sketch follows the list):
- Log all operations
- Alert on failures
- Implement retries
- Have fallback paths
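One way to prototype this outside the builder is a small retry wrapper. The flaky step and fallback below are illustrative, not part of Draft & Goal; the error log line stands in for an alerting hook.

```python
"""Failure handling sketch: log every attempt, retry with backoff, then fall back."""
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


def with_retries(step, fallback, attempts=3, delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            result = step()
            log.info("step succeeded on attempt %d", attempt)
            return result
        except Exception:
            log.exception("attempt %d/%d failed", attempt, attempts)
            time.sleep(delay * attempt)  # simple linear backoff
    log.error("all attempts failed, using fallback path")  # wire this to alerting
    return fallback()


if __name__ == "__main__":
    flaky = iter([RuntimeError("timeout"), RuntimeError("timeout"), {"rows": 42}])

    def step():
        item = next(flaky)
        if isinstance(item, Exception):
            raise item
        return item

    print(with_retries(step, fallback=lambda: {"rows": 0}, delay=0.1))
```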
Data validation
Validate data quality (sketched after this list):
- Check for nulls
- Verify data types
- Validate ranges
- Flag anomalies
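A sketch of these checks over a single numeric field; the field name, range limits, and 3-sigma anomaly rule are example choices, not fixed requirements.

```python
"""Validation sketch: null, type, range, and anomaly checks on incoming rows."""
from statistics import mean, pstdev


def validate(rows, field="amount", low=0, high=10_000):
    problems = []
    values = []
    for i, row in enumerate(rows):
        value = row.get(field)
        if value is None:
            problems.append((i, "null value"))
        elif not isinstance(value, (int, float)):
            problems.append((i, f"wrong type: {type(value).__name__}"))
        elif not (low <= value <= high):
            problems.append((i, f"out of range: {value}"))
        else:
            values.append((i, value))
    # Flag statistical outliers among the otherwise valid values.
    if len(values) >= 3:
        mu, sigma = mean(v for _, v in values), pstdev(v for _, v in values)
        problems += [(i, f"anomaly: {v}") for i, v in values if sigma and abs(v - mu) > 3 * sigma]
    return problems


if __name__ == "__main__":
    rows = [{"amount": 20}, {"amount": None}, {"amount": "12"}, {"amount": -5}, {"amount": 25}]
    for index, issue in validate(rows):
        print(f"row {index}: {issue}")
```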
Incremental processing
For large datasets (see the sketch after this list):
- Track last processed timestamp
- Only fetch new records
- Maintain state between runs
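A sketch of the high-water-mark approach, using a local JSON file to persist state between runs; in practice the cursor is often stored in the destination system itself.

```python
"""Incremental sketch: persist a high-water mark and fetch only newer records."""
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("last_run.json")  # state kept between runs (local file for the sketch)


def load_cursor():
    if STATE_FILE.exists():
        return datetime.fromisoformat(json.loads(STATE_FILE.read_text())["cursor"])
    return datetime(1970, 1, 1, tzinfo=timezone.utc)  # first run: process everything


def save_cursor(cursor):
    STATE_FILE.write_text(json.dumps({"cursor": cursor.isoformat()}))


def fetch_since(cursor, records):
    # Stand-in for a query like: SELECT * FROM t WHERE updated_at > :cursor
    return [r for r in records if r["updated_at"] > cursor]


if __name__ == "__main__":
    source = [
        {"id": 1, "updated_at": datetime(2024, 5, 1, tzinfo=timezone.utc)},
        {"id": 2, "updated_at": datetime(2024, 5, 2, tzinfo=timezone.utc)},
    ]
    new_rows = fetch_since(load_cursor(), source)
    print(f"processing {len(new_rows)} new rows")
    if new_rows:
        save_cursor(max(r["updated_at"] for r in new_rows))
```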
Getting started
1. Connect your data sources: set up integrations for Google, databases, and APIs.
2. Define your output: decide where results should go (Sheets, BigQuery, etc.).
3. Build the pipeline: connect nodes to transform data.
4. Schedule execution: set up recurring runs.

