Looker BI from the IDE
From CRM schema discovery to production Looker dashboards — without leaving your IDE. Spec-first provisioning, iterative editing, and programmatic automation replace the traditional multi-month, multi-role BI project.
Timelines are engagement estimates including discovery, pipeline setup, data modeling, and stakeholder review. The CLI itself provisions dashboards in seconds.
The Problem
Starting from scratch
Standing up BI from zero requires data engineers for pipelines, analytics engineers for modeling, and BI developers for dashboards. Three roles working sequentially. Months to first dashboard. Every change goes through the same slow cycle.
Already have Looker
Even orgs with existing Looker instances spend weeks on every new dashboard. Manual tile layout in the GUI, no version control on dashboard definitions, no dry-run preview, changes coupled to a BI admin's calendar.
Spec-First Dashboard Provisioning
Dashboards are defined as version-controlled YAML specs — tiles, filters, layout, drill-throughs. The CLI applies these specs through Looker's programmatic API with deterministic upsert logic. --dry-run previews mutations before writing. Merge mode adds without touching unmanaged elements. Replace mode requires explicit --force.
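As a sketch of what such a spec might contain — the exact schema is illustrative, and field names like tiles, filters, and mode are assumptions rather than the CLI's documented format — a renewals dashboard could be declared like this:

```yaml
# Illustrative dashboard spec (field names assumed; not the CLI's documented schema)
title: Renewals Pipeline
space_id: 123
mode: merge                # merge preserves unmanaged elements; replace requires --force
tiles:
  - title: Monthly Renewal ARR
    type: column
    explore: analytics/renewals
  - title: Renewed ARR
    type: single_value
    explore: analytics/renewals
  - title: At-Risk
    type: table
    explore: analytics/renewals
filters:
  - name: fiscal_quarter
  - name: owner
```

Because the spec lives in version control, dashboard changes go through the same review workflow as any other code change.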
How It Works
Six steps from CRM data to production dashboards in Looker.
Discover Schema
Connect to Salesforce and map objects, fields, relationships, and picklist values
Build Data Pipeline
Extract from CRM and load into BigQuery with incremental refresh schedules
Model Analytics Layer
Write SQL transformations: staging, business logic, and metric definitions
Define Semantic Layer
Generate LookML models: dimensions, measures, explores, and join logic
Provision Dashboards
Apply YAML specs via Looker API: tiles, filters, layout, drill-throughs
Iterate in Minutes
Add tiles, modify filters, refine layout — all from the IDE
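The pipeline step above is configuration-driven. A minimal sketch of an extract-and-load config with an incremental refresh schedule might look like the following — every field name here is an assumption for illustration, not the tool's documented schema (SystemModstamp is Salesforce's standard last-modified field):

```yaml
# Illustrative pipeline config (schema assumed for illustration)
source:
  type: salesforce
  sobjects: [Opportunity, Account]
sink:
  type: bigquery
  dataset: crm_raw
refresh:
  mode: incremental            # only pull rows changed since the last run
  cursor_field: SystemModstamp
  schedule: "0 */6 * * *"      # every six hours
```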
How It Executes
RevOps needs a renewals dashboard in Looker. Let me verify the Looker connection and discover the Salesforce schema.
g-gremlin looker doctor --json
✓ Connection: OK | Instance: client.cloud.looker.com | API: 4.0 | Permissions: admin
g-gremlin sfdc describe --sobject Opportunity --fields-only
✓ 52 fields mapped. Key: StageName, Amount, CloseDate, OwnerId, ForecastCategory
g-gremlin looker describe models --json
✓ Models: analytics (6 explores), marketing (3 explores)
g-gremlin looker describe explores --model analytics --json
✓ Explores: renewals, pipeline, retention, forecast, accounts, opportunities
Schema mapped, Looker connected, analytics model ready. Provisioning renewals dashboard from YAML spec.
g-gremlin looker dashboard create --spec config/bi/looker/renewals.dashboard.yaml --dry-run --json
[Dry run] Would create "Renewals Pipeline" in space 123
3 tiles: Monthly Renewal ARR (column), Renewed ARR (single_value), At-Risk (table)
2 filters: fiscal_quarter, owner
Mode: merge | No changes written
g-gremlin looker dashboard create --spec config/bi/looker/renewals.dashboard.yaml --json
✓ Dashboard created: id=456 "Renewals Pipeline"
3 tiles created | 2 filters applied
URL: https://client.cloud.looker.com/dashboards/456
g-gremlin looker dashboard add-filter --dashboard-id 456 --spec fiscal_year.yaml --json
✓ Filter added: fiscal_year | Dashboard 456 now has 3 filters
Renewals dashboard live in Looker. 3 tiles, 3 filters. Next: provision remaining 5 dashboards from specs.
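The fiscal_year.yaml spec passed to add-filter above could be as small as the sketch below — the field names are hypothetical, chosen to match the renewals explore, not a documented schema:

```yaml
# Illustrative filter spec for add-filter (schema assumed)
name: fiscal_year
title: Fiscal Year
type: field_filter
field: renewals.fiscal_year    # hypothetical LookML field
default_value: FY2025
```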
Key Capabilities
Progressive Introspection
Discover models, explores, dimensions, and measures. No context-switching to a separate BI tool.
Inline Queries
Run ad-hoc queries against any explore. Output CSV or JSON for validation before building dashboards.
Spec-First Provisioning
Define dashboards as version-controlled YAML specs. Deterministic upsert with --dry-run preview.
Iterative Editing
Add or remove individual tiles and filters on live dashboards. No full redeploy required.
Round-Trip Workflow
Describe an existing dashboard, edit the output, redeploy. No manual format translation.
Safe Mutation Controls
Merge mode preserves unmanaged elements. Replace mode requires explicit --force. Dry-run on everything.
Engagement Timeline
Milestone 1 — One Working Dashboard
- Connect to Salesforce and map the relevant schema
- Stand up the data pipeline (CRM → BigQuery)
- Build the analytics layer for one domain
- Deliver one complete, interactive dashboard in Looker
Milestone 2 — Full Dashboard Suite
- Expand to remaining dashboards
- Add LookML validation and metric regression checks
- Document the change workflow for ongoing iteration
- Stakeholder review and polish rounds
By the Numbers
Note: Timelines are engagement estimates. Someone still writes YAML specs, defines the data model, and reviews dashboards. The tooling eliminates the dedicated Looker admin role and collapses the analytics engineer and dashboard builder roles into one workflow.
Works With Your Existing Stack
CRM Sources
Salesforce, HubSpot, Dynamics 365, or any CRM with API access. g-gremlin handles schema discovery and data extraction.
Data Warehouse
BigQuery as Looker's data source. Data lands in BigQuery via Google's Data Transfer Service or g-gremlin's sink commands.
Visualization
Looker (full Looker with LookML, not Looker Studio). g-gremlin provisions dashboards via Looker's programmatic API.
See It Before You Commit
One working dashboard in your Looker instance. No long-term commitment. No new vendors.