Use Case Discovery
Define what you're coaching Spotter to answer before touching the data model. Coaching without a clear use case leads to over-engineered models and context that doesn't match what users actually ask.
Identify Your Target Users
Start by identifying who you're coaching Spotter for. Pick one team or user group at a time — a scoped use case produces better coaching than a model trying to serve everyone at once.
Collect Real Questions
Gather the actual questions your target users ask — not what you think they'll ask, but what they type when trying to get answers. Business users phrase queries differently from analysts, so unfiltered input matters.
Group the questions by topic (e.g. pipeline metrics, conversion, account health). This reveals where coaching effort should be concentrated.
Check Data Model Coverage
Once you have a representative set of questions, validate your data model against them:
- Does it have the tables and columns needed to answer these questions?
- Are there questions it simply cannot answer? Flag those as out of scope before coaching starts.
- Are there columns no user query will ever need? Remove them — a lean, focused model performs better than a bloated one.
Optimize Your Data Model
Before any coaching, Spotter needs to be able to read your model semantically. Most accuracy issues trace back here — fix the model first, add coaching second.
Column Names
Use human-readable names. Avoid abbreviations, jargon, and names that overlap with ThoughtSpot search keywords. Keep names unique across the model, and aim for under 50 columns.
Example: Instead of txn_dt, use Transaction Date. If you can't rename the underlying field, add synonyms in the next step.
Synonyms
Add synonyms for any column name where business users use different terms. Spotter uses these to resolve natural language queries to the right column.
Example: Column Order Date → add synonyms: Transaction Date, Purchase Date, Sale Date
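If you manage the model as TML, the display name and its synonyms live together on the column definition. A minimal sketch, assuming a worksheet-style model whose physical field is txn_dt (the table alias, column names, and synonym values are illustrative; check your cluster's TML reference for exact keys):

```yaml
# Illustrative TML fragment: one column with a human-readable
# display name and business synonyms. Not a complete model file.
worksheet_columns:
- name: Order Date                 # what users and Spotter see
  description: Date the order was placed
  column_id: SALES_1::txn_dt       # physical field keeps its name
  properties:
    column_type: ATTRIBUTE
    synonyms:
    - Transaction Date
    - Purchase Date
    - Sale Date
```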
Formulas
Create model-level formulas (including pre-aggregated ones) for key metrics. If a metric has a fixed definition, define it in the data model to reduce latency and accuracy issues — Spotter will use the pre-defined formula directly instead of inferring it.
Example: Define Net Revenue as Gross Revenue - Refunds - Discounts once in the model. Don't leave Spotter to guess the calculation each time.
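In TML, a model-level formula is declared once with an explicit expression, so Spotter reuses it instead of re-deriving the calculation. A minimal sketch of the Net Revenue example, assuming Gross Revenue, Refunds, and Discounts already exist as columns in the model:

```yaml
# Illustrative TML fragment: a fixed-definition metric declared
# once at the model level. Column names must match your model.
formulas:
- name: Net Revenue
  expr: "[Gross Revenue] - [Refunds] - [Discounts]"
```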
AI Context
AI Context embeds permanent business knowledge directly on columns — it instructs Spotter how to interpret and use each column for all queries.
How to generate: Open the data model → three-dot menu → Generate AI Context → review and refine each column.
- Disambiguation: When two similar columns exist, use AI Context to set priority. "Prefer this column for all revenue queries. This is the primary date for when a sale occurred."
- Boolean columns: Clarify values. "true = valid transaction, false = invalid transaction"
- Non-standard values: Explain internal codes. "Contains medicine shortforms. 'MP' = Metoprolol"
- Deprecated columns: Mark them. "Do not use this column. Replaced by Order Date v2."
Spotter Self-Diagnosis
After generating AI Context, ask Spotter to surface its own confusion. It can scan the entire data model and tell you exactly what it's uncertain about — and why. This is one of the fastest ways to find coaching gaps before users run into them.
Fix the issues Spotter surfaces by updating AI Context or column descriptions. For anything that needs immediate correction, clarify directly in conversation and ask Spotter to remember; it saves the correction to its memory for that model.
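The exact wording is up to you. A prompt along these lines (illustrative, not a fixed command) is a reasonable starting point:

```
Scan this data model and list every column or metric you are
uncertain about. For each one, explain what is ambiguous and
what information would resolve it.
```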
Cold Start with Liveboard
Get broad coverage of your business logic quickly by pointing Spotter at a trusted Liveboard — without writing anything manually. This is the fastest way to get Spotter up to speed on a new or unfamiliar data model.
Add a Liveboard as a Memory Source
Pick a trusted Liveboard that reflects real, verified business definitions for the data model. It should contain the key metrics and analyses your team actually uses.
How: Go to Data Workspace → Memory Sources → add the Liveboard → click Generate Memory.
Spotter reads the Liveboard's visualizations and absorbs definitions, filters, and metric logic automatically. The richer and more representative the Liveboard, the better the coverage.
Verify the Learnings
Memory reflects the Liveboard at the time of generation. Test Spotter with representative questions covering the topics in the Liveboard before relying on it.
- Ask questions that mirror the Liveboard's charts and metrics
- Download and review the generated memory JSON to inspect what was learned
- Look for incorrect generalizations or stale definitions
- Correct anything wrong directly in conversation — Spotter will save corrections as memory
When Liveboard Memory is Not Suitable
| Situation | Better Approach |
|---|---|
| Data model or Liveboards change frequently | Prefer conversation learning (see Learn & Train on the Go) for definitions that evolve |
| Need to migrate context across clusters (dev → staging → prod) | Memory cannot be cleanly exported. Use Data Model Instructions or Reference Questions instead |
Manage Memory Access
Before opening conversation learning to the wider team, validate what Spotter has learned with people who know the data and can confirm whether the answers are right. These users are your quality gate.
Identify Your Power Users
Pick 2–5 people who understand the data model and know the expected outcomes for the use case — data model owners, senior analysts, or business leads who can tell immediately when an answer is wrong.
Share Coaching Access
Give power users data model editing or coaching rights as appropriate. They should be able to test Spotter directly and — if they find gaps — suggest or add coaching themselves.
If you are not ready to share editing rights yet, have them test via Spotter and report findings back to you.
Collect Expected Outcomes
Ask your power users to test the coaching that's been added so far — starting with the Liveboard memory — and to tell you explicitly what the right answers should be.
Document every gap — questions that return wrong answers, missing filters, or metrics that are off. These become your coaching backlog.
Fill the Gaps Before Rolling Out
Use the feedback to close the gaps you found: add AI Context, update Data Model Instructions, add additional Liveboards, or correct directly in conversation. Only move to broader conversation learning (see Learn & Train on the Go) once your power users confirm the core questions are working correctly.
Learn & Train on the Go
Conversation is your primary ongoing training mechanism. Use it while working — not just during setup. When conversation isn't enough, reach for manual coaching tools.
Learning from Conversation
Correct Spotter Directly
When an answer is wrong or incomplete, correct Spotter in the conversation and ask it to remember. It saves the correction as memory and applies it to all future queries on that model.
Ask Spotter What It Assumes
Surface hidden assumptions before they cause problems. Ask Spotter what it thinks about a topic — then confirm correct ones and correct wrong ones.
Ending the prompt with "I will help confirm the definition" signals to Spotter that a correction is coming, which produces more precise assumption statements. Ask it to remember each correction and it will update its memory for the model.
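As an illustration, an assumption-surfacing prompt in that shape might look like this (the term is a placeholder; swap in your own):

```
What do you currently assume about "active customers" on this
model? List your assumptions one by one. I will help confirm
the definition.
```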
Analyze the Impact of Your Coaching
After correcting Spotter or adding new context, ask it to suggest questions to verify the learning stuck. This closes the loop — you're not guessing whether the coaching worked.
Run the suggested questions and check that answers reflect the coaching you added. If something is still wrong, correct it in the same conversation and retest.
When to Rely on Other Mechanisms
Reach for the manual coaching tools below when you need precision, stability, or exact formula control, or when memory isn't available. Memory (Liveboard learning and conversation learning) requires Spotter 3. If your users are on Spotter Classic or Spotter 2, these tools are your primary coaching mechanism instead.
Data Model Instructions
Instructions serve the same purpose as memory from conversation — they teach Spotter rules for the data model. Use them only when conversation learning is not working for a particular use case, or to state explicit overrides: rules that are stable and should not evolve over time.
- Default filters that always apply regardless of context
- Rules where you explicitly do not want Spotter to update or revise the definition over time
How to write them: Direct commands, not conversational. Use "Prefer A over B" rather than hard overrides where possible. Group related rules together; separate unrelated ones on new lines. Do not use for complex formulas.
Where: Data Workspace → select model → Instructions tab
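A few instructions in that style, purely illustrative (the column names and filter values are examples, not recommendations for your model):

```
Prefer Order Date over Ship Date for all time-based revenue queries.
Always exclude internal test accounts (Account Type = 'Internal')
unless the user explicitly asks for them. Do not revise this rule.
```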
Reference Questions + NL Context
Use only when a specific question requires a very particular answer that cannot be generalized from memory — typically for complex formulas with specific denominator logic, date filters, or non-standard column combinations.
Always add NL Context. The Reference Question shows Spotter what the correct answer looks like. The NL Context explains why — the business logic behind it. This is what lets Spotter generalize the learning to similar future questions.
Where: Ask the question in Spotter → correct the answer → click Add to Coaching → add NL Context → save as Reference Question
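An illustrative pairing with a placeholder metric (the question, answer pattern, and logic are examples to adapt, not verified definitions):

```
Reference Question: What was the active customer rate last month?
Correct answer: unique customers with a completed order in the
month, divided by total customers at the start of the month.
NL Context: Use the month-start customer count as the denominator,
not month-end, so churn counts against the month it happened.
```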
Business Terms — Last Resort
Use only for simple, universal TML mappings where a term maps directly to a specific column value or filter.
Example: "N.Am." → country = 'North America'
Anything you previously did with Business Terms can now be done via conversation learning. Prefer that — it's faster to update and requires no manual maintenance. Do not use Business Terms for definitions that vary by context or change over time.
Diagnosing Common Problems
If Spotter is still getting something wrong after setup, start by asking it to explain its reasoning. It can surface its own confusion — diagnose first, then fix the root cause before adding more coaching on top.
Diagnose first, then fix. Ask Spotter directly, for example: "Why did you use [column X] for this query?", "What do you understand by [term]?", or "What rules do you have for [topic]?" It will explain its reasoning, state its current assumptions, and surface the rules it is applying, which tells you where the fix belongs.
For formula and calculation problems, one distinction drives the fix: is this a formula with a fixed, universal definition (examples: ARR, Net Revenue, Gross Margin), or a calculation that should adapt flexibly by context (examples: monthly growth %, % contribution, period-over-period comparison)? Rigid formulas belong in the data model; flexible calculations are better taught with Reference Questions. The table below maps each symptom to its diagnostic question and fix.
When to Use What
Start with what you're seeing — find your symptom, follow the fix.
What Are You Seeing?
| Symptom | Diagnose First | Fix |
|---|---|---|
| Spotter picks the wrong column (e.g. uses Ship Date instead of Order Date) | Ask: "Why did you use [column X] for this?" | Review AI Context on the correct column: is it clear and instructional? Check synonyms and indexing. Fix the data model first. Correct in conversation only if the issue persists after fixing metadata. |
| Spotter doesn't know your business context (e.g. wrong definition of "active customers") | Ask: "What do you understand by [term]?" | Broad topic → add a trusted Liveboard as a memory source. Specific question → correct directly in conversation and ask Spotter to remember. |
| Formula is wrong or calculation fails (e.g. ARR calculated incorrectly, wrong % contribution logic) | Is this a rigid formula or a flexible calculation? | Rigid → define once in the data model as a formula (see Optimize Your Data Model). Flexible → add a Reference Question with the correct pattern + NL Context explaining the logic. |
| Inconsistent answers across sessions (e.g. same question answered differently each time) | Ask: "What rules do you have for [topic]?" | Correct conflicting context in conversation; ask Spotter to consolidate. If the rule must be stable → add a Data Model Instruction. |
| Incorrect value selection (e.g. wrong status code, region name, or category value) | Ask: "Why didn't you choose [column + value]?" | Review indexing status of the column. Review AI Context and data model semantics. Fine-tune with conversation if the issue persists. Use Business Terms only if you need consistent value mappings across orgs. |
| Conflicting memory sources (e.g. two Liveboards define the same metric differently) | Ask: "Why are you confused about [topic]? What context is conflicting?" | Review data model semantics and fix any conflicts. Correct remaining inconsistencies in conversation. If the rule must be stable → add a Data Model Instruction. |
Tool Comparison
| | Data Model Instructions | Memory (Liveboard / Conversation) | Reference Questions | Business Terms |
|---|---|---|---|---|
| Created by | You, manually | AI — from Liveboards or conversation | You, manually | You, manually |
| Maintained by | You, manually | AI — auto-updated from conversation | You, manually | You, manually |
| Enforcement | Strict — overrides memory | Contextual | Contextual (guides reasoning) | Strict TML mapping |
| Best for | Global rules that must never change | Broad definitions, evolving business logic | Specific complex queries, exact formulas | Simple universal term → value mapping |
| Use as | Foundational constraint | Primary coaching mechanism | Precision override | Last resort |