Spotter now has Memory

Teach Spotter your business. Once.

26.5 introduces memory — a smarter, faster way to give Spotter the context it needs to answer your team's questions accurately. Instead of writing coaching by hand, Spotter can now learn directly from your Liveboards, update its understanding through conversation, and remember corrections across every future query.

01 Define your use case before you start
02 Optimize the data model as the foundation
03 Seed broad knowledge from a Liveboard in minutes
04 Manage memory access — validate with power users before rolling out
05 Train and correct Spotter directly in conversation
06 Verify coaching impact with Spotter's own test questions

Before You Begin

Use Case Discovery

Define what you're coaching Spotter to answer before touching the data model. Coaching without a clear use case leads to over-engineered models and context that doesn't match what users actually ask.

1. Identify Your Target Users

Start by identifying who you're coaching Spotter for. Pick one team or user group at a time — a scoped use case produces better coaching than a model trying to serve everyone at once.

💡 Focus on a team with urgent data needs and high potential Spotter usage — Sales, Marketing, Customer Success, or a specific ops function.
2. Collect Real Questions

Gather the actual questions your target users ask — not what you think they'll ask, but what they type when trying to get answers. Business users phrase queries differently from analysts, so unfiltered input matters.

Group the questions by topic (e.g. pipeline metrics, conversion, account health). This reveals where coaching effort should be concentrated.

💬 Example groups for a Sales persona: Pipeline Metrics, Conversion Funnel, Team Comparisons, Regional Performance.
3. Check Data Model Coverage

Once you have a representative set of questions, validate your data model against them:

  • Does it have the tables and columns needed to answer these questions?
  • Are there questions it simply cannot answer? Flag those as out of scope before coaching starts.
  • Are there columns no user query will ever need? Remove them — a lean, focused model performs better than a bloated one.
💡 The goal is a data model scoped tightly to your use case. Coaching an overly broad model wastes effort and introduces noise that hurts accuracy.
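The coverage check above can be sketched as a small script. Everything here is illustrative: the column names, the questions, and the tagging of each question with the columns it needs are hypothetical examples, not output from any ThoughtSpot tool.

```python
# Hypothetical sketch: cross-check collected user questions against the
# columns your data model actually exposes. All names are illustrative.
model_columns = {"order date", "region", "pipeline stage", "deal amount"}

# Each question is tagged (by you) with the columns it would need.
questions = [
    ("What's our pipeline by region this quarter?",
     {"pipeline stage", "region", "order date"}),
    ("Which reps closed the most revenue?",
     {"rep name", "deal amount"}),
]

# Flag questions the model cannot answer, so they can be marked out of
# scope before coaching starts.
out_of_scope = []
for text, needed in questions:
    missing = needed - model_columns
    if missing:
        out_of_scope.append((text, sorted(missing)))

for text, missing in out_of_scope:
    print(f"Out of scope: {text!r} (missing columns: {missing})")
```

Columns that appear in no question's "needed" set are candidates for removal from the model, per the lean-model guidance above.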
Foundation

Optimize Your Data Model

Before any coaching, Spotter needs to be able to read your model semantically. Most accuracy issues trace back here — fix the model first, add coaching second.

1. Column Names

Use human-readable names. Avoid abbreviations, jargon, and names that overlap with ThoughtSpot search keywords. Keep names unique across the model. Aim for under 50 columns — lean, focused models perform better.

💡 Instead of txn_dt, use Transaction Date. If you can't rename, add synonyms in step 2.
2. Synonyms

Add synonyms for any column name where business users use different terms. Spotter uses these to resolve natural language queries to the right column.

Example: Column Order Date → add synonyms: Transaction Date, Purchase Date, Sale Date
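In TML, synonyms are defined on the column itself. A rough sketch of what this might look like in a worksheet or model column definition — treat the exact field names and nesting as assumptions, since the TML schema varies by object type and version:

```yaml
# Hypothetical TML fragment — verify field names against your cluster's
# exported TML before relying on this shape.
- name: Order Date
  properties:
    column_type: ATTRIBUTE
    synonyms:
    - Transaction Date
    - Purchase Date
    - Sale Date
```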

3. Formulas

Create model-level formulas (including pre-aggregated ones) for key metrics. If a metric has a fixed definition, define it in the data model to reduce latency and accuracy issues — Spotter will use the pre-defined formula directly instead of inferring it.

Example: Define Net Revenue as Gross Revenue - Refunds - Discounts once in the model. Don't leave Spotter to guess the calculation each time.

4. AI Context

AI Context embeds permanent business knowledge directly on columns — it instructs Spotter how to interpret and use each column for all queries.

How to generate: Open the data model → three-dot menu → Generate AI Context → review and refine each column.

  • Disambiguation: When two similar columns exist, use AI Context to set priority. "Prefer this column for all revenue queries. This is the primary date for when a sale occurred."
  • Boolean columns: Clarify values. "true = valid transaction, false = invalid transaction"
  • Non-standard values: Explain internal codes. "Contains medicine shortforms. 'MP' = Metoprolol"
  • Deprecated columns: Mark them. "Do not use this column. Replaced by Order Date v2."
💡 Write AI Context as a command to the AI, not a note for a human. Keep under 400 characters. Focus on ambiguous, frequently-used, or complex columns first.
⚠️ Check before proceeding: Review the Spotter Model Readiness documentation for the full checklist — it also covers indexing, data types, and date column handling. Run the Spotter Optimization tool from the model menu to auto-fix indexing, date formatting, and type mismatches.
Broad Coverage

Cold Start with Liveboard

Get broad coverage of your business logic quickly by pointing Spotter at a trusted Liveboard — without writing anything manually. This is the fastest way to get Spotter up to speed on a new or unfamiliar data model.

1. Add a Liveboard as a Memory Source

Pick a trusted Liveboard that reflects real, verified business definitions for the data model. It should contain the key metrics and analyses your team actually uses.

How: Go to Data Workspace → Memory Sources → add the Liveboard → click Generate Memory.

Spotter reads the Liveboard's visualizations and absorbs definitions, filters, and metric logic automatically. The richer and more representative the Liveboard, the better the coverage.

💡 You can add multiple Liveboards. Each adds to the model's memory — but verify for conflicts (see Step 2).
2. Verify the Learnings

Memory reflects the Liveboard at the time of generation. Test Spotter with representative questions covering the topics in the Liveboard before relying on it.

  • Ask questions that mirror the Liveboard's charts and metrics
  • Download and review the generated memory JSON to inspect what was learned
  • Look for incorrect generalizations or stale definitions
  • Correct anything wrong directly in conversation — Spotter will save corrections as memory
⚠️ Memory does not auto-sync when the Liveboard or data model changes. If your model is actively evolving, re-generate memory after significant changes — or prefer conversation learning (Page 5) for frequently changing definitions.
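To support the "download and review the generated memory JSON" step, a quick inspection script can help. The schema below is an assumption for illustration only — the real file's field names ("learnings", "term", "definition", "source") may differ, so adapt the keys to what you actually download.

```python
import json

# Hypothetical sketch: the memory JSON schema is not documented here, so
# every field name below is an assumption. Replace the inline string with
# the file you downloaded and adjust keys to match its real structure.
memory = json.loads("""
{
  "learnings": [
    {"term": "Net Revenue",
     "definition": "Gross Revenue - Refunds - Discounts",
     "source": "liveboard"},
    {"term": "Active Customer",
     "definition": "any account with a login in the last 90 days",
     "source": "liveboard"}
  ]
}
""")

# Print each learned definition so a reviewer can spot stale or incorrect
# generalizations at a glance.
for item in memory["learnings"]:
    print(f"[{item['source']}] {item['term']}: {item['definition']}")
```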

When Liveboard Memory is Not Suitable

| Situation | Better Approach |
|---|---|
| Data model or Liveboards change frequently | Prefer conversation learning (Page 5) for definitions that evolve |
| Need to migrate context across clusters (dev → staging → prod) | Memory cannot be cleanly exported. Use Data Model Instructions or Reference Questions instead |
Memory Access

Manage Memory Access

Before opening conversation learning to the wider team, validate what Spotter has learned with people who know the data and can confirm whether the answers are right. These users are your quality gate.

1. Identify Your Power Users

Pick 2–5 people who understand the data model and know the expected outcomes for the use case — data model owners, senior analysts, or business leads who can tell immediately when an answer is wrong.

💡 Power users know when an answer is wrong in a way regular users cannot articulate: wrong denominator, missing filter, a metric that's off by 20%. Their feedback is precise.
2. Share Coaching Access

Give power users data model editing or coaching rights as appropriate. They should be able to test Spotter directly and — if they find gaps — suggest or add coaching themselves.

If you are not ready to share editing rights yet, have them test via Spotter and report findings back to you.

3. Collect Expected Outcomes

Ask your power users to test the coaching that's been added so far — starting with the Liveboard memory — and to tell you explicitly what the right answers should be.

💬 Ask them: "What questions should Spotter be able to answer here? What's the exact expected output?" and "What questions do you typically ask about this data that we haven't covered?"

Document every gap — questions that return wrong answers, missing filters, or metrics that are off. These become your coaching backlog.

4. Fill the Gaps Before Rolling Out

Use the feedback to close the gaps you found — add AI Context, update Data Model Instructions, add additional Liveboards, or correct directly in conversation. Only move to broader conversation learning (Page 5) once your power users confirm the core questions are working correctly.

💡 The output of this step is a validated question set with expected outcomes. Keep these as your ongoing coaching test cases — run them whenever you make significant coaching changes.
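The validated question set can be kept as a small regression harness and re-run after every significant coaching change. Everything in this sketch is hypothetical: `ask_spotter` is a stand-in for however you run a question against Spotter (this is not a real ThoughtSpot API), and the expected outcomes are the answers your power users confirmed.

```python
# Hypothetical regression harness for coaching test cases. `ask_spotter`
# is a placeholder, NOT a real ThoughtSpot API — replace it with your own
# way of running a question (manually, or via whatever integration you use).
TEST_CASES = [
    # (question, expected outcome confirmed by power users)
    ("What was Q3 net revenue?", "1.2M"),
    ("How many active customers do we have?", "437"),
]

def ask_spotter(question: str) -> str:
    # Canned answers so the harness itself is runnable as a demo; the
    # second one is deliberately wrong to show a detected regression.
    canned = {
        "What was Q3 net revenue?": "1.2M",
        "How many active customers do we have?": "451",
    }
    return canned.get(question, "")

def run_coaching_tests():
    # Compare each actual answer against the validated expected outcome.
    failures = []
    for question, expected in TEST_CASES:
        actual = ask_spotter(question)
        if actual != expected:
            failures.append((question, expected, actual))
    return failures

for q, want, got in run_coaching_tests():
    print(f"FAIL: {q} -> expected {want}, got {got}")
```

Any failure is a coaching gap: correct it in conversation, then re-run the harness to confirm the fix stuck.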
Ongoing Training

Learn & Train on the Go

Conversation is your primary ongoing training mechanism. Use it while working — not just during setup. When conversation isn't enough, reach for manual coaching tools.

Learning from Conversation

1. Correct Spotter Directly

When an answer is wrong or incomplete, correct Spotter in the conversation and ask it to remember. It saves the correction as memory and applies it to all future queries on that model.

💬 Example: "The denominator for Spotter3 adoption should only include Spotter accounts. Remember this."
2. Ask Spotter What It Assumes

Surface hidden assumptions before they cause problems. Ask Spotter what it thinks about a topic — then confirm correct ones and correct wrong ones.

💬 Try: "What are your assumptions about [topic]? Tell me what you think each one means — I will help confirm the definition." — then reply to each assumption to confirm or correct.

Ending the prompt with "I will help confirm the definition" signals to Spotter that a correction is coming, which produces more precise assumption statements. Ask it to remember each correction and it will update its memory for the model.

3. Analyze the Impact of Your Coaching

After correcting Spotter or adding new context, ask it to suggest questions to verify the learning stuck. This closes the loop — you're not guessing whether the coaching worked.

💬 Try: "Based on what you just learned, what are a few questions I can ask to test whether you're applying this correctly?"

Run the suggested questions and check that answers reflect the coaching you added. If something is still wrong, correct it in the same conversation and retest.

When to Rely on Other Mechanisms

Reach for other coaching mechanisms when you need precision, stability, or exact formula control — or when memory isn't available. Memory (Liveboard learning and conversation learning) requires Spotter 3. If your users are on Spotter Classic or Spotter 2, these tools are your primary coaching mechanism instead.

Data Model Instructions

Instructions serve the same purpose as memory from conversation — they teach Spotter rules for the data model. Use them only when conversation learning is not working for a particular use case, or to state explicit overrides: rules that are stable and should not evolve over time.

  • Default filters that always apply regardless of context
  • Rules where you explicitly do not want Spotter to update or revise the definition over time
💡 Example: "Always filter for production and paid clusters unless the user specifies otherwise." — a stable global rule you never want changed by future conversation corrections.

How to write them: Direct commands, not conversational. Use "Prefer A over B" rather than hard overrides where possible. Group related rules together; separate unrelated ones on new lines. Do not use for complex formulas.

Where: Data Workspace → select model → Instructions tab

Reference Questions + NL Context

Use only when a specific question requires a very particular answer that cannot be generalized from memory — typically for complex formulas with specific denominator logic, date filters, or non-standard column combinations.

💬 Example: "% Spotter3 adoption" requires a specific denominator (only Spotter accounts) with a specific version filter (≥ 26.2). Memory cannot reliably infer these exact filter values.

Always add NL Context. The Reference Question shows Spotter what the correct answer looks like. The NL Context explains why — the business logic behind it. This is what lets Spotter generalize the learning to similar future questions.

Where: Ask the question in Spotter → correct the answer → click Add to Coaching → add NL Context → save as Reference Question

Business Terms — Last Resort

Use only for simple, universal TML mappings where a term maps directly to a specific column value or filter.

Example: "N.Am." → country = 'North America'

Anything you previously did with Business Terms can now be done via conversation learning. Prefer that — it's faster to update and requires no manual maintenance. Do not use Business Terms for definitions that vary by context or change over time.

Troubleshooting

Diagnosing Common Problems

If Spotter is still getting something wrong after setup, start by asking it to explain its reasoning. It can surface its own confusion — diagnose first, then fix the root cause before adding more coaching on top.

💡 Diagnostic principle: Before adding coaching, always ask Spotter "Why did you answer it that way?" or "What are your assumptions about [topic]?" — it will tell you what went wrong.

Diagnose first: Ask Spotter — "Why did you use [column X] for this query?" — it will explain its reasoning and what it was confused about.

Fix in order:

1. Review data model semantics — is the AI Context on the correct column clear and instructional? Are synonyms accurate? Is indexing enabled on the right column?
2. Fix the data model first (AI Context, synonyms, indexing) — this is the root cause in most cases. Column disambiguation belongs in AI Context, not in coaching.
3. Only if the issue persists after fixing the model → correct in conversation and ask Spotter to remember the correct column mapping.

Diagnose first: Ask Spotter — "What do you understand by [term]?" — it will state its current assumption.

Fix based on scope:

1. Broad topic with multiple related metrics (e.g. "active customers", "Spotter adoption") → add the relevant Liveboard to memory. This gives Spotter the full business context at once.
2. Specific questions only (e.g. one particular KPI is wrong) → correct directly in conversation and ask Spotter to remember the definition.

Diagnose first: Ask Spotter — "What rules do you have for [topic]?" — review what it surfaces.

Fix:

1. Review memory for conflicting context from multiple sources (e.g. two Liveboards that define the same metric differently).
2. Correct the conflict in conversation — give Spotter the authoritative definition and ask it to consolidate and remember.
3. If the rule must never change or be overridden by future memory → move it to a Data Model Instruction. Instructions always take precedence over memory.

Symptom: Formula is wrong or calculation fails

Diagnose first: Is this a formula with a fixed, universal definition — or a calculation that should adapt flexibly based on context?

If the formula is rigid (always the same definition)

Examples: ARR, Net Revenue, Gross Margin

1. Define it once in the data model as a formula or pre-aggregated formula (Page 2, Step 3). Spotter will use the pre-defined calculation directly — no coaching needed.

If the calculation should flex by context

Examples: monthly growth %, % contribution, period-over-period comparison

1. Review the reasoning pane — where does the formula break? Wrong denominator, wrong date column, missing filter?
2. Correct the answer in Spotter — adjust the search tokens to the right pattern.
3. Click Add to Coaching → save as a Reference Question. Add NL Context explaining the pattern: which measure, which denominator, which date — so Spotter can generalize to similar flexible queries.
Quick Reference

When to Use What

Start with what you're seeing — find your symptom, follow the fix.

What Are You Seeing?

| Symptom | Diagnose First | Fix |
|---|---|---|
| Spotter picks the wrong column (e.g. uses Ship Date instead of Order Date) | Ask: "Why did you use [column X] for this?" | Review AI Context on the correct column — is it clear and instructional? Check synonyms and indexing. Fix the data model first; correct in conversation only if the issue persists after fixing metadata. |
| Spotter doesn't know your business context (e.g. wrong definition of "active customers") | Ask: "What do you understand by [term]?" | Broad topic → add a trusted Liveboard as a memory source. Specific question → correct directly in conversation and ask Spotter to remember. |
| Formula is wrong or calculation fails (e.g. ARR calculated incorrectly, wrong % contribution logic) | Is this a rigid formula or a flexible calculation? | Rigid → define once in the data model as a formula (Page 2). Flexible → add a Reference Question with the correct pattern + NL Context explaining the logic. |
| Inconsistent answers across sessions (e.g. same question answered differently each time) | Ask: "What rules do you have for [topic]?" | Correct conflicting context in conversation; ask Spotter to consolidate. If the rule must be stable → add a Data Model Instruction. |
| Incorrect value selection (e.g. wrong status code, region name, or category value) | Ask: "Why didn't you choose [column + value]?" | Review the column's indexing status, AI Context, and data model semantics. Fine-tune with conversation if the issue persists. Use Business Terms only if you need consistent value mappings across orgs. |
| Conflicting memory sources (e.g. two Liveboards define the same metric differently) | Ask: "Why are you confused about [topic]? What context is conflicting?" | Review data model semantics and fix any conflicts. Correct remaining inconsistencies in conversation. If the rule must be stable → add a Data Model Instruction. |

Tool Comparison

| | Data Model Instructions | Memory (Liveboard / Conversation) | Reference Questions | Business Terms |
|---|---|---|---|---|
| Created by | You, manually | AI — from Liveboards or conversation | You, manually | You, manually |
| Maintained by | You, manually | AI — auto-updated from conversation | You, manually | You, manually |
| Enforcement | Strict — overrides memory | Contextual | Contextual (guides reasoning) | Strict TML mapping |
| Best for | Global rules that must never change | Broad definitions, evolving business logic | Specific complex queries, exact formulas | Simple universal term → value mapping |
| Use as | Foundational constraint | Primary coaching mechanism | Precision override | Last resort |