Overview

Crafting effective prompts enables you to extract deeper insights from your data and get the answers you need faster. Learning how to structure your questions, choose the right context level, and use different prompt types will help you unlock the full power of Dovetail’s AI.

Understanding Chat Context Levels

Dovetail’s contextual chat automatically applies filters based on your current location, giving you distinct analysis scopes. The filter icon indicates your current context level and can be toggled on/off.
Context Level: Object level (Micro Analysis)
Context: Focus on individual transcripts, notes, documents, insights, etc.
Use Cases: Perfect for detailed insights and specific customer understanding. Works best for most types of questions due to its focused scope.
Example: “What specific concerns does this customer raise about our pricing?”

Context Level: Project or Channel level (Meso Analysis)
Context: Synthesize across all data within a specific project or channel.
Use Cases: Works best when you need insights aggregated across the project or channel. Requires very specific questions to avoid vague results, so use specific keywords and names in your queries.
Example: “What did users say about our checkout process?” (instead of “What are common themes?”)

Context Level: Workspace or Folder level (Macro Analysis)
Context: Query across all projects and data in your folder or workspace.
Use Cases: Works best when you need insights aggregated across a folder or workspace. Use specific keywords and names in your queries, as broad questions can produce vague results with large datasets.
Example: “What are users’ top pain points with our search experience?” (instead of “What are top pain points?”)

Types of Prompts

Contextual chat supports various types of prompts beyond simple questions. Understanding these different approaches will help you get more value from the tool.
Direct questions are the most straightforward way to interact with your data.
Simple questions:
  • “What features do customers request most often?”
  • “How do users describe our customer support?”
Analytical questions:
  • “What patterns emerge in churn feedback about our billing system?”
  • “What underlying issues cause customers to contact support about integrations?”
  • “What factors influence customer satisfaction with our onboarding process?”
You can also request summaries at different levels of detail for various purposes.

Crafting Effective Prompts

The quality of your answer depends entirely on the quality of your question. The broader your scope (from a single data object to an entire workspace), the more specific and targeted your prompt needs to be to get clear, actionable results.

The Specificity Principle

The system performs searches based on your language, so use specific keywords/names that would actually appear in your data. Concrete terms like “onboarding,” “mobile app,” or “profile setup” work best. Generic questions produce vague results:
  • ❌ “What do customers think?”
  • ✅ “What specific concerns do customers express about our onboarding process?”
Be specific about topics:
  • ❌ “What are common themes?”
  • ✅ “What did customers say about our mobile app performance?”
Use customer language:
  • ❌ “Any insights about pain points?”
  • ✅ “What frustrations do users describe when trying to complete their profile setup?”

Use Natural Language

Frame prompts using words your customers would actually use:
  • “What problems do customers mention with billing?”
  • “How do users describe the signup process?”
  • “What complaints appear about our customer support?”
  • “What do customers like about our dashboard?”

Filtering by Custom Fields

You can narrow your results by referencing custom fields in your queries. The system recognizes field names and values, allowing you to combine field filters with natural language questions.
Field Type: Text Field
Structure: Use data where [Field Name] is [value]
Example: Only include documents where Status is Active

Field Type: Select/Dropdown Field
Structure: Only include data where [Field Name] is [option]
Example: Use interviews where Product is Mobile App

Field Type: Boolean Field
Structure: Only show data where [Field Name] is true/false
Example: Use documents where Published is true

Field Type: Date Field
Structure: Only include data where [Field Name] is [date/date range]
Example: Use interviews where Interview Date is after January 1, 2024
Single field filter:
  • “What feedback do we have where Priority is High?”
  • “Only include notes where Customer Type is Enterprise”
Multiple field filters:
  • “Use interviews where Product is Web App and Status is Completed”
  • “Only include notes where Region is North America and Published is true”
Combined with search terms:
  • “Find feedback about login issues where Severity is Critical”
  • “What do customers say about pricing? Only include data where Product is SaaS”
Tips for Field Filtering
  1. Use the exact field name as it appears in your project
  2. For select fields, use the exact option value
  3. You can combine multiple field filters in one query
  4. Field filters work with date ranges, tags, and other filters
  5. The system understands natural language, so you don’t need exact syntax
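Teams that run many similar queries sometimes template them. As a minimal sketch (plain Python string-building, not a Dovetail API — the chat simply accepts natural language), a hypothetical helper can assemble a field-filtered prompt from a question plus a dict of field names and values, following the “Only include data where [Field Name] is [value]” phrasing described above:

```python
def build_filtered_prompt(question, filters):
    """Compose a natural-language chat prompt that combines a question
    with custom-field filters, using the '[Field Name] is [value]'
    phrasing described in the field-filtering tips above.

    `filters` maps exact field names to exact option values
    (tips 1 and 2: use the names and values as they appear in
    your project). This helper is purely illustrative.
    """
    if not filters:
        return question
    # Tip 3: multiple field filters can be combined with "and".
    clauses = " and ".join(f"{name} is {value}" for name, value in filters.items())
    return f"{question} Only include data where {clauses}"


# Example: combine a search question with two field filters.
prompt = build_filtered_prompt(
    "What do customers say about pricing?",
    {"Product": "SaaS", "Region": "North America"},
)
print(prompt)
# What do customers say about pricing? Only include data where Product is SaaS and Region is North America
```

Because the system understands natural language (tip 5), exact syntax is not required; a template like this just keeps field names and values consistent across many queries.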

Formatting Your Responses

The way you structure your question determines how your answer is formatted. Here are effective approaches:
Ask for lists and rankings:
  • “List the top 5 pain points mentioned in customer calls”
  • “Rank feature requests by frequency”
Request structured comparisons:
  • “Compare mobile app feedback vs. web app feedback”
  • “What are the differences between new user and power user needs?”
Ask for organized output:
  • “Summarize billing issues in bullet points”
  • “Organize customer complaints by product area”
Request prioritized or weighted results:
  • “Which issues have the highest impact on customer satisfaction?”
  • “What problems do enterprise customers mention most often?”
Ask for temporal analysis:
  • “What themes emerged in Q4 customer interviews?”
  • “How has feedback about our checkout process changed over time?”
Request segmented analysis:
  • “What do enterprise customers say about pricing?”
  • “What problems do customers in Europe mention most often?”

Response Length and Detail

For longer, more comprehensive responses:
  • Use explicit detail requests: “Provide a detailed analysis of…” or “Give me a comprehensive explanation of…”
  • Ask for multiple aspects: “What are the main themes, specific examples, and patterns?”
  • Use question types that trigger depth: “Explain…”, “Analyze…”, “Compare…”
  • Request structured formats: “Break this down with headings and bullet points”
For refined results:
  • Follow up if needed: “Can you expand on that?” or “Provide more detail about [specific aspect]”
  • Ask chat to reorganize: “Can you reformat that as a table?” or “Group those by priority”
Tips for Better Results
  1. Be specific about structure: Instead of “What are common themes?”, try “List the top 3 usability issues with specific examples from user interviews”
  2. Use natural language with clear intent: “What did customers say about our mobile app performance?” works better than vague questions
  3. Specify your desired grouping: “Organize customer complaints by product area” or “Group feature requests by user persona”
  4. Explicitly request prioritization: Chat doesn’t automatically prioritize by quantitative signals. If you want responses weighted by specific criteria (like ARR, frequency, or impact), mention that in your prompt.
  5. Know your context level: Data object level works best for most questions. The broader your scope, the more specific your language needs to be.
  6. Use feature/process names: Reference specific parts of your product or service that would appear in customer language
  7. Test and refine: If results are vague, make your question more specific or ask follow-up questions to narrow the focus
  8. Check citations: Always verify quotes are accurate and in context
Remember: Vague questions lead to vague results. The system works best when you use specific keywords that are likely to appear in your data, and when you clearly state how you want the information structured.

Crafting Your Project Overview

Your project overview becomes part of the AI’s context, helping it understand the project’s purpose, scope, and domain. A well-crafted overview improves search quality and answer relevance throughout your conversations.

What to Include

1. Project Purpose & Goals
Describe what the project is researching or analyzing:
  • Primary research questions or objectives
  • Business context
  • Expected outcomes
2. Key Terminology & Domain Context
Help the AI understand your domain-specific language:
  • Industry-specific terms and concepts
  • Product/service names and features being discussed
  • Customer personas or user segments
  • Important acronyms or abbreviations
3. Data Sources & Types
Provide context about what data the project contains:
  • Types of data (interviews, surveys, support tickets, etc.)
  • Data collection methods
  • Time periods covered
  • Geographic or demographic scope
4. Research Methodology
Explain how the research was conducted:
  • How data was collected
  • Key stakeholders or participants
  • Important dates or milestones
  • Any specific frameworks or methodologies used
5. Key Themes & Topics
Surface important patterns upfront:
  • Main themes or tags that appear frequently
  • Important patterns or insights already identified
  • Areas of focus or interest
Best Practices
  1. Be concise but comprehensive: The overview is included in the chat context, so provide enough background without being overly verbose
  2. Use specific terms: Include exact product names, feature names, and terminology that appears in your data
  3. Update as needed: Revise your overview as the project evolves or new insights emerge
  4. Think about searchability: Include keywords that will help the AI connect questions to relevant data

Workspace Chat Customization

Available on the Enterprise plan
Enterprise workspaces can customize chat behavior through workspace-level guidance (max 10,000 characters).

What You Can Customize

1. Role and Persona
  • Define the assistant’s role (e.g., “You are a UX research specialist…”)
  • Set expertise areas
  • Specify the perspective to take
2. Response Style and Tone
  • Formality level (casual, professional, academic)
  • Voice (first person, third person, neutral)
  • Tone (friendly, analytical, concise)
  • Language preferences (e.g., “Always use British English spelling”)
3. Formatting and Structure
  • Preferred structure (bullets, paragraphs, numbered lists)
  • Heading usage
  • Citation format preferences
  • Length preferences (override SHORT/MEDIUM/LONG defaults)
4. Content Focus and Priorities
  • What to emphasize (e.g., “Focus on actionable insights over observations”)
  • What to de-emphasize (e.g., “Minimize discussion of methodology”)
  • Domain-specific priorities (e.g., “Prioritize customer pain points over positive feedback”)
5. Rules and Constraints
  • What to include or exclude
  • Terminology preferences (e.g., “Use ‘participants’ instead of ‘users’”)
  • Naming conventions
6. Domain-Specific Guidance
  • Industry terminology
  • Compliance requirements (e.g., “Always anonymize customer names”)
  • Research methodology preferences
  • Analysis frameworks to use

Example Configuration

You are a UX research assistant specializing in B2B SaaS products.
Focus Areas:
  • Prioritize insights about product usability and workflow efficiency
  • Emphasize quantitative data when available
  • Always consider enterprise security and compliance requirements
Response Style:
  • Use a professional but approachable tone
  • Structure responses with clear headings and bullet points
  • Keep responses concise (prefer SHORT to MEDIUM length)
Terminology:
  • Use “customers” not “users”
  • Use “features” not “functionality”
  • Always refer to “workspaces” not “accounts”
Content Rules:
  • Never mention specific competitor products by name
  • Always anonymize customer names in responses
  • Focus on actionable insights that can drive product decisions
Search Strategy:
  • When comparing features, search across all projects in the workspace
  • Prioritize recent data (last 6 months) unless otherwise specified
  • Always search for both positive and negative feedback when analyzing features
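Since workspace guidance is capped at 10,000 characters, it can be worth checking the length of a draft before pasting it in. A minimal sketch (plain Python, not a Dovetail API — only the 10,000-character limit comes from this document):

```python
MAX_GUIDANCE_CHARS = 10_000  # Enterprise workspace guidance limit noted above


def check_guidance_length(guidance):
    """Return how many characters remain under the 10,000-character
    workspace guidance limit; raise ValueError if the draft is over.

    Illustrative only: Dovetail enforces the limit in its own UI.
    """
    remaining = MAX_GUIDANCE_CHARS - len(guidance)
    if remaining < 0:
        raise ValueError(f"Guidance is {-remaining} characters over the limit")
    return remaining


# Example: check a short draft of workspace guidance.
draft = "You are a UX research assistant specializing in B2B SaaS products."
print(check_guidance_length(draft), "characters remaining")
```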

Persona-based Examples

Context Level: Object level
Example Prompts:
  • Question: “What usability issues does this participant encounter during task completion?”
  • Synthesis: “Synthesize this participant’s feedback about navigation, search, and overall workflow”
  • Analysis: “Identify the root cause of this user’s confusion with the interface”

Context Level: Project level
Example Prompts:
  • Summary: “Summarize all usability issues found in this testing round with severity levels”
  • Comparison: “Compare how users describe the old vs. new navigation design”
  • Report: “Create a research summary to share with the product team highlighting critical issues”

Context Level: Workspace or folder level
Example Prompts:
  • Patterns: “What usability patterns emerge across all research projects this year?”
  • Synthesis: “Synthesize all feedback about our search functionality across projects”