Overview
Crafting effective prompts enables you to extract deeper insights from your data and get the answers you need faster. Learning how to structure your questions, choose the right context level, and use different prompt types helps you unlock the full power of Dovetail’s AI.
Understanding Chat Context Levels
Dovetail’s contextual chat automatically applies filters based on your current location, giving you distinct analysis scopes. The filter icon indicates your current context level and can be toggled on/off.
| Context Level | Context | Use Cases | Example |
|---|---|---|---|
| Object level (Micro Analysis) | Focus on individual transcripts, notes, documents, insights, etc. | Perfect for detailed insights and specific customer understanding. Works best for most types of questions due to focused scope. | “What specific concerns does this customer raise about our pricing?” |
| Project or Channel level (Meso Analysis) | Synthesize across all data within a specific project or channel | Requires very specific questions to avoid vague results. Use specific keywords/names in your queries. Works best when you need insights aggregated across the project or channel. | “What did users say about our checkout process?” (instead of “What are common themes?”) |
| Workspace or Folder level (Macro Analysis) | Query across all projects and data in your folder or workspace | Works best when you need insights aggregated across a folder or workspace. Use specific keywords/names in your queries, as broad questions can produce vague results with large datasets. | “What are users’ top pain points with our search experience?” (instead of “What are top pain points?”) |
Types of Prompts
Contextual chat supports various types of prompts beyond simple questions. Understanding these different approaches will help you get more value from the tool.
- Questions
- Synthesis
- Summarization
- Tasks
Direct questions are the most straightforward way to interact with your data.
Simple Questions:
- “What features do customers request most often?”
- “How do users describe our customer support?”
- “What patterns emerge in churn feedback about our billing system?”
- “What underlying issues cause customers to contact support about integrations?”
- “What factors influence customer satisfaction with our onboarding process?”
Summarization: Request summaries at different levels of detail for various purposes.
Crafting Effective Prompts
The quality of your answer depends entirely on the quality of your question. The broader your scope (from a single data object to an entire workspace), the more specific and targeted your prompt needs to be to get clear, actionable results.
The Specificity Principle
The system performs searches based on your language, so use specific keywords/names that would actually appear in your data. Concrete terms like “onboarding,” “mobile app,” or “profile setup” work best. Generic questions produce vague results:
- ❌ “What do customers think?”
- ✅ “What specific concerns do customers express about our onboarding process?”
- ❌ “What are common themes?”
- ✅ “What did customers say about our mobile app performance?”
- ❌ “Any insights about pain points?”
- ✅ “What frustrations do users describe when trying to complete their profile setup?”
Use Natural Language
Frame prompts using words your customers would actually use:
- “What problems do customers mention with billing?”
- “How do users describe the signup process?”
- “What complaints appear about our customer support?”
- “What do customers like about our dashboard?”
Filtering by Custom Fields
You can narrow your results by referencing custom fields in your queries. The system recognizes field names and values, allowing you to combine field filters with natural language questions.
| Field Type | Structure | Example |
|---|---|---|
| Text Field | Use data where [Field Name] is [value] | Only include documents where Status is Active |
| Select/Dropdown Field | Only include data where [Field Name] is [option] | Use interviews where Product is Mobile App |
| Boolean Field | Only show data where [Field Name] is true/false | Use documents where Published is true |
| Date Field | Only include data where [Field Name] is [date/date range] | Use interviews where Interview Date is after January 1, 2024 |
- “What feedback do we have where Priority is High?”
- “Only include notes where Customer Type is Enterprise”
- “Use interviews where Product is Web App and Status is Completed”
- “Only include notes where Region is North America and Published is true”
- “Find feedback about login issues where Severity is Critical”
- “What do customers say about pricing? Only include data where Product is SaaS”
Formatting Your Responses
The way you structure your question determines how your answer is formatted. Here are effective approaches:
Ask for lists and rankings:
- “List the top 5 pain points mentioned in customer calls”
- “Rank feature requests by frequency”
- “Compare mobile app feedback vs. web app feedback”
- “What are the differences between new user and power user needs?”
- “Summarize billing issues in bullet points”
- “Organize customer complaints by product area”
- “Which issues have the highest impact on customer satisfaction?”
- “What problems do enterprise customers mention most often?”
- “What themes emerged in Q4 customer interviews?”
- “How has feedback about our checkout process changed over time?”
- “What do enterprise customers say about pricing?”
- “What problems do customers in Europe mention most often?”
Response Length and Detail
For longer, more comprehensive responses:
- Use explicit detail requests: “Provide a detailed analysis of…” or “Give me a comprehensive explanation of…”
- Ask for multiple aspects: “What are the main themes, specific examples, and patterns?”
- Use question types that trigger depth: “Explain…”, “Analyze…”, “Compare…”
- Request structured formats: “Break this down with headings and bullet points”
- Follow up if needed: “Can you expand on that?” or “Provide more detail about [specific aspect]”
- Ask chat to reorganize: “Can you reformat that as a table?” or “Group those by priority”
Crafting your Project Overview
Your project overview becomes part of the AI’s context, helping it understand the project’s purpose, scope, and domain. A well-crafted overview improves search quality and answer relevance throughout your conversations.
What to Include
1. Project Purpose & Goals Describe what the project is researching or analyzing:- Primary research questions or objectives
- Business context
- Expected outcomes
- Industry-specific terms and concepts
- Product/service names and features being discussed
- Customer personas or user segments
- Important acronyms or abbreviations
- Types of data (interviews, surveys, support tickets, etc.)
- Data collection methods
- Time periods covered
- Geographic or demographic scope
- How data was collected
- Key stakeholders or participants
- Important dates or milestones
- Any specific frameworks or methodologies used
- Main themes or tags that appear frequently
- Important patterns or insights already identified
- Areas of focus or interest
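For example, an overview for a hypothetical onboarding study might read: “This project explores how new enterprise customers experience our onboarding process. It includes customer interviews and related support tickets from Q1 2024, focused on North American accounts. Key terms include ‘onboarding,’ ‘profile setup,’ and our ‘Web App’ product. The goal is to identify friction points that slow customers down during setup.”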
Workspace Chat Customization
Available on the Enterprise plan
What You Can Customize
1. Role and Persona
- Define the assistant’s role (e.g., “You are a UX research specialist…”)
- Set expertise areas
- Specify the perspective to take
- Formality level (casual, professional, academic)
- Voice (first person, third person, neutral)
- Tone (friendly, analytical, concise)
- Language preferences (e.g., “Always use British English spelling”)
- Preferred structure (bullets, paragraphs, numbered lists)
- Heading usage
- Citation format preferences
- Length preferences (override SHORT/MEDIUM/LONG defaults)
- What to emphasize (e.g., “Focus on actionable insights over observations”)
- What to de-emphasize (e.g., “Minimize discussion of methodology”)
- Domain-specific priorities (e.g., “Prioritize customer pain points over positive feedback”)
- What to include or exclude
- Terminology preferences (e.g., “Use ‘participants’ instead of ‘users’”)
- Naming conventions
- Industry terminology
- Compliance requirements (e.g., “Always anonymize customer names”)
- Research methodology preferences
- Analysis frameworks to use
Example Configuration
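Custom instructions can combine several of the elements above into a single plain-language brief. An illustrative example (adjust to your own needs) might be: “You are a UX research specialist. Respond in a concise, analytical tone using British English spelling. Structure answers as bullet points under clear headings, focus on actionable insights over observations, use ‘participants’ instead of ‘users,’ and always anonymize customer names.”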
Persona-based Examples
- Researcher
- Product Manager
- Designer
- Customer Success
- Sales
| Context Level | Example Prompts |
|---|---|
| Object level | Question: “What usability issues does this participant encounter during task completion?” Synthesis: “Synthesize this participant’s feedback about navigation, search, and overall workflow” Analysis: “Identify the root cause of this user’s confusion with the interface” |
| Project level | Summary: “Summarize all usability issues found in this testing round with severity levels” Comparison: “Compare how users describe the old vs. new navigation design” Report: “Create a research summary to share with the product team highlighting critical issues” |
| Workspace or folder level | Patterns: “What usability patterns emerge across all research projects this year?” Synthesis: “Synthesize all feedback about our search functionality across projects” |