Overview
What it does
Tavily provides AI-optimized web search and research capabilities specifically designed for AI agents and LLMs. Search the real-time web, extract content from specific URLs, crawl websites systematically, and map site structures - all optimized for accuracy and relevance. Results include source citations, relevance scoring, and intelligent content filtering to deliver the most useful information without noise.

Key Features:
- AI-powered web search optimized for LLMs
- Real-time internet access for current information
- Content extraction from specific URLs
- Intelligent web crawling with depth control
- Website structure mapping and analysis
- Multiple search depths (basic/advanced)
- Domain and time-based filtering
- News-specific search mode
- Image search with descriptions
- Natural language crawler instructions
Use Cases
Research & Analysis: Market research, competitive intelligence, academic research, trend analysis, fact-checking and verification
Content Discovery: News monitoring, content research for writing, product research and comparison, technical documentation exploration
Data Collection: Systematic site content extraction, structured data gathering, site architecture analysis, content auditing
Intelligence Gathering: Due diligence investigations, brand monitoring, industry research, regulatory compliance research
Tavily provides AI-optimized search results specifically designed for AI agents and LLMs, filtering noise and delivering accurate, relevant information with authoritative source citations.
Quick Start
Step 1: Get your API key
Sign up for a Tavily account at app.tavily.com/sign-up.

Free Tier Includes:
- 1,000 search requests/month
- All 4 search tools available
- Basic and advanced search depths
- Real-time web access
- AI-powered result optimization
- Source citations and relevance scoring
- No credit card required
Then:
- Verify your email address
- Navigate to your API dashboard
- Copy your API key from the dashboard
Step 2: Add to NimbleBrain Studio
In NimbleBrain Studio:
- Navigate to MCP Servers in the sidebar
- Click Add Server
- Search for “Tavily” in the server registry
- Click Configure
- Paste your API key in the TAVILY_API_KEY field
- Click Save & Enable
The server will automatically connect and enable real-time web search in your conversations.
Step 3: Test your connection
In your Studio chat, try a web-search prompt such as “Search the web for today’s top technology news.” You should see:
- Real-time search results from authoritative sources
- Content summaries and key information
- Source URLs with publication dates
- Relevance scores for each result
- The 🔧 tool indicator confirming Tavily is working
Available Tools
tavily-search
Powerful AI-optimized web search with comprehensive real-time results, customizable filtering, and intelligent ranking specifically designed for research and information gathering.

Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | - | Search query or research question |
| search_depth | string | No | "basic" | Search depth: "basic" (fast) or "advanced" (thorough) |
| topic | string | No | "general" | Search category: "general" or "news" |
| days | number | No | 3 | Days back for news search (news topic only) |
| time_range | string | No | - | Time range: “day”/“week”/“month”/“year” or “d”/“w”/“m”/“y” |
| start_date | string | No | "" | Start date for results (format: YYYY-MM-DD) |
| end_date | string | No | "" | End date for results (format: YYYY-MM-DD) |
| max_results | number | No | 10 | Maximum results (5-20) |
| include_images | boolean | No | false | Include query-related images |
| include_image_descriptions | boolean | No | false | Include images with descriptions |
| include_raw_content | boolean | No | false | Include full HTML content |
| include_domains | array | No | [] | Domains to specifically include |
| exclude_domains | array | No | [] | Domains to exclude |
| country | string | No | "" | Boost results from specific country (general topic only) |
| include_favicon | boolean | No | false | Include favicon URLs |
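As a concrete illustration of the parameter table above, here is a small Python sketch that assembles a tavily-search parameter set and checks it against the documented defaults and ranges. The helper name and validation logic are illustrative, not part of the Tavily API; in Studio, the MCP server applies defaults for you when parameters are omitted.

```python
# Build a tavily-search payload and sanity-check it against the
# documented defaults and allowed values above (illustrative only).

def build_search_params(query, **overrides):
    """Return a tavily-search payload, applying documented defaults."""
    params = {
        "query": query,
        "search_depth": "basic",   # or "advanced"
        "topic": "general",        # or "news"
        "max_results": 10,         # documented range: 5-20
        "include_images": False,
        "include_raw_content": False,
        "include_domains": [],
        "exclude_domains": [],
    }
    params.update(overrides)
    if params["search_depth"] not in ("basic", "advanced"):
        raise ValueError("search_depth must be 'basic' or 'advanced'")
    if not 5 <= params["max_results"] <= 20:
        raise ValueError("max_results must be between 5 and 20")
    return params

payload = build_search_params(
    "EV market trends 2024",
    search_depth="advanced",
    max_results=5,
)
print(payload["search_depth"])  # advanced
```

Validating ranges locally (rather than sending a bad query) avoids spending a request on a call that would fail or return nothing useful.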
tavily-extract
Extract and process raw content from specific URLs with intelligent parsing. Perfect for gathering detailed information from known sources, analyzing specific articles, or collecting data from multiple pages.

Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | array | Yes | - | List of URLs to extract content from |
| extract_depth | string | No | "basic" | Extraction depth: "basic" or "advanced" |
| include_images | boolean | No | false | Include images from pages |
| format | string | No | "markdown" | Output format: "markdown" or "text" |
| include_favicon | boolean | No | false | Include favicon URLs |
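Because one tavily-extract call can cover many URLs yet costs a single request (see Rate Limits & Pricing below), batching URLs is the cheapest way to pull several known pages. A hedged Python sketch of assembling such a call body; the helper is illustrative and the MCP server performs its own validation.

```python
# Illustrative helper: assemble a tavily-extract call body. Multiple
# URLs in one call still consume a single API request, so batch them.

def build_extract_params(urls, extract_depth="basic", fmt="markdown"):
    if not urls:
        raise ValueError("urls must contain at least one URL")
    if extract_depth not in ("basic", "advanced"):
        raise ValueError("extract_depth must be 'basic' or 'advanced'")
    if fmt not in ("markdown", "text"):
        raise ValueError("format must be 'markdown' or 'text'")
    return {
        "urls": list(urls),
        "extract_depth": extract_depth,
        "format": fmt,
        "include_images": False,
    }

# One request, three pages:
batch = build_extract_params([
    "https://example.com/article-1",
    "https://example.com/article-2",
    "https://example.com/article-3",
])
print(len(batch["urls"]))  # 3
```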
tavily-crawl
Systematically explore and extract content from websites starting from a base URL. The crawler intelligently follows links like a graph traversal, with configurable depth, breadth, and filtering options.

Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | Root URL to begin the crawl |
| max_depth | integer | No | 1 | Maximum crawl depth from base URL (minimum: 1) |
| max_breadth | integer | No | 20 | Maximum links per page (minimum: 1) |
| limit | integer | No | 50 | Total pages to crawl before stopping (minimum: 1) |
| instructions | string | No | - | Natural language instructions for crawler |
| select_paths | array | No | [] | Regex patterns for path filtering (e.g., /docs/.*) |
| select_domains | array | No | [] | Regex patterns for domain filtering |
| allow_external | boolean | No | true | Whether to return external links |
| extract_depth | string | No | "basic" | Content extraction: "basic" or "advanced" |
| format | string | No | "markdown" | Output format: "markdown" or "text" |
| include_favicon | boolean | No | false | Include favicon URLs |
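Since select_paths and select_domains take regex patterns, a typo in a pattern can quietly exclude everything. This illustrative sketch (helper name is not part of the Tavily API) validates crawl parameters locally, including pre-compiling the regexes, before spending a request.

```python
import re

# Illustrative sketch: validate tavily-crawl parameters before a call.
# Pre-compiling select_paths patterns catches regex typos locally
# instead of burning a request on a misconfigured crawl.

def build_crawl_params(url, max_depth=1, max_breadth=20, limit=50,
                       select_paths=(), extract_depth="basic"):
    if min(max_depth, max_breadth, limit) < 1:
        raise ValueError("max_depth, max_breadth, and limit must be >= 1")
    for pattern in select_paths:
        re.compile(pattern)  # raises re.error on an invalid pattern
    return {
        "url": url,
        "max_depth": max_depth,
        "max_breadth": max_breadth,
        "limit": limit,
        "select_paths": list(select_paths),
        "extract_depth": extract_depth,
    }

# Focus a docs crawl on the /docs/ section only:
docs_crawl = build_crawl_params(
    "https://docs.example.com",
    max_depth=2,
    select_paths=[r"/docs/.*"],
    limit=100,
)
```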
tavily-map
Create a structured map of website URLs to understand site architecture, content organization, and navigation paths. Perfect for site audits, content discovery, and analyzing website structure without extracting full content.

Parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | Root URL to begin mapping |
| max_depth | integer | No | 1 | Maximum mapping depth (minimum: 1) |
| max_breadth | integer | No | 20 | Maximum links per level (minimum: 1) |
| limit | integer | No | 50 | Total links to map (minimum: 1) |
| instructions | string | No | - | Natural language instructions for mapping |
| select_paths | array | No | [] | Regex patterns for path selection |
| select_domains | array | No | [] | Regex patterns for domain selection |
| allow_external | boolean | No | true | Include external links in results |
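Mapping returns URLs only (no page content), which makes it the cheap first step before deciding what to crawl. A hedged sketch of a tavily-map parameter set; as above, the helper is illustrative rather than part of the API.

```python
# Illustrative sketch: a lightweight tavily-map parameter set, used to
# survey a site's structure before committing to a full crawl.

def build_map_params(url, max_depth=1, max_breadth=20, limit=50,
                     select_paths=()):
    if min(max_depth, max_breadth, limit) < 1:
        raise ValueError("max_depth, max_breadth, and limit must be >= 1")
    return {
        "url": url,
        "max_depth": max_depth,
        "max_breadth": max_breadth,
        "limit": limit,
        "select_paths": list(select_paths),
        "allow_external": False,  # stay on the target domain
    }

site_map = build_map_params("https://example.com", max_depth=2, limit=100)
```

A typical pattern: map first, inspect which paths the URLs fall under, then crawl only the relevant section with select_paths.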
Authentication
API Key Required: This server requires a Tavily API key to access web search and research capabilities.
Getting Your API Key
- Create an account at app.tavily.com/sign-up
- Choose your plan (Free tier available with no credit card)
- Verify your email address
- Navigate to the API dashboard
- Copy your API key
- Add it to your Studio server configuration
Rate Limits & Pricing
| Plan | Requests/Month | Features | Price |
|---|---|---|---|
| Free | 1,000 | All 4 tools, basic + advanced search | $0 |
| Basic | 10,000 | + Priority support | $49/mo |
| Pro | 50,000 | + Higher rate limits, advanced features | $149/mo |
| Enterprise | Custom | Dedicated support, custom limits, SLA | Custom |
- Each search query = 1 request
- Each extract call = 1 request (regardless of URL count)
- Each crawl session = 1 request (regardless of pages crawled)
- Each map operation = 1 request
- Failed requests don’t count toward limit
- Limit resets on the 1st of each month
Managing Your API Key in Studio
Your API key is securely stored in NimbleBrain Studio. To update it:
- Go to Settings → MCP Servers
- Find “Tavily” in your server list
- Click Edit Configuration
- Update your API key
- Click Save
Studio automatically manages server connections - no manual restarts required.
Security Best Practices
Protect Your API Key
Your Tavily API key grants access to your search quota:
- Never share your API key publicly
- Don’t commit keys to version control
- Rotate keys periodically (every 90 days)
- Monitor usage for unexpected activity
- Use separate keys for different environments
- Keep keys in secure credential managers
Monitor Usage
Track your API usage to avoid hitting limits:
- Check usage dashboard regularly at app.tavily.com
- Set up usage alerts in Tavily (available in dashboard)
- Implement query caching for repeated searches
- Use appropriate search depth for your needs
- Optimize query frequency in automated workflows
- Review request patterns monthly
Cost optimization tips:
- Basic search depth uses fewer resources than advanced
- Batch extract operations when possible (multiple URLs in one call)
- Use map before crawl to understand site structure
- Cache search results for frequently accessed queries
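The caching advice above can be sketched in a few lines of Python. Here `search_fn` is a stand-in for whatever actually performs the Tavily call (the name is hypothetical); identical parameter sets reuse the stored result instead of spending another request.

```python
import json

# Minimal query-cache sketch: identical parameter sets hit the cache
# instead of consuming another API request.

class SearchCache:
    def __init__(self):
        self._store = {}
        self.requests_made = 0

    def search(self, search_fn, **params):
        # Stable key: same params in any order map to the same entry.
        key = json.dumps(params, sort_keys=True)
        if key not in self._store:
            self.requests_made += 1
            self._store[key] = search_fn(**params)
        return self._store[key]

cache = SearchCache()
fake = lambda **p: {"results": [], "query": p["query"]}  # stand-in
cache.search(fake, query="EV trends", search_depth="basic")
cache.search(fake, query="EV trends", search_depth="basic")  # cache hit
print(cache.requests_made)  # 1
```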
Content Filtering
Configure search filters to control results quality:
- Use domain allowlists for trusted sources (include_domains)
- Exclude unreliable or low-quality domains (exclude_domains)
- Filter by date for recent content (time_range, start_date, end_date)
- Adjust search depth based on research needs
- Use topic=“news” for current events
- Control result count to manage API usage (max_results)
Example domain allowlists:
- Research: ["arxiv.org", "scholar.google.com", "ieee.org"]
- News: ["nytimes.com", "reuters.com", "bloomberg.com"]
- Tech: ["techcrunch.com", "arstechnica.com", "theverge.com"]
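Building on the lists above, a hedged sketch of a filtered news search using the tavily-search parameters documented earlier; the query text and domains are only examples. Note that domain entries must be bare hostnames, not full URLs.

```python
# Example: a filtered news search combining the domain and time
# parameters from the tavily-search table (values are illustrative).

news_params = {
    "query": "semiconductor export policy",
    "topic": "news",
    "days": 7,                 # look back one week (news topic only)
    "max_results": 10,
    "include_domains": ["reuters.com", "bloomberg.com"],
    "exclude_domains": [],
}

# A scheme prefix like "https://" would break the domain filter, so
# guard against it before sending the query.
assert all("://" not in d for d in news_params["include_domains"])
```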
Proper filtering improves result quality and reduces time spent reviewing irrelevant content.
Example Workflows
- Market Research
- Fact Checking
- News Monitoring
- Academic Research
- Product Research
- Competitive Intelligence
- Content Research
- Technical Research
- Website Analysis
Scenario: Research competitors and market trends in the electric vehicle industry.

What happens:
- Searches authoritative business and tech news sources
- Filters for content from last 3 months
- Extracts key facts, statistics, and quotes
- Provides source citations with publication dates
- Ranks results by relevance and authority
- Identifies trends and patterns across sources
Typical findings:
- Market share data from industry reports
- Recent news about BYD, Rivian, Lucid competitors
- Analyst predictions and expert opinions
- Statistical trends with time series data
- Cited sources from Bloomberg, Reuters, industry publications
Follow-up prompts:
- “What are the main challenges facing EV manufacturers according to these sources?”
- “Extract detailed content from the top 3 most relevant articles”
- “Search for investor sentiment about EV stocks in the same time period”
Troubleshooting
Rate Limit Exceeded
Cause: You’ve exceeded your monthly request limit (1,000 for the Free tier).

Solutions:
- Check your usage at app.tavily.com dashboard
- Wait until next month for automatic limit reset (1st of the month)
- Upgrade to a higher tier plan ($49/mo for 10,000 requests)
- Implement result caching for repeated queries
- Optimize query frequency in workflows
- Use more specific queries to reduce trial-and-error searches
- Consider batching research tasks monthly
Prevention tips:
- Use basic search depth when advanced isn’t needed
- Cache search results in your workflow
- Batch multiple URLs in single extract call
- Map before crawling to plan efficiently
Invalid API Key
Solutions:
- Verify your API key in Studio server settings (check for extra spaces)
- Confirm your Tavily account is active
- Check if key was revoked or regenerated
- Regenerate key from Tavily dashboard if necessary
- Ensure you copied the entire key without truncation
- Verify no special characters were added during copy/paste
To update the key in Studio:
- Navigate to Settings → MCP Servers → Tavily
- Click Edit Configuration
- Paste new API key (no quotes, no spaces)
- Save changes
- Verify “Active” status appears
No Results Found
Solutions:
- Broaden your search query (use less specific terms)
- Remove overly restrictive domain filters
- Expand time range or remove date filters
- Check spelling and terminology
- Try alternative keywords or phrases
- Use more general terms first, then narrow with follow-up queries
- Verify the topic has publicly available web content
- Try removing country restrictions if set
Query examples:
- ❌ Too specific: “John Smith from Acme Corp’s opinion on XYZ from last Tuesday”
- ✅ Better: “Expert opinions on XYZ technology”
- ✅ Best: “Recent analysis of XYZ technology trends”
Debugging steps:
- Try a basic query without filters
- If results appear, gradually add filters back
- Test with known queries that should have results
- Verify internet connectivity
Slow Response Time
Issue: Searches taking longer than expected.

Solutions:
- Check your internet connection speed
- Verify Tavily API status at status.tavily.com (if available)
- Reduce search depth to “basic” for faster results
- Decrease max_results parameter (e.g., 5 instead of 20)
- Simplify complex multi-part queries
- Break compound questions into separate queries
- Consider query complexity (more filters = more processing)
- Check for Tavily service announcements
Typical response times:
- Basic search: 2-3 seconds
- Advanced search: 4-6 seconds
- Extract (single URL): 1-2 seconds
- Extract (multiple URLs): 2-4 seconds
- Crawl (depth 1, 10 pages): 3-5 seconds
- Crawl (depth 2, 50 pages): 8-12 seconds
- Map (50 URLs): 2-4 seconds
Speed tips:
- Use basic search for quick lookups
- Advanced search for comprehensive research only
- Limit crawl depth and breadth for faster results
- Map sites before crawling to understand scope
Advanced searches and deep crawls are more thorough but take longer. Balance speed vs. comprehensiveness based on your needs.
Poor Result Quality
Issue: Results are not relevant or are of low quality.

Solutions:
- Use more specific, detailed search queries
- Add domain filters for trusted, authoritative sources
- Exclude known low-quality or unreliable domains
- Increase search depth from “basic” to “advanced”
- Provide more context in your query
- Use quotes for exact phrase matching
- Filter by date for recent, current content
- Specify topic=“news” for current events
Query refinement (broad to specific):
- Too broad: mixed-quality results
- More specific: topic, time frame, industry
- Very specific: implies authoritative sources
Trusted domain lists:
- Academic: ["arxiv.org", "scholar.google.com", "pubmed.gov"]
- Business: ["bloomberg.com", "reuters.com", "wsj.com"]
- Tech: ["arstechnica.com", "techcrunch.com", "theverge.com"]
Domain Filtering Not Working
Issue: Domain filters not producing expected results.

Solutions:
- Verify domain format: use “example.com” not “https://example.com”
- Check for typos in domain names
- Ensure domains actually have content matching your query
- Don’t over-filter (too many restrictions = no results)
- Use include OR exclude, not both for same domains
- Test without filters first to verify content exists
- Check that domain is spelled exactly as it appears in URLs
Testing approach:
- Search without domain filters first
- Verify which domains appear in results
- Use exact domain names from those results
- Add filters one at a time
Crawl/Map Incomplete Results
Issue: Crawler or mapper returning fewer pages than expected.

Solutions:
- Check site’s robots.txt (some sites block crawlers)
- Increase limit parameter (default is 50)
- Increase max_depth to explore deeper
- Increase max_breadth to follow more links per page
- Remove overly restrictive select_paths patterns
- Verify select_domains regex is correct
- Check if site requires authentication (crawler can’t access)
- Some sites may have JavaScript-only navigation (not crawlable)
Parameter meanings:
- limit: total pages to process before stopping
- max_depth: how many levels deep from the base URL
- max_breadth: how many links followed per page
Example: depth=2, breadth=10, limit=50
- Level 0: 1 page (the base URL)
- Level 1: up to 10 pages (first 10 links from the base)
- Level 2: up to 100 pages (10 links from each level-1 page)
- Theoretical total of 111 pages, but capped at 50 by the limit parameter
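The depth/breadth/limit arithmetic above can be written out directly: the theoretical page count is a geometric series over levels, then capped by limit.

```python
# Worked version of the crawl-size arithmetic: sum breadth**level over
# all levels from 0 to depth, then apply the limit cap.

def max_pages(depth, breadth, limit):
    theoretical = sum(breadth ** level for level in range(depth + 1))
    return min(theoretical, limit)

# depth=2, breadth=10, limit=50: 1 + 10 + 100 = 111 theoretical pages,
# capped to 50 by the limit parameter.
print(max_pages(2, 10, 50))  # 50
```

Running this for a planned crawl shows whether limit or depth/breadth is the binding constraint before you spend the request.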
Recommended approach:
- Map first to understand actual site structure
- Use path patterns to focus on relevant sections
- Adjust depth/breadth based on map results
- Increase limit gradually to avoid over-crawling
Server Connection Issues
Solutions:
- Check your internet connection
- Verify server is enabled in Studio (Settings → MCP Servers)
- Try disabling and re-enabling the Tavily server
- Check Tavily API status (look for status page or announcements)
- Verify Studio is not experiencing service interruptions
- Clear Studio cache and retry
- Try a simple test query to isolate issue
- Contact support if issue persists
Diagnostic checklist:
- Test internet connection (open a website)
- Check Studio status (other servers working?)
- Verify API key is valid (test in Tavily dashboard)
- Try disabling/re-enabling in Studio
- Check for error details in Studio logs (if available)
Studio manages all server infrastructure automatically - no local setup or maintenance required.
Tools Not Triggering
Issue: Studio doesn’t use Tavily tools when expected.

Solutions:
- Be more explicit: mention “search the web” or “use Tavily”
- Verify server shows “Active” status in Studio
- Check API key is correctly configured
- Provide clear action verbs: “search”, “find”, “extract”, “crawl”
- Include specific URLs when you want extraction
- Don’t ask Studio to find URLs - provide them directly
- Use phrases that indicate web search intent
Prompts that reliably trigger tools:
- “Search the web for recent AI developments”
- “Find the latest news about climate policy”
- “Search for expert opinions on remote work”
- “Extract content from this URL: [url]”
- “Crawl docs.example.com and extract all content”
Prompts that may not trigger tools:
- “What’s the latest on AI?” (might use general knowledge)
- “Tell me about climate policy” (might not search web)
- “What does this page say?” (no URL provided)
Links & Resources
GitHub Repository
View source code, report issues, and contribute
Tavily Documentation
Official Tavily API reference and guides
Tavily Dashboard
Manage your API keys and monitor usage
Report Issues
Found a bug? Submit an issue on GitHub
Learning Resources
Search Optimization
Tips for Better Search Results:

1. Query Crafting:
- Be specific: Include context, timeframe, and domain
- Use industry terminology and proper nouns
- Specify what you’re looking for (research, news, opinions, statistics)
- Include qualifiers (recent, official, expert, peer-reviewed)
2. Search Depth Selection:
- Basic: Quick lookups, general information, common topics
- Advanced: Comprehensive research, academic work, competitive intelligence
3. Filtering:
- Domain allowlists for trusted sources
- Time ranges for current information
- Topic selection (general vs. news)
- Geographic filtering when relevant
4. Iterative Research:
- Start broad to understand the landscape
- Refine with follow-up questions
- Extract full content from promising sources
- Cross-reference across multiple results
5. Source Verification:
- Check publication dates
- Verify author credentials
- Look for citations and references
- Compare across multiple sources
- Prioritize primary sources over secondary
Research Workflows
Effective Research Process:

Phase 1: Discovery (Basic Search)
- Start with broad queries to map the topic
- Identify key terms, names, and concepts
- Find authoritative sources and publications
- Understand the current state of knowledge
Phase 2: Deep Dive (Advanced Search)
- Use advanced search for comprehensive coverage
- Apply domain filters for quality sources
- Extract full content from key articles
- Identify gaps and questions for further research
Phase 3: Verification
- Verify facts across multiple sources
- Check publication dates and recency
- Identify consensus vs. outlier opinions
- Note conflicting information for further investigation
Phase 4: Systematic Collection (Crawl & Map)
- Map site structures for systematic coverage
- Crawl documentation or knowledge bases
- Extract structured information at scale
- Build comprehensive topic databases
Phase 5: Synthesis
- Combine findings into coherent insights
- Track all sources and citations
- Identify trends and patterns
- Formulate conclusions based on evidence
Studio automatically tracks sources and citations throughout your research session for easy reference.
Source Evaluation
Assessing Source Quality and Reliability:

Authority Indicators:
- Author credentials and expertise
- Publication reputation and peer review status
- Citations and references provided
- Institutional backing or sponsorship
- Domain authority (.edu, .gov, established publications)
Currency Indicators:
- Publication date matches your needs
- Updates and corrections noted
- Reflects current understanding
- Historical context provided when needed
Objectivity Indicators:
- Balanced presentation of evidence
- Multiple perspectives included
- Clear distinction between fact and opinion
- Potential biases disclosed
- Funding sources transparent
Evidence Indicators:
- Primary sources cited
- Data and statistics provided
- Methodology clearly described
- Reproducible or verifiable claims
- Expert consensus acknowledged
Cross-Referencing:
- Compare across multiple sources
- Look for consensus on key facts
- Note areas of disagreement
- Identify potential errors or outliers
- Verify claims with official sources
Red Flags:
- No author or source attribution
- Sensational headlines
- No citations or sources
- Conflicts with established facts
- Poor grammar or unprofessional presentation
Advanced Query Techniques
Strategies for Complex Research:
1. Comparison queries
2. Trend analysis
3. Expert insights
4. Evidence gathering
5. Temporal queries
6. Geographic specificity
7. Format-specific searches
Crawling & Mapping Strategies
Effective Website Analysis

When to Use Map vs. Crawl:

Use Map when:
- Understanding site structure first
- You need a site overview
- Planning a targeted crawl
- Checking link validity
- Auditing site architecture
- You don’t need page content yet
Use Crawl when:
- You need actual page content
- Building a knowledge base
- Extracting documentation
- Collecting data at scale
- Deep content analysis required
- Following up after mapping
max_depth guidance:
- 1: Current page + direct links (homepage + main sections)
- 2: Two levels deep (homepage → section → subsection)
- 3+: Deep exploration (use cautiously with large sites)
max_breadth guidance:
- 10: Focused exploration, main links only
- 20: Balanced coverage (default)
- 50+: Comprehensive but slower
limit guidance:
- 20: Quick sampling
- 50: Medium site coverage (default)
- 100: Large site exploration
- 200+: Comprehensive archival
Best Practices:
- Always map large sites first
- Use path patterns to focus on relevant sections
- Start with conservative limits
- Increase depth/breadth based on map results
- Use instructions for semantic filtering
- Respect site’s robots.txt and crawling policies
API Usage & Costs
Understanding Request Consumption:

Request Counting:
- Search: 1 request per query (regardless of max_results)
- Extract: 1 request per call (multiple URLs = still 1 request)
- Crawl: 1 request per session (all pages crawled = 1 request)
- Map: 1 request per session (all URLs mapped = 1 request)
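The counting rules above mean that a workflow's cost depends only on the number of tool calls, not on how many URLs or pages each call touches. A trivial sketch making that concrete (the function is illustrative, not part of any API):

```python
# Each tool call is one request, regardless of how many URLs an
# extract covers or how many pages a crawl visits.

def count_requests(searches=0, extract_calls=0,
                   crawl_sessions=0, map_sessions=0):
    return searches + extract_calls + crawl_sessions + map_sessions

# A workflow with 3 searches, 1 extract of 10 URLs, and 1 crawl of
# 50 pages consumes only 5 requests against the monthly quota.
print(count_requests(searches=3, extract_calls=1, crawl_sessions=1))  # 5
```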
Strategic Search Depth:
- Use basic search (faster, same request count)
- Reserve advanced for critical research only
- Both consume 1 request, but advanced may take longer
Query Optimization:
- Craft specific queries to get the right results the first time
- Poor query → no results → retry → multiple requests
- Good query → useful results → one request
Caching:
- Enable caching in Studio for repeated queries
- Cache research results in your workflow
- Avoid re-searching the same topics
Batching:
- Batch research tasks monthly if possible
- Schedule regular monitoring (weekly, not daily)
- Consolidate similar queries into comprehensive searches
Example monthly budget (Free tier, 1,000 requests):
- Daily research: ~33 requests/day
- Weekly monitoring: ~250 requests/week
- Mix of search (80%), extract (15%), crawl (5%)
Ready to Research? Enable Tavily in NimbleBrain Studio and start with the free tier. You get 1,000 requests per month to explore all 4 powerful research tools!