
Overview

What it does

Tavily provides AI-optimized web search and research capabilities specifically designed for AI agents and LLMs. Search the real-time web, extract content from specific URLs, crawl websites systematically, and map site structures - all optimized for accuracy and relevance. Results include source citations, relevance scoring, and intelligent content filtering to deliver the most useful information without noise.

Key Features:
  • AI-powered web search optimized for LLMs
  • Real-time internet access for current information
  • Content extraction from specific URLs
  • Intelligent web crawling with depth control
  • Website structure mapping and analysis
  • Multiple search depths (basic/advanced)
  • Domain and time-based filtering
  • News-specific search mode
  • Image search with descriptions
  • Natural language crawler instructions

Use Cases

Research & Analysis: Market research, competitive intelligence, academic research, trend analysis, fact-checking and verification
Content Discovery: News monitoring, content research for writing, product research and comparison, technical documentation exploration
Data Collection: Systematic site content extraction, structured data gathering, site architecture analysis, content auditing
Intelligence Gathering: Due diligence investigations, brand monitoring, industry research, regulatory compliance research
Tavily provides AI-optimized search results specifically designed for AI agents and LLMs, filtering noise and delivering accurate, relevant information with authoritative source citations.

Quick Start

Step 1: Get your API key

Sign up for a Tavily account at app.tavily.com/sign-up.

Free Tier Includes:
  • 1,000 search requests/month
  • All 4 search tools available
  • Basic and advanced search depths
  • Real-time web access
  • AI-powered result optimization
  • Source citations and relevance scoring
  • No credit card required
After signing up:
  1. Verify your email address
  2. Navigate to your API dashboard
  3. Copy your API key from the dashboard
The free tier provides 1,000 requests per month - perfect for testing and moderate research usage.
Step 2: Add to NimbleBrain Studio

In NimbleBrain Studio:
  1. Navigate to MCP Servers in the sidebar
  2. Click Add Server
  3. Search for “Tavily” in the server registry
  4. Click Configure
  5. Paste your API key in the TAVILY_API_KEY field
  6. Click Save & Enable
The server will automatically connect and enable real-time web search in your conversations.
Step 3: Test your connection

In your Studio chat, try this prompt:
"Search the web for the latest news about artificial intelligence breakthroughs this week"
You should see:
  • Real-time search results from authoritative sources
  • Content summaries and key information
  • Source URLs with publication dates
  • Relevance scores for each result
  • The 🔧 tool indicator confirming Tavily is working
Look for cited sources and relevance scores to confirm search quality.

Available Tools

Extract

Extract and process raw content from specific URLs with intelligent parsing. Perfect for gathering detailed information from known sources, analyzing specific articles, or collecting data from multiple pages.

Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| urls | array | Yes | - | List of URLs to extract content from |
| extract_depth | string | No | "basic" | Extraction depth: "basic" or "advanced" |
| include_images | boolean | No | false | Include images from pages |
| format | string | No | "markdown" | Output format: "markdown" or "text" |
| include_favicon | boolean | No | false | Include favicon URLs |
Returns:
{
  "results": [
    {
      "url": "https://example.com/article",
      "raw_content": "Extracted content in specified format...",
      "images": [
        "https://example.com/image1.jpg",
        "https://example.com/image2.jpg"
      ],
      "favicon": "https://example.com/favicon.ico"
    }
  ]
}
Example Usage:
"Extract the content from these three blog posts: [url1], [url2], [url3]"
"Extract all content and images from this LinkedIn profile: [linkedin-url]" (use advanced depth)
"Get the full text content from this research paper: [pdf-url]"
Use advanced extraction depth for complex pages like LinkedIn profiles, embedded content, or pages with tables and structured data. Basic depth is faster for simple articles.
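If you call the Tavily API directly rather than through Studio prompts, the parameters above map onto a single extract request. The sketch below is a minimal, hypothetical example using Python's requests library; the endpoint path and bearer-token auth are assumptions to verify against Tavily's API reference.

```python
import os
import requests

# Assumed extract endpoint; confirm the URL and auth scheme in Tavily's API reference.
TAVILY_EXTRACT_URL = "https://api.tavily.com/extract"
api_key = os.environ["TAVILY_API_KEY"]  # never hardcode keys

payload = {
    "urls": [
        "https://example.com/article-1",
        "https://example.com/article-2",
    ],
    "extract_depth": "basic",      # use "advanced" for complex pages (profiles, tables)
    "include_images": False,
    "format": "markdown",
    "include_favicon": False,
}

response = requests.post(
    TAVILY_EXTRACT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
response.raise_for_status()

for result in response.json().get("results", []):
    print(result["url"], len(result.get("raw_content", "")), "characters extracted")
```

Note that batching several URLs into one call, as above, counts as a single request against your quota.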

Crawl

Systematically explore and extract content from websites starting from a base URL. The crawler intelligently follows links like a graph traversal, with configurable depth, breadth, and filtering options.

Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| url | string | Yes | - | Root URL to begin the crawl |
| max_depth | integer | No | 1 | Maximum crawl depth from base URL (minimum: 1) |
| max_breadth | integer | No | 20 | Maximum links per page (minimum: 1) |
| limit | integer | No | 50 | Total pages to crawl before stopping (minimum: 1) |
| instructions | string | No | - | Natural language instructions for the crawler |
| select_paths | array | No | [] | Regex patterns for path filtering (e.g., /docs/.*) |
| select_domains | array | No | [] | Regex patterns for domain filtering |
| allow_external | boolean | No | true | Whether to return external links |
| extract_depth | string | No | "basic" | Content extraction: "basic" or "advanced" |
| format | string | No | "markdown" | Output format: "markdown" or "text" |
| include_favicon | boolean | No | false | Include favicon URLs |
Returns:
{
  "base_url": "https://example.com",
  "results": [
    {
      "url": "https://example.com/page1",
      "raw_content": "Extracted page content...",
      "favicon": "https://example.com/favicon.ico"
    },
    {
      "url": "https://example.com/page2",
      "raw_content": "Extracted page content..."
    }
  ],
  "response_time": 3.5
}
Example Usage:
"Crawl the React documentation starting from docs.react.dev and extract all content about hooks"
"Systematically crawl example.com/blog with max depth of 2 and extract all blog posts"
"Crawl the /api section of this documentation site and map all endpoints"
Be mindful of crawl limits. Large sites with high depth/breadth settings can consume many pages quickly. Start with conservative settings and adjust as needed.
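As a complement to the prompt examples, here is a conservative crawl call sketched as a direct API request. The endpoint path is an assumption; the parameter names mirror the table above.

```python
import os
import requests

# Assumed crawl endpoint; check Tavily's API reference for the exact path.
TAVILY_CRAWL_URL = "https://api.tavily.com/crawl"

payload = {
    "url": "https://example.com/docs",
    "max_depth": 2,                # two levels below the base URL
    "max_breadth": 10,             # follow at most 10 links per page
    "limit": 30,                   # hard cap on total pages
    "select_paths": ["/docs/.*"],  # stay inside the documentation section
    "allow_external": False,
    "extract_depth": "basic",
    "format": "markdown",
}

response = requests.post(
    TAVILY_CRAWL_URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
    timeout=120,  # crawls take longer than single searches
)
response.raise_for_status()
data = response.json()

print(f"Crawled {len(data['results'])} pages in {data['response_time']}s")
```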

Map

Create a structured map of website URLs to understand site architecture, content organization, and navigation paths. Perfect for site audits, content discovery, and analyzing website structure without extracting full content.

Parameters:
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| url | string | Yes | - | Root URL to begin mapping |
| max_depth | integer | No | 1 | Maximum mapping depth (minimum: 1) |
| max_breadth | integer | No | 20 | Maximum links per level (minimum: 1) |
| limit | integer | No | 50 | Total links to map (minimum: 1) |
| instructions | string | No | - | Natural language instructions for mapping |
| select_paths | array | No | [] | Regex patterns for path selection |
| select_domains | array | No | [] | Regex patterns for domain selection |
| allow_external | boolean | No | true | Include external links in results |
Returns:
{
  "base_url": "https://example.com",
  "results": [
    "https://example.com/about",
    "https://example.com/products",
    "https://example.com/products/item1",
    "https://example.com/contact",
    "https://example.com/blog"
  ],
  "response_time": 2.1
}
Example Usage:
"Map the structure of example.com to see all main sections and pages"
"Create a site map for docs.example.com focusing only on the /api paths"
"Map all product pages on this e-commerce site starting from /products"
Mapping is faster than crawling since it doesn’t extract content. Use it first to understand site structure before deciding what to crawl.
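For comparison, a map call can be sketched the same way. The endpoint path is again an assumption, and because map returns URLs only, handling the response is just iterating a list.

```python
import os
import requests

# Assumed map endpoint; confirm against Tavily's API reference.
TAVILY_MAP_URL = "https://api.tavily.com/map"

payload = {
    "url": "https://docs.example.com",
    "max_depth": 2,
    "max_breadth": 20,
    "limit": 100,
    "select_paths": ["/api/.*"],  # only map the API reference section
    "allow_external": False,
}

response = requests.post(
    TAVILY_MAP_URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
    timeout=60,
)
response.raise_for_status()

for url in response.json()["results"]:
    print(url)
```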

Authentication

API Key Required: This server requires a Tavily API key to access web search and research capabilities.

Getting Your API Key

  1. Create an account at app.tavily.com/sign-up
  2. Choose your plan (Free tier available with no credit card)
  3. Verify your email address
  4. Navigate to the API dashboard
  5. Copy your API key
  6. Add it to your Studio server configuration
Start with the Free tier - 1,000 requests/month is sufficient for extensive testing and many research projects.

Rate Limits & Pricing

| Plan | Requests/Month | Features | Price |
| --- | --- | --- | --- |
| Free | 1,000 | All 4 tools, basic + advanced search | $0 |
| Basic | 10,000 | + Priority support | $49/mo |
| Pro | 50,000 | + Higher rate limits, advanced features | $149/mo |
| Enterprise | Custom | Dedicated support, custom limits, SLA | Custom |
Request Counting:
  • Each search query = 1 request
  • Each extract call = 1 request (regardless of URL count)
  • Each crawl session = 1 request (regardless of pages crawled)
  • Each map operation = 1 request
  • Failed requests don’t count toward limit
  • Limit resets on the 1st of each month
Free tier is limited to 1,000 requests per month. Monitor your usage in the Tavily dashboard to avoid hitting limits.

Managing Your API Key in Studio

Your API key is securely stored in NimbleBrain Studio. To update it:
  1. Go to Settings → MCP Servers
  2. Find “Tavily” in your server list
  3. Click Edit Configuration
  4. Update your API key
  5. Click Save
Studio automatically manages server connections - no manual restarts required.

Security Best Practices

Your Tavily API key grants access to your search quota:
  • Never share your API key publicly
  • Don’t commit keys to version control
  • Rotate keys periodically (every 90 days)
  • Monitor usage for unexpected activity
  • Use separate keys for different environments
  • Keep keys in secure credential managers
If your key is compromised, regenerate it immediately in the Tavily dashboard under API Settings.
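Outside of Studio, the same advice applies to scripts: read the key from the environment or a credential manager rather than embedding it in code. A minimal sketch:

```python
import os

# Read the Tavily key from the environment so it never appears in source control;
# fail loudly if it is missing rather than falling back to a hardcoded literal.
api_key = os.environ.get("TAVILY_API_KEY")
if not api_key:
    raise RuntimeError(
        "TAVILY_API_KEY is not set. Export it in your shell or store it in "
        "a credential manager, never in the codebase."
    )
```
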
Track your API usage to avoid hitting limits:
  • Check usage dashboard regularly at app.tavily.com
  • Set up usage alerts in Tavily (available in dashboard)
  • Implement query caching for repeated searches
  • Use appropriate search depth for your needs
  • Optimize query frequency in automated workflows
  • Review request patterns monthly
Usage optimization tips:
  • Basic search depth uses fewer resources than advanced
  • Batch extract operations when possible (multiple URLs in one call)
  • Use map before crawl to understand site structure
  • Cache search results for frequently accessed queries
Studio can cache search results - enable caching in settings to reduce API calls for repeated queries.
Configure search filters to control results quality:
  • Use domain allowlists for trusted sources (include_domains)
  • Exclude unreliable or low-quality domains (exclude_domains)
  • Filter by date for recent content (time_range, start_date, end_date)
  • Adjust search depth based on research needs
  • Use topic=“news” for current events
  • Control result count to manage API usage (max_results)
Domain filtering examples:
  • Research: ["arxiv.org", "scholar.google.com", "ieee.org"]
  • News: ["nytimes.com", "reuters.com", "bloomberg.com"]
  • Tech: ["techcrunch.com", "arstechnica.com", "theverge.com"]
Proper filtering improves result quality and reduces time spent reviewing irrelevant content.
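If you drive the search API directly, those filters map onto request parameters. The sketch below assumes the /search endpoint and the parameter names referenced in this section (include_domains, exclude_domains, time_range, topic, max_results); confirm the exact names and accepted values against Tavily's API reference.

```python
import os
import requests

# Assumed search endpoint; parameter names follow this section's filter list.
TAVILY_SEARCH_URL = "https://api.tavily.com/search"

payload = {
    "query": "FDA-approved AI medical diagnostic tools",
    "search_depth": "advanced",                      # thorough research
    "topic": "news",                                 # current-events mode
    "time_range": "month",                           # assumed value; check docs
    "max_results": 5,                                # keep review time down
    "include_domains": ["reuters.com", "nature.com"],
    "exclude_domains": ["example-low-quality-blog.com"],
}

response = requests.post(
    TAVILY_SEARCH_URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
    timeout=30,
)
response.raise_for_status()

for hit in response.json().get("results", []):
    print(f"{hit.get('score', 0):.2f}  {hit['title']}  {hit['url']}")
```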

Example Workflows

  • Market Research
  • Fact Checking
  • News Monitoring
  • Academic Research
  • Product Research
  • Competitive Intelligence
  • Content Research
  • Technical Research
  • Website Analysis
Scenario: Research competitors and market trends in the electric vehicle industry

Prompt:
"Search for recent developments in the electric vehicle market, focusing on Tesla's main competitors and market share trends from the last 3 months"
What happens:
  • Searches authoritative business and tech news sources
  • Filters for content from last 3 months
  • Extracts key facts, statistics, and quotes
  • Provides source citations with publication dates
  • Ranks results by relevance and authority
  • Identifies trends and patterns across sources
Time: 3-5 seconds (advanced search)
API calls: 1 request

Example Response:
  • Market share data from industry reports
  • Recent news about BYD, Rivian, Lucid competitors
  • Analyst predictions and expert opinions
  • Statistical trends with time series data
  • Cited sources from Bloomberg, Reuters, industry publications
Follow-up prompts:
  • “What are the main challenges facing EV manufacturers according to these sources?”
  • “Extract detailed content from the top 3 most relevant articles”
  • “Search for investor sentiment about EV stocks in the same time period”
Use advanced search depth for thorough research with more authoritative sources. Combine with domain filtering to focus on business publications.
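The same workflow can be scripted: one advanced search bounded to recent coverage, then one extract call for the top hits. Endpoint paths and the time_range value are assumptions, as in the other sketches; the two calls together consume two requests.

```python
import os
import requests

API = "https://api.tavily.com"  # assumed base URL; verify in Tavily's API reference
HEADERS = {"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"}

# Request 1: advanced search limited to recent business coverage.
search = requests.post(f"{API}/search", headers=HEADERS, timeout=30, json={
    "query": "electric vehicle market share Tesla competitors",
    "search_depth": "advanced",
    "topic": "news",
    "time_range": "month",       # assumed value for "recent"; check docs
    "max_results": 10,
})
search.raise_for_status()
hits = search.json()["results"]

# Request 2: pull full content for the three highest-scoring sources.
top_urls = [h["url"] for h in sorted(hits, key=lambda h: h.get("score", 0), reverse=True)[:3]]
extract = requests.post(f"{API}/extract", headers=HEADERS, timeout=60, json={
    "urls": top_urls,
    "extract_depth": "advanced",
    "format": "markdown",
})
extract.raise_for_status()

for page in extract.json()["results"]:
    print(page["url"], "->", len(page["raw_content"]), "characters")
```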

Troubleshooting

Error Message:
429 Too Many Requests - Monthly quota exceeded
Cause: You've exceeded your monthly request limit (1,000 for the Free tier)

Solutions:
  • Check your usage at app.tavily.com dashboard
  • Wait until next month for automatic limit reset (1st of the month)
  • Upgrade to a higher tier plan ($49/mo for 10,000 requests)
  • Implement result caching for repeated queries
  • Optimize query frequency in workflows
  • Use more specific queries to reduce trial-and-error searches
  • Consider batching research tasks monthly
Usage optimization:
  • Use basic search depth when advanced isn’t needed
  • Cache search results in your workflow
  • Batch multiple URLs in single extract call
  • Map before crawling to plan efficiently
Enable caching in Studio settings to automatically reuse recent search results and reduce API calls.
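Studio's built-in caching is the simplest option, but if you script against the API directly you can add a small local cache so repeated queries in a session don't consume extra requests. A minimal in-memory sketch, with the same endpoint assumptions as the earlier examples:

```python
import os
import requests

_cache: dict[str, dict] = {}

def cached_search(query: str, **params) -> dict:
    """Return a cached result for an identical query, otherwise call the API once."""
    key = query + repr(sorted(params.items()))
    if key in _cache:
        return _cache[key]                      # no request consumed
    resp = requests.post(
        "https://api.tavily.com/search",        # assumed endpoint
        json={"query": query, **params},
        headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
        timeout=30,
    )
    resp.raise_for_status()
    _cache[key] = resp.json()
    return _cache[key]

# The second call with identical arguments is served from the cache.
first = cached_search("latest AI policy news", max_results=5)
second = cached_search("latest AI policy news", max_results=5)
```
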
Error Message:
401 Unauthorized - Invalid API key
Solutions:
  • Verify your API key in Studio server settings (check for extra spaces)
  • Confirm your Tavily account is active
  • Check if key was revoked or regenerated
  • Regenerate key from Tavily dashboard if necessary
  • Ensure you copied the entire key without truncation
  • Verify no special characters were added during copy/paste
To update API key in Studio:
  1. Navigate to Settings → MCP Servers → Tavily
  2. Click Edit Configuration
  3. Paste new API key (no quotes, no spaces)
  4. Save changes
  5. Verify “Active” status appears
API keys are account-specific. Don’t share keys between team members - each person should have their own account and key.
Error Message:
No relevant results found for query
Solutions:
  • Broaden your search query (use less specific terms)
  • Remove overly restrictive domain filters
  • Expand time range or remove date filters
  • Check spelling and terminology
  • Try alternative keywords or phrases
  • Use more general terms first, then narrow with follow-up queries
  • Verify the topic has publicly available web content
  • Try removing country restrictions if set
Query optimization examples:
  • ❌ Too specific: “John Smith from Acme Corp’s opinion on XYZ from last Tuesday”
  • ✅ Better: “Expert opinions on XYZ technology”
  • ✅ Best: “Recent analysis of XYZ technology trends”
Troubleshooting steps:
  1. Try basic query without filters
  2. If results appear, gradually add filters back
  3. Test with known queries that should have results
  4. Verify internet connectivity
Start with broad queries to establish that content exists, then use follow-up queries to narrow focus based on initial results.
Issue: Searches taking longer than expected

Solutions:
  • Check your internet connection speed
  • Verify Tavily API status at status.tavily.com (if available)
  • Reduce search depth to “basic” for faster results
  • Decrease max_results parameter (e.g., 5 instead of 20)
  • Simplify complex multi-part queries
  • Break compound questions into separate queries
  • Consider query complexity (more filters = more processing)
  • Check for Tavily service announcements
Typical response times:
  • Basic search: 2-3 seconds
  • Advanced search: 4-6 seconds
  • Extract (single URL): 1-2 seconds
  • Extract (multiple URLs): 2-4 seconds
  • Crawl (depth 1, 10 pages): 3-5 seconds
  • Crawl (depth 2, 50 pages): 8-12 seconds
  • Map (50 URLs): 2-4 seconds
Performance tips:
  • Use basic search for quick lookups
  • Advanced search for comprehensive research only
  • Limit crawl depth and breadth for faster results
  • Map sites before crawling to understand scope
Advanced searches and deep crawls are more thorough but take longer. Balance speed vs. comprehensiveness based on your needs.
Issue: Results are not relevant or are low quality

Solutions:
  • Use more specific, detailed search queries
  • Add domain filters for trusted, authoritative sources
  • Exclude known low-quality or unreliable domains
  • Increase search depth from “basic” to “advanced”
  • Provide more context in your query
  • Use quotes for exact phrase matching
  • Filter by date for recent, current content
  • Specify topic=“news” for current events
Query improvement examples:

Basic: "AI news"
  • Too broad, mixed quality results
Better: “Latest AI breakthroughs in healthcare 2024”
  • More specific topic, time frame, industry
Best: “Recent FDA-approved AI medical diagnostic tools and clinical trial results”
  • Very specific, implies authoritative sources
Domain filtering for quality:
  • Academic: ["arxiv.org", "scholar.google.com", "pubmed.gov"]
  • Business: ["bloomberg.com", "reuters.com", "wsj.com"]
  • Tech: ["arstechnica.com", "techcrunch.com", "theverge.com"]
More specific queries with context and constraints produce higher quality, more relevant results. Invest time in query crafting.
Issue: Domain filters not producing expected results

Solutions:
  • Verify domain format: use "example.com", not "https://example.com"
  • Check for typos in domain names
  • Ensure domains actually have content matching your query
  • Don’t over-filter (too many restrictions = no results)
  • Use include OR exclude, not both for same domains
  • Test without filters first to verify content exists
  • Check that domain is spelled exactly as it appears in URLs
Correct domain format examples:

✅ Correct:
["nytimes.com", "washingtonpost.com", "reuters.com"]
❌ Incorrect:
["https://nytimes.com", "www.washingtonpost.com", "https://www.reuters.com/"]
Testing approach:
  1. Search without domain filters first
  2. Verify which domains appear in results
  3. Use exact domain names from those results
  4. Add filters one at a time
Overly restrictive domain filters can result in zero results even for valid queries. Start with 2-3 trusted domains, not 20.
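If your domain lists come from pasted URLs, a small normalization step avoids the formatting mistakes above. This helper is purely illustrative:

```python
from urllib.parse import urlparse

def normalize_domain(value: str) -> str:
    """Turn 'https://www.reuters.com/world/' into 'reuters.com' for filter lists."""
    host = urlparse(value).netloc or value  # handle both bare domains and full URLs
    host = host.split(":")[0].lower()       # drop any port
    if host.startswith("www."):
        host = host[4:]
    return host.rstrip("/")

raw = ["https://nytimes.com", "www.washingtonpost.com", "https://www.reuters.com/"]
print([normalize_domain(d) for d in raw])
# -> ['nytimes.com', 'washingtonpost.com', 'reuters.com']
```
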
Issue: Crawler or mapper returning fewer pages than expected

Solutions:
  • Check site’s robots.txt (some sites block crawlers)
  • Increase limit parameter (default is 50)
  • Increase max_depth to explore deeper
  • Increase max_breadth to follow more links per page
  • Remove overly restrictive select_paths patterns
  • Verify select_domains regex is correct
  • Check if site requires authentication (crawler can’t access)
  • Some sites may have JavaScript-only navigation (not crawlable)
Understanding limits:
  • limit: Total pages to process before stopping
  • max_depth: How many levels deep from base URL
  • max_breadth: How many links per page
Example:
  • depth=2, breadth=10, limit=50
  • Level 0: 1 page (base URL)
  • Level 1: up to 10 pages (first 10 links from base)
  • Level 2: up to 100 pages (10 links from each L1 page)
  • But limited to 50 total pages by limit parameter
Optimization:
  • Map first to understand actual site structure
  • Use path patterns to focus on relevant sections
  • Adjust depth/breadth based on map results
  • Increase limit gradually to avoid over-crawling
Always map a site first to understand its structure and size before crawling. This helps you set appropriate limits.
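The interaction between depth, breadth, and limit is easier to see as arithmetic. This small sketch computes the theoretical page count at each level and where the limit cuts it off, matching the example above:

```python
def crawl_budget(max_depth: int, max_breadth: int, limit: int) -> None:
    """Print the theoretical page count per level and the effective cap."""
    total = 0
    for level in range(max_depth + 1):
        level_pages = max_breadth ** level  # level 0 is the base URL itself
        total += level_pages
        print(f"Level {level}: up to {level_pages} pages (running total {total})")
    print(f"Theoretical maximum: {total} pages; limit caps the crawl at {min(total, limit)}")

crawl_budget(max_depth=2, max_breadth=10, limit=50)
# Level 0: 1, Level 1: up to 10, Level 2: up to 100 -> capped at 50 by limit
```
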
Error Message:
Server connection timeout or unavailable
Solutions:
  • Check your internet connection
  • Verify server is enabled in Studio (Settings → MCP Servers)
  • Try disabling and re-enabling the Tavily server
  • Check Tavily API status (look for status page or announcements)
  • Verify Studio is not experiencing service interruptions
  • Clear Studio cache and retry
  • Try a simple test query to isolate issue
  • Contact support if issue persists
Connection troubleshooting steps:
  1. Test internet connection (open a website)
  2. Check Studio status (other servers working?)
  3. Verify API key is valid (test in Tavily dashboard)
  4. Try disabling/re-enabling in Studio
  5. Check for error details in Studio logs (if available)
Studio manages all server infrastructure automatically - no local setup or maintenance required.
Issue: Studio doesn't use Tavily tools when expected

Solutions:
  • Be more explicit: mention “search the web” or “use Tavily”
  • Verify server shows “Active” status in Studio
  • Check API key is correctly configured
  • Provide clear action verbs: “search”, “find”, “extract”, “crawl”
  • Include specific URLs when you want extraction
  • Don’t ask Studio to find URLs - provide them directly
  • Use phrases that indicate web search intent
Example effective prompts:

✅ Good - Clear web search intent:
  • “Search the web for recent AI developments”
  • “Find the latest news about climate policy”
  • “Search for expert opinions on remote work”
✅ Good - Explicit extraction:
  • “Extract content from this URL: [url]”
  • “Crawl docs.example.com and extract all content”
❌ Ambiguous - May not trigger tools:
  • “What’s the latest on AI?” (might use general knowledge)
  • “Tell me about climate policy” (might not search web)
  • “What does this page say?” (no URL provided)
Make your intent clear: use “search the web”, “extract from URL”, “crawl this site” to explicitly signal tool usage.

Learning Resources

Tips for Better Search Results:

1. Query Crafting:
  • Be specific: Include context, timeframe, and domain
  • Use industry terminology and proper nouns
  • Specify what you’re looking for (research, news, opinions, statistics)
  • Include qualifiers (recent, official, expert, peer-reviewed)
2. Search Depth Selection:
  • Basic: Quick lookups, general information, common topics
  • Advanced: Comprehensive research, academic work, competitive intelligence
3. Effective Filtering:
  • Domain allowlists for trusted sources
  • Time ranges for current information
  • Topic selection (general vs. news)
  • Geographic filtering when relevant
4. Iterative Research:
  • Start broad to understand landscape
  • Refine with follow-up questions
  • Extract full content from promising sources
  • Cross-reference across multiple results
5. Source Evaluation:
  • Check publication dates
  • Verify author credentials
  • Look for citations and references
  • Compare across multiple sources
  • Prioritize primary sources over secondary
Think of Tavily as a research assistant - the more context and direction you provide, the better the results.
Effective Research Process:

Phase 1: Discovery (Basic Search)
  • Start with broad queries to map the topic
  • Identify key terms, names, and concepts
  • Find authoritative sources and publications
  • Understand the current state of knowledge
Phase 2: Deep Dive (Advanced Search + Extract)
  • Use advanced search for comprehensive coverage
  • Apply domain filters for quality sources
  • Extract full content from key articles
  • Identify gaps and questions for further research
Phase 3: Verification (Cross-referencing)
  • Verify facts across multiple sources
  • Check publication dates and recency
  • Identify consensus vs. outlier opinions
  • Note conflicting information for further investigation
Phase 4: Specialized Research (Crawl/Map)
  • Map site structures for systematic coverage
  • Crawl documentation or knowledge bases
  • Extract structured information at scale
  • Build comprehensive topic databases
Phase 5: Synthesis
  • Combine findings into coherent insights
  • Track all sources and citations
  • Identify trends and patterns
  • Formulate conclusions based on evidence
Studio automatically tracks sources and citations throughout your research session for easy reference.
Assessing Source Quality and Reliability:

Authority Indicators:
  • Author credentials and expertise
  • Publication reputation and peer review status
  • Citations and references provided
  • Institutional backing or sponsorship
  • Domain authority (.edu, .gov, established publications)
Recency Evaluation:
  • Publication date matches your needs
  • Updates and corrections noted
  • Reflects current understanding
  • Historical context provided when needed
Objectivity Assessment:
  • Balanced presentation of evidence
  • Multiple perspectives included
  • Clear distinction between fact and opinion
  • Potential biases disclosed
  • Funding sources transparent
Evidence Quality:
  • Primary sources cited
  • Data and statistics provided
  • Methodology clearly described
  • Reproducible or verifiable claims
  • Expert consensus acknowledged
Cross-referencing:
  • Compare across multiple sources
  • Look for consensus on key facts
  • Note areas of disagreement
  • Identify potential errors or outliers
  • Verify claims with official sources
Red Flags:
  • No author or source attribution
  • Sensational headlines
  • No citations or sources
  • Conflicts with established facts
  • Poor grammar or unprofessional presentation
AI summaries are helpful for efficiency but always verify critical information with original authoritative sources.
Strategies for Complex Research:

1. Comparison Queries:
"Compare [A] vs [B] based on [criteria] with recent data"
"What are the main differences between [A] and [B]?"
"Pros and cons of [topic] from expert perspectives"
2. Trend Analysis:
"What are emerging trends in [industry] for 2024?"
"How has [topic] evolved over the past [timeframe]?"
"Latest developments in [field] according to recent research"
3. Expert Insights:
"What do leading experts say about [topic]?"
"Find recent expert opinions and analysis on [issue]"
"Search for thought leadership content about [subject]"
4. Evidence Gathering:
"Find statistics and data about [topic] from authoritative sources"
"What research and studies support [claim]?"
"Search for peer-reviewed evidence on [medical/scientific topic]"
5. Temporal Queries:
"Latest news from the past 24 hours about [event]"
"Historical analysis of [topic] from 2020 to present"
"Track how [narrative/understanding] has changed over time"
6. Geographic Specificity:
"US market analysis of [product/service]"
"European regulations regarding [topic]"
"Search prioritizing sources from [country]"
7. Format-Specific Searches:
"Find official documentation for [technical topic]"
"Search for case studies about [business challenge]"
"Locate white papers on [industry topic]"
"Find tutorial or how-to content for [skill]"
Frame queries as specific research objectives rather than vague questions. Include context, constraints, and desired source types for best results.
Effective Website Analysis:

When to Use Map vs. Crawl:

Use Map when:
  • Understanding site structure first
  • You need a site overview
  • Planning a targeted crawl
  • Checking link validity
  • Auditing site architecture
  • You don’t need page content yet
Use Crawl when:
  • You need actual page content
  • Building a knowledge base
  • Extracting documentation
  • Collecting data at scale
  • Deep content analysis required
  • Following up after mapping
Parameter Strategy:

max_depth:
  • 1: Current page + direct links (homepage + main sections)
  • 2: Two levels deep (homepage → section → subsection)
  • 3+: Deep exploration (use cautiously with large sites)
max_breadth:
  • 10: Focused exploration, main links only
  • 20: Balanced coverage (default)
  • 50+: Comprehensive but slower
limit:
  • 20: Quick sampling
  • 50: Medium site coverage (default)
  • 100: Large site exploration
  • 200+: Comprehensive archival
Path Patterns:
["/docs/.*"]           // Only documentation pages
["/api/v2/.*"]        // Specific API version
["/blog/\\d{4}/.*"]   // Blog posts with year
["/products/.*"]       // Product pages
Domain Patterns:
["^docs\\.example\\.com$"]           // Exact subdomain
["^.*\\.example\\.com$"]             // All subdomains
["^(docs|api)\\.example\\.com$"]     // Specific subdomains
Best Practices:
  1. Always map large sites first
  2. Use path patterns to focus on relevant sections
  3. Start with conservative limits
  4. Increase depth/breadth based on map results
  5. Use instructions for semantic filtering
  6. Respect site’s robots.txt and crawling policies
Large sites with high depth/breadth can consume many pages quickly. Start conservatively and adjust based on results.
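Putting the parameter strategy and path patterns together, a focused documentation crawl might look like the sketch below. As before, the endpoint is an assumption, and the regex patterns are taken from the examples above.

```python
import os
import requests

# Assumed crawl endpoint; path/domain patterns follow the examples above.
payload = {
    "url": "https://docs.example.com",
    "max_depth": 2,
    "max_breadth": 20,
    "limit": 50,
    "select_paths": [r"/docs/.*", r"/blog/\d{4}/.*"],  # docs plus dated blog posts
    "select_domains": [r"^docs\.example\.com$"],       # stay on the docs subdomain
    "allow_external": False,
    "instructions": "Focus on pages that document the public API",
}

response = requests.post(
    "https://api.tavily.com/crawl",
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
    timeout=120,
)
response.raise_for_status()
print(f"Crawled {len(response.json()['results'])} pages")
```
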
Understanding Request Consumption:

Request Counting:
  • Search: 1 request per query (regardless of max_results)
  • Extract: 1 request per call (multiple URLs = still 1 request)
  • Crawl: 1 request per session (all pages crawled = 1 request)
  • Map: 1 request per session (all URLs mapped = 1 request)
Cost-Effective Strategies:

1. Batch Operations:
✅ Extract multiple URLs in one call:
"Extract content from these 5 URLs: [url1, url2, url3, url4, url5]"
→ 1 request

❌ Extract URLs separately:
5 separate extract calls
→ 5 requests
2. Strategic Search Depth:
  • Use basic search (faster, same request count)
  • Reserve advanced for critical research only
  • Both consume 1 request, but advanced may take longer
3. Efficient Crawling:
✅ Map first, then targeted crawl:
Map (1 request) → understand structure → focused crawl (1 request)
→ 2 requests total, but more efficient

❌ Blind crawling:
Large crawl with poor parameters → wasted pages
→ 1 request, but inefficient use
4. Query Optimization:
  • Craft specific queries to get right results first time
  • Poor query → no results → retry → multiple requests
  • Good query → useful results → one request
5. Result Caching:
  • Enable caching in Studio for repeated queries
  • Cache research results in your workflow
  • Avoid re-searching the same topics
6. Time-Based Planning:
  • Batch research tasks monthly if possible
  • Schedule regular monitoring (weekly, not daily)
  • Consolidate similar queries into comprehensive searches
Monthly Planning (1,000 requests):
  • Daily research: ~33 requests/day
  • Weekly monitoring: ~250 requests/week
  • Mix of search (80%), extract (15%), crawl (5%)
Track your usage patterns in the Tavily dashboard. Adjust research habits based on monthly consumption trends.
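If you script regular monitoring, a tiny local counter helps you stay inside the monthly quota described above. This is an illustrative sketch, not an official usage API; the authoritative numbers are always in the Tavily dashboard.

```python
import json
from datetime import date
from pathlib import Path

BUDGET = 1_000                      # Free tier requests per month
LEDGER = Path("tavily_usage.json")  # hypothetical local ledger file

def record_request(kind: str) -> int:
    """Increment a local per-month counter and return requests remaining."""
    month = date.today().strftime("%Y-%m")
    usage = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    usage.setdefault(month, {"search": 0, "extract": 0, "crawl": 0, "map": 0})
    usage[month][kind] += 1
    LEDGER.write_text(json.dumps(usage, indent=2))
    remaining = BUDGET - sum(usage[month].values())
    if remaining < 50:
        print(f"Warning: only {remaining} Tavily requests left this month")
    return remaining

# One search plus one batched extract still counts as two requests in total.
record_request("search")
record_request("extract")
```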

Ready to Research? Enable Tavily in NimbleBrain Studio and start with the free tier. You get 1,000 requests per month to explore all 4 powerful research tools!