PostgreSQL MCP Server
Natural language database queries and intelligent index tuning.
Overview
The PostgreSQL MCP Server transforms database management into a natural conversation. Instead of writing complex SQL queries, analyzing execution plans, or manually tuning indexes, you simply describe what you need in plain English. The server intelligently translates your requests into optimized database operations.
Perfect for:
- Data analysts who want instant answers without writing SQL
- Developers optimizing application performance
- Database administrators monitoring system health
- Teams wanting to democratize database access safely
Key features:
- Schema Intelligence: Explore database structure through natural language
- Safe SQL Execution: Read-only mode protects production data
- Query Optimization: Get execution plans with hypothetical index simulation
- Performance Tuning: Industrial-strength index recommendations (DTA algorithm)
- Health Monitoring: Comprehensive checks for indexes, connections, vacuum, replication, and more
Quick Start
Set Up PostgreSQL Access
Add Server to NimbleBrain Studio
- Open NimbleBrain Studio
- Navigate to the MCP Servers page
- Search for “PostgreSQL MCP Server”
- Click “Add to Workspace”
- In the environment variables, add:
`DATABASE_URI`: Your full connection string (e.g., `postgresql://user:password@host:port/database`)
- For production use, set `--access-mode restricted` to enable read-only safety features
Test Your Connection
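A quick way to verify the setup is to ask the server to run a trivial read-only query. A minimal sketch of what that boils down to:

```sql
-- Minimal connectivity check: returns one row with the server version string.
-- If this succeeds, credentials, networking, and SSL settings are all working.
SELECT version();
```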
Available Tools
list_schemas - Discover Database Structure
What it does:
- Retrieves all schemas from `information_schema.schemata`
- Classifies schemas as System, System Information, or User schemas
- Shows schema owners for permission auditing
- Ordered by type and name for easy navigation
Returns:
- `schema_name`: Name of the schema
- `schema_owner`: Owner of the schema
- `schema_type`: Classification (System Schema, System Information Schema, or User Schema)
Example prompts:
- “What schemas exist in my database?”
- “Show me all the database schemas”
- “List the user-created schemas”
- “Which schemas are system schemas?”
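For reference, the tool's behavior is roughly equivalent to the following query (a sketch; the server's exact SQL and classification logic may differ):

```sql
-- Approximate equivalent of list_schemas: schemas with their owners,
-- ordered for easy navigation.
SELECT schema_name, schema_owner
FROM information_schema.schemata
ORDER BY schema_name;
```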
list_objects - Browse Schema Contents
What it does:
- Queries `information_schema.tables` for object metadata
- Filters by schema and object type
- Returns structured information about each object
- Supports tables, views, sequences, and extensions
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `schema_name` | string | Yes | - | Schema to query (e.g., "public", "auth") |
| `object_type` | string | No | "table" | Type of object: "table", "view", "sequence", or "extension" |
Returns:
- Tables/Views: schema, name, type
- Sequences: schema, name, data_type
- Extensions: name, version, relocatable
Example prompts:
- “What tables are in the public schema?”
- “Show me all views in the analytics schema”
- “List sequences in the public schema”
- “What extensions are installed?”
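Under the hood this is roughly an information_schema lookup (a sketch, not the server's exact SQL):

```sql
-- Approximate equivalent of list_objects for tables in the public schema.
SELECT table_schema, table_name, table_type
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_type = 'BASE TABLE'
ORDER BY table_name;
```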
get_object_details - Inspect Table Structure
What it does:
- Gets complete structure of tables and views
- Shows all columns with data types, nullability, and defaults
- Lists constraints (primary keys, foreign keys, unique, check)
- Displays all indexes with their definitions
- Provides sequence and extension details
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `schema_name` | string | Yes | - | Schema containing the object |
| `object_name` | string | Yes | - | Name of the object to inspect |
| `object_type` | string | No | "table" | Type: "table", "view", "sequence", or "extension" |
Returns:
- Tables/Views: basic info, columns array, constraints array, indexes array
- Sequences: schema, name, data_type, start_value, increment
- Extensions: name, version, relocatable status
Example prompts:
- “Show me the structure of the users table”
- “What columns does the orders table have?”
- “What indexes exist on the products table?”
- “Show me all constraints on the accounts table”
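The columns portion of the output corresponds roughly to this catalog query (a sketch; "users" is an example table):

```sql
-- Approximate source of the columns array for a table.
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'users'
ORDER BY ordinal_position;
```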
execute_sql - Run Database Queries
What it does:
- Executes any SQL statement in unrestricted mode
- Enforces read-only operations in restricted mode
- Returns query results as structured data
- Provides timeout protection (30s in restricted mode)
- Prevents destructive operations in production
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `sql` | string | Yes | - | SQL query to execute |
Access modes:
- Unrestricted (Development): Allows all SQL operations including INSERT, UPDATE, DELETE, DROP
- Restricted (Production): Only allows SELECT queries, enforces 30-second timeout, blocks destructive operations
Example prompts:
- “Show me the first 10 users”
- “Count how many orders were placed today”
- “Find all products with price greater than 100”
- “Get the average order value by month”
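For example, a prompt like “Get the average order value by month” might be translated into SQL along these lines (a hypothetical translation, assuming an orders table with created_at and total columns):

```sql
-- Hypothetical SQL generated for "average order value by month".
SELECT date_trunc('month', created_at) AS month,
       round(avg(total), 2) AS avg_order_value
FROM orders
GROUP BY month
ORDER BY month;
```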
explain_query - Analyze Query Performance
What it does:
- Generates execution plans with EXPLAIN
- Shows estimated vs actual costs with ANALYZE
- Simulates hypothetical indexes using HypoPG extension
- Reveals sequential scans, index usage, and join strategies
- Identifies performance bottlenecks before execution
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `sql` | string | Yes | - | SQL query to explain |
| `analyze` | boolean | No | false | Run query for real statistics (takes longer but more accurate) |
| `hypothetical_indexes` | array | No | [] | Simulated indexes to test performance impact |
Each hypothetical index is an object with:
- `table`: Table name (e.g., "users")
- `columns`: Array of column names (e.g., ["email"] or ["last_name", "first_name"])
- `using`: Optional index method (default: "btree"; options: "hash", "gist", "gin", etc.)
Example prompts:
- “Explain how this query will run: SELECT * FROM users WHERE email = ‘[email protected]’”
- “Show me the execution plan with analysis for my orders query”
- “What would happen if I added an index on users.email?”
- “Simulate an index on (last_name, first_name) and show the performance impact”
Note: Hypothetical index simulation requires the `hypopg` extension installed. The tool will notify you if it’s missing.
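When HypoPG is available, the simulation works along these lines (a sketch of the extension's own API, not the tool's internals):

```sql
-- Create a hypothetical index, check whether the planner would use it,
-- then discard it. No real index is ever built.
SELECT hypopg_create_index('CREATE INDEX ON users (email)');
EXPLAIN SELECT * FROM users WHERE email = 'user@example.com';
SELECT hypopg_reset();
```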
get_top_queries - Find Performance Bottlenecks
Note: Requires the `pg_stat_statements` extension.
What it does:
- Analyzes query statistics from `pg_stat_statements`
- Ranks queries by total time, mean time, or resource consumption
- Shows execution counts, average time, and resource usage
- Identifies queries that would benefit most from optimization
- Provides normalized query text (parameters replaced)
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `sort_by` | string | No | "resources" | Ranking criteria: "total_time", "mean_time", or "resources" |
| `limit` | integer | No | 10 | Number of queries to return (for total_time/mean_time) |
Sort options:
- “resources”: Comprehensive resource analysis (CPU, I/O, memory, calls)
- “total_time”: Queries with highest cumulative execution time
- “mean_time”: Queries with highest average execution time per call
Example prompts:
- “What are my slowest queries?”
- “Show me the most resource-intensive database operations”
- “Which queries have the highest average execution time?”
- “Find queries that run most frequently”
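The rankings come straight from pg_stat_statements; a hand-rolled version of the total_time sort looks roughly like this (column names per PostgreSQL 13+, where total_time was split into total_exec_time):

```sql
-- Top 10 queries by cumulative execution time.
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```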
analyze_workload_indexes - Optimize Database Performance
What it does:
- Examines frequently executed queries from pg_stat_statements
- Applies Microsoft SQL Server DTA (Database Tuning Advisor) algorithm
- Recommends indexes that provide maximum performance improvement
- Considers index size limits and resource constraints
- Provides CREATE INDEX statements ready to execute
- Alternative LLM-based optimization available
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `max_index_size_mb` | integer | No | 10000 | Maximum size for recommended indexes (in MB) |
| `method` | string | No | "dta" | Analysis method: "dta" or "llm" |
Methods:
- “dta”: Microsoft SQL Server Database Tuning Advisor algorithm - proven, deterministic, production-ready
- “llm”: AI-powered optimization - experimental, considers broader context
Example prompts:
- “Analyze my database and recommend indexes”
- “What indexes would improve my query performance?”
- “Optimize my database for better speed”
- “Find the best indexes for my workload”
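The output includes ready-to-run DDL. A hypothetical recommendation might look like this (table and column names are illustrative):

```sql
-- Example of the kind of statement the tool emits; review before running.
CREATE INDEX idx_orders_customer_created
    ON public.orders (customer_id, created_at);
```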
analyze_query_indexes - Optimize Specific Queries
What it does:
- Analyzes provided queries instead of workload history
- Applies DTA algorithm or LLM optimization
- Recommends indexes optimized for specific query patterns
- Perfect for optimizing new features before deployment
- Simulates index impact on query execution plans
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `queries` | array | Yes | - | List of SQL queries to analyze (1-10 queries) |
| `max_index_size_mb` | integer | No | 10000 | Maximum size for recommended indexes (in MB) |
| `method` | string | No | "dta" | Analysis method: "dta" or "llm" |
Example prompts:
- “Optimize these queries: [query list]”
- “What indexes would help this specific query?”
- “I have 3 slow queries, recommend indexes for them”
- “Analyze these queries and suggest performance improvements”
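For example, you might hand the tool the two queries behind a new feature before it ships (a hypothetical schema):

```sql
-- Candidate queries to pass in the queries array.
SELECT * FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;

SELECT status, count(*)
FROM orders
WHERE created_at > now() - interval '7 days'
GROUP BY status;
```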
analyze_db_health - Monitor Database Health
What it does:
- Checks index health (invalid, duplicate, bloated indexes)
- Monitors connection utilization and connection pool status
- Analyzes vacuum health and transaction ID wraparound risk
- Validates sequences approaching maximum values
- Checks replication lag and replication slot health
- Measures buffer cache hit rates for tables and indexes
- Identifies invalid constraints
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `health_type` | string | No | "all" | Health check types (comma-separated or "all") |
Health check types:
- “index”: Invalid, duplicate, and bloated indexes
- “connection”: Connection count and utilization
- “vacuum”: Vacuum health and transaction wraparound risk
- “sequence”: Sequences near maximum values
- “replication”: Replication lag and slot health
- “buffer”: Buffer cache hit rates
- “constraint”: Invalid constraints
- “all”: Runs all checks (default)
Example prompts:
- “Check my database health”
- “Are there any problems with my database?”
- “Check index health”
- “Monitor connection utilization”
- “Is my database at risk of transaction ID wraparound?”
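As an illustration, the “buffer” check corresponds roughly to this query against pg_stat_database (a sketch):

```sql
-- Buffer cache hit ratio for the current database; above ~95% is healthy.
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();
```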
Authentication & Configuration
Connection Configuration
The PostgreSQL MCP Server requires database credentials to establish connections. You can provide connection information in two ways:
Option 1: Connection URI (Recommended)
Set `DATABASE_URI` to a full connection string, e.g. `postgresql://user:password@host:port/database`.
Option 2: Individual environment variables
- `PGHOST`: Database hostname
- `PGPORT`: Port number (default: 5432)
- `PGDATABASE`: Database name
- `PGUSER`: Username
- `PGPASSWORD`: Password
Access Modes
The server supports two access modes:
- Restricted Mode (Production)
- Unrestricted Mode (Development)
Restricted mode is enabled with `--access-mode restricted` and provides:
- ✅ Read-only SQL execution (SELECT only)
- ✅ 30-second query timeout
- ✅ Blocks destructive operations (DROP, DELETE, UPDATE)
- ✅ Safe for production data
- ✅ All analysis tools available
Restricted mode is ideal for:
- Production database analysis
- Data exploration by non-DBAs
- Reporting and analytics
- AI-powered data queries
Managed PostgreSQL Services
The server works seamlessly with popular managed PostgreSQL providers:
Supabase
- Log in to your Supabase dashboard
- Navigate to Project Settings → Database
- Find “Connection String” section
- Copy the URI (Direct connection recommended for MCP)
- Replace `[YOUR-PASSWORD]` with your database password
Note: Supabase enables `pg_stat_statements` by default. For index optimization, the `hypopg` extension is also available.
Neon
- Open your Neon console
- Select your project and branch
- Go to Dashboard → Connection Details
- Copy the connection string
- ✅ Instant branching for safe testing
- ✅ Auto-scaling based on usage
- ✅ Built-in connection pooling
Amazon RDS PostgreSQL
- Open AWS RDS Console
- Select your PostgreSQL instance
- Find “Connectivity & security” section
- Copy the endpoint URL
- Construct connection string
- Configure security groups to allow Studio IP addresses
- Use IAM database authentication for enhanced security
- Enable SSL/TLS connections
Google Cloud SQL
- Open Cloud SQL Console
- Select your PostgreSQL instance
- Navigate to Connections
- Get public IP address or use Cloud SQL Proxy
- Construct connection string
DigitalOcean Managed Databases
- Open DigitalOcean Control Panel
- Navigate to Databases
- Select your PostgreSQL cluster
- Go to Connection Details
- Copy the connection string (`sslmode=require` included)
Azure Database for PostgreSQL
- Open Azure Portal
- Navigate to your PostgreSQL server
- Select Connection strings under Settings
- Copy the connection string
Note: Azure requires the `@servername` suffix in the username.
Self-Hosted PostgreSQL
Local PostgreSQL Installation
Enable remote connections in `postgresql.conf` (for example, set `listen_addresses` and, for query statistics, add `pg_stat_statements` to `shared_preload_libraries`).
Docker PostgreSQL
Network Security
Configure `pg_hba.conf` to control which hosts and users may connect.
Security Best Practices
Create Read-Only Users
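A minimal sketch of a dedicated read-only role (role, database, and schema names are illustrative):

```sql
-- Read-only role for the MCP server.
CREATE ROLE mcp_readonly WITH LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE mydb TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
-- Cover tables created later as well.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO mcp_readonly;
```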
Combine a dedicated read-only user with `--access-mode restricted` for maximum safety.
Enable SSL/TLS Encryption
Supported `sslmode` values:
- `require`: Encrypt connection (recommended minimum)
- `verify-ca`: Encrypt and verify server certificate
- `verify-full`: Encrypt, verify certificate, and hostname
Use Connection Pooling
- Reduces connection overhead
- Better resource utilization
- Protects against connection exhaustion
Audit and Monitoring
Required PostgreSQL Extensions
For full functionality, install these extensions:
| Extension | Required For | Installation |
|---|---|---|
| `pg_stat_statements` | Query analysis, workload optimization, top queries | `CREATE EXTENSION pg_stat_statements;` |
| `hypopg` | Hypothetical index simulation in explain plans | `CREATE EXTENSION hypopg;` |
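To confirm both are present, query the pg_extension catalog:

```sql
-- Lists the extensions if installed; missing rows mean missing extensions.
SELECT extname, extversion
FROM pg_extension
WHERE extname IN ('pg_stat_statements', 'hypopg');
```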
Example Workflows
- Explore Unknown Database
- Find Slow Queries
- Optimize New Feature
- Data Analysis
- Database Health Monitoring
- Query Troubleshooting
- Schema Investigation
Explore Unknown Database:
- “What schemas exist in this database?”
  - Server uses `list_schemas` to show all database schemas
  - Identifies user schemas vs system schemas
- “What tables are in the public schema?”
  - Server uses `list_objects` with schema_name=“public”
  - Lists all tables with their types
- “Show me the structure of the users table”
  - Server uses `get_object_details` for the users table
  - Displays columns, data types, constraints, and indexes
- “What are the first 10 users?”
  - Server generates: `SELECT * FROM public.users LIMIT 10`
  - Executes the query and returns the results
Troubleshooting
Error: No database URL provided
- Open NimbleBrain Studio
- Navigate to MCP Servers → PostgreSQL MCP Server
- Click “Configure” or “Edit”
- Add environment variable:
  - Key: `DATABASE_URI`
  - Value: Your full connection string
- Save and restart the server
Example connection strings:
- Local: `postgresql://postgres:password@localhost:5432/mydb`
- Supabase: `postgresql://postgres:[YOUR-PASSWORD]@db.[PROJECT-REF].supabase.co:5432/postgres`
- Neon: `postgresql://[USER]:[PASSWORD]@[ENDPOINT].neon.tech/neondb`
Error: Could not connect to database
Possible causes and fixes:
- Incorrect credentials
  - Verify username and password
  - Check for special characters that need URL encoding
- Network/firewall blocking connection
  - Verify database server is reachable
  - Check firewall rules allow port 5432
  - For cloud databases, verify IP allowlist
- Wrong host or port
  - Confirm hostname is correct
  - Verify port (default: 5432)
  - Check if using connection pooler (different port)
- Database doesn’t exist
  - Verify database name in connection string
  - Create database if needed
- SSL/TLS required but not enabled
  - Add `?sslmode=require` to connection string
  - Most managed services require SSL
Error: pg_stat_statements extension not found
Cause: The database doesn’t have the `pg_stat_statements` extension.
Solution:
For managed services:
- Most providers include this extension by default
- Check provider documentation for enabling extensions
For self-hosted PostgreSQL:
- Enable in `postgresql.conf` by adding `pg_stat_statements` to `shared_preload_libraries`
- Restart PostgreSQL
- Create the extension
- Verify the installation
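The SQL portion of those steps is straightforward (run after updating `shared_preload_libraries` and restarting):

```sql
-- Create the extension in the target database.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Verify it is collecting statistics (any non-error result is fine).
SELECT count(*) FROM pg_stat_statements;
```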
Without this extension:
- `get_top_queries`: Cannot identify slow queries
- `analyze_workload_indexes`: Cannot analyze workload patterns
Error: hypopg extension not found
Cause: The database doesn’t have the `hypopg` extension.
Solution:
For managed services, check if your provider supports hypopg:
- Supabase: ✅ Available
- Neon: ⚠️ Check current support
- RDS: ✅ Available (enable in parameter group)
- Cloud SQL: ⚠️ May need to request
For self-hosted PostgreSQL:
- Install the hypopg package for your PostgreSQL version
- Create the extension
- Verify the installation
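Once the hypopg package is installed, the remaining steps are SQL:

```sql
-- Create the extension.
CREATE EXTENSION IF NOT EXISTS hypopg;
-- Verify the extension is registered.
SELECT extversion FROM pg_extension WHERE extname = 'hypopg';
```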
Without this extension:
- `explain_query` cannot use the hypothetical_indexes parameter
- Cannot simulate index impact before creation
Query timeout in restricted mode
Solutions:
- Optimize the query:
  - Use `explain_query` to identify bottlenecks
  - Use `analyze_query_indexes` to get index recommendations
  - Add the recommended indexes
- Add appropriate indexes:
  - Long queries usually indicate missing indexes
  - Use the index optimization tools
- Use unrestricted mode (dev only):
  - Only for non-production databases
  - Change `--access-mode` to `unrestricted`
  - No timeout enforcement
- Limit the result set:
  - Add a LIMIT clause to reduce rows returned
  - Use pagination for large datasets
  - Filter data before aggregation
Access denied errors
Possible causes and fixes:
- User lacks SELECT permissions: grant the user access to the relevant schema and tables (see the sketch below)
- Trying to modify data in restricted mode:
  - Restricted mode blocks INSERT, UPDATE, DELETE, DROP
  - This is intentional for production safety
  - Use unrestricted mode only in development
- Row-level security (RLS) policies:
  - Some managed services (Supabase) use RLS
  - Create policies or use a service role key
  - Or disable RLS for the MCP user
- Schema-level permissions:
  - Verify the user has access to the target schema
  - Check `pg_namespace` privileges
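For the missing-permissions case, the fix is a pair of grants (a sketch; role and schema names are illustrative):

```sql
-- Allow the MCP user to read everything in the public schema.
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
```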
Connection pool exhausted
Solutions:
- Use connection pooling:
  - Deploy PgBouncer or a similar pooler
  - Point Studio at the pooler instead of the database directly
  - The pooler reuses connections efficiently
- Increase max_connections (warning: more connections = more memory usage)
- Close idle connections (see the sketch below)
- For managed services:
  - Use built-in connection pooling
  - Upgrade to a plan with more connections
  - Enable connection pooler mode
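To see where connections are going and clean up stragglers (a sketch; terminate backends with care on shared systems):

```sql
-- Inspect idle connections, oldest state change first.
SELECT pid, usename, application_name, state, state_change
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY state_change;
-- Terminate a specific backend by pid (the pid here is illustrative).
SELECT pg_terminate_backend(12345);
```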
Slow index recommendations
Symptom: `analyze_workload_indexes` takes a very long time or times out.
Cause: The DTA algorithm analyzes large query workloads and tests multiple index combinations.
Solutions:
- Run during off-peak hours:
  - Run the analysis when the database is less busy
  - Schedule it as a weekly or monthly task
- Reduce max_index_size_mb:
  - A lower value means faster analysis
  - Try `max_index_size_mb=1000` (1 GB) instead of the default 10 GB
- Use `analyze_query_indexes` for specific queries:
  - Instead of analyzing the entire workload
  - Target problematic queries directly
  - Faster and more focused
- Ensure pg_stat_statements has data:
  - Check with `SELECT count(*) FROM pg_stat_statements;`
  - If the count is low, wait for more query history
  - Reset stats if they’re stale: `SELECT pg_stat_statements_reset();`
- Try the LLM method for faster results:
  - Use `method="llm"` instead of `method="dta"`
  - Faster but less deterministic
  - Good for initial exploration
SSL/TLS certificate errors
Symptoms:
- “SSL connection could not be established”
- “certificate verify failed”
- “self-signed certificate”
Solutions:
- Adjust the SSL mode:
  - `sslmode=require`: Encrypt but don’t verify certificate
  - `sslmode=verify-ca`: Verify certificate authority
  - `sslmode=verify-full`: Full verification (most secure)
- For self-signed certificates: use `sslmode=verify-ca` with `sslrootcert` pointing at your CA file
- For development (not recommended for production): fall back to `sslmode=require`, which encrypts without verifying the certificate
- For managed services:
  - Download the CA certificate from your provider
  - Add the certificate to the connection string
  - Supabase, Neon, and RDS all provide CA certificates
Unexpected query results
Debugging steps:
- Check the generated SQL:
  - Ask: “Show me the SQL you generated for that query”
  - Verify it matches your intent
  - Look for missing WHERE clauses or incorrect JOINs
- Examine the table structure:
  - Use `get_object_details` to see column names
  - Verify data types match expectations
  - Check for unexpected NULLs
- Test with simple queries:
  - Start with: `SELECT COUNT(*) FROM table`
  - Add filters incrementally
  - Isolate the issue
- Check for case sensitivity:
  - PostgreSQL is case-sensitive for quoted identifiers
  - Column “Name” ≠ column “name” (see the sketch below)
  - Use lowercase for consistency
- Verify the schema context:
  - Ensure you’re querying the right schema
  - Specify the schema explicitly: `schema.table`
  - Use `list_schemas` and `list_objects` to confirm
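The case-sensitivity pitfall, in miniature:

```sql
-- Unquoted identifiers fold to lowercase; quoted ones are taken literally.
SELECT name FROM users;     -- refers to column name
SELECT "Name" FROM users;   -- refers to a different column created as "Name"
```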
Links & Resources
- GitHub Repository
- PostgreSQL Documentation
- pg_stat_statements
- HypoPG Extension
- MCP Protocol
- NimbleBrain Studio
Learning Resources
Understanding Database Indexes
Good candidates for indexes:
- ✅ Columns in WHERE clauses
- ✅ Columns in JOIN conditions
- ✅ Columns in ORDER BY
- ✅ Columns frequently searched
Poor candidates for indexes:
- ❌ Small tables (faster to scan)
- ❌ Columns rarely queried
- ❌ Columns with low cardinality (few unique values)
- ❌ Tables with heavy INSERT/UPDATE (indexes slow writes)
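Putting the checklist into practice (hypothetical table and columns):

```sql
-- Single-column index for a frequently filtered column.
CREATE INDEX idx_users_email ON users (email);
-- Multi-column index matching a compound filter or ORDER BY.
CREATE INDEX idx_users_name ON users (last_name, first_name);
```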
Reading Execution Plans
Sequential Scan (Seq Scan):
- Reads entire table
- Slow for large tables
- Usually means missing index
Index Scan:
- Uses index for fast lookup
- Much faster than sequential scan
- Ideal for queries with WHERE clauses
Key plan metrics:
- cost: Estimated query expense (lower = faster)
- rows: Estimated rows returned
- width: Average row size in bytes
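A worked example of reading those numbers (the plan and cost figures shown are illustrative, not output to expect verbatim):

```sql
EXPLAIN SELECT * FROM users WHERE email = 'user@example.com';
-- Seq Scan on users  (cost=0.00..1693.00 rows=1 width=72)
--   Filter: (email = 'user@example.com'::text)
-- A Seq Scan with a Filter on a selective column is the classic
-- "missing index" signature.
```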
Database Health Monitoring
Index health:
- Invalid indexes don’t serve queries (need rebuild)
- Duplicate indexes waste storage and slow writes
- Bloated indexes need REINDEX
Connection health:
- Monitor connection pool utilization
- Watch for connection leaks
- Tune max_connections appropriately
Vacuum health:
- Prevents transaction ID wraparound
- Reclaims storage from deleted rows
- Critical for long-term stability
Buffer cache:
- High hit rate (>95%) = good performance
- Low hit rate = need more memory
- Monitor with pg_stat_database
Replication:
- Monitor replication lag
- Check replication slot health
- Ensure standbys stay synchronized
Best practices:
- Run health checks weekly
- Set up automated alerts
- Investigate warnings promptly
- Document baseline metrics
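One concrete check worth knowing by hand: transaction ID age per database, the number behind wraparound risk (a sketch):

```sql
-- Higher age means closer to wraparound; autovacuum should keep this bounded.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```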
SQL Query Optimization
Index your filters:
- Create indexes on filter columns
- Use covering indexes for SELECT columns
- Consider multi-column indexes for compound filters
Keep result sets small:
- Use LIMIT for pagination
- Filter early with WHERE clauses
- Avoid SELECT * (specify columns)
Optimize joins:
- Index foreign key columns
- Use INNER JOIN instead of subqueries when possible
- Filter before joining when possible
Common anti-patterns:
- ❌ SELECT * FROM large_table
- ❌ Queries without WHERE clause on large tables
- ❌ Functions on indexed columns: WHERE LOWER(email) = ‘x’
- ❌ OR conditions spanning different columns
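The functions-on-indexed-columns anti-pattern has a standard fix: index the expression itself (a sketch):

```sql
-- An expression index lets WHERE lower(email) = ... use an index scan.
CREATE INDEX idx_users_email_lower ON users (lower(email));
SELECT * FROM users WHERE lower(email) = 'user@example.com';
```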
Choose compact data types:
- Smaller types = better performance
- Use INTEGER instead of BIGINT when sufficient
- Use VARCHAR instead of TEXT for short strings
A practical optimization workflow:
- Identify slow queries with `get_top_queries`
- Analyze them with `explain_query`
- Get recommendations with `analyze_query_indexes`
- Test with hypothetical indexes
- Create the optimal indexes
- Verify the improvement
PostgreSQL Access Patterns
OLTP (Online Transaction Processing):
- Characteristics: Many small, fast queries
- Examples: User login, place order, update profile
- Optimization: B-tree indexes, primary keys, foreign keys
- Best for: Application databases
OLAP (Online Analytical Processing):
- Characteristics: Few large, complex queries
- Examples: Sales reports, trend analysis, aggregations
- Optimization: Aggregated tables, materialized views, column stores
- Best for: Analytics, reporting
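Materialized views are the workhorse for OLAP-style questions; a minimal sketch (hypothetical orders table):

```sql
-- Pre-compute monthly revenue once, then query it cheaply.
CREATE MATERIALIZED VIEW monthly_sales AS
SELECT date_trunc('month', created_at) AS month,
       sum(total) AS revenue
FROM orders
GROUP BY 1;
-- Refresh on a schedule to pick up new data.
REFRESH MATERIALIZED VIEW monthly_sales;
```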
Handling mixed workloads:
- Use read replicas for analytics
- Separate OLTP and OLAP databases
- Schedule heavy queries during off-peak
- Use connection pooling
For Studio usage:
- Studio queries are typically OLAP-style
- Use restricted mode for production
- Create read replicas for analytics
- Add indexes for common query patterns
Advanced PostgreSQL Features
pg_stat_statements:
- Tracks query execution statistics
- Identifies slow queries automatically
- Foundation for workload analysis
- Enable in all databases
HypoPG:
- Simulates hypothetical indexes
- No storage overhead for testing
- Critical for index recommendations
- Safe to test in production
EXPLAIN ANALYZE:
- Shows actual vs estimated query plans
- Identifies planner mistakes
- Reveals actual bottlenecks
- Use carefully (it actually runs the query)
information_schema:
- Standard SQL metadata views
- Database structure introspection
- Used by all schema discovery tools
- Always available
pg_stat views:
- Real-time database statistics
- Monitor connections, locks, activity
- Foundation for health checks
- Essential for troubleshooting
Other features worth knowing:
- Materialized views for pre-computed results
- Partitioning for large tables
- VACUUM and autovacuum for maintenance
- Write-ahead logging for durability
- Replication for high availability
Tip: Use `--access-mode restricted` when connecting to production databases through Studio. This prevents accidental data modifications while maintaining full access to analysis and optimization features.