LangChain Integration

Integrate your deployed MCP servers with LangChain to build powerful AI agents and workflows in Python or TypeScript.

Overview

LangChain is a framework for developing applications powered by language models. By connecting your MCP servers to LangChain, you can create agents that leverage your custom tools alongside LLM capabilities.

Setup Instructions

1. Install required dependencies. Run: pip install langchain requests (plus a model provider package such as langchain-anthropic or langchain-openai for step 4).
2. Copy the generated Python integration code below into your project.
3. Replace the placeholder tool names with the actual names from your MCP server.
4. Initialize your LLM (OpenAI, Anthropic, etc.).
5. Run the agent with your prompts.

Python Integration Code

from langchain.tools import Tool
from langchain.agents import initialize_agent, AgentType
import requests

# Configure MCP connection
MCP_ENDPOINT = "https://mcp.nimbletools.ai/{workspace-id}/{server-name}/mcp"
MCP_TOKEN = "YOUR_WORKSPACE_TOKEN"

def call_mcp_tool(tool_name: str, **kwargs):
    """Call an MCP tool via HTTP"""
    headers = {"Authorization": f"Bearer {MCP_TOKEN}"}
    response = requests.post(
        f"{MCP_ENDPOINT}/tools/{tool_name}",
        json=kwargs,
        headers=headers
    )
    return response.json()

# Create LangChain tools
# Note: classic agents pass a single string to each tool's func.
tools = [
    Tool(
        name="your_tool_name",
        func=lambda query: call_mcp_tool("your_tool_name", query=query),
        description="Describe what this tool does so the agent knows when to use it"
    )
]

# Initialize agent (llm must be initialized first; see the complete example below)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)
Replace your_tool_name with the actual tool and argument names from your MCP server's documentation.

Getting Your Configuration

1. Navigate to Toolbox: go to the Toolbox section in NimbleBrain Studio.
2. Find your server: locate the MCP server you want to connect.
3. Click Connect: click the Connect button on the server card.
4. Select LangChain: choose "LangChain" from the integration options.
5. Copy code: copy the generated Python code to your clipboard.

Complete Example

Here’s a complete example using Claude and a knowledge base MCP server:
from langchain.tools import Tool
from langchain.agents import initialize_agent, AgentType
from langchain_anthropic import ChatAnthropic
import requests
import os

# MCP Configuration
MCP_ENDPOINT = "https://mcp.nimbletools.ai/ws-my-workspace/knowledge-base/mcp"
MCP_TOKEN = os.getenv("NIMBLEBRAIN_TOKEN")

def search_knowledge_base(query: str) -> str:
    """Search the knowledge base for relevant information"""
    headers = {"Authorization": f"Bearer {MCP_TOKEN}"}
    response = requests.post(
        f"{MCP_ENDPOINT}/tools/search",
        json={"query": query},
        headers=headers
    )
    result = response.json()
    blocks = result.get("content") or [{}]
    return blocks[0].get("text", "No results found")

# Define tools
tools = [
    Tool(
        name="search_knowledge_base",
        func=search_knowledge_base,
        description="Search the company knowledge base for information about policies, procedures, and documentation"
    )
]

# Initialize LLM
llm = ChatAnthropic(
    model="claude-3-5-sonnet-20241022",
    anthropic_api_key=os.getenv("ANTHROPIC_API_KEY")
)

# Create agent
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True
)

# Run the agent
response = agent.run("What is our vacation policy?")
print(response)
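The parsing step above keeps only the first content block of the MCP result. When a response may carry several blocks, or none, a slightly more defensive extractor helps. The sketch below is illustrative (extract_text is not a library function) and assumes the {"content": [{"type": "text", ...}]} response shape used in this example:

```python
def extract_text(result: dict) -> str:
    """Return the first text block from an MCP-style tool result.

    Assumes the shape {"content": [{"type": "text", "text": "..."}]}.
    Falls back to a default message instead of raising IndexError/KeyError.
    """
    for block in result.get("content", []):
        if block.get("type") == "text" and "text" in block:
            return block["text"]
    return "No results found"

print(extract_text({"content": [{"type": "text", "text": "10 days PTO"}]}))  # 10 days PTO
print(extract_text({"content": []}))  # No results found
print(extract_text({}))               # No results found
```

You could then end search_knowledge_base with `return extract_text(response.json())` rather than indexing the list directly.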

TypeScript/JavaScript Integration

You can also use MCP servers with LangChain.js:
import { Tool } from "langchain/tools";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatAnthropic } from "@langchain/anthropic";

// MCP Tool wrapper
class MCPTool extends Tool {
  name = "search_knowledge_base";
  description = "Search the company knowledge base";

  private endpoint: string;
  private token: string;

  constructor(endpoint: string, token: string) {
    super();
    this.endpoint = endpoint;
    this.token = token;
  }

  async _call(query: string): Promise<string> {
    const response = await fetch(`${this.endpoint}/tools/search`, {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${this.token}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify({ query })
    });

    const result = await response.json();
    return result.content?.[0]?.text || "No results found";
  }
}

// Initialize
const tools = [
  new MCPTool(
    "https://mcp.nimbletools.ai/ws-my-workspace/knowledge-base/mcp",
    process.env.NIMBLEBRAIN_TOKEN!
  )
];

const llm = new ChatAnthropic({
  modelName: "claude-3-5-sonnet-20241022",
  anthropicApiKey: process.env.ANTHROPIC_API_KEY
});

const agent = await initializeAgentExecutorWithOptions(
  tools,
  llm,
  {
    agentType: "zero-shot-react-description",
    verbose: true
  }
);

// Run
const result = await agent.call({
  input: "What is our vacation policy?"
});
console.log(result.output);

Advanced Features

Multiple MCP Servers

Connect multiple MCP servers as different tools:
# Uses the two-argument call_mcp_tool(server, tool_name, ...) helper
# defined under "Error Handling" below.
tools = [
    Tool(
        name="search_docs",
        func=lambda q: call_mcp_tool("docs", "search", query=q),
        description="Search documentation"
    ),
    Tool(
        name="query_database",
        func=lambda q: call_mcp_tool("database", "query", sql=q),
        description="Query the database"
    ),
    Tool(
        name="send_email",
        func=lambda to, subject, body: call_mcp_tool("email", "send",
            to=to, subject=subject, body=body),
        description="Send an email"
    )
]
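One caveat: classic ZERO_SHOT_REACT agents pass a single string to each tool, so the three-argument send_email lambda above will not receive separate to, subject, and body values. A common workaround is to ask the model (via the tool description) to emit a JSON object and parse it in a wrapper. The sketch below stubs out call_mcp_tool so it runs standalone; send_email_from_string is a hypothetical adapter, not part of LangChain:

```python
import json

def call_mcp_tool(server, tool_name, **kwargs):
    """Stub standing in for the HTTP helper; replace with the real call."""
    return f"{server}/{tool_name} called with {sorted(kwargs)}"

def send_email_from_string(action_input: str) -> str:
    """Adapt a single-string agent action into the three-argument email call.

    The tool description should ask the model for JSON like
    {"to": "...", "subject": "...", "body": "..."}.
    """
    try:
        args = json.loads(action_input)
    except json.JSONDecodeError:
        return "Error: expected a JSON object with keys to, subject, body"
    missing = [k for k in ("to", "subject", "body") if k not in args]
    if missing:
        return "Error: missing keys: " + ", ".join(missing)
    return call_mcp_tool("email", "send", **args)

print(send_email_from_string('{"to": "a@b.c", "subject": "Hi", "body": "Hello"}'))
print(send_email_from_string("not json"))
```

Newer LangChain versions also offer StructuredTool for multi-argument inputs, which avoids the manual JSON parsing.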

Error Handling

Add robust error handling:
def call_mcp_tool(server: str, tool_name: str, **kwargs):
    """Call an MCP tool with error handling"""
    try:
        endpoint = f"https://mcp.nimbletools.ai/{WORKSPACE_ID}/{server}/mcp"
        headers = {"Authorization": f"Bearer {MCP_TOKEN}"}

        response = requests.post(
            f"{endpoint}/tools/{tool_name}",
            json=kwargs,
            headers=headers,
            timeout=30
        )
        response.raise_for_status()

        result = response.json()
        blocks = result.get("content") or [{}]
        return blocks[0].get("text", "No response")

    except requests.exceptions.Timeout:
        return "Error: Request timed out"
    except requests.exceptions.HTTPError as e:
        return f"Error: HTTP {e.response.status_code}"
    except Exception as e:
        return f"Error: {str(e)}"
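Transient network failures are often worth retrying before surfacing an error to the agent. The sketch below wraps any callable with exponential backoff; with_retries and flaky are illustrative helpers, not library functions (flaky stands in for a call like call_mcp_tool):

```python
import time

def with_retries(func, attempts=3, base_delay=0.1):
    """Call func(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"count": 0}

def flaky():
    """Fails twice, then succeeds (simulates a transient network error)."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
```

In practice you would wrap the requests.post call, e.g. `with_retries(lambda: call_mcp_tool(server, tool_name, **kwargs))`.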

Caching

Cache MCP tool results for better performance:
from functools import lru_cache

@lru_cache(maxsize=100)
def call_mcp_tool_cached(tool_name: str, query: str):
    """Call MCP tool with caching"""
    return call_mcp_tool(tool_name, query=query)
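Note that functools.lru_cache never expires entries, so stale results can linger indefinitely, and every argument must be hashable. When freshness matters, a small time-to-live cache can be sketched as follows (ttl_cache is an illustrative decorator written here, not a library API):

```python
import time

def ttl_cache(ttl_seconds=60.0):
    """Tiny TTL cache decorator; unlike lru_cache, entries expire.

    Sketch only: positional args must be hashable, and there is no size bound.
    """
    def decorator(func):
        store = {}
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]
            value = func(*args)
            store[args] = (now, value)
            return value
        return wrapper
    return decorator

counter = {"calls": 0}

@ttl_cache(ttl_seconds=0.05)
def lookup(query):
    counter["calls"] += 1
    return f"result for {query} #{counter['calls']}"

a = lookup("vacation")
b = lookup("vacation")      # served from the cache
time.sleep(0.06)
c = lookup("vacation")      # entry expired, recomputed
print(a == b, a == c)       # True False
```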

Troubleshooting

Authentication errors

Possible causes:
  • Invalid or expired token
  • Token not set in environment
Solution:
  • Generate a new token in Studio
  • Set the NIMBLEBRAIN_TOKEN environment variable
  • Verify the token has the correct permissions

Agent not calling your tool

Possible causes:
  • Tool description not clear enough
  • Agent choosing the wrong tool
  • Tool name mismatch
Solution:
  • Make the tool description more specific
  • Use verbose=True to debug agent decisions
  • Verify the tool name matches the MCP server

Request timeouts

Possible causes:
  • Server not running
  • Network issues
  • Long-running tool
Solution:
  • Check server status in Toolbox
  • Increase the timeout value
  • Verify network connectivity

Response parsing errors

Possible causes:
  • Unexpected response format
  • MCP server error
  • Missing content field
Solution:
  • Add error handling for JSON parsing
  • Check MCP server logs in Studio
  • Verify the API endpoint is correct

Best Practices

Write clear tool descriptions

Write detailed descriptions so the agent knows when to use each tool:

description="Search the internal knowledge base for company policies, procedures, and documentation. Use this when users ask about HR policies, engineering processes, or company guidelines."

Keep secrets in environment variables

Store sensitive data such as tokens in environment variables, and fail fast when they are missing:

MCP_TOKEN = os.getenv("NIMBLEBRAIN_TOKEN")
if not MCP_TOKEN:
    raise ValueError("NIMBLEBRAIN_TOKEN not set")

Handle errors gracefully

Always handle potential errors gracefully to prevent agent failures.

Use verbose mode during development

Enable verbose mode to see the agent's reasoning while you iterate:

agent = initialize_agent(..., verbose=True)