Arcbeam integrates seamlessly with LangChain applications using OpenTelemetry instrumentation. Capture every chain execution, LLM call, and retrieval automatically.

What Gets Captured

  • Every chain run from start to finish, including nested chains and their relationships
  • All model invocations with prompts, responses, and costs
  • Documents fetched from vector stores with similarity scores and metadata
  • External API calls and function executions with inputs and outputs
  • Multi-step reasoning and decisions showing the agent’s thought process
  • Latency for each operation to identify performance bottlenecks
  • Stack traces and error messages with full context

Installation

1. Check Prerequisites

Ensure you have:
  • Python 3.8 or higher
  • LangChain installed (pip install langchain)
  • Arcbeam account and API key
2. Install Arcbeam SDK

pip install arcbeam-connector
Installation complete! You’re ready to instrument your LangChain application.
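If you want to confirm the install before wiring anything up, a quick sanity check (a sketch; it assumes the package installs under the module name arcbeam_connector) is:

```python
# Verify the SDK is importable without actually importing it.
import importlib.util

if importlib.util.find_spec("arcbeam_connector") is None:
    print("arcbeam-connector is NOT installed in this environment")
else:
    print("arcbeam-connector is installed")
```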

Quick Start

1. Initialize Arcbeam

Add these lines at the start of your application:
from arcbeam_connector.langchain.connector import ArcbeamLangConnector

connector = ArcbeamLangConnector(
    base_url="https://platform.arcbeam.ai",  # Or your self-hosted URL
    api_key="your-api-key-here",  # Your Arcbeam API key
    project_id="your-project-id-here",  # Your project ID
)
connector.init()

2. Run Your LangChain Code

That’s it! Your existing LangChain code will now send traces to Arcbeam:
from langchain.chains import RetrievalQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Your normal LangChain code - no changes needed
llm = OpenAI(temperature=0)
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.load_local("index", embeddings)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever()
)

# This execution is automatically traced
result = qa.run("What is the refund policy?")

Configuration

Add environment tag to organize traces:
from arcbeam_connector.langchain.connector import ArcbeamLangConnector

connector = ArcbeamLangConnector(
    base_url="https://platform.arcbeam.ai",
    api_key="your_api_key",
    project_id="your_project_id",
    environment="production",  # Tag traces by environment
)
connector.init()
The environment tag appears on every trace and can be used for filtering.
Environment    Use Case
development    Local testing and development
staging        Pre-production testing
production     Live applications
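A common pattern is to derive the tag from a deployment environment variable, so the same code runs unchanged in all three environments. A minimal sketch (APP_ENV is an assumed variable name; use whatever your platform sets):

```python
import os

# Fall back to "development" when no deployment environment is set.
# APP_ENV is an assumed variable name, not something Arcbeam requires.
environment = os.getenv("APP_ENV", "development")
print(environment)
```

The resulting value can then be passed as the `environment` argument when constructing the connector.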

Example: RAG Application

Full example with LangChain and Arcbeam:
import os
from arcbeam_connector.langchain.connector import ArcbeamLangConnector
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import PGVector

# Initialize Arcbeam
connector = ArcbeamLangConnector(
    base_url=os.getenv("ARCBEAM_BASE_URL", "https://platform.arcbeam.ai"),
    api_key=os.getenv("ARCBEAM_API_KEY"),
    project_id=os.getenv("ARCBEAM_PROJECT_ID"),
    environment="production",
)
connector.init()

# Set up LangChain components
embeddings = OpenAIEmbeddings()
vectorstore = PGVector(
    connection_string=os.getenv("DATABASE_URL"),
    embedding_function=embeddings,
    collection_name="support_docs"
)

# gpt-4 is a chat model, so use ChatOpenAI rather than the completion-style OpenAI wrapper
llm = ChatOpenAI(temperature=0, model="gpt-4")
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3})
)

# Handle user request
def handle_query(user_id, query):
    # Run the chain (automatically traced)
    response = qa.run(query)

    return response

# Example usage
response = handle_query("user_123", "What's the return policy?")
print(response)

Debugging LangChain Applications

View Chain Execution

In the Arcbeam dashboard:
  1. Go to Traces page
  2. Find your trace
  3. View the span tree showing:
    • Chain execution span
    • Retrieval span (with documents)
    • LLM call span (with prompt and response)
    • Timing for each step

Find Slow Chains

Filter traces by duration:
  1. Set duration filter: > 5 seconds
  2. Review which chains are slow
  3. Check if retrieval or LLM is the bottleneck
  4. Optimize accordingly
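To cross-check the dashboard's latency numbers locally, a minimal timing wrapper (a sketch; this helper is not part of the Arcbeam SDK) can bracket any chain call:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print how long the wrapped block took to execute."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.3f}s")

# Usage with any chain, e.g.:
#   with timed("qa chain"):
#       result = qa.run(query)
```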

Track Costs

Monitor LangChain application costs:
  1. View cost breakdown by model
  2. Identify expensive chains
  3. Find opportunities to reduce token usage
  4. Compare costs across different chain configurations
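Per-call cost is just token counts multiplied by per-token prices, so a rough local cross-check of the dashboard's cost numbers is easy to sketch. The prices below are placeholders, not Arcbeam values; substitute your provider's current pricing:

```python
# Placeholder per-1K-token prices -- substitute your provider's real rates.
PRICES_PER_1K = {
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimate the USD cost of one LLM call from its token counts."""
    price = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * price["prompt"] \
        + (completion_tokens / 1000) * price["completion"]

print(estimate_cost("gpt-4", 1000, 500))  # 0.03 + 0.03 = 0.06
```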

Best Practices

Initialize Early

Call connector.init() at application startup, before any LangChain code:
# Good: Initialize first
from arcbeam_connector.langchain.connector import ArcbeamLangConnector

connector = ArcbeamLangConnector(
    base_url="https://platform.arcbeam.ai",
    api_key="your-api-key-here",
    project_id="your-project-id-here",
)
connector.init()

from langchain.chains import RetrievalQA
# ... rest of your code

Use Environment Tags

Tag traces by environment for better organization:
connector = ArcbeamLangConnector(
    base_url="https://platform.arcbeam.ai",
    api_key="your-api-key-here",
    project_id="your-project-id-here",
    environment="production",  # or "dev", "staging"
)
connector.init()

Troubleshooting

Traces Not Appearing

Check API Key: Verify your API key is correct:
import os
print(os.getenv("ARCBEAM_API_KEY"))
Check Project ID: Ensure project ID is valid:
print(os.getenv("ARCBEAM_PROJECT_ID"))
Check Initialization: Make sure init() is called before any LangChain code runs.
Check Network: Ensure your application can reach https://api.arcbeam.ai.
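The checks above can be rolled into one preflight function. A sketch (it only verifies environment variables and basic reachability, and assumes outbound HTTPS is allowed):

```python
import os
import urllib.error
import urllib.request

def preflight(endpoint="https://api.arcbeam.ai"):
    """Fail fast on the most common causes of missing traces."""
    assert os.getenv("ARCBEAM_API_KEY"), "ARCBEAM_API_KEY is not set"
    assert os.getenv("ARCBEAM_PROJECT_ID"), "ARCBEAM_PROJECT_ID is not set"
    try:
        urllib.request.urlopen(endpoint, timeout=5)
    except urllib.error.HTTPError:
        pass  # An HTTP error status still proves the host is reachable
    # urllib.error.URLError (a network failure) propagates to the caller
```

Call preflight() at startup, before connector.init(), so misconfiguration surfaces immediately rather than as silently missing traces.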

Missing Retrieved Documents

Connect Dataset: Make sure you’ve added your vector store as a dataset in Arcbeam.
Verify Schema Mapping: Check that document ID columns are mapped correctly.
Sync Dataset: Trigger a manual sync to ensure metadata is current.

Next Steps