DOC: <Issue related to /observability/how_to_guides/tracing/trace_with_opentelemetry> #608

Open · mindful-time opened this issue Dec 23, 2024 · 0 comments

Issue: Unauthorized Access When Integrating Traceloop SDK with LangSmith and LlamaIndex

Description

When attempting to integrate the Traceloop SDK with LangSmith for tracing LlamaIndex operations, I receive consistent 401 Unauthorized errors. Trace export attempts fail with the following error:

ERROR:
opentelemetry.exporter.otlp.proto.http.trace_exporter: Failed to export batch 
code: 401, 
reason: {"error":"Unauthorized"}
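
One way to take OpenTelemetry out of the picture is a bare HTTP POST to the same endpoint with the same headers (a minimal sketch; the /v1/traces suffix is an assumption about where the exporter posts, and requests is assumed installed):

import os

import requests  # assumed available

# Empty-body POST: we only care whether authentication passes, i.e. whether
# the server returns something other than 401 for this key.
resp = requests.post(
    "https://api.smith.langchain.com/otel/v1/traces",  # assumed traces path
    headers={
        "x-api-key": os.environ["LANGCHAIN_API_KEY"],
        "Content-Type": "application/x-protobuf",
    },
    data=b"",
)
print(resp.status_code, resp.text)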

Current Setup

  • Using Traceloop SDK to send traces to LangSmith's OpenTelemetry endpoint
  • Environment configured with Azure OpenAI for LlamaIndex operations
  • Traceloop initialization includes the following (excerpted from the full implementation below):
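
Traceloop.init(
    api_endpoint="https://api.smith.langchain.com/otel",
    headers={
        "x-api-key": LANGSMITH_API_KEY,
        "content-type": "application/protobuf",
    },
    disable_batch=True,
    app_name="test",
)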

Steps to Reproduce

  1. Set up environment with LangSmith API key
  2. Initialize Traceloop SDK with LangSmith endpoint
  3. Run any LlamaIndex operation (in this case, a simple document query)
  4. Observe trace export failures in logs

Expected Behavior

  • Successful authentication with LangSmith
  • Proper export of OpenTelemetry traces to LangSmith

Actual Behavior

  • 401 Unauthorized errors when attempting to export traces
  • No traces being recorded in LangSmith

Investigation Done So Far

  1. Verified LangSmith API key validity and permissions (see the check sketched after this list)
  2. Confirmed the correct environment variable name (LANGCHAIN_API_KEY)
  3. Validated the LangSmith endpoint URL and authentication requirements
  4. Checked Traceloop SDK configuration for LangSmith compatibility
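
For item 1, the key can be exercised against the regular LangSmith REST API via the official langsmith client (a sketch; if this succeeds while the OTLP export still returns 401, the problem is specific to the /otel endpoint):

import os
from langsmith import Client

# Any successful call through the client means the key authenticates
# against the regular REST API.
client = Client(api_key=os.environ["LANGCHAIN_API_KEY"])
print(next(iter(client.list_projects()), None))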

Related Documentation

  • /observability/how_to_guides/tracing/trace_with_opentelemetry (the guide this issue is filed against)

Full Implementation

# Initialize Traceloop SDK and LlamaIndex with Azure OpenAI
# This script sets up tracing and testing of the LlamaIndex integration with Azure OpenAI

import os
from traceloop.sdk import Traceloop

# Get LangSmith API key from environment variables for tracing
LANGSMITH_API_KEY = os.getenv("LANGCHAIN_API_KEY")

# Initialize Traceloop with LangSmith endpoint and authentication
# - api_endpoint: LangSmith OTEL endpoint for trace collection
# - headers: Authentication and content type headers
# - disable_batch: Send traces immediately without batching
# - app_name: Name of the application for trace identification
Traceloop.init(
    api_endpoint="https://api.smith.langchain.com/otel",
    headers={
        "x-api-key": LANGSMITH_API_KEY,
        "content-type": "application/protobuf",
    },
    disable_batch=True,
    app_name="test",
)

# Import required LlamaIndex components
from llama_index.core import VectorStoreIndex, Document
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.core import Settings
from app.core.config import get_settings

# Load application settings
settings = get_settings()

# Define required Azure OpenAI settings and fail fast if any are missing
required_settings = [
    ("Azure OpenAI API Key", settings.azure_openai_api_key),
    ("Azure OpenAI Endpoint", settings.azure_openai_endpoint),
    ("Azure OpenAI Deployment Name", settings.azure_openai_deployment_name),
    ("Azure OpenAI API Version", settings.azure_openai_api_version),
    ("Azure OpenAI Embeddings Name", settings.azure_openai_embeddings_name),
    ("Azure OpenAI Embeddings Endpoint", settings.azure_openai_embeddings_endpoint),
]
missing = [name for name, value in required_settings if not value]
if missing:
    raise ValueError(f"Missing required settings: {', '.join(missing)}")

# Configure LlamaIndex to use Azure OpenAI for text generation
Settings.llm = AzureOpenAI(
    model=settings.text_model,
    engine=settings.azure_openai_deployment_name,
    deployment_name=settings.azure_openai_deployment_name,
    api_key=settings.azure_openai_api_key,
    azure_endpoint=settings.azure_openai_endpoint,
    api_version=settings.azure_openai_api_version,
)

# Configure LlamaIndex to use Azure OpenAI for embeddings
Settings.embed_model = AzureOpenAIEmbedding(
    model=settings.azure_openai_embeddings_model,
    deployment_name=settings.azure_openai_embeddings_name,
    api_key=settings.azure_openai_api_key,
    azure_endpoint=settings.azure_openai_embeddings_endpoint,
    api_version=settings.azure_openai_embeddings_api_version,
)

# Test the setup with a sample document and query
try:
    # Create test index with example document
    documents = [Document.example()]
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine()
    
    # Run test query
    response = query_engine.query("What is this document about?")
    print(f"Query Response: {response}")
except Exception as e:
    print(f"Error occurred: {str(e)}")
    

The HTTP request itself is being sent (screenshot attached).
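
To isolate whether the 401 comes from the Traceloop layer or from the endpoint itself, the same OTLP/HTTP exporter class named in the error log can be driven by hand (a sketch; the /v1/traces suffix is an assumption):

import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Configure the same exporter Traceloop uses, but by hand, so the endpoint
# and headers are fully under our control.
exporter = OTLPSpanExporter(
    endpoint="https://api.smith.langchain.com/otel/v1/traces",  # assumed path
    headers={"x-api-key": os.environ["LANGCHAIN_API_KEY"]},
)
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Emit one trivial span; a 401 here implicates the key/endpoint rather than
# the Traceloop integration.
with trace.get_tracer(__name__).start_as_current_span("auth-smoke-test"):
    pass
provider.force_flush()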
