
Getting Started with LangSmith and LangChain: A Quick Demo

Mr. Data Bugger

Introduction

Building AI applications often involves experimenting, debugging, and optimizing responses. LangSmith helps developers track and trace interactions with language models, making it easier to debug and monitor their applications.

In this blog, we’ll demonstrate how to set up LangSmith with LangChain to track and trace requests sent to OpenAI's gpt-3.5-turbo model via LangChain's ChatOpenAI wrapper.


Prerequisites

Before running the code, ensure you have:

  • An OpenAI API Key

  • A LangSmith API Key

  • Installed the required Python libraries:

pip install langchain langchain-openai python-dotenv

Loading Environment Variables

The first step is to load environment variables from a .env file. This keeps API keys out of your source code; add .env to your .gitignore so the keys are never committed.

1. Create a .env file:

LANGCHAIN_API_KEY=your_langchain_api_key
LANGCHAIN_PROJECT=your_project_name
OPENAI_API_KEY=your_openai_api_key

2. Load Environment Variables in Python:

import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Spot-check that the variables were loaded (avoid dumping os.environ,
# which would print your API keys)
print(os.getenv("LANGCHAIN_PROJECT"))

The load_dotenv() function loads the variables into os.environ, making them accessible throughout the script.
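
Since os.environ also contains every system variable (and echoing it can leak your keys), a safer pattern is to fail fast on just the variables this demo needs. A minimal sketch:

# Fail fast if any required variable is missing
for key in ("OPENAI_API_KEY", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"):
    if not os.getenv(key):
        raise RuntimeError(f"Missing environment variable: {key}")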


Enabling LangSmith Tracking and Tracing

LangSmith enables tracking of prompts and responses sent to the LLM. We can enable it by setting the following environment variables:

# Setting up LangSmith for tracking
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = os.getenv("LANGCHAIN_PROJECT")

These settings allow LangSmith to trace interactions and group them under a specified project.
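
The same variables are also picked up by the langsmith package itself (installed as a dependency of langchain), so you can trace plain Python functions too. A minimal sketch using its @traceable decorator; the shout function here is just a hypothetical example:

from langsmith import traceable

@traceable  # inputs and outputs are logged as a run under LANGCHAIN_PROJECT
def shout(text: str) -> str:
    return text.upper()

shout("hello langsmith")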


Invoking the Language Model

Now, let’s instantiate the ChatOpenAI model from LangChain and invoke it with a query.

from langchain_openai import ChatOpenAI

# Initialize the model
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Print model details
print(llm)

# Invoke the model with a query
response = llm.invoke("What is Agentic AI?")
print(response)

The above code:

  • Instantiates LangChain's ChatOpenAI wrapper around OpenAI's gpt-3.5-turbo model.

  • Invokes it with the query “What is Agentic AI?”.

  • Prints the response from the model.

With LangSmith enabled, all requests and responses are logged, allowing you to analyze them via LangSmith’s dashboard.
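
The same details that appear in a trace are also available on the returned message object. For example (the response_metadata field assumes a reasonably recent langchain-core):

# response is an AIMessage: .content holds the answer text, and
# response_metadata mirrors the token usage recorded in the trace
print(response.content)
print(response.response_metadata.get("token_usage"))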



LangSmith Dashboard



Why Use LangSmith with LangChain?

  1. Improved Debugging: Tracks prompts and responses to help diagnose unexpected outputs (see the chain sketch after this list).

  2. Better Observability: Logs request-response pairs for auditing and optimization.

  3. Performance Analysis: Helps identify issues like token overuse or slow response times.

  4. Experiment Tracking: Organizes LLM experiments by grouping them under projects.
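
Multi-step pipelines are where tracing shines: each component appears as a nested child run inside a single trace. A minimal sketch, reusing the llm instance from above:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt -> model -> parser; each step becomes a child run in the trace
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"topic": "Agentic AI"}))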


Conclusion

In this tutorial, we covered how to:

  • Load environment variables securely.

  • Enable LangSmith for tracking and tracing.

  • Invoke OpenAI's gpt-3.5-turbo model via LangChain's ChatOpenAI wrapper.
