LangSmith is an observability service and platform that integrates easily with
LangChain. We use LangSmith as an optional dependency in the LangChain Essentials
Course, and we recommend using it beyond this course for general development with
LangChain, so it is worth getting familiar with.
## Setting up LangSmith
LangSmith does require an API key, but it comes with a generous free tier. You can sign
up for an account and get your API key here.
When using LangSmith, we need to set our environment variables and provide our API key
like so:
```python
import os
from getpass import getpass

# must enter API key
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY") or \
    getpass("Enter LangSmith API Key: ")
# enable tracing and set the project our runs will be logged to
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "aurelioai-langchain-course-langsmith-openai"
```
In most cases, this is all we need to start seeing logs and traces in the
LangSmith UI. By default, LangChain will trace LLM calls, chains, and more.
Let's walk through a quick example below.
## Default Tracing
As mentioned, LangSmith traces a lot of data without us needing to do anything. Let's
see how that looks. We'll start by initializing our LLM. Again, this will need an
API key.
```python
import os
from getpass import getpass
from langchain_openai import ChatOpenAI

os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY") or getpass(
    "Enter OpenAI API Key: "
)
# initialize the LLM and make a call for LangSmith to trace
llm = ChatOpenAI(temperature=0)
llm.invoke("Hello")
```
After this, we should see a new project (aurelioai-langchain-course-langsmith-openai)
created in the LangSmith UI. Inside that project, we should see the trace from our LLM call.
By default, LangSmith will capture plenty — however, it won't capture functions from
outside of LangChain. Let's see how we can trace those.
## Tracing Non-LangChain Code
LangSmith can trace functions that are not part of LangChain. We need to add the
@traceable decorator. Let's try this for a few simple functions.
```python
from langsmith import traceable
import random
import time

@traceable
def generate_random_number():
    return random.randint(0, 100)

@traceable
def generate_string_delay(input_str: str):
    number = random.randint(1, 5)
    time.sleep(number)
    return f"{input_str} ({number})"

@traceable
def random_error():
    number = random.randint(0, 1)
    if number == 0:
        raise ValueError("Random error")
    else:
        return "No error"
```
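Under the hood, a tracing decorator like this wraps each function to record its name, inputs, outputs, any error, and latency. Here is a rough, stdlib-only sketch of the idea (this is not LangSmith's actual implementation; a real tracer would send the record to LangSmith rather than print it):

```python
import functools
import time

def mini_traceable(func):
    """Toy stand-in for @traceable: records the run's name, inputs,
    outputs, error (if any), and latency (not LangSmith's real code)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        run = {"name": func.__name__, "inputs": {"args": args, "kwargs": kwargs}}
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            run["outputs"] = result
            return result
        except Exception as exc:
            run["error"] = repr(exc)
            raise
        finally:
            run["latency_s"] = time.perf_counter() - start
            print(run)  # a real tracer would ship this record to LangSmith

    return wrapper

@mini_traceable
def add(a: int, b: int) -> int:
    return a + b

add(2, 3)  # prints a run dict whose "outputs" field is 5
```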
Let's run our traced functions a few times and see what happens.
```python
from tqdm.auto import tqdm

for _ in tqdm(range(10)):
    generate_random_number()
    generate_string_delay("Hello")
    try:
        random_error()
    except ValueError:
        pass
```
```text
100%|██████████| 10/10 [00:25<00:00, 2.51s/it]
```
Those traces should now be visible in the LangSmith UI, again under the same project.
For each run, we can see several pieces of metadata: the run name, the inputs and
outputs, whether the run raised an error, its start time, and its latency. Inside
the UI, we can also filter for specific runs.
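Conceptually, those filters simply match on run attributes. As a stdlib-only sketch over made-up run records (the dicts below are illustrative, not LangSmith's actual data model):

```python
# Made-up run records, shaped loosely like what the LangSmith UI shows
runs = [
    {"name": "generate_random_number", "error": None, "latency_s": 0.01},
    {"name": "random_error", "error": "ValueError('Random error')", "latency_s": 0.02},
    {"name": "generate_string_delay", "error": None, "latency_s": 3.14},
]

# e.g. keep only the runs that raised an error
failed = [run for run in runs if run["error"] is not None]
print([run["name"] for run in failed])  # ['random_error']
```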
We can do various other things within the UI, but we'll leave that for you to explore.
Finally, we can modify our traceable names if we want to make them more readable inside
the UI. For example:
```python
from langsmith import traceable

@traceable(name="Chitchat Maker")
def error_generation_function(question: str):
    delay = random.randint(0, 3)
    time.sleep(delay)
    number = random.randint(0, 1)
    if number == 0:
        raise ValueError("Random error")
    else:
        return "I'm great how are you?"
```
Let's run this a few times and see what we get in LangSmith.
```python
for _ in tqdm(range(10)):
    try:
        error_generation_function("How are you today?")
    except ValueError:
        pass
```
```text
100%|██████████| 10/10 [00:17<00:00, 1.70s/it]
```
Let's filter for the Chitchat Maker traceable and see our results.
We can see our runs and their related metadata! That's it for this introduction to
LangSmith. As we work through the course, we will (optionally) refer to LangSmith to
investigate our runs.