Setting Up LangChain
LangChain is one of the most widely used frameworks for building LLM-powered applications and agents. It provides abstractions for LLMs, tools, memory, and agent executors, plus integrations with hundreds of external services. In this lesson, you'll set up a complete LangChain development environment.
Prerequisites
Before installing LangChain, ensure you have:
- Python 3.9 or later (check with `python --version`)
- pip or uv package manager
- An API key from OpenAI or Anthropic
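If you're unsure whether your interpreter is new enough, you can check from Python itself; a minimal sketch:

```python
import sys

# LangChain requires Python 3.9+; fail fast with a clear message otherwise.
if sys.version_info < (3, 9):
    raise RuntimeError(
        f"Python 3.9+ required, found {sys.version_info.major}.{sys.version_info.minor}"
    )
print(f"Python {sys.version.split()[0]} OK")
```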
Installation
```bash
# Create and activate a virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install LangChain and the OpenAI integration
pip install langchain langchain-openai langchain-community

# Or with uv (faster)
uv pip install langchain langchain-openai langchain-community
```
For this course we'll use `langchain-openai` as the LLM provider. If you prefer Anthropic, install `langchain-anthropic` instead.
Environment Configuration
Never hardcode API keys in your source files. Use environment variables:
```bash
# Create a .env file (add to .gitignore immediately!)
echo "OPENAI_API_KEY=sk-your-key-here" > .env
echo ".env" >> .gitignore
```
Load the environment in Python:
```python
# config.py
import os

from dotenv import load_dotenv

load_dotenv()  # Reads the .env file into the process environment

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # Raises KeyError if missing
```
Install `python-dotenv` first: `pip install python-dotenv`.
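A bare `KeyError` isn't very helpful to someone cloning your project. One option is a small helper that fails with an actionable message; the `require_env` name here is just an illustration, not a LangChain API:

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable, or fail with an actionable message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Add it to your .env file and make sure "
            "load_dotenv() runs before this code."
        )
    return value
```

With this, `OPENAI_API_KEY = require_env("OPENAI_API_KEY")` tells a new contributor exactly what to fix instead of dumping a stack trace.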
Verifying the Installation
Run this script to confirm everything works:
```python
# verify_setup.py
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Initialize the model
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Make a test call
response = llm.invoke([HumanMessage(content="Say 'LangChain is working!' and nothing else.")])
print(response.content)
# Expected output: LangChain is working!

print(f"Model: {response.response_metadata['model_name']}")
print(f"Total tokens: {response.response_metadata['token_usage']['total_tokens']}")
```
Understanding LangChain's Structure
LangChain is organized into several packages:
| Package | Purpose |
|---|---|
| `langchain-core` | Base abstractions (`Runnable`, `BaseMessage`, etc.) |
| `langchain` | High-level chains and agents |
| `langchain-openai` | OpenAI model integrations |
| `langchain-anthropic` | Anthropic model integrations |
| `langchain-community` | 300+ third-party integrations |
The `langchain-core` package defines LCEL (the LangChain Expression Language), a composable interface that uses the `|` (pipe) operator to chain components:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Tell me a fact about {topic}.")
output_parser = StrOutputParser()

# Chain components with the pipe operator
chain = prompt | llm | output_parser

result = chain.invoke({"topic": "transformer architecture"})
print(result)
```
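There's no magic in the pipe: LCEL components implement Python's `__or__` operator, so `a | b` builds a new component that feeds `a`'s output into `b`. A toy re-implementation (deliberately simplified, not LangChain's actual code) shows the idea:

```python
class Step:
    """Toy stand-in for a Runnable: wraps a function and composes with |."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # (a | b).invoke(x) is equivalent to b.invoke(a.invoke(x))
        return Step(lambda x: other.invoke(self.invoke(x)))


prompt = Step(lambda d: f"Tell me a fact about {d['topic']}.")
fake_llm = Step(lambda p: f"FAKE ANSWER to: {p}")
parse = Step(lambda s: s.strip())

chain = prompt | fake_llm | parse
print(chain.invoke({"topic": "transformers"}))
# → FAKE ANSWER to: Tell me a fact about transformers.
```

Real Runnables add batching, streaming, and async on top, but the composition mechanism is this same operator overload.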
Project Structure for LangChain Agents
For anything beyond a quick script, use this structure:
```
my_agent/
├── .env                 # API keys (never commit!)
├── .gitignore
├── requirements.txt
├── agent/
│   ├── __init__.py
│   ├── core.py          # Main agent logic
│   ├── tools/
│   │   ├── __init__.py
│   │   └── search.py    # Custom tools
│   └── prompts/
│       └── system.txt   # System prompt template
└── tests/
    └── test_agent.py
```
This structure cleanly separates concerns and makes the codebase easy to navigate as it grows. In the next lesson, you'll build your first chain in this environment.