10. LangChain Basics

Version note: This lesson is written against LangChain 0.2+ (2024 onward).

LangChain์€ ๋น ๋ฅด๊ฒŒ ๋ฐœ์ „ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ์ฃผ์š” ๋ณ€๊ฒฝ์‚ฌํ•ญ: - LCEL (LangChain Expression Language): ๊ถŒ์žฅ ์ฒด์ธ ๊ตฌ์„ฑ ๋ฐฉ์‹ - langchain-core, langchain-community: ํŒจํ‚ค์ง€ ๋ถ„๋ฆฌ - ConversationChain ๋Œ€์‹  RunnableWithMessageHistory ๊ถŒ์žฅ

Latest documentation: https://python.langchain.com/docs/

Learning Objectives

  • Core LangChain concepts
  • LLM wrappers and prompts
  • Chains and agents
  • Memory systems
  • LCEL (LangChain Expression Language) in depth
  • LangGraph basics

1. LangChain Overview

Installation

# LangChain 0.2+
pip install langchain langchain-openai langchain-community

Core Components

LangChain
├── Models          # LLM wrappers
├── Prompts         # Prompt templates
├── Chains          # Sequential calls
├── Agents          # Tool-using agents
├── Memory          # Conversation history
├── Retrievers      # Document retrieval
└── Callbacks       # Monitoring

2. LLM Wrappers

ChatOpenAI

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0.7,
    max_tokens=500
)

# Simple invocation
response = llm.invoke("What is the capital of France?")
print(response.content)

Other LLM Providers

# OpenAI
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4")

# Anthropic
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-opus-20240229")

# HuggingFace
from langchain_huggingface import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(repo_id="mistralai/Mistral-7B-Instruct-v0.1")

# Ollama (local)
from langchain_community.llms import Ollama
llm = Ollama(model="llama2")

Message Types

from langchain_core.messages import HumanMessage, SystemMessage, AIMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is 2+2?"),
]

response = llm.invoke(messages)
print(response.content)

3. Prompt Templates

Basic Template

from langchain_core.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["topic"],
    template="Write a short poem about {topic}."
)

prompt = template.format(topic="spring")
response = llm.invoke(prompt)

Chat Prompts

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{text}")
])

messages = template.format_messages(
    input_language="English",
    output_language="Korean",
    text="Hello, how are you?"
)

response = llm.invoke(messages)

Few-shot Prompts

from langchain_core.prompts import FewShotPromptTemplate

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "hot", "output": "cold"},
]

example_template = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}"
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_template,
    prefix="Give the antonym of every input:",
    suffix="Input: {word}\nOutput:",
    input_variables=["word"]
)

prompt = few_shot_prompt.format(word="big")
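What `format` produces here is just a long string. A plain-Python sketch of the assembly, assuming the template's default "\n\n" separator between blocks (an assumption, not stated in the code above):

```python
# Sketch of how a few-shot prompt is assembled into one string.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "hot", "output": "cold"},
]

def format_few_shot(word: str) -> str:
    prefix = "Give the antonym of every input:"
    blocks = [f"Input: {e['input']}\nOutput: {e['output']}" for e in examples]
    suffix = f"Input: {word}\nOutput:"
    # join prefix, examples, and suffix with blank lines between them
    return "\n\n".join([prefix, *blocks, suffix])

prompt = format_few_shot("big")
print(prompt)
```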

4. ์ฒด์ธ (Chains)

LCEL (LangChain Expression Language)

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# ์ฒด์ธ ๊ตฌ์„ฑ
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

# ํŒŒ์ดํ”„ ์—ฐ์‚ฐ์ž๋กœ ์—ฐ๊ฒฐ
chain = prompt | llm | output_parser

# Run
result = chain.invoke({"topic": "programmers"})
print(result)

์ˆœ์ฐจ ์ฒด์ธ

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# First chain: generate a topic
topic_prompt = ChatPromptTemplate.from_template(
    "Generate a random topic for a story."
)

# Second chain: write the story
story_prompt = ChatPromptTemplate.from_template(
    "Write a short story about: {topic}"
)

# ์ฒด์ธ ์—ฐ๊ฒฐ
chain = (
    {"topic": topic_prompt | llm | StrOutputParser()}
    | story_prompt
    | llm
    | StrOutputParser()
)

result = chain.invoke({})

๋ณ‘๋ ฌ ์ฒด์ธ

from langchain_core.runnables import RunnableParallel

# Run in parallel
parallel_chain = RunnableParallel(
    summary=summary_chain,
    keywords=keyword_chain,
    sentiment=sentiment_chain
)

results = parallel_chain.invoke({"text": "Long article here..."})
# {'summary': '...', 'keywords': '...', 'sentiment': '...'}

5. Output Parsers

String Parser

from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()
chain = prompt | llm | parser  # convert the output to a string

JSON Parser

from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel, Field

class Person(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")

parser = JsonOutputParser(pydantic_object=Person)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract person info. {format_instructions}"),
    ("human", "{text}")
]).partial(format_instructions=parser.get_format_instructions())

chain = prompt | llm | parser
result = chain.invoke({"text": "John is 25 years old"})
# {'name': 'John', 'age': 25}

Structured Output

from langchain_core.output_parsers import PydanticOutputParser

class MovieReview(BaseModel):
    title: str
    rating: int
    summary: str

parser = PydanticOutputParser(pydantic_object=MovieReview)

6. Agents

Basic Agent

from langchain.agents import create_react_agent, AgentExecutor
from langchain import hub
from langchain_community.tools import DuckDuckGoSearchRun

# ๋„๊ตฌ ์ •์˜
search = DuckDuckGoSearchRun()
tools = [search]

# Load the ReAct prompt
prompt = hub.pull("hwchase17/react")

# Create the agent
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = agent_executor.invoke({"input": "What is the weather in Seoul?"})

Custom Tools

from langchain.tools import tool

@tool
def calculate(expression: str) -> str:
    """Calculate a mathematical expression."""
    try:
        # Note: eval() executes arbitrary code; only use with trusted input
        return str(eval(expression))
    except Exception:
        return "Error in calculation"

@tool
def get_current_time() -> str:
    """Get the current time."""
    from datetime import datetime
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

tools = [calculate, get_current_time]
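Because `eval` executes arbitrary Python, a production tool would want a restricted evaluator. A minimal sketch built on the standard `ast` module, with an illustrative operator whitelist (the names here are made up for this sketch, not part of LangChain):

```python
# Safe arithmetic evaluation: parse the expression into an AST and only
# allow whitelisted numeric operations, so no code can be executed.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> str:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")  # reject anything else
    try:
        return str(_eval(ast.parse(expression, mode="eval")))
    except Exception:
        return "Error in calculation"

print(safe_calculate("2 + 3 * 4"))  # 14
```

The same `@tool` decorator can wrap `safe_calculate` in place of the `eval`-based version.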

Tool ํด๋ž˜์Šค

from langchain.tools import BaseTool
from typing import Optional
from pydantic import Field

class SearchTool(BaseTool):
    name: str = "search"
    description: str = "Search for information on the internet"

    def _run(self, query: str) -> str:
        # search logic goes here
        return f"Search results for: {query}"

    async def _arun(self, query: str) -> str:
        return self._run(query)

7. Memory

Change in recommended approach: In LangChain 0.2+, ConversationChain, ConversationBufferMemory, and related classes are deprecated. For new projects, use RunnableWithMessageHistory (see below).

(Legacy) Conversation Buffer Memory

โš ๏ธ Deprecated: ์•„๋ž˜ "LCEL์—์„œ ๋ฉ”๋ชจ๋ฆฌ" ์„น์…˜์˜ RunnableWithMessageHistory ์‚ฌ์šฉ ๊ถŒ์žฅ

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

memory = ConversationBufferMemory()

conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

# Converse
response1 = conversation.predict(input="Hi, I'm John")
response2 = conversation.predict(input="What's my name?")
# "Your name is John"

(Legacy) Summary Memory

from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(llm=llm)

# Summarizes long conversations before storing them

(Legacy) ์œˆ๋„์šฐ ๋ฉ”๋ชจ๋ฆฌ

from langchain.memory import ConversationBufferWindowMemory

# Keep only the most recent k exchanges
memory = ConversationBufferWindowMemory(k=5)

LCEL์—์„œ ๋ฉ”๋ชจ๋ฆฌ (๊ถŒ์žฅ)

from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory

store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history"
)

# Usage
response = chain_with_history.invoke(
    {"input": "What is my name?"},
    config={"configurable": {"session_id": "user123"}}
)
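The session store above is just a dict keyed by `session_id`. A LangChain-free sketch of the same pattern (toy message tuples in place of real message objects), showing that sessions stay isolated:

```python
# Minimal per-session history: one message list per session_id.
sessions = {}

def get_history(session_id: str) -> list:
    if session_id not in sessions:
        sessions[session_id] = []  # create an empty history on first use
    return sessions[session_id]

get_history("user123").append(("human", "Hi, I'm John"))
get_history("user123").append(("ai", "Hello John!"))
get_history("user456").append(("human", "Hi"))

# Each session only sees its own messages
print(len(get_history("user123")))  # 2
print(len(get_history("user456")))  # 1
```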

8. RAG with LangChain

๋ฌธ์„œ ๋กœ๋”

from langchain_community.document_loaders import (
    TextLoader,
    PyPDFLoader,
    WebBaseLoader
)

# Text file
loader = TextLoader("document.txt")
docs = loader.load()

# PDF
loader = PyPDFLoader("document.pdf")
docs = loader.load()

# ์›นํŽ˜์ด์ง€
loader = WebBaseLoader("https://example.com")
docs = loader.load()

Text Splitting

from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50
)

chunks = splitter.split_documents(docs)
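The core of chunking is a sliding window with overlap. A simplified sketch of just the size/overlap arithmetic (the real RecursiveCharacterTextSplitter additionally prefers to break on separators such as "\n\n" before cutting mid-text):

```python
# Sliding-window chunking: each chunk starts chunk_size - chunk_overlap
# characters after the previous one, so consecutive chunks share an overlap.
def split_text(text: str, chunk_size: int = 500, chunk_overlap: int = 50):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("a" * 1200, chunk_size=500, chunk_overlap=50)
print(len(chunks))      # 3
print(len(chunks[0]))   # 500
```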

Vector Stores

from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

vectorstore = Chroma.from_documents(
    documents=chunks,
    embedding=embeddings,
    persist_directory="./chroma_db"
)

# Search
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
docs = retriever.invoke("What is machine learning?")
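Conceptually, `as_retriever(search_kwargs={"k": 3})` scores every stored embedding against the query and returns the top k. A minimal sketch with cosine similarity and made-up 3-dimensional vectors (real embeddings have hundreds of dimensions):

```python
# Toy similarity search: rank documents by cosine similarity to the query.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up embeddings for illustration only
docs = {
    "ML is learning from data":   [0.9, 0.1, 0.0],
    "Paris is in France":         [0.0, 0.2, 0.9],
    "Neural nets learn weights":  [0.8, 0.3, 0.1],
}

def retrieve(query_vec, k=2):
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([1.0, 0.0, 0.0], k=2))
```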

RAG ์ฒด์ธ

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

template = """Answer based on the context:
Context: {context}
Question: {question}
Answer:"""

prompt = ChatPromptTemplate.from_template(template)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

result = rag_chain.invoke("What is machine learning?")

9. Streaming

# ์ŠคํŠธ๋ฆฌ๋ฐ ์ถœ๋ ฅ
for chunk in chain.stream({"topic": "AI"}):
    print(chunk, end="", flush=True)

# ๋น„๋™๊ธฐ ์ŠคํŠธ๋ฆฌ๋ฐ
async for chunk in chain.astream({"topic": "AI"}):
    print(chunk, end="", flush=True)
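Under the hood, a stream is just an iterator of chunks. A toy LangChain-free sketch of that shape:

```python
# A stream modeled as a generator that yields output a few characters
# at a time instead of returning it all at once.
def fake_stream(text: str, size: int = 4):
    for i in range(0, len(text), size):
        yield text[i:i + size]

pieces = list(fake_stream("Artificial intelligence is fun"))
print("".join(pieces))  # reassembles the full text
```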

10. LCEL (LangChain Expression Language) In Depth

LCEL์€ LangChain 0.2+์—์„œ ์ฒด์ธ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๊ถŒ์žฅ ๋ฐฉ์‹์ž…๋‹ˆ๋‹ค. ๋ณต์žกํ•œ LLM ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๊ตฌ์ถ•ํ•˜๊ธฐ ์œ„ํ•œ ์„ ์–ธ์ ์ด๊ณ  ์กฐํ•ฉ ๊ฐ€๋Šฅํ•œ ๋ฌธ๋ฒ•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

ํŒŒ์ดํ”„ ์—ฐ์‚ฐ์ž๋ฅผ ํ†ตํ•œ ์ฒด์ธ ๊ตฌ์„ฑ

ํŒŒ์ดํ”„ ์—ฐ์‚ฐ์ž(|)๋Š” ์ปดํฌ๋„ŒํŠธ๋ฅผ ์™ผ์ชฝ์—์„œ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ์—ฐ๊ฒฐํ•ฉ๋‹ˆ๋‹ค:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# ๊ฐ ์ปดํฌ๋„ŒํŠธ๋Š” "Runnable"
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

# ํŒŒ์ดํ”„ ์—ฐ์‚ฐ์ž๋กœ ๊ตฌ์„ฑ
chain = prompt | llm | output_parser

# Run
result = chain.invoke({"topic": "programmers"})
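The pipe syntax works because each Runnable defines `__or__`. A toy sketch of the idea in plain Python, mirroring the concept rather than LangChain's actual implementation:

```python
# Minimal pipe-composable component: invoke() runs it, and `|` builds a
# new component that feeds the left side's output into the right side.
class Runnable:
    def __init__(self, fn):
        self.fn = fn
    def invoke(self, x):
        return self.fn(x)
    def __or__(self, other):
        return Runnable(lambda x: other.invoke(self.invoke(x)))

prompt = Runnable(lambda d: f"Tell me a joke about {d['topic']}")
fake_llm = Runnable(lambda p: p.upper())  # stand-in for a model call
parser = Runnable(lambda s: s.strip())

chain = prompt | fake_llm | parser
print(chain.invoke({"topic": "programmers"}))
# TELL ME A JOKE ABOUT PROGRAMMERS
```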

Core Runnable Components

RunnablePassthrough

์ž…๋ ฅ์„ ๊ทธ๋Œ€๋กœ ์ „๋‹ฌํ•˜๋ฉฐ, ๋ฐ์ดํ„ฐ ๋ผ์šฐํŒ…์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค:

from langchain_core.runnables import RunnablePassthrough

# Pass the whole input through
chain = RunnablePassthrough() | llm

# Pass the input into a specific field
chain = {"text": RunnablePassthrough()} | prompt | llm

RunnableParallel

์—ฌ๋Ÿฌ ์ฒด์ธ์„ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค:

from langchain_core.runnables import RunnableParallel

summary_chain = summary_prompt | llm | StrOutputParser()
keyword_chain = keyword_prompt | llm | StrOutputParser()
sentiment_chain = sentiment_prompt | llm | StrOutputParser()

# ์„ธ ์ฒด์ธ์„ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰
parallel_chain = RunnableParallel(
    summary=summary_chain,
    keywords=keyword_chain,
    sentiment=sentiment_chain
)

results = parallel_chain.invoke({"text": "Long article text here..."})
# {'summary': '...', 'keywords': [...], 'sentiment': 'positive'}
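Conceptually, RunnableParallel fans the same input out to every branch and collects a dict of results. A LangChain-free sketch with a thread pool and toy branch functions:

```python
# Parallel fan-out: run each branch function on the same input
# concurrently and gather the results under the branch names.
from concurrent.futures import ThreadPoolExecutor

def run_parallel(branches: dict, inp):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, inp) for name, fn in branches.items()}
        return {name: f.result() for name, f in futures.items()}

branches = {
    "summary":  lambda t: t[:10] + "...",
    "keywords": lambda t: t.split()[:3],
    "length":   lambda t: len(t),
}
results = run_parallel(branches, "Long article text here about machine learning")
print(results["keywords"])
```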

RunnableLambda

์ž„์˜์˜ ํ•จ์ˆ˜๋ฅผ Runnable๋กœ ๋ž˜ํ•‘ํ•ฉ๋‹ˆ๋‹ค:

from langchain_core.runnables import RunnableLambda

def extract_text(data):
    """Extract the text field from the input."""
    return data["text"].upper()

chain = RunnableLambda(extract_text) | llm
result = chain.invoke({"text": "hello world"})

LCEL์—์„œ ์ŠคํŠธ๋ฆฌ๋ฐ

LCEL์€ ์—ฌ๋Ÿฌ ์ŠคํŠธ๋ฆฌ๋ฐ ๋ชจ๋“œ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค:

# Synchronous streaming
for chunk in chain.stream({"topic": "AI"}):
    print(chunk, end="", flush=True)

# ๋น„๋™๊ธฐ ์ŠคํŠธ๋ฆฌ๋ฐ
async for chunk in chain.astream({"topic": "AI"}):
    print(chunk, end="", flush=True)

# Event streaming (fine-grained)
async for event in chain.astream_events({"topic": "AI"}, version="v1"):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="", flush=True)

๋น„๊ต: ๊ตฌ์‹ ์ฒด์ธ ์Šคํƒ€์ผ vs LCEL

๊ตฌ์‹ ์Šคํƒ€์ผ (Deprecated)

from langchain.chains import LLMChain

# ๊ตฌ์‹ ๋ฐฉ์‹
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run(topic="AI")

LCEL ์Šคํƒ€์ผ (๊ถŒ์žฅ)

# LCEL approach
chain = prompt | llm | StrOutputParser()
result = chain.invoke({"topic": "AI"})

LCEL์˜ ์žฅ์ : - ์กฐํ•ฉ์„ฑ: ์ปดํฌ๋„ŒํŠธ๋ฅผ ์‰ฝ๊ฒŒ ๊ฒฐํ•ฉํ•˜๊ณ  ์žฌ์‚ฌ์šฉ - ์ŠคํŠธ๋ฆฌ๋ฐ: ์ŠคํŠธ๋ฆฌ๋ฐ ์ถœ๋ ฅ ๊ธฐ๋ณธ ์ง€์› - ๋น„๋™๊ธฐ: 1๊ธ‰ ๋น„๋™๊ธฐ ์ง€์› - ๋ณ‘๋ ฌํ™”: ๊ฐ€๋Šฅํ•œ ๊ฒฝ์šฐ ์ž๋™ ๋ณ‘๋ ฌ ์‹คํ–‰ - ํƒ€์ž… ์•ˆ์ •์„ฑ: ๋” ๋‚˜์€ IDE ์ง€์› ๋ฐ ์—๋Ÿฌ ๋ฉ”์‹œ์ง€

Example: RAG Chain with LCEL

from langchain_core.runnables import RunnablePassthrough, RunnableParallel
from langchain_core.output_parsers import StrOutputParser
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Setup
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

# Prompt template
template = """Answer the question based on the following context:

Context: {context}

Question: {question}

Answer:"""
prompt = ChatPromptTemplate.from_template(template)

# ํ—ฌํผ ํ•จ์ˆ˜
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# LCEL ์Šคํƒ€์ผ RAG ์ฒด์ธ
rag_chain = (
    RunnableParallel(
        context=retriever | format_docs,
        question=RunnablePassthrough()
    )
    | prompt
    | llm
    | StrOutputParser()
)

# Run
answer = rag_chain.invoke("What is machine learning?")

# Stream the answer
for chunk in rag_chain.stream("What is deep learning?"):
    print(chunk, end="", flush=True)

Advanced: Branching and Routing

from langchain_core.runnables import RunnableBranch

# Route based on the input
branch = RunnableBranch(
    (lambda x: "code" in x["topic"], code_chain),
    (lambda x: "math" in x["topic"], math_chain),
    default_chain  # default branch
)

chain = {"topic": RunnablePassthrough()} | branch | llm

11. LangGraph Basics

LangGraph is a library for building stateful, multi-agent applications with LLMs. It extends LangChain with graph-based workflows.

What Is LangGraph?

With LangGraph you define your application as a graph:

  • Nodes are functions (LLM calls, tool use, custom logic)
  • Edges define the flow between nodes
  • State is maintained across the entire graph run

When to use LangGraph (vs. chains):

์ฒด์ธ(LCEL) ์‚ฌ์šฉ LangGraph ์‚ฌ์šฉ
์„ ํ˜• ์›Œํฌํ”Œ๋กœ์šฐ ์‚ฌ์ดํด, ๋ฃจํ”„
๊ฐ„๋‹จํ•œ ๋ถ„๊ธฐ ๋ณต์žกํ•œ ๋ผ์šฐํŒ…
์ƒํƒœ ์—†์Œ ์ƒํƒœ ์œ ์ง€ ์—์ด์ „ํŠธ
๋‹จ์ผ ์—์ด์ „ํŠธ ๋‹ค์ค‘ ์—์ด์ „ํŠธ ์‹œ์Šคํ…œ

Installation

pip install langgraph

The StateGraph Concept

LangGraph uses a StateGraph that carries state through the nodes:

from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, END

# ์ƒํƒœ ์Šคํ‚ค๋งˆ ์ •์˜
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], "The messages in the conversation"]
    next: str

# ๊ทธ๋ž˜ํ”„ ์ƒ์„ฑ
graph = StateGraph(AgentState)

Nodes and Edges

from langchain_core.messages import HumanMessage, AIMessage

def agent_node(state: AgentState):
    """Agent decision node."""
    messages = state["messages"]
    response = llm.invoke(messages)
    return {"messages": messages + [response], "next": "tool"}

def tool_node(state: AgentState):
    """Tool execution node."""
    # Execute the tool
    result = "Tool result here"
    return {"messages": state["messages"] + [AIMessage(content=result)], "next": END}

# Add nodes
graph.add_node("agent", agent_node)
graph.add_node("tool", tool_node)

# Add edges
graph.add_edge("agent", "tool")
graph.add_edge("tool", END)

# Set the entry point
graph.set_entry_point("agent")

# ์ปดํŒŒ์ผ
app = graph.compile()

# Run
result = app.invoke({"messages": [HumanMessage(content="Hello")]})

๋„๊ตฌ ์‚ฌ์šฉ์ด ์žˆ๋Š” ๊ฐ„๋‹จํ•œ ์—์ด์ „ํŠธ

from langgraph.graph import StateGraph, END
from langchain.tools import tool
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage

# ๋„๊ตฌ ์ •์˜
@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Search results for: {query}"

tools = [search]
llm_with_tools = llm.bind_tools(tools)

# ์ƒํƒœ
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], "The messages"]

# Agent node
def call_agent(state: AgentState):
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": messages + [response]}

# Tool node
def call_tool(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]

    # Execute the tool calls
    tool_calls = last_message.tool_calls
    results = []
    for tool_call in tool_calls:
        tool_result = search.invoke(tool_call["args"])
        results.append(ToolMessage(content=tool_result, tool_call_id=tool_call["id"]))

    return {"messages": messages + results}

# Build the graph
graph = StateGraph(AgentState)
graph.add_node("agent", call_agent)
graph.add_node("tools", call_tool)

# Conditional routing
def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

graph.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
graph.add_edge("tools", "agent")
graph.set_entry_point("agent")

# ์ปดํŒŒ์ผ ๋ฐ ์‹คํ–‰
app = graph.compile()
result = app.invoke({"messages": [HumanMessage(content="Search for LangChain news")]})

# Print the conversation
for msg in result["messages"]:
    print(f"{msg.__class__.__name__}: {msg.content}")

์กฐ๊ฑด๋ถ€ ๋ผ์šฐํŒ…

LangGraph๋Š” ๋™์  ๋ผ์šฐํŒ…์„ ์œ„ํ•œ ์กฐ๊ฑด๋ถ€ ์—ฃ์ง€๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค:

def route_decision(state: AgentState):
    """์ƒํƒœ์— ๋”ฐ๋ผ ๋‹ค์Œ ๋…ธ๋“œ ๊ฒฐ์ •."""
    if state.get("error"):
        return "error_handler"
    elif state.get("needs_review"):
        return "review"
    else:
        return "complete"

graph.add_conditional_edges(
    "process",
    route_decision,
    {
        "error_handler": "error_handler",
        "review": "review",
        "complete": END
    }
)

์‹œ๊ฐํ™”

LangGraph can render the graph:

from IPython.display import Image, display

# Visualize the graph structure
display(Image(app.get_graph().draw_mermaid_png()))

๋‹ค์ค‘ ์—์ด์ „ํŠธ ์˜ˆ์ œ

from langgraph.graph import StateGraph, END

class MultiAgentState(TypedDict):
    messages: Sequence[BaseMessage]
    current_agent: str

def researcher(state: MultiAgentState):
    # research agent
    return {"messages": [...], "current_agent": "writer"}

def writer(state: MultiAgentState):
    # writing agent
    return {"messages": [...], "current_agent": "reviewer"}

def reviewer(state: MultiAgentState):
    # review agent
    return {"messages": [...], "current_agent": END}

# Build the multi-agent graph
graph = StateGraph(MultiAgentState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
graph.add_node("reviewer", reviewer)

graph.add_edge("researcher", "writer")
graph.add_edge("writer", "reviewer")
graph.add_edge("reviewer", END)
graph.set_entry_point("researcher")

app = graph.compile()

Key LangGraph Concepts

  • ์ฒดํฌํฌ์ธํŒ…: ์–ธ์ œ๋“ ์ง€ ์ƒํƒœ ์ €์žฅ/๋ณต์›
  • Human-in-the-loop: ๊ณ„์†ํ•˜๊ธฐ ์ „ ์‚ฌ๋žŒ์˜ ์Šน์ธ์„ ์œ„ํ•ด ์ผ์‹œ ์ค‘์ง€
  • ํƒ€์ž„ ํŠธ๋ž˜๋ธ”: ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ์—์„œ ์žฌ์ƒ
  • ์˜์†์„ฑ: ๋Œ€ํ™” ์ƒํƒœ๋ฅผ ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค์— ์ €์žฅ

Summary

Key Patterns

# Basic LCEL chain
chain = prompt | llm | output_parser

# LCEL์„ ์‚ฌ์šฉํ•œ RAG ์ฒด์ธ
rag_chain = (
    RunnableParallel(context=retriever, question=RunnablePassthrough())
    | prompt | llm | parser
)

# Agent (traditional)
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

# Agent (LangGraph)
graph = StateGraph(AgentState)
graph.add_node("agent", call_agent)
graph.add_conditional_edges("agent", should_continue)
app = graph.compile()

์ปดํฌ๋„ŒํŠธ ์„ ํƒ ๊ฐ€์ด๋“œ

์ƒํ™ฉ ์ปดํฌ๋„ŒํŠธ
๋‹จ์ˆœ ํ˜ธ์ถœ LLM + Prompt
์ˆœ์ฐจ ์ฒ˜๋ฆฌ Chain (LCEL)
๋ณ‘๋ ฌ ์‹คํ–‰ RunnableParallel
๋ฌธ์„œ ๊ธฐ๋ฐ˜ Q&A RAG Chain (LCEL)
๊ฐ„๋‹จํ•œ ๋„๊ตฌ ์‚ฌ์šฉ Agent (ReAct)
๋ณต์žกํ•œ ์›Œํฌํ”Œ๋กœ์šฐ LangGraph
๋‹ค์ค‘ ์—์ด์ „ํŠธ ์‹œ์Šคํ…œ LangGraph
์ƒํƒœ ์œ ์ง€ ์—์ด์ „ํŠธ LangGraph
๋Œ€ํ™” ์œ ์ง€ RunnableWithMessageHistory

LCEL vs LangGraph

Feature       LCEL                        LangGraph
Use case      Linear / simple branching   Cycles, complex routing
State         Stateless                   Stateful
Syntax        Pipe operator (|)           StateGraph
Complexity    Simple to moderate          Moderate to complex
Best for      RAG, simple agents          Multi-agent, human-in-loop

๋‹ค์Œ ๋‹จ๊ณ„

Continue to 11_Vector_Databases.md to learn about vector databases.
