Introduction
LangChain is a powerful framework that helps developers integrate Large Language Models (LLMs) into applications with structured workflows, memory handling, and chaining mechanisms. In this blog, we will walk through a code example that sets up an LLM chain with memory, breaking it down in terms of Python concepts and OOP principles.
Code Walkthrough: A Basic Chatbot with OOP and Python Concepts
1. Importing Required Libraries
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain_ollama import ChatOllama
Encapsulation & Modularity: Each import brings in a class that encapsulates one responsibility: chaining (LLMChain), memory storage (ConversationBufferMemory), prompt management (PromptTemplate), and LLM handling (ChatOllama).
2. Initializing the Language Model (LLM)
llm = ChatOllama(
    model="deepseek-r1:1.5b",           # local model served by Ollama
    base_url="http://localhost:11434",  # default Ollama endpoint
    temperature=0.3                     # low value keeps replies focused
)
Class Instantiation: ChatOllama is a class that we instantiate with specific parameters like model name, API base URL, and temperature.
Encapsulation & Abstraction: The underlying details of how the model communicates with the API are abstracted away inside the ChatOllama class.
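You can see this abstraction at work by calling the model directly, with no chain involved. A minimal sketch, assuming an Ollama server is actually running at localhost:11434 and deepseek-r1:1.5b has already been pulled:
reply = llm.invoke("Say hello in one sentence.")  # the HTTP call to the Ollama API happens inside ChatOllama
print(reply.content)                              # .content holds the response text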
3. Creating a Memory Object
memory = ConversationBufferMemory(memory_key="chat_history")
State Management: The ConversationBufferMemory class maintains chat history, allowing the LLM to retain context across interactions.
Encapsulation: The class hides internal memory handling, exposing only necessary methods for use.
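You can observe this encapsulated state directly: the class exposes save_context() and load_memory_variables() while keeping the storage details private. A small sketch using a separate throwaway memory object (the example strings are illustrative) so the chain's own history stays clean:
demo_memory = ConversationBufferMemory(memory_key="chat_history")
# Store one exchange; how the messages are kept internally stays hidden.
demo_memory.save_context({"input": "Hi there!"}, {"output": "Hello! How can I help?"})
# Read the accumulated history back under the configured key.
print(demo_memory.load_memory_variables({}))
# {'chat_history': 'Human: Hi there!\nAI: Hello! How can I help?'}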
4. Defining a Prompt Template
template = """You are a helpful assistant. Here is the conversation history:
{chat_history}
Human: {human_input}
AI:"""
prompt = PromptTemplate(input_variables=["chat_history", "human_input"], template=template)
Template Design Pattern: PromptTemplate acts as a structured format for inputs to the LLM.
Encapsulation: The PromptTemplate class encapsulates the logic of handling dynamic variables inside the prompt.
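Because filling the template is a plain string operation, you can preview exactly what the LLM will receive. A quick sketch with illustrative values:
preview = prompt.format(
    chat_history="Human: Hi!\nAI: Hello!",  # illustrative prior turns
    human_input="What is LangChain?",
)
print(preview)  # the fully assembled prompt string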
5. Creating an LLM Chain with Memory
llm_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
Chaining Pattern: LLMChain enables sequential execution of LLM calls while retaining memory.
Composition: LLMChain is composed of llm, prompt, and memory, demonstrating OOP’s composition principle.
Encapsulation & Abstraction: LLMChain hides the complexities of managing multiple interactions with memory, exposing an easy-to-use interface.
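Composition also means the parts stay accessible and reusable as ordinary objects. A sketch: the chain's components can be inspected, and a second chain, built with a hypothetical pirate_prompt, can share the same memory object so both chains see the same history:
# The composed parts are plain attributes on the chain object.
print(llm_chain.prompt.input_variables)  # ['chat_history', 'human_input']
# pirate_prompt is a hypothetical alternative template for illustration.
pirate_prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template="You are a pirate assistant.\n{chat_history}\nHuman: {human_input}\nAI:",
)
pirate_chain = LLMChain(llm=llm, prompt=pirate_prompt, memory=memory)  # shared memory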
6. Generating Responses
response1 = llm_chain.predict(human_input="Hello, who are you?")
print(response1)
response2 = llm_chain.predict(human_input="What can you do?")
print(response2)
response3 = llm_chain.predict(human_input="Tell me a joke.")
print(response3)
Method Calls: predict() is a method in LLMChain that interacts with the LLM using the provided prompt and memory.
Polymorphism: LLMChain is written against LangChain's common language-model interface rather than against ChatOllama specifically, so the llm argument could be any compatible model class and the same predict() call would still work.
Encapsulation: The details of token processing and inference are hidden inside LLMChain and ChatOllama.
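To demonstrate that polymorphism, you can swap in a completely different model class without changing the chain code. A sketch using FakeListLLM, a test stub from langchain_community that replays canned responses (assuming that package is installed):
from langchain_community.llms import FakeListLLM
# Same chain code, different concrete model class.
fake_llm = FakeListLLM(responses=["I am a canned test response."])
fake_chain = LLMChain(
    llm=fake_llm,  # swapped in for ChatOllama without touching predict()
    prompt=prompt,
    memory=ConversationBufferMemory(memory_key="chat_history"),
)
print(fake_chain.predict(human_input="Hello, who are you?"))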
Summary of OOP Concepts Used
| OOP Concept | Explanation |
| --- | --- |
| Encapsulation | Hides the internal workings of classes, such as memory management in ConversationBufferMemory and inference handling in ChatOllama. |
| Abstraction | The complexity of interacting with the LLM is hidden behind the ChatOllama and LLMChain classes, providing a simple interface. |
| Composition | LLMChain is composed of multiple objects (llm, prompt, memory) to form a functional workflow. |
| Polymorphism | LLMChain accepts any class implementing the common language-model interface, so the concrete model can be swapped without changing the calling code. |
| State Management | ConversationBufferMemory maintains chat history, allowing the AI to retain context. |
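To make the state-management row concrete, you can dump the shared memory after the three predict() calls above (the AI lines will vary by model):
print(memory.load_memory_variables({})["chat_history"])
# Human: Hello, who are you?
# AI: ...
# Human: What can you do?
# AI: ...
# Human: Tell me a joke.
# AI: ...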
Conclusion
This example showcases OOP principles like encapsulation, composition, and polymorphism in action. LangChain abstracts away the complexities of managing LLM interactions, making it easy for developers to integrate AI models into applications. By combining memory with structured prompts, you can build context-aware AI applications with very little code.