From e1df664b8455ea7c5db17d3f0df2c76c5d179944 Mon Sep 17 00:00:00 2001
From: syshin0116
Date: Wed, 8 Jan 2025 19:35:20 +0900
Subject: [PATCH 1/6] [N-1] 05-AIMemoryManagementSystem /
 09-ConversationMemoryManagementSystem
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

- Initial draft
- A simple example using LangGraph
---
 ...9-ConversationMemoryManagementSystem.ipynb | 612 ++++++++++++++++++
 1 file changed, 612 insertions(+)
 create mode 100644 19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb

diff --git a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb
new file mode 100644
index 000000000..2985369d6
--- /dev/null
+++ b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb
@@ -0,0 +1,612 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# ConversationMemoryManagementSystem\n",
+    "\n",
+    "- Author: [syshin0116](https://github.com/syshin0116)\n",
+    "- Design: \n",
+    "- Peer Review:\n",
+    "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n",
+    "\n",
+    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb) [![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb)\n",
+    "\n",
+    "\n",
+    "## Overview\n",
+    "\n",
+    "In modern AI systems, **memory management** plays a critical role in creating personalized and efficient user experiences. 
Imagine interacting with an AI that remembers nothing about your previous conversations: you would have to repeat the same information every time! To address this, memory management systems are designed to handle and organize information effectively.\n",
+    "\n",
+    "This guide focuses on managing **Short-term** and **Long-term Memory** for conversational AI, specifically for building personalized chatbots. We will explore two implementation paths:\n",
+    "\n",
+    "1. **Using LangGraph**: A structured and feature-rich framework for managing AI memory.\n",
+    "2. **Without LangGraph**: A more manual, flexible approach for situations where LangGraph is unavailable.\n",
+    "\n",
+    "\n",
+    "### What is Memory?\n",
+    "\n",
+    "Memory in AI refers to the ability to **store, retrieve, and use information** during interactions. In conversational systems, memory can enhance the user experience by:\n",
+    "\n",
+    "- Adapting to user preferences\n",
+    "- Handling repetitive queries automatically\n",
+    "- Providing continuity across sessions\n",
+    "\n",
+    "### Short-term vs Long-term Memory\n",
+    "\n",
+    "|**Type**|**Scope**|**Purpose**|**Example**|\n",
+    "|---|---|---|---|\n",
+    "|**Short-term Memory**|Single conversational thread|Keeps track of recent interactions to provide immediate context|Remembering the last few messages in a chat|\n",
+    "|**Long-term Memory**|Shared across multiple threads and sessions|Stores critical or summarized information to maintain continuity across conversations|Storing user preferences, important facts, or conversation summaries|\n",
+    "\n",
+    "- **Short-term Memory**: Ideal for maintaining the flow within a single conversation, like tracking the most recent messages.\n",
+    "- **Long-term Memory**: Useful for creating personalized experiences by recalling past interactions, preferences, or key facts over time.\n",
+    "\n",
+    "## Table of Contents\n",
+    "\n",
+    "Still deciding between two options; for convenience, this draft includes both the ## and ### headings.\n",
+    "\n",
+    "Option 1: Explain how to use LangGraph in more detail\n",
+    "\n",
+    "Option 2: With LangGraph + without LangGraph\n",
+    "\n",
+    "+ I would also like to add a section on using a vector database, time permitting.\n",
+    "\n",
+    "TODO: Decide between Option 1 and Option 2, then remove the ### headings\n",
+    "\n",
+    "### Common to Option 1 and Option 2\n",
+    "- [Overview](#overview)\n",
+    "  - [What is Memory?](#what-is-memory)\n",
+    "  - [Short-term vs Long-term Memory](#short-term-vs-long-term-memory)\n",
+    "- [Environment Setup](#environment-setup)\n",
+    "\n",
+    "### Option 1\n",
+    "\n",
+    "- [Construct Tool](#construct-tool)\n",
+    "- [Set Nodes for Memory Storage](#set-nodes-for-memory-storage)\n",
+    "- [Set Nodes for Agent](#set-nodes-for-agent)\n",
+    "- [Conditional Edge Logic](#conditional-edge-logic)\n",
+    "- [Load and Compile Graph](#load-and-compile-graph)\n",
+    "- [Visualize Graph](#visualize-graph)\n",
+    "- [Run Graph](#run-graph)\n",
+    "\n",
+    "### Option 2\n",
+    "- [Database Configuration](#database-configuration)\n",
+    "- [Short Term Memory](#short-term-memory)\n",
+    "  - [Checkpointer](#checkpointer)\n",
+    "  - [Trimming Long Conversation History](#trimming-long-conversation-history)\n",
+    "  - [Summarizing Conversations](#summarizing-conversations)\n",
+    "- [Long Term Memory](#long-term-memory)\n",
+    "  - [Storing Key Information](#storing-key-information)\n",
+    "- [Memory Types](#memory-types)\n",
+    "  - [Semantic Memory](#semantic-memory)\n",
+    "  - [Episodic Memory](#episodic-memory)\n",
+    "  - [Procedural Memory](#procedural-memory)\n",
+    "- [Usecase Examples](#usecase-examples)\n",
+    "  - [Adapting to User Preferences](#adapting-to-user-preferences)\n",
+    "  - [LangGraph Flow Example](#langgraph-flow-example)\n",
+    "\n",
+    "### References\n",
+    "\n",
+    "- [LangGraph: What-is-memory](https://langchain-ai.github.io/langgraph/concepts/memory/#what-is-memory)\n",
+    "- [LangGraph: memory-template](https://github.com/langchain-ai/memory-template)\n",
+    "- [LangChain-ai: memory-agent](https://github.com/langchain-ai/memory-agent)\n",
+    "\n",
+    "----"
+   ]
+  },
+  {
+   "cell_type": 
"markdown",
+   "metadata": {},
+   "source": [
+    "## Environment Setup\n",
+    "\n",
+    "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n",
+    "\n",
+    "**[Note]**\n",
+    "- `langchain-opentutorial` is a package that provides easy-to-use environment setup, along with useful functions and utilities for these tutorials. \n",
+    "- You can check out [`langchain-opentutorial`](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%%capture --no-stderr\n",
+    "%pip install langchain-opentutorial"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "\n",
+      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n",
+      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Install required packages\n",
+    "from langchain_opentutorial import package\n",
+    "\n",
+    "package.install(\n",
+    "    [\n",
+    "        \"langsmith\",\n",
+    "        \"langchain\",\n",
+    "        \"langchain_core\",\n",
+    "        \"langchain_community\",\n",
+    "        \"langchain_openai\",\n",
+    "        \"langgraph\",\n",
+    "        \"SQLAlchemy\",\n",
+    "    ],\n",
+    "    verbose=False,\n",
+    "    upgrade=False,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Environment variables have been set successfully.\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Set environment variables\n",
+    "from langchain_opentutorial import set_env\n",
+    "\n",
+    "set_env(\n",
+    "    {\n",
+    "        
\"OPENAI_API_KEY\": \"\",\n",
+    "        \"LANGCHAIN_API_KEY\": \"\",\n",
+    "        \"LANGCHAIN_TRACING_V2\": \"true\",\n",
+    "        \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n",
+    "        \"LANGCHAIN_PROJECT\": \"09-ConversationMemoryManagementSystem\",\n",
+    "    }\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "True"
+      ]
+     },
+     "execution_count": 4,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# Load API keys from .env file\n",
+    "from dotenv import load_dotenv\n",
+    "\n",
+    "load_dotenv(override=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Database Configuration\n",
+    "\n",
+    "In this section, we will set up an **SQLite database** to store Short-term and Long-term Memory information for our chatbot. SQLite is a lightweight, serverless database that is easy to set up and ideal for prototyping or small-scale applications.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# SQLite Configuration\n",
+    "# Minimal sketch: connect to a local SQLite file for prototyping.\n",
+    "# The file name `memory.db` is an assumption, not part of the original draft.\n",
+    "import sqlite3\n",
+    "\n",
+    "conn = sqlite3.connect(\"memory.db\", check_same_thread=False)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# SQLite Table Configuration\n",
+    "# Minimal sketch: one table for long-term memories (the schema is an assumption).\n",
+    "conn.execute(\n",
+    "    \"\"\"\n",
+    "    CREATE TABLE IF NOT EXISTS memories (\n",
+    "        memory_id TEXT PRIMARY KEY,\n",
+    "        user_id TEXT,\n",
+    "        content TEXT,\n",
+    "        context TEXT\n",
+    "    )\n",
+    "    \"\"\"\n",
+    ")\n",
+    "conn.commit()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Short Term Memory\n",
+    "\n",
+    "Short-term memory lets your application remember previous interactions within a single thread or conversation. A thread organizes multiple interactions in a session, similar to the way email groups messages in a single conversation.\n",
+    "\n",
+    "LangGraph manages short-term memory as part of the agent's state, persisted via thread-scoped checkpoints. This state typically includes the conversation history along with other stateful data, such as uploaded files, retrieved documents, or generated artifacts. 
By storing these in the graph's state, the bot can access the full context for a given conversation while maintaining separation between different threads.\n", + "\n", + "Since conversation history is the most common form of representing short-term memory, in the next section, we will cover techniques for managing conversation history when the list of messages becomes long. If you want to stick to the high-level concepts, continue on to the [Long Term Memory](#long-term-memory) section." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Checkpointer" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "from typing_extensions import TypedDict, Annotated\n", + "from langchain_core.messages import AnyMessage\n", + "from langgraph.graph.message import add_messages\n", + "from typing import Union\n", + "\n", + "\n", + "def manage_list(existing: list, updates: Union[list, dict]):\n", + " if isinstance(updates, list):\n", + " # Normal case, add to the history\n", + " return existing + updates\n", + " elif isinstance(updates, dict) and updates[\"type\"] == \"keep\":\n", + " # You get to decide what this looks like.\n", + " # For example, you could simplify and just accept a string \"DELETE\"\n", + " # and clear the entire list.\n", + " return existing[updates[\"from\"] : updates[\"to\"]]\n", + " # etc. 
We define how to interpret updates here.\n",
+    "    # Fallback: leave the list unchanged for unrecognized update types.\n",
+    "    return existing\n",
+    "\n",
+    "\n",
+    "# Define State\n",
+    "class FilterState(TypedDict):\n",
+    "    messages: Annotated[list[AnyMessage], add_messages]  # short-term memory\n",
+    "    summary: list[dict]  # long-term memory\n",
+    "    my_list: Annotated[list, manage_list]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# In-memory store for long-term memory (a store, not a checkpointer)\n",
+    "from langgraph.store.memory import InMemoryStore\n",
+    "\n",
+    "in_memory_store = InMemoryStore()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import uuid\n",
+    "from langchain_openai import ChatOpenAI\n",
+    "from langchain_core.runnables import RunnableConfig\n",
+    "from langgraph.graph import StateGraph, MessagesState, START, END\n",
+    "from langgraph.store.base import BaseStore\n",
+    "from typing import Annotated, Optional"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Construct Tool"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain_core.tools import InjectedToolArg, tool\n",
+    "\n",
+    "\n",
+    "@tool\n",
+    "def upsert_memory(\n",
+    "    content: str,\n",
+    "    context: str,\n",
+    "    memory_id: Optional[str] = None,\n",
+    "    *,\n",
+    "    config: Annotated[RunnableConfig, InjectedToolArg],\n",
+    "    store: Annotated[BaseStore, InjectedToolArg],\n",
+    "):\n",
+    "    \"\"\"Upsert a memory in the database.\n",
+    "\n",
+    "    If a memory conflicts with an existing one, then just UPDATE the\n",
+    "    existing one by passing in memory_id - don't create two memories\n",
+    "    that are the same. If the user corrects a memory, UPDATE it.\n",
+    "\n",
+    "    Args:\n",
+    "        content: The main content of the memory. For example:\n",
+    "            \"User expressed interest in learning about French.\"\n",
+    "        context: Additional context for the memory. For example:\n",
+    "            \"This was mentioned while discussing career options in Europe.\"\n",
+    "        memory_id: ONLY PROVIDE IF UPDATING AN EXISTING MEMORY.\n",
+    "            The memory to overwrite.\n",
+    "    \"\"\"\n",
+    "    mem_id = memory_id or uuid.uuid4()\n",
+    "    user_id = config[\"configurable\"][\"user_id\"]\n",
+    "    store.put(\n",
+    "        (\"memories\", user_id),\n",
+    "        key=str(mem_id),\n",
+    "        value={\"content\": content, \"context\": context},\n",
+    "    )\n",
+    "    return f\"Stored memory {content}\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Set Nodes for Memory Storage"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def store_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
+    "    # Extract tool calls from the last message\n",
+    "    tool_calls = state[\"messages\"][-1].tool_calls\n",
+    "    saved_memories = []\n",
+    "    for tc in tool_calls:\n",
+    "        content = tc[\"args\"][\"content\"]\n",
+    "        context = tc[\"args\"][\"context\"]\n",
+    "        saved_memories.append(\n",
+    "            [\n",
+    "                upsert_memory.invoke(\n",
+    "                    {\n",
+    "                        \"content\": content,\n",
+    "                        \"context\": context,\n",
+    "                        \"config\": config,\n",
+    "                        \"store\": store,\n",
+    "                    }\n",
+    "                )\n",
+    "            ]\n",
+    "        )\n",
+    "    print(\"saved_memories: \", saved_memories)\n",
+    "\n",
+    "    results = [\n",
+    "        {\n",
+    "            \"role\": \"tool\",\n",
+    "            \"content\": mem[0],\n",
+    "            \"tool_call_id\": tc[\"id\"],\n",
+    "        }\n",
+    "        for tc, mem in zip(tool_calls, saved_memories)\n",
+    "    ]\n",
+    "    print(results)\n",
+    "    # Return one tool message per tool call so every tool_call_id is answered\n",
+    "    return {\"messages\": results}"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Set Nodes for Agent"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model = ChatOpenAI(model=\"gpt-4o\", temperature=0.7, streaming=True)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+ "def call_model(state: MessagesState, config: RunnableConfig, *, store: BaseStore):\n", + " user_id = config[\"configurable\"][\"user_id\"]\n", + " namespace = (\"memories\", user_id)\n", + " memories = store.search(namespace)\n", + " info = \"\\n\".join(f\"[{mem.key}]: {mem.value}\" for mem in memories)\n", + " if info:\n", + " info = f\"\"\"\n", + " \n", + " {info}\n", + " \"\"\"\n", + "\n", + " system_msg = f\"\"\"You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\n", + " User context info: {info}\"\"\"\n", + " print(\"system_msg:\", system_msg)\n", + " # Store new memories if the user asks the model to remember\n", + " last_message = state[\"messages\"][-1]\n", + " print([{\"type\": \"system\", \"content\": system_msg}] + state[\"messages\"])\n", + " response = model.bind_tools([upsert_memory]).invoke(\n", + " [{\"type\": \"system\", \"content\": system_msg}] + state[\"messages\"]\n", + " )\n", + " return {\"messages\": response}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Conditional Edge Logic" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [], + "source": [ + "def route_message(state: MessagesState):\n", + " \"\"\"Determine the next step based on the presence of tool calls.\"\"\"\n", + " msg = state[\"messages\"][-1]\n", + " if msg.tool_calls:\n", + " # If there are tool calls, we need to store memories\n", + " return \"store_memory\"\n", + " # Otherwise, finish; user can send the next message\n", + " return END" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load and Compile Graph" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [], + "source": [ + "builder = StateGraph(MessagesState)\n", + "\n", + "builder.add_node(\"call_model\", call_model)\n", + 
"builder.add_node(store_memory)\n", + "\n", + "builder.add_edge(START, \"call_model\")\n", + "builder.add_conditional_edges(\"call_model\", route_message, [\"store_memory\", END])\n", + "builder.add_edge(\"store_memory\", \"call_model\")\n", + "\n", + "graph = builder.compile(store=in_memory_store)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize Graph" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAAARoAAAD5CAIAAAB3dm11AAAAAXNSR0IArs4c6QAAIABJREFUeJzt3XdAE+fjBvD3siAkIQkrCMgSQZYCoqLixIkKKBVrxVVRaRFrbau1+m2t/jpE66h11GqdrXsgLhRnnbiooKCyxQBCQsgiIeN+f8RSKwEiXLhceD9/QcblSeDJ7fcQFEUBBEFYIOEdAILMB6wTBGEG1gmCMAPrBEGYgXWCIMzAOkEQZih4BzArYqFKLFDJxRqZRK2uJ8Y+CAoVIVMQKxbZyppiw6PSmfBfovUQYvzNTVvVS2XhI2lhjoxhTdGoUStrMoNFsaCTCPHRUiwQaY1aLtHIxWq5VENnkj0DGF2DmEwuFe9oxAPr1CZioepmmoBEAhwHmmcAw87ZAu9EbcUvqCvMkQkrlGw7Wr9xtlQaXB14B7BOrZd5TpCbKek3zrZrMAvvLNh79JfoZpqgX5Rt93AO3lkIA9aplY5tLPPtY+3b2xrvIMZ197ywVqAaNpmHdxBigLPy1ti2pLBPpK3ZdwkA0GuEjbMn/czv5XgHIQY4d3pnv35ZMHlRZ2sbGt5B2k/ePXHODfF7n7jgHcTUwTq9m6Mby/pG2jp1oeMdpL3l3KytKlMOiXPAO4hJg3V6B3fOCdi21G69zH8ZT6+75wUMa6pfWAd9+4aA606Gqq1WPb0n6bBdAgCERNhcOfwK7xQmDdbJUDfTqvuNs8M7BZ7IZKTXSJvbZwR4BzFdsE4GefVCQaGRvHow8Q6Cs14jbCpLFKp6Ld5BTBSsk0EKHsm4Du130E1OTo5SqcTr6c2zZJKLsmVGmjjRwToZpChH5hHAaJ/XSktLmzFjRl1dHS5Pb5FnAKMwB9ZJP1inltVW1zOsybad2ul4vFbPWHQbaY03X9Lx7M6srSbI0fLtDtapZbXVaoAYZcolJSWJiYnh4eGRkZHff/+9VqtNS0v78ccfAQDDhg0LDQ1NS0sDAGRlZc2bNy88PDw8PHzu3Lm5ubm6p4tEotDQ0L179y5btiw8PHz27Nl6n44tMhmpk2qlIjXmUzYD8OSWlsnEaoa1UT6olStXFhcXf/bZZzKZ7N69eyQSqX///vHx8fv27Vu/fj2TyXR1dQUA8Pl8pVKZkJBAIpEOHz48f/78tLQ0S0tL3UR27NgxceLErVu3kslkHo/X+OmYY1iTZWINC57B0QisU8uMVyc+n9+tW7fx48cDAOLj4wEANjY2Li4uAICAgAAO5/Wh3KNHj46MjNT97Ofnl5iYmJWVFRYWprslMDAwKSmpYZqNn445Bpsiq4VzJz1gnQyAAgrNKEt7kZGRu3btSklJSUhIsLGxaephCIJcvnx53759RUVFVlZWAACB4N+dP7179zZGtmbQLEmoFq476QHXnVpGZ5ElQqN8GSclJS1cuPD8+fNRUVGHDh1q6mHbt2//4osv/Pz81q5du2DBAgCAVvvvnh
86vb0PIKytVlkZZ3ZNdLBOLWNYU2Rio9QJQZAPPvggNTV10KBBKSkpWVlZDXc1bDpTKpU7d+6MiYn57LPPgoKCAgMDDZmyUbe8GW/pl+hgnVrG4lBolkZZ2NNt1GYwGImJiQCAvLy8hrlNVVWV7jF1dXVKpdLX11f3q0gkemvu9Ja3nm4MLC6VySYbb/rEBb9jWmbrZFFRrBQLVdY2GG/LWrx4MZPJDAsLu379OgBA15kePXqQyeQ1a9ZERUUplcrY2FgvL68DBw7Y2tpKpdJt27aRSKT8/Pymptn46dhmLsmVkSkImQq/iPUgL1++HO8MBFBbrVLKNTw3S2wnW1ZWdv369XPnztXV1SUnJw8ePBgAYG1tzePxLly48Ndff4nF4rFjx4aEhNy4cePQoUMlJSXJyclubm5Hjx6dMmWKSqXas2dPeHi4n59fwzQbPx3bzA+viFy60B1cMf4ozAM838kgL57J87Ok8OQ5AEDaNv6QOHsmB+500gMu7Bmks7fVnbPC8qK6Th76N6OJRKKYmBi9d7m4uJSVlTW+fdCgQd9++y3WSd+WkJCgd8nQ19e34eiKNwUFBa1fv76pqeXcqmWyKbBLTYFzJ0OVF9XdOCloarwEjUZTWVmp9y4E0f8h0+l0LpeLdcy3VVVVqVQqw1PRaDQ7uyZP6/ptaeHUr9wsGXA7hH6wTu/g6pEqj0ArV592OrTc1OTcqlXINKHDmtzdDMHtM+9g0Hv2GX+8MtI+KBNX9lz+/IEUdql5sE7vZvIi1/2rSvFO0d4kNar0PZXjk5zxDmLq4MLeO6uv0+z9oWTK4o6yClFZqji/t3LKElcSyTinqZgRWKfWkIrU+1eXjk3o1NSGPrPx/KHk4WVR3MLOeAchBlin1rt04JVcou43zs7G0QxHhC17Lr95SuDUhR4e1aHHb3onsE5tUvRYdjOt2sOfwXOz9AhgmMHikEKuKcqRlRcpaqtV/cbaYn4giHmDdcJAfpbk2QNpUY7Mt481hYYwrCkMa7IFnUyIT5ZMRmRitVysltVqJDWq8iKFRwDDuyfL1ccK72jEA+uEpeJcWe0rlUyslok1GhWq0WD52arV6pycnKCgIAynCQCgM8goilpZUxhssl0niw44/DqGYJ0IQyQSxcbGXrx4Ee8gUJPgficIwgysEwRhBtaJMBAE8fHxwTsF1BxYJ8JAUfTp06d4p4CaA+tEGAiCsNlsvFNAzYF1IgwURWtra/FOATUH1okwEARxcnLCOwXUHFgnwkBRlM/n450Cag6sE2EgCBIQEIB3Cqg5sE6EgaJoTk4O3img5sA6QRBmYJ0IA0GQZsYYgkwBrBNhoChaXV2NdwqoObBOhIEgiL29Pd4poObAOhEGiqJGvS4G1HawThCEGVgnwkAQpEuXLningJoD60QYKIoWFBTgnQJqDqwTBGEG1okwEARpuKQnZJpgnQgDRVG9V2SCTAesEwRhBtaJSPz9/fGOADUH1olIHj9+jHcEqDmwThCEGVgnwoADg5k+WCfCgAODmT5YJwjCDKwTYcBx9kwfrBNhwHH2TB+sE2EgCNK1a1e8U0DNgXUiDBRFnz9/jncKqDmwThCEGVgnwkAQhMfj4Z0Cag6sE2GgKFpZWYl3Cqg5sE6EgSAIPATWxME6EQaKovAQWBMH60QYcO5k+mCdCAPOnUwfrBNhIAji4uKCdwqoOQiKonhngJqTkJBQXl5OJpO1Wm1NTY2NjQ2CICqV6uzZs3hHg94G506mbtKkSWKxmM/nV1RUKJXK8vJyPp9PJpPxzgXpAetk6oYPH+7l5fXWjUFBQTjFgZoD60QA8fHxVlZWDb/yeLwpU6bgmgjSD9aJACIiItzc3HRruSiK9uzZE45faZpgnYhh2rRpTCZTN2uaPHky3nEg/WCdiGH48OFubm4AgJCQEDhrMlkUvAOYIo0aFb2qF9eoTWonwviRiUB+fOSAaYU5Mryz/ItEAhx7KseehncQkwD3O70t+0
ZtbqZYpUTtXSwVMg3ecUwdi0t58UzO5FKCB3E8Ahh4x8EZrNN//H1VxC9S9o9xQBAE7yxEolFrM/bxQ4dz3f06dKPgutO/Ht+qLStQhI/nwS69KzKFNHKGy51zQn5hHd5Z8ATr9JpWgz6+Le4f7YB3EALrO87hwSUR3inwBOv0mlioUsi0ZAr8QFqPbUcryTWhzSTtD/73vCYWqu1dLPFOQWwkEuLQmS4VqfEOghtYp3+gAG7HaztJjQrvCHiCdYIgzMA6QRBmYJ0gCDOwThCEGVgnCMIMrBMEYQbWCYIwA+sEQZiBdYIgzMA6QRBmYJ0gCDOwTu3nytWMIRGhpaXFul83/Lxqwnsj2j/G6TMnhkSECgTVzT9s5qy4FSuXtFcoMwHrBEGYgXWCIMzAkYzaJDs7a/eebU9yswEAPXr0nDkj0btrt+zsrL37tmfnZAEAuvn4JyYu8PFu/VBeR47+ee2vSyOGj9m9Z1ttrahLF+9ZH36ckXH2xo0rFCp1xPAxc2Yn64YsFwiqt2xddyfzhlqtDgwISpy7wNPz9WjMz/Ofbvxl9dOnT2xt7Dp3dntz+g+z7v22/ZeCgmdcrk1wUK+EWUm2tnZt/mA6KDh3ar27925/+tlciUScOHfBnNnztRqNRq0GAFRU8JX1yqnxCdOnzamo4H+5ZL5CoWjLC2VnZ126lL7861VfLv62tLToi0VJNBptzZotMdFxhw7vO5eeBgBQKBQLP0+8/yBzzuz5Cxd8VS2oWvh5okQqAQCUlhZ/unCOoLpqdsK8iRPjnz3Pa5jy/QeZixbPc3fz/Pyz/8W9F//o0YOFnye2MW1HBudOrffLpjWOjk4bf/6dRqMBAGKiJ+puHzZs9PDhkbqffXz8Fn6WmJ2T1Ss0rC2v9fX/fuBwuP7+3TPv3rx9+/qnC5YgCOLj7Xv+/KkHDzLHRMZcyDhTWlr805otIcG9AACBgcEfxEcdO3Zg+rTZW7dtICGkTb/s4nC4AAASibR+w4+6yW78ZfW4sRPmJy/S/RoaGjZ95nt3790aED6kbZ9NBwXr1EoCQXVpaXHCrCRdl96EIMhf1y8fOryvpKRIN1R/jVDQxpej0Sxe/0ClUanUhrGW7OwdamtFAIC//77PZDB1XQIAODp2cnV1f/rsiUKhuHv3VlTUe7ouAQAolNd/9IqK8pKSopcvX5w6ffzN13r1Cl4fvpVgnVpJIhEDABzseY3v2rN3+85dW2MnTJ6TkCwQVn+74kstqjVSDAR5PVKiVCZl/1MYHWtrtqC6SiCsVqvVnRydGj+3pkYAAJg+bc7AAUPfvN3GBq47tRKsUyvR6VYAAGHN27MdpVL55/6dYyJj5iV91p7f9PZ2Dk+eZL95i1Ao4Dk4cthcAEBNjbDxU5hMFgBAqVS4urq3T0izBzdFtJKDA8/e3iH9/Cm1+vXAPSiKarVahaJOqVR6/7Mpr1YsAgBotVrdchoAQCyu1d1FpdLq6uQNT28jf//uEok4NzdH92tBwfOXL18EBgYxGAxn585XrmaoVG8PiuLi4srjOZ49d7Ku7vVYk2q1uuFhNCpNNweGDAfr1EoIgsyZPb+oqCBp3oxjxw+eSD2clDzz4sVzbDbH09Pr2PED129cSU8/9c03X5BIpMLCfACAh6cXiURat+GHh1n3AABdvXwUCsXyFYtf8svanmdYxGgXF9flKxafOn38zNnUZf9byOFwo6Mm6hbn+Pyyeckzj584lHryyMFDexveQtLHnwkE1UnJM06kHj527EDSvBmpJw/r7vXy8rl3/86mzWvhsNuGg3VqvWERo1auWIOi6Jat6/b9sYPD4Tq7uAIA/rf0e7olfcXKJQcP7/3oo0+nxs9KT09TqVSdHJ0Wf/GNUqm8ffs6ACAiYlTcxPi8vMfFRQVtD0OhUFav2uTj7bdl67qNv6x2dXXfsO43LtcGADB82Oj5yYvE4tpft204ezbVzy+w4VkDwof88N16KoW6afNPe/Zt5/
E6de8eorsrYVbSgPAh586d1M1aIUPAIf9fK82T378oGhavZ5UdMtzhtcVxn7owOR10nbyDvm2TIpVKJ08Zq/euuXM+GTtmfLsngloJ1gl/VlZW2379U+9d1ix2u8eBWg/WCX8kEknvfiGIcOCmCAjCDKwTBGEG1gmCMAPrBEGYgXWCIMzAOkEQZmCdIAgzsE4QhBlYJwjCDKwTBGEG1uk1MgVhWJPxTkF4NjwaiYzgnQI3sE6v2TnTih7L8E5BbHKJWlihtGJ13G8lWKfXLOhk125WAn4d3kEIrLKkzrsnE+8UeIJ1+tfgifZXD1eqVfDk09ao5iseXhKER9vjHQRP8Gzc/5BL1LtXlvQeZcfiUq3taAB+Ni1CgLBCKa1RPb1b+8FiVzKl4644wTrpl3lO8LJAodWiEqH+YYYUCgWNRiOR2nXejqKoQqGg0+nt+aKNqdVqtVptaWmp+9XGkYYgwMWbHjyY29JTzR+s0zvbs2ePi4vL0KFDDXgslrZs2bJ///4vvvhi3Lhx7fzSb0lNTeVwOIMGDcI3hgmC606GKi0tXbJkCQBg2rRp7d+lioqKy5cvy+XyQ4cOtfNLNxYdHa3r0tixYy9evIh3HBMC62SoVatWJSQk4PXqR44cKSws1LX65MmTeMV4S0OqiooKvLOYBLiw14Lr16+Xl5dPnDgRxwwVFRVJSUklJSW6X319fffu3YtjnsauXr168uTJVatWNVxPoGOCc6cmoSiqVCoPHz6M+7rKkSNHGroEACgpKTl+/Hizz2hvgwYNmjhxYnV1dXl5eUe+PBSsk34nTpzIzMykUCgbNmxo2IqFCz6ff/ny5TdvkcvlBw4cwC+RfmFhYY6OjnQ6PSIi4s6dO3jHwQeskx7Hjh3Lzs7u06eP7iKZ+Dpw4IBu1vTm4MZvzqxMCofDuXHjhm4GVVRUhHec9gbXnf7j8uXLQ4YM4fP5Tk4mN/BdTU3NxIkTMzIy8A5iqPT09JMnT27YsKHjrFB1lPdpiMWLF4eEhAAATLBLurmTr2/rL1nd/kaOHMlms/l8PovF4nI7xE5euLAHAAB5eXkAgOnTp0+aNAnvLE2SyWRlZRhcuqY9hYWFubq6kkikiIiIggIMLhRi4mCdwPz580UiEQDAz88P7yzNkUgknTp1wjtFa7DZ7KNHj2ZnZwMAGl+1zZx06DopFIrc3NxJkyaFhbXpsurto7KyksFg4J2ilTgcTkxMDABg6dKlprMbGnMdt07r16+Xy+W+vr79+/fHO4tBKisru3TpgneKtkpJSXn48KHuOFq8s2Cvg9bp+PHjtra2NjY2eAd5Bzk5Oe7u5nBN6G+++YZEIt27d+/MmTN4Z8FYh6tTTk4OACA8PHzq1Kl4Z3k3KpWqW7dueKfABolECgsLu3Xr1qNHj/DOgqWOVaebN2/+/vvvAAB7e4KdNFpeXv7kyRPzmDs1WLlypYODQ319fX5+Pt5ZsNGx6lRTU7N27Vq8U7TGvXv3IiIi8E6BPUdHRxqNtnTp0qysLLyzYKCj1Gn58uUAgDFjxuAdpJVSU1MHDx6MdwpjOXjwoFAoxDsFBjpEnVasWBEXF4d3itYrLi6uqakJDg7GO4gR6c7InDdvHt5B2qRDHLNXVVVFuJWlN+3cuZPD4Ywfb/6XcC8pKVm7du2GDRvwDtJKZl6nRYsWLV++3MrKCu8grScSiWJjYzvOOeRarZZEIpWUlLi5ueGd5Z2Z88LesmXLli5dSuguAQA2bdqUlJSEd4r2oxsf6sCBA0Q8xs+c6/R///d/bDYb7xRtUlRUVF5ePmHCBLyDtLfFixcTcSeveS7s7dq1q2vXrkQ5eqgZEyZMWLduHREXezDx6tUrNpttYWGBdxBDmeHc6fr163K53Ay6tHHjxqioqA7bJQCAg4NDXFxcVVUV3kEMZZ5zJzNw+/bt/fv3E3cbF1bUavW1a9faf2DD1jG3Ot24ccPFxYXo3+gVFRWzZs06ffo03kGgd2NWC3
uFhYXr168nepcAADNnzjSF0V5NR0pKyokTJ/BO0TKzqpNKpdq6dSveKdoqOjr6t99+I+6ZgsawaNGiW7duyeVyvIO0wNwW9oguNjb2p59+MrMjxzsO85k7paenE33F/cMPP1y7di3sUlN2795dWlqKd4rmmE+dLl265O/vj3eK1hszZox5rPgZj7e3d0pKCt4pmmM+C3tCoZDL5SII8a5+p1Qqhw4devToUUdHR7yzmLrKykoul0uj0fAOop/5zJ3YbDYRu5Sfnz9nzpyLFy/CLhmCx+OZbJfMp04FBQWTJ0/GO8U7u3jx4tKlS3fv3o3vVQUIRKPRjBw5Eu8UTTKTOkkkEhaLhXeKd7Nr16709PSDBw/iHYRIyGRycHDwtWvX8A6in5msO6nVapVKhftlmA331VdfOTs7d6gzLzoCM5k7USgUonRJLBbrri0Lu9Q6arXaZE+FMpM6lZWVxcbG4p2iZZmZmdHR0Zs2bTLlFQATR6FQkpOTKysr8Q6ih5lckMbR0dH0ry6xefPmysrKty4lCLXCwIEDi4uLeTwe3kHeZibrTrqFKBaLZbLbymfPnh0WFjZr1iy8g0BGZCZzJ902n+joaJlMJpFIHB0dTecyDXfu3NmyZcv8+fN112KD2k4ul9fV1dna2uId5G2Er9PAgQPlcrluHqubNaEo6uPjg3eu137++ee8vLxdu3bhHcSs5Obmbtu27ddff8U7yNsIvyli2LBhJBIJQZCGxTwymdynTx+8cwGhUDhv3jw2m71582a8s5gbZ2dnV1dXvFPoYQ7rTjNmzMjOzm6ok6Oj4/r16728vHCMlJGRsWrVqk2bNnl7e+MYA2pnhJ87AQBWr17dcFIDiqIsFgvfLi1dujQ3N/fChQuwS0aCoqhpjmluDnWyt7dfuHBhwyGkgYGBeCV58uRJRETEgAEDkpOT8crQESiVynHjxuGdQg/Cb4rQ6d+/f2xs7M6dO3FccdqxY8eVK1eOHj3K4XBwCdBxkMlk0zy3zaB1J7VKWyfVtkueNklJSXn27FlKSko7X6VTo9EsWbIkMDAQqysaoihqbUPFZFJQe2qhTrmZ4kd/1Qor6ulMcjumaiUURXHZjatSqUgkEpmM2UfEsae9LJB7BjJ6DbexcybMGKjG9tVXX507d45EIjX80yIIotVqHzx4gHe015pb2Ms8L6zmqwZMcGTBb8p2p9WitVX1Z/dURLzPc/KAZ0MBAMCcOXOys7PLy8vf/NL09PTENdR/NLkp4s45YW2VesB4HuwSLkgkhMuziPnY7cqhVxXFCrzjmAR3d/e3rhmHIEh4eDh+id6mv041r+qrXyrDxjq0ex7obUMmd7p3oQbvFKZi5syZDg7//lu6uLiY1HUl9dep+qUSRU30WNKOhsmm8gvrlHUavIOYBA8Pjz59+ujWnVAU7du3r5OTE96h/qW/TtJajX1nuLxuKtz8mIKKerxTmIpp06bpZlAuLi7x8fF4x/kP/XVSKbUqBQG2jHcQYkE9AhcW/uHh4dG3b18URcPDw01q1mQ+u3Ehk4Vq0ZI8ubRGLROr1Sq0TobBUmsPp3h5UJdutv0z9mNwTq6lFZlmSbJikVlciptvm4aGh3WCjCU3U/zsgfTFU7mTt7VahZKpZBKNAgAWG4oRyz79xqpRIMHiGgASGapRqbUqFYWKpG0rd/NleIcwfEKtWzEpWCcIe7mZ4uupAltXJsWKFTDC5E5BbwbXzUZSJX98T3n9ZNGAGDvvkHcbbQ7WCcKSQqY5s7OyXkXy6OVMsSDAkTRvQRDE2oEBAIPFs75/RZh7VzrmQx6FauiR4uZwRDlkIl7m1+1eWcLoxHXsZk/ELr2Jaknp5OtgacPZtqSossTQ3eiwThA2BPz6i4eqfQa5WViZ7hji78qCSfOLcD+355WoyqAdFbBOEAZK8+RndlW6BpvWZmusuIU6p26tKC+qa/GRsE5QW8nE6vQ9lZ3NtEs6bqHOJ7bw65Ut7IyFdY
La6tyeV+69zblLOl36OJ/ZWdH8Y2CdoDZ5cEmk1lCoFua/iZhmRa2rI+fcrG3mMbBOUJvcOl1t79Wu5z7jyL4L98ZJQTMPMG6dpFLps+d5Rn0JCEcPLtc4+diQSKZ4POGKlLFHUn/EdpoUKtnOg53d9AzKuHVKmPP+2bOpRn0JCEe5mRJLdsc688CSaZGXKWnqXuPWqb6+lacVmMFgmjpm80Yak4rUcrGGbt2xhrJg2NAF5cqmTj/DbA3yz/27TqQekkjEXl4+M6bP7RnS+/0PxtbUCE+kHj6RepjHczzw5ykAgEBQvWXrujuZN9RqdWBAUOLcBZ6eXgCAK1czvl3x5cpv1xw8vDcv7/Hk96d/OPMjhUKxfcemi5fO1dcrO7u4xcVNHTpkRDMZnuc/XfDp7P8t/f63Hb+UlhbzHBynTPlQKBScTDsilUqCg3t9vnAZh8PVPTj15JFDh/dVV79ydHSKGDpqUtxUCwsLw6egVqt37tqafv5Uba3Izc1jxvS54f0HN34jE9+bkpZ2NDIy5qPEBbrXfckvi58a8+Wi5SNHjsXqw8dF6VM518VYF1DNL7x/5sJmfsUzFtPGyyN09PCPrFl2AIBl30XEjluck3vlydMbdEtmWK/xI4Yk6J6i0Wgyruy4fe9EfX1dF8+eKpWxRgSwc7MuzZN3Ddbz3snLly9vfOvLgjqNGji6G3o9v/sPMn9c9U3fvgMmxn5QWytydurs6uoeEBB07drFPr37fb5wWUTEKDs7e4VCkfzJh8XFhQmz5g0IH5J59+aJ1MNjxoy3oFkUlxRevZqRnfPw/bhpMTFxvUL70un0L5fMz8vLiYuLHzJ4RH19/fYdmxwceF27dmsqhlAoOHb8YM7jrKSPFo4aNe7+g8zTp0+o1KqFC77q3j3k2LH9FZXlAwcMBQDs2r1t777fIkdHR0bG2HBtDh/ZV/byxYDwIYZPIWX1irRTR9+L/SBq3Huvqip37/ktJLgXj9fprTfSp3d/qUxy9VpG7ITJJBIJAHDq1LHHj//+4vOvDb8CeX6W2NXHisU1ra1nT+9LJGKEwcV+Ye95wd3tez/p2qXXwL7vOzl6/52T8eDRuV7B48hkyqW/9jx6fDEocMSoYYkkEvni1Z0uzn72dq4AgGNpKddu/tkjYFi/3rE1oooyfq6Lk69fN+xHkpAIFJaWqLOXnnZg8xeqqOADAMZHx/n7dx8+PFJ3YzcfPwqFYmtrFxgYpLvlQsaZ0tLin9ZsCQnuBQAIDAz+ID7q2LED06fN1j1gfMykhu/sK1czHmU/3P9Hmp2dPQBgWMSoujr50WP7I0dHNx8mce6CsLBwAEDcxPhVKd9++skSD48uAaDH/ft37mTeAABUV1f98efvy5Z+N2hghO4ptrb269b/MC/pcwOnUFpanH7+1LSpCTOmzwUADBoYET9t/K7dv679aWvjNzJy5LjUk0fu3rsd1qc/AODq1Yy+YQMYjDadV2MKpCINxcIoS3onTv8UFjp+/NjXfw6Oo577AAAIfElEQVRvrz6rf570NP92oN9gAEDvkKiIQTMAAE6O3pn3U5/l3/bz6V/Gz7t973jEoJmjhyUCAEKDxxQUGWu0MKoFWVKj0nsXNnUK6xPOYll//8P/kud9oftH1Ovvv+8zGUxdlwAAjo6dXF3dnz570vCAkJDeDT/fvn1drVZ/EB/VcItGo2EwmC2GsaC9/htTqTQAAPWfmYC9vUNtrQgAcP/+HbVa/d33y777fpnuLt0aTnXVKwOn8PejBwCA8PAhutsRBOkVGnYh44zeN+Lbzd/d3fP8+VNhffrzy18+e543dWpCi+/C9NXJNBQW9se5CmvKK6uKqoUvbt878ebtotrXZwrSaK9nC2QymW3tUCuuAgBkP7kCABjYb3LD4xHEWNsFyBZkmVj/AUfY1MnW1u6Xn3/ftGXtkqULAgJ6fL3sB3t7PaMgSWVS9j+rLjrW1mxBdVXDr1Z0q4afa2oEtrZ2a9
dsffPxZErrAyPI6zE6BcJqAMD33613sP/PqThOTi5Fxc1dw7hhCjKZFADA5fy7v8Xami2Xy2UyWeM3AgAYPSpqx++bJVLJ1asZTAazT+/+rX4XpgM1zugHEqkAADB8SEJ3vyFv3s5i2TV+MIlE0Wo1AACRqMLSksmwYhsl038hKABNjDWA2eK4q6v7qh9+fvDw7tfffL4qZfma1a8vavTmpi17O4cnT7LffJZQKOA5OOqdIItlLRLV8HidLLBeomCxrBsyt24KdnYOAACxuFa3IKp7IxQKxdJS/4rE8GGR237bePny+atXMwYOjKBSzWHoQgabLFdiP74S3ZIFAFCplA727/DXYTC4CoVUpa6nUox+PLuqXsPl6J8tYzZD1G0TDwnuFRY2oGHXLd2SLhBUNzzG37+7RCLOzc3R/VpQ8PzlyxcNa1ZvCQnprdFoTqYdabilrq7lQ3oNERzcC0GQ4ycOtnrKvr4BCILcvnNd92t9ff3tO9f9/bs3Na4yl2sTFhZ+8NDep89yIyJGtS2+qWByyOp67Otkb+fKYTvefZCmrH/9R9Fo1Gq1/nWVBi7O3QAADx+lY56nMbVS3dRmIWzmTrl5j79dsTgmOo5Ot8rMvNnNx093e2Bg8MVL5/7cv4vFsvb36z4sYvQff+5cvmLx1PgEEom0d+92DocbHTVR7zSHD4tMO3Vs668byiv43l275ec/u37j8q7fjzQ1BzCci3PnCePfP3ps/1fLPg3vP1ggqD6ReuiH7zd4N73N8C3OTi4jR4zdtftXjUbj5ORy+vRxoVDw1ZKVzTwlYuioFSuX2NraBfXo2cb8JsLWkVZRpsR8sgiCREd+unv/4o2/zurbe4JWq7n38EzPoFFvrhc11sN/WMaV34+m/lhRWejcybv4RbZYUtXM49uCTEa5DvqXL7CpE41Kc3P1+PPPnSiK9gjqOX/eIt3tc+fMFwqr9+7bzmFzP/54oaen1+pVmzZvWbtl6zqtVts9MDjp48+4XP1HfFGp1NWrNv22feOlS+mnTh1zcXGNGvcepQ3rTm9K+nihgwPv+PGDd+/esrW1GxA+xN7u3Ya8XfDJlwwG8/iJgxKJ2MO9y/f/t65hE4tefr6BAIAhg0foNpebATdfxrVj1bbu2B+wF+g3+MP4tekXt508s87SkunhHuTpHtz8U8hkcsLU9cdPrb5196ilBbO7/1CGlbEuC/SqUOKeoGdFrskraGSmC+sVoMfgjnJoYzsoKHieMGfyls17Gmbdhju3syw8yq6Tp8kdznNgzQtrZxsrjskFMx6poE5ZUxub7Kz3XtPaM2iI+QsSioryG9/er9+gJYu/xSNRCyorK1JPHj5zNjU4KLQVXTJlfn1Yzx8rmqnT84K7uw982fh2uiWrTqH/yLexI5PDQmOwSpj79MYfR75ufDuKogCgejemz53xS2dn36YmqJQq/Ho3eSwI8er09bIfVPpWTOmWhh7D0c5KXxSfv3A6ImLUrJkf450FY90HcG6dKuQ4schU/dtg3DoHLvx4b+PbURQ0dSEuKzqWG7u7ePTUG0Cr1aIoqnfTkd4t8joqhbrmpcQ30aOpBxCvTg3bpomiV2jYkUPn8E5hLP2ibB/frXH00f8vSKNZ2tDwPFEX2wDVhcIBMU2WDZ4+CLVVYH82na5VylvYkG0GFFIli4P49GzuqF9YJ6itxnzIK7j1Eu8UxoWiaP4t/thZ+g85aADrBLUVhUqKTXYuyizDO4gRFd4q+2CRa4sPg3WCMMBzs3xvvlPhnRfmd7qkVq3Nv/li8ucuNo4tH74E6wRhg8WlRs/t9PhCsbzWfK7kK6tRPL1WOnGBsxXboI12sE4QZmw7Wcxb54XKJS9zKhRSYl8usU6sLPu7gqyWfrS6C9vW0EOWibehHDJxYz50LHos++vEKzrbkmJpYe1g1dReKROkrteIX8k1SmW9VDlwvJ2rj5UBT/oXrBOEPQ9/hoc/ozBb+jxLln9TaONspV
JqyTQKhUYFJjiIGIqqlWqNSk21INWUyz38GV37M939WnNZKlgnyFg8A5megUwAQEVxnVSkkYnV9QqtQmZy11y2tEIsrGhW1lZMDtnRrYVN4c2DdYKMzvAxfIhOf51olojWFOfKHZS1Ha2pI9wgk6J/yx6LS60qwebUV6jtirIlNk7mcw0yM6a/Tg6dLeDXoYmoFdS7drOiWcBdGgTQ5NzJ2cvy2tEWrmYDtYOL+/hho23xTgEZRP/ZuDqPb9U+z5L2GGTL5dHIFPjt2K7qZOraKtVfRyvGJzlzeXBJjxiaqxMAoOixLOuqqKJIQabAhb/2Y9OJJnql8gxg9B5lw+TAra+E0UKdGijrTG53gRlDUWBpBRcHiMfQOkEQ1CL4FQhBmIF1giDMwDpBEGZgnSAIM7BOEIQZWCcIwsz/A3v0G+V4UCycAAAAAElFTkSuQmCC", + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from IPython.display import Image, display\n", + "from langchain_core.runnables.graph import CurveStyle, MermaidDrawMethod, NodeStyles\n", + "\n", + "display(\n", + " Image(\n", + " graph.get_graph().draw_mermaid_png(\n", + " draw_method=MermaidDrawMethod.API,\n", + " )\n", + " )\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Run graph" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "================================\u001b[1m Human Message \u001b[0m=================================\n", + "\n", + "Hi! My name is LangChain. I love keep updated on Latest Tech\n", + "system_msg: You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\n", + " User context info: \n", + "[{'type': 'system', 'content': 'You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\\n User context info: '}, HumanMessage(content='Hi! My name is LangChain. 
I love keep updated on Latest Tech', additional_kwargs={}, response_metadata={}, id='2e3fa22d-fb38-496b-9fc1-355cbb716077')]\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "Tool Calls:\n", + " upsert_memory (call_HzLinSecG8ZvN2JY2Ul9GzE3)\n", + " Call ID: call_HzLinSecG8ZvN2JY2Ul9GzE3\n", + " Args:\n", + " content: User loves staying updated on the latest tech.\n", + " context: User introduced themselves as LangChain and expressed their interest in technology.\n", + "saved_memories: [['Stored memory User loves staying updated on the latest tech.']]\n", + "[{'role': 'tool', 'content': 'Stored memory User loves staying updated on the latest tech.', 'tool_call_id': 'call_HzLinSecG8ZvN2JY2Ul9GzE3'}]\n", + "=================================\u001b[1m Tool Message \u001b[0m=================================\n", + "\n", + "Stored memory User loves staying updated on the latest tech.\n", + "system_msg: You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\n", + " User context info: \n", + " \n", + " [0056b832-4cfc-42f5-8da6-da0212ed38c1]: {'content': 'User loves staying updated on the latest tech.', 'context': 'User introduced themselves as LangChain and expressed their interest in technology.'}\n", + " \n", + "[{'type': 'system', 'content': \"You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\\n User context info: \\n \\n [0056b832-4cfc-42f5-8da6-da0212ed38c1]: {'content': 'User loves staying updated on the latest tech.', 'context': 'User introduced themselves as LangChain and expressed their interest in technology.'}\\n \"}, HumanMessage(content='Hi! My name is LangChain. 
I love keep updated on Latest Tech', additional_kwargs={}, response_metadata={}, id='2e3fa22d-fb38-496b-9fc1-355cbb716077'), AIMessage(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_HzLinSecG8ZvN2JY2Ul9GzE3', 'function': {'arguments': '{\"content\":\"User loves staying updated on the latest tech.\",\"context\":\"User introduced themselves as LangChain and expressed their interest in technology.\"}', 'name': 'upsert_memory'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_5f20662549'}, id='run-fc760f0e-afa9-48c0-b64a-48eefc62f4bf-0', tool_calls=[{'name': 'upsert_memory', 'args': {'content': 'User loves staying updated on the latest tech.', 'context': 'User introduced themselves as LangChain and expressed their interest in technology.'}, 'id': 'call_HzLinSecG8ZvN2JY2Ul9GzE3', 'type': 'tool_call'}]), ToolMessage(content='Stored memory User loves staying updated on the latest tech.', id='54ab0feb-5e6f-44db-9609-94ee4100322f', tool_call_id='call_HzLinSecG8ZvN2JY2Ul9GzE3')]\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "Hello LangChain! It's great to meet someone who's passionate about staying updated on the latest tech. How can I assist you today?\n" + ] + } + ], + "source": [ + "config = {\"configurable\": {\"thread_id\": \"1\", \"user_id\": \"1\"}}\n", + "input_message = {\n", + " \"type\": \"user\",\n", + " \"content\": \"Hi! My name is LangChain. 
I love keep updated on Latest Tech\",\n", + "}\n", + "for chunk in graph.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", + " chunk[\"messages\"][-1].pretty_print()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "langchain-opentutorial-F0L5SJfm-py3.11", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.5" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 8f85d3ae412afb4b357b9b76b409d05766ddc5bb Mon Sep 17 00:00:00 2001 From: syshin0116 Date: Wed, 8 Jan 2025 23:29:32 +0900 Subject: [PATCH 2/6] [N-1] 05-AIMemoryManagementSystem / 09-ConversationMemoryManagementSystem - remove table of content descriptions --- .../09-ConversationMemoryManagementSystem.ipynb | 8 -------- 1 file changed, 8 deletions(-) diff --git a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb index 2985369d6..9609769e0 100644 --- a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb +++ b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb @@ -44,14 +44,6 @@ "\n", "## Table of Contents\n", "\n", - "두가지 안 중에 고민중입니다. 
편의성을 위해 초안엔 ##, ### 모두 포함했습니다.\n", - "\n", - "1안: LangGraph 사용법 더 자세히 설명\n", - "\n", - "2안: LangGraph 사용 + 미사용\n", - "\n", - "+ VectorDatabase 사용법도 추가하고 싶은 마음이 있습니다..시간되면..\n", - "\n", "TODO: 1안, 2안 결정 + ### 제거\n", "\n", "### 1안, 2안 공통\n", From ddb47c5f1de3fdcf7d7f397ca312518325f75340 Mon Sep 17 00:00:00 2001 From: syshin0116 Date: Sun, 12 Jan 2025 15:55:37 +0900 Subject: [PATCH 3/6] [N-1] 05-AIMemoryManagementSystem / 09-ConversationMemoryManagementSystem - refactored code for better system integration - added comments - added markdown explanations --- ...9-ConversationMemoryManagementSystem.ipynb | 758 ++++++++++++------ 1 file changed, 529 insertions(+), 229 deletions(-) diff --git a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb index 9609769e0..3319a9bb1 100644 --- a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb +++ b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb @@ -16,74 +16,104 @@ "\n", "## Overview\n", "\n", - "In modern AI systems, **memory management** plays a critical role in creating personalized and efficient user experiences. Imagine interacting with an AI that remembers nothing about your previous conversations—it would be frustrating to repeat the same information every time! To address this, memory management systems are designed to handle and organize information effectively.\n", + "In modern AI systems, **memory management** is essential for crafting **personalized and context-aware** user experiences. Without the ability to recall prior messages, an AI assistant would quickly become repetitive and less engaging. 
This updated code demonstrates a robust approach to handling both **short-term** and **long-term** memory in a conversational setting, by integrating:\n", "\n", - "This guide focuses on managing **Short-term** and **Long-term Memory** for conversational AI, specifically for building personalized chatbots. We will explore two implementation paths:\n", - "\n", - "1. **Using LangGraph**: A structured and feature-rich framework for managing AI memory.\n", - "2. **Without LangGraph**: A more manual, flexible approach for situations where LangGraph is unavailable.\n", + "- A central `Configuration` class for managing runtime parameters (such as `user_id` and model name)\n", + "- An `upsert_memory` function for **storing** or **updating** user data in a memory store\n", + "- A `call_model` function that **retrieves** context-relevant memories and incorporates them into the system prompt for the model\n", + "- A `store_memory` function that **persists** newly identified memories and tool calls\n", + "- A `StateGraph` that orchestrates the entire conversation flow, connecting nodes like `call_model` and `store_memory` to streamline user interactions\n", "\n", + "By leveraging these components, your conversation agent can maintain **deep context** over multiple turns, provide more accurate and engaging responses, and seamlessly update its memory when new information arises. This design illustrates a scalable way to build conversational AI systems that dynamically **remember**, **reason**, and **respond** according to user needs.\n", "\n", "### What is Memory?\n", "\n", - "Memory in AI refers to the ability to **store, retrieve, and use information** during interactions. In conversational systems, memory can enhance user experience by:\n", + "Memory refers to the capability of an AI system to **store**, **retrieve**, and **use** information. 
In conversational AI, this typically involves recalling the user’s previous statements, preferences, or relevant context—leading to more **personalized** and **adaptive** interactions.\n",
+    "\n",
+    "### Short-term Memory\n",
+    "\n",
+    "Short-term memory lets your application remember previous interactions within a single thread or conversation. A thread organizes multiple interactions in a session, similar to the way email groups messages in a single conversation.\n",
+    "\n",
+    "LangGraph manages short-term memory as part of the agent's state, persisted via thread-scoped checkpoints. This state typically includes the conversation history along with other stateful data, such as uploaded files, retrieved documents, or generated artifacts. By storing these in the graph's state, the bot can access the full context for a given conversation while maintaining separation between different threads.\n",
+    "\n",
+    "Since conversation history is the most common representation of short-term memory, a key practical concern is managing that history, for example by trimming or summarizing older messages, once the message list grows long.\n",
+    "\n",
+    "### Long-term Memory\n",
+    "\n",
+    "Long-term memory extends an AI system's capability to recall information across multiple conversations or sessions. Unlike short-term memory, which focuses on maintaining the context of a single thread, long-term memory stores **persistent information** such as user preferences, key facts, and important events. 
This enables the system to create a **seamless and personalized user experience** over time.\n", + "\n", + "Long-term memory is typically used to:\n", + "\n", + "- **Recall User Preferences**: For example, remembering a user prefers movie recommendations in a specific genre.\n", + "- **Track Progress**: Storing milestones or progress made in ongoing projects or discussions.\n", + "- **Adapt Over Time**: Learning about the user’s changing interests or requirements.\n", + "\n", + "In this system, long-term memory is managed through **memory upserts** that save critical user information to a persistent data store. This allows the agent to access, update, and retrieve information beyond a single conversational thread.\n", + "\n", + "### How Long-term Memory Works in LangGraph\n", + "\n", + "In LangGraph, long-term memory is implemented as part of a **persistent data layer**, decoupled from the conversational state. Key components include:\n", + "\n", + "1. **Memory Store**: A database or key-value store where long-term memory records are saved.\n", + "2. **Memory Retrieval**: Mechanisms to fetch relevant memories based on the current conversation context.\n", + "3. **Memory Updates**: Processes to modify or append new data to existing memory entries when user information evolves.\n", "\n", - "- Adapting to user preferences\n", - "- Handling repetitive queries automatically\n", - "- Providing continuity across sessions\n", + "By linking long-term memory with `call_model` and `store_memory` functions, the system can ensure that the language model has access to **relevant, long-term context**. This makes interactions more coherent and reduces repetitive queries.\n", + "\n", + "#### Challenges with Long-term Memory\n", + "\n", + "- **Scalability**: As the number of users and stored records grows, querying and retrieving relevant memories efficiently becomes a challenge.\n", + "- **Relevance Filtering**: Not all past information is useful for every interaction. 
Filtering out irrelevant data while retaining critical context is key.\n", + "- **Data Privacy**: Long-term memory must comply with privacy regulations, ensuring sensitive data is securely handled and stored.\n", + "\n", + "#### Example Use Cases\n", + "\n", + "|**Scenario**|**How Long-term Memory Helps**|\n", + "|---|---|\n", + "|**Personalized Assistant**|Storing user preferences for better recommendations over time.|\n", + "|**Customer Support**|Remembering prior issues and solutions to streamline support.|\n", + "|**Learning Systems**|Tracking progress in an educational setting for continuity.|\n", + "\n", + "By incorporating long-term memory, your system can provide **intelligent, context-aware, and adaptive responses** that enhance the overall user experience. In the next sections, we will implement this functionality and explore how it integrates into the overall conversation flow.\n", "\n", "### Short-term vs Long-term Memory\n", "\n", "|**Type**|**Scope**|**Purpose**|**Example**|\n", "|---|---|---|---|\n", - "|**Short-term Memory**|Single conversational thread|Keeps track of recent interactions to provide immediate context|Remembering the last few messages in a chat|\n", - "|**Long-term Memory**|Shared across multiple threads and sessions|Stores critical or summarized information to maintain continuity across conversations|Storing user preferences, important facts, or conversation summaries|\n", + "|**Short-term Memory**|Single conversational thread|Maintains recent interactions to provide immediate context|Last few user prompts in the current conversation|\n", + "|**Long-term Memory**|Multiple threads & sessions|Stores key information and summarized data to maintain continuity across broader conversations|User preferences, important facts, or conversation history|\n", "\n", - "- **Short-term Memory**: Ideal for maintaining the flow within a single conversation, like tracking the most recent messages.\n", - "- **Long-term Memory**: Useful for creating 
personalized experiences by recalling past interactions, preferences, or key facts over time.\n", + "- **Short-term Memory**: Helps the system focus on the latest messages for immediate context.\n", + "- **Long-term Memory**: Enables the agent to recall **past sessions** and user-specific details, creating a more **persistent** experience over time.\n", "\n", "## Table of Contents\n", "\n", - "TODO: 1안, 2안 결정 + ### 제거\n", - "\n", - "### 1안, 2안 공통\n", "- [Overview](#overview)\n", - " - [What is Memory?](#what-is-memory)\n", - " - [Short-term vs Long-term Memory](#short-term-vs-long-term-memory)\n", + " \n", + "- [Table of Contents](#table-of-contents)\n", + " \n", "- [Environment Setup](#environment-setup)\n", - "\n", - "### 1안\n", - "\n", - "- [Construct Tool](#construct-tool)\n", - "- [Set Nodes for Memory Storage](#set-nodes-for-memory-storage)\n", - "- [Set Nodes for Agent](#set-nodes-for-agent)\n", - "- [Conditional Edge Logic](#conditional-edge-logic)\n", - "- [Load and Compile Graph](#load-and-compile-graph)\n", - "- [Visualize Graph](#visualize-graph)\n", - "- [Run Graph](#run-graph)\n", - "\n", - "### 2안\n", - "- [Database Configuration](#database-configuration)\n", - "- [Short Term Memory](#short-term-memory)\n", - " - [Checkpointer](#checkpointer)\n", - " - [Filtering Long Conversation History](#trimming-long-conversation-history)\n", - " - [Summarizing Conversations](#summarizing-conversations)\n", - "- [Long Term Memory](#long-term-memory)\n", - " - [Storing Key Information](#storing-key-information)\n", - "- [Memory types](#memory-types)\n", - " - [Semantic Memory](#semantic-memory)\n", - " - [Episodic Memory](#episodic-memory)\n", - " - [Procedural Memory](#proceduarl-memory)\n", - "- [Usecase Examples](#usecase-examples)\n", - " - [Adapting to User Preferences](#adapting-to-user-preferences)\n", - " - [LangGraph Flow Example](#langgraph-flow-example)\n", + " \n", + "- [Define System Prompt and 
Configuration](#define-system-prompt-and-configuration)\n", + " \n", + "- [Initialize LLM and Define State Class](#initialize-llm-and-define-state-class)\n", + " \n", + "- [Memory Upsert Function](#memory-upsert-function)\n", + " \n", + "- [Implement Conversation Flow (call_model, store_memory)](#implement-conversation-flow-call_model-store_memory)\n", + " \n", + "- [Define Conditional Edge Logic](#define-conditional-edge-logic)\n", + " \n", + "- [Build and Execute StateGraph](#build-and-execute-stategraph)\n", + " \n", + "- [Verify Results and View Stored Memories](#verify-results-and-view-stored-memories)\n", + " \n", "\n", "### References\n", "\n", "- [LangGraph: What-is-memory](https://langchain-ai.github.io/langgraph/concepts/memory/#what-is-memory)\n", "- [LangGraph: memory-template](https://github.com/langchain-ai/memory-template)\n", "- [LangChain-ai: memory-agent](https://github.com/langchain-ai/memory-agent)\n", - "\n", "----" ] }, @@ -116,12 +146,10 @@ "metadata": {}, "outputs": [ { - "name": "stderr", + "name": "stdout", "output_type": "stream", "text": [ - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n" + "No packages to install.\n" ] } ], @@ -130,15 +158,7 @@ "from langchain_opentutorial import package\n", "\n", "package.install(\n", - " [\n", - " \"langsmith\",\n", - " \"langchain\",\n", - " \"langchain_core\",\n", - " \"langchain_community\",\n", - " \"langchain_openai\",\n", - " \"langgraph\",\n", - " \"SQLAlchemy\",\n", - " ],\n", + " [],\n", " verbose=False,\n", " upgrade=False,\n", ")" @@ -167,7 +187,7 @@ " \"LANGCHAIN_API_KEY\": \"\",\n", " \"LANGCHAIN_TRACING_V2\": \"true\",\n", " \"LANGCHAIN_ENDPOINT\": 
\"https://api.smith.langchain.com\",\n", - " \"LANGCHAIN_PROJECT\": \"09-ConversationMemoryManagementSystem\",\n", + " \"LANGCHAIN_PROJECT\": \"ConversationMemoryManagementSystem\",\n", " }\n", ")" ] @@ -196,21 +216,25 @@ ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": 5, "metadata": {}, + "outputs": [], "source": [ - "## Database Configuration\n", + "# import for asynchronous tasks\n", + "import asyncio\n", + "import nest_asyncio\n", "\n", - "In this section, we will set up an **SQLite database** to store Short-term and Long-term Memory information for our chatbot. SQLite is a lightweight, serverless database that is easy to set up and ideal for prototyping or small-scale applications.\n" + "nest_asyncio.apply()" ] }, { - "cell_type": "code", - "execution_count": 5, + "cell_type": "markdown", "metadata": {}, - "outputs": [], "source": [ - "# SQLite Configuration" + "## Define System Prompt and Configuration\n", + "\n", + "This section introduces the `SYSTEM_PROMPT` and the `Configuration` class. They are essential for setting up the system’s behavior and managing environment variables (for example, choosing which language model to use). You can think of `Configuration` as the single source of truth for any settings your application might need." ] }, { @@ -219,108 +243,149 @@ "metadata": {}, "outputs": [], "source": [ - "# SQLite Table Configuration" + "# Define simple system prompt template used by the chatbot.\n", + "SYSTEM_PROMPT = \"\"\"You are a helpful and friendly chatbot. Get to know the user! \\\n", + "Ask questions! Be spontaneous! \n", + "{user_info}\n", + "\n", + "System Time: {time}\"\"\"" ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": 7, "metadata": {}, + "outputs": [], "source": [ - "## Short Term Memory\n", - "\n", - "Short-term memory lets your application remember previous interactions within a single thread or conversation. 
A thread organizes multiple interactions in a session, similar to the way email groups messages in a single conversation.\n", + "import os\n", + "from dataclasses import dataclass, field, fields\n", + "from typing import Any, Optional\n", "\n", - "LangGraph manages short-term memory as part of the agent's state, persisted via thread-scoped checkpoints. This state can normally include the conversation history along with other stateful data, such as uploaded files, retrieved documents, or generated artifacts. By storing these in the graph's state, the bot can access the full context for a given conversation while maintaining separation between different threads.\n", + "from langchain_core.runnables import RunnableConfig\n", + "from typing_extensions import Annotated\n", + "\n", + "\n", + "# Define the Configuration class to handle runtime settings, including user ID, model name, and system prompt\n", + "@dataclass(kw_only=True)\n", + "class Configuration:\n", + " \"\"\"Main configuration class for the memory graph system.\"\"\"\n", + "\n", + " user_id: str = \"default\"\n", + " \"\"\"The ID of the user to remember in the conversation.\"\"\"\n", + " model: Annotated[str, {\"__template_metadata__\": {\"kind\": \"llm\"}}] = field(\n", + " default=\"openai/gpt-4o\",\n", + " metadata={\n", + " \"description\": \"The name of the language model to use for the agent. 
\"\n", + " \"Should be in the form: provider/model-name.\"\n", + " },\n", + " )\n", + " system_prompt: str = SYSTEM_PROMPT\n", + "\n", + " @classmethod\n", + " def from_runnable_config(\n", + " cls, config: Optional[RunnableConfig] = None\n", + " ) -> \"Configuration\":\n", + " \"\"\"Create a Configuration instance from a RunnableConfig.\"\"\"\n", + " configurable = (\n", + " config[\"configurable\"] if config and \"configurable\" in config else {}\n", + " )\n", + " values: dict[str, Any] = {\n", + " f.name: os.environ.get(f.name.upper(), configurable.get(f.name))\n", + " for f in fields(cls)\n", + " if f.init\n", + " }\n", "\n", - "Since conversation history is the most common form of representing short-term memory, in the next section, we will cover techniques for managing conversation history when the list of messages becomes long. If you want to stick to the high-level concepts, continue on to the [Long Term Memory](#long-term-memory) section." + " return cls(**{k: v for k, v in values.items() if v})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### Checkpointer" + "## Initialize LLM and Define State Class\n", + "\n", + "In this part, we configure the `ChatOpenAI` model (using `model` and `temperature` settings) and introduce a `State` class. The `State` class holds the conversation messages, ensuring that **context** is retained and can be easily passed around. This lays the **foundation** for a conversational agent that genuinely “remembers” what has been said." 
] }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 8, "metadata": {}, "outputs": [], "source": [ - "from typing_extensions import TypedDict, Annotated\n", - "from langchain_core.messages import AnyMessage\n", - "from langgraph.graph.message import add_messages\n", - "from typing import Union\n", - "\n", - "\n", - "def manage_list(existing: list, updates: Union[list, dict]):\n", - " if isinstance(updates, list):\n", - " # Normal case, add to the history\n", - " return existing + updates\n", - " elif isinstance(updates, dict) and updates[\"type\"] == \"keep\":\n", - " # You get to decide what this looks like.\n", - " # For example, you could simplify and just accept a string \"DELETE\"\n", - " # and clear the entire list.\n", - " return existing[updates[\"from\"] : updates[\"to\"]]\n", - " # etc. We define how to interpret updates\n", - "\n", - "\n", - "# Define State\n", - "class FilterState(TypedDict):\n", - " messages: Annotated[list[AnyMessage], add_messages] # short term memory\n", - " summary: list[dict] # long term memory\n", - " my_list: Annotated[list, manage_list]" + "# Import and initialize the OpenAI-based LLM\n", + "from langchain.chat_models import init_chat_model\n", + "\n", + "llm = init_chat_model()" ] }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 9, "metadata": {}, "outputs": [], "source": [ - "# Load Checkpointer, In-emory storate\n", - "from langgraph.store.memory import InMemoryStore\n", + "from langchain_core.messages import AnyMessage\n", + "from langgraph.graph import add_messages\n", + "from typing_extensions import Annotated\n", + "from dataclasses import dataclass\n", + "\n", "\n", - "in_memory_store = InMemoryStore()" + "# Define the State class to store the list of messages in the conversation\n", + "@dataclass(kw_only=True)\n", + "class State:\n", + " \"\"\"Main graph state.\"\"\"\n", + "\n", + " messages: Annotated[list[AnyMessage], add_messages]\n", + " \"\"\"The messages in the 
conversation.\"\"\"" ] }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 10, "metadata": {}, "outputs": [], "source": [ - "import uuid\n", - "from langchain_openai import ChatOpenAI\n", - "from langchain_core.runnables import RunnableConfig\n", - "from langgraph.graph import StateGraph, MessagesState, START, END\n", - "from langgraph.store.base import BaseStore\n", - "from typing import Annotated, Optional" + "# Define a utility function to split model provider and model name from a string\n", + "def split_model_and_provider(fully_specified_name: str) -> dict:\n", + " \"\"\"Initialize the configured chat model.\"\"\"\n", + " if \"/\" in fully_specified_name:\n", + " provider, model = fully_specified_name.split(\"/\", maxsplit=1)\n", + " else:\n", + " provider = None\n", + " model = fully_specified_name\n", + " return {\"model\": model, \"provider\": provider}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Construct Tool" + "## Memory Upsert Function\n", + "\n", + "Here, we focus on the `upsert_memory` function. This function is responsible for storing or updating (**upserting**) user-specific data. By preserving user context across conversations—like interests, preferences, or corrections—you can give your application a more **persistent and personalized** feel." 
] }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 11, "metadata": {}, "outputs": [], "source": [ - "from langchain_core.tools import InjectedToolArg, tool\n", + "import uuid\n", + "from typing import Annotated, Optional\n", + "\n", + "from langchain_core.runnables import RunnableConfig\n", + "from langchain_core.tools import InjectedToolArg\n", + "from langgraph.store.base import BaseStore\n", "\n", "\n", - "@tool\n", - "def upsert_memory(\n", + "# Define a function to upsert (create or update) memory in the database\n", + "async def upsert_memory(\n", " content: str,\n", " context: str,\n", - " memory_id: Optional[str] = None,\n", " *,\n", + " memory_id: Optional[uuid.UUID] = None,\n", " config: Annotated[RunnableConfig, InjectedToolArg],\n", " store: Annotated[BaseStore, InjectedToolArg],\n", "):\n", @@ -339,66 +404,27 @@ " The memory to overwrite.\n", " \"\"\"\n", " mem_id = memory_id or uuid.uuid4()\n", - " user_id = config[\"configurable\"][\"user_id\"]\n", - " store.put(\n", + " user_id = Configuration.from_runnable_config(config).user_id\n", + " await store.aput(\n", " (\"memories\", user_id),\n", " key=str(mem_id),\n", " value={\"content\": content, \"context\": context},\n", " )\n", - " return f\"Stored memory {content}\"" + " return f\"Stored memory {mem_id}\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Set Nodes for Memory Storage" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": {}, - "outputs": [], - "source": [ - "def store_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):\n", - " # Extract tool calls from the last message\n", - " tool_calls = state[\"messages\"][-1].tool_calls\n", - " saved_memories = []\n", - " for tc in tool_calls:\n", - " content = tc[\"args\"][\"content\"]\n", - " context = tc[\"args\"][\"context\"]\n", - " saved_memories.append(\n", - " [\n", - " upsert_memory.invoke(\n", - " {\n", - " \"content\": content,\n", - " \"context\": 
context,\n", - " \"config\": config,\n", - " \"store\": store,\n", - " }\n", - " )\n", - " ]\n", - " )\n", - " print(\"saved_memories: \", saved_memories)\n", + "## Implement Conversation Flow (call_model, store_memory)\n", "\n", - " results = [\n", - " {\n", - " \"role\": \"tool\",\n", - " \"content\": mem[0],\n", - " \"tool_call_id\": tc[\"id\"],\n", - " }\n", - " for tc, mem in zip(tool_calls, saved_memories)\n", - " ]\n", - " print(results)\n", - " return {\"messages\": results[0]}" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Set Nodes for Agent" + "Next, we implement two important functions for our conversation flow:\n", + "\n", + "1. `call_model`: Takes the current conversation `State`, retrieves relevant memories, and then sends them along with user messages to the LLM.\n", + "2. `store_memory`: Processes the model’s **tool calls**—in this case, requests to store data—and updates the memory store accordingly.\n", + "\n", + "By combining these two functions, the model not only uses past **context** but also augments it with new information in real time." 
] }, { @@ -407,7 +433,46 @@ "metadata": {}, "outputs": [], "source": [ - "model = ChatOpenAI(model=\"gpt-4o\", temperature=0.7, streaming=True)" + "from datetime import datetime\n", + "from langgraph.graph import StateGraph, START, END\n", + "from langchain_core.runnables import RunnableConfig\n", + "from langgraph.store.base import BaseStore\n", + "\n", + "\n", + "# Define function to process the user's state and update memory based on conversation context\n", + "async def call_model(state: State, config: RunnableConfig, *, store: BaseStore) -> dict:\n", + " \"\"\"Extract the user's state from the conversation and update the memory.\"\"\"\n", + " configurable = Configuration.from_runnable_config(config)\n", + "\n", + " # Retrieve the most recent memories for context\n", + " memories = await store.asearch(\n", + " (\"memories\", configurable.user_id),\n", + " query=str([m.content for m in state.messages[-3:]]),\n", + " limit=10,\n", + " )\n", + "\n", + " # Format memories for inclusion in the prompt\n", + " formatted = \"\\n\".join(\n", + " f\"[{mem.key}]: {mem.value} (similarity: {mem.score})\" for mem in memories\n", + " )\n", + " if formatted:\n", + " formatted = f\"\"\"\n", + "\n", + "{formatted}\n", + "\"\"\"\n", + "\n", + " # Prepare the system prompt with user memories and current time\n", + " sys = configurable.system_prompt.format(\n", + " user_info=formatted, time=datetime.now().isoformat()\n", + " )\n", + " print(\"system_msg:\", sys)\n", + "\n", + " # Invoke the language model with the prepared prompt and tools\n", + " msg = await llm.bind_tools([upsert_memory]).ainvoke(\n", + " [{\"role\": \"system\", \"content\": sys}, *state.messages],\n", + " {\"configurable\": split_model_and_provider(configurable.model)},\n", + " )\n", + " return {\"messages\": [msg]}" ] }, { @@ -416,34 +481,33 @@ "metadata": {}, "outputs": [], "source": [ - "def call_model(state: MessagesState, config: RunnableConfig, *, store: BaseStore):\n", - " user_id = 
config[\"configurable\"][\"user_id\"]\n", - " namespace = (\"memories\", user_id)\n", - " memories = store.search(namespace)\n", - " info = \"\\n\".join(f\"[{mem.key}]: {mem.value}\" for mem in memories)\n", - " if info:\n", - " info = f\"\"\"\n", - " \n", - " {info}\n", - " \"\"\"\n", - "\n", - " system_msg = f\"\"\"You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\n", - " User context info: {info}\"\"\"\n", - " print(\"system_msg:\", system_msg)\n", - " # Store new memories if the user asks the model to remember\n", - " last_message = state[\"messages\"][-1]\n", - " print([{\"type\": \"system\", \"content\": system_msg}] + state[\"messages\"])\n", - " response = model.bind_tools([upsert_memory]).invoke(\n", - " [{\"type\": \"system\", \"content\": system_msg}] + state[\"messages\"]\n", + "# Define function to process tool calls and store memories in the memory store\n", + "async def store_memory(state: State, config: RunnableConfig, *, store: BaseStore):\n", + " # Extract tool calls from the last message\n", + " tool_calls = state.messages[-1].tool_calls\n", + "\n", + " # Concurrently execute all upsert_memory calls\n", + " saved_memories = await asyncio.gather(\n", + " *(upsert_memory(**tc[\"args\"], config=config, store=store) for tc in tool_calls)\n", " )\n", - " return {\"messages\": response}" + "\n", + " # Format the results of memory storage operations\n", + " results = [\n", + " {\n", + " \"role\": \"tool\",\n", + " \"content\": mem,\n", + " \"tool_call_id\": tc[\"id\"],\n", + " }\n", + " for tc, mem in zip(tool_calls, saved_memories)\n", + " ]\n", + " return {\"messages\": results}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Conditional Edge Logic" + "## Define Conditional Edge Logic" ] }, { @@ -452,13 +516,14 @@ "metadata": {}, "outputs": [], "source": [ - "def route_message(state: 
MessagesState):\n", + "# Define a function to determine the next step in the conversation flow\n", + "def route_message(state: State):\n", " \"\"\"Determine the next step based on the presence of tool calls.\"\"\"\n", - " msg = state[\"messages\"][-1]\n", + " msg = state.messages[-1]\n", " if msg.tool_calls:\n", - " # If there are tool calls, we need to store memories\n", + " # Route to store_memory if there are tool calls\n", " return \"store_memory\"\n", - " # Otherwise, finish; user can send the next message\n", + " # Otherwise, finish\n", " return END" ] }, @@ -466,7 +531,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Load and Compile Graph" + "## Build and Execute StateGraph\n", + "\n", + "In this section, we construct a `StateGraph` to define the flow of the conversation. We specify which node (for instance, `call_model`) leads to which next step (for example, `store_memory`). Once the graph is set, we run sample conversations to see how the system **dynamically** manages user input, retrieves relevant memories, and updates them when necessary." 
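The node-and-edge wiring described above can be illustrated independently of the notebook. The following is a concept sketch only, with no LangGraph dependency: a plain-Python toy that mimics the flow `call_model` → (conditional) `store_memory` → `call_model` → end. All names and the fake "tool call" state are illustrative stand-ins, not the notebook's actual API.

```python
# Concept sketch of conditional routing between graph nodes.
# Plain Python only; names mirror the notebook's nodes but the
# state shape and "tool call" mechanics are invented for illustration.
END = "__end__"

def call_model(state):
    # Pretend the model emits a tool call whenever there is a new fact,
    # and a final answer once nothing is left to store.
    if state["facts_to_store"]:
        state["tool_calls"] = [state["facts_to_store"].pop()]
    else:
        state["tool_calls"] = []
        state["messages"].append("final answer")
    return state

def store_memory(state):
    # Persist pending tool calls into long-term "memories".
    state["memories"].extend(state["tool_calls"])
    state["tool_calls"] = []
    return state

def route_message(state):
    # Conditional edge: store if the model asked to, otherwise finish.
    return "store_memory" if state["tool_calls"] else END

def run_graph(state):
    node = "call_model"
    nodes = {"call_model": call_model, "store_memory": store_memory}
    while node != END:
        state = nodes[node](state)
        # Conditional edge after call_model; fixed edge back after store_memory.
        node = route_message(state) if node == "call_model" else "call_model"
    return state

state = run_graph(
    {"facts_to_store": ["user is Charlie"], "memories": [], "messages": [], "tool_calls": []}
)
print(state["memories"])  # -> ['user is Charlie']
print(state["messages"])  # -> ['final answer']
```

The loop makes the routing explicit: after a memory is stored, control returns to the model so its final answer can reflect the stored fact.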
] }, { @@ -475,23 +542,20 @@ "metadata": {}, "outputs": [], "source": [ - "builder = StateGraph(MessagesState)\n", + "# Initialize and define the StateGraph, specifying nodes and edges for conversation flow\n", + "builder = StateGraph(State, config_schema=Configuration)\n", "\n", - "builder.add_node(\"call_model\", call_model)\n", + "# Define the flow of the memory extraction process\n", + "builder.add_node(call_model)\n", + "builder.add_edge(\"__start__\", \"call_model\")\n", "builder.add_node(store_memory)\n", - "\n", - "builder.add_edge(START, \"call_model\")\n", "builder.add_conditional_edges(\"call_model\", route_message, [\"store_memory\", END])\n", + "# After storing a memory, route back to call_model so the model\n", + "# can generate a response that incorporates the newly stored\n", + "# information before control returns to the user\n", "builder.add_edge(\"store_memory\", \"call_model\")\n", - "\n", - "graph = builder.compile(store=in_memory_store)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Visualize Graph" + "graph = builder.compile()\n", + "graph.name = \"MemoryAgent\"" ] }, { @@ -514,6 +578,7 @@ "from IPython.display import Image, display\n", "from langchain_core.runnables.graph import CurveStyle, MermaidDrawMethod, NodeStyles\n", "\n", + "# Visualize the compiled StateGraph as a Mermaid diagram\n", "display(\n", " Image(\n", " graph.get_graph().draw_mermaid_png(\n", @@ -527,13 +592,54 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Run graph" + "## Verify Results and View Stored Memories\n", + "\n", + "Finally, we examine the stored memories to confirm that our system has correctly captured the user’s context. You can look into the final conversation state (using `graph.get_state`) and see how messages and memories have been organized. This is a great point to do some **debugging** if anything seems amiss, ensuring that your memory mechanism works just as intended."
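Before inspecting the real store, the layout being verified here can be sketched with a plain dictionary: memories live under a `(namespace, key)` pair, where the namespace is `("memories", user_id)`. This toy is a hypothetical stand-in for the behavior relied on below, not the real `InMemoryStore` API; the helper names `upsert` and `search` are invented for illustration.

```python
# Toy namespaced store: namespace tuple -> {key: value}.
# Illustrates why searching ("memories", user_id) only returns
# that user's memories; not the actual LangGraph store interface.
import uuid
from collections import defaultdict

store = defaultdict(dict)

def upsert(namespace, value, key=None):
    # New memories get a generated key; passing an existing key updates it.
    key = key or str(uuid.uuid4())
    store[namespace][key] = value
    return key

def search(namespace):
    # Return all (key, value) pairs stored under one namespace.
    return list(store[namespace].items())

ns = ("memories", "test-user")
upsert(ns, {"content": "Charlie is a software engineer."})
upsert(("memories", "other-user"), {"content": "unrelated"})

print(len(search(ns)))  # -> 1: only test-user's memory is returned
```

Scoping every read and write by `user_id` like this is what lets long-term memory persist across threads without leaking between users.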
] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, + "outputs": [], + "source": [ + "# Prepare a sample conversation to test the memory agent\n", + "\n", + "conversation = [\n", + " \"Hello, I'm Charlie. I work as a software engineer and I'm passionate about AI. Remember this.\",\n", + " \"I specialize in machine learning algorithms and I'm currently working on a project involving natural language processing.\",\n", + " \"My main goal is to improve sentiment analysis accuracy in multi-lingual texts. It's challenging but exciting.\",\n", + " \"We've made some progress using transformer models, but we're still working on handling context and idioms across languages.\",\n", + " \"Chinese and English have been the most challenging pair so far due to their vast differences in structure and cultural contexts.\",\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [], + "source": [ + "from langgraph.checkpoint.memory import MemorySaver\n", + "from langgraph.store.memory import InMemoryStore\n", + "\n", + "# Initialize an in-memory store and compile the graph with a memory saver checkpoint\n", + "mem_store = InMemoryStore()\n", + "\n", + "graph = builder.compile(store=mem_store, checkpointer=MemorySaver())\n", + "user_id = \"test-user\" # temporary user ID for testing\n", + "config = {\n", + " \"configurable\": {\n", + " \"thread_id\": 1, # temporary thread ID for testing\n", + " },\n", + " \"user_id\": user_id,\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, "outputs": [ { "name": "stdout", @@ -541,43 +647,237 @@ "text": [ "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", - "Hi! My name is LangChain. I love keep updated on Latest Tech\n", - "system_msg: You are a helpful assistant talking to the user. 
You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\n", - " User context info: \n", - "[{'type': 'system', 'content': 'You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\\n User context info: '}, HumanMessage(content='Hi! My name is LangChain. I love keep updated on Latest Tech', additional_kwargs={}, response_metadata={}, id='2e3fa22d-fb38-496b-9fc1-355cbb716077')]\n", + "Hello, I'm Charlie. I work as a software engineer and I'm passionate about AI. Remember this.\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! \n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:11.951177\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "Tool Calls:\n", - " upsert_memory (call_HzLinSecG8ZvN2JY2Ul9GzE3)\n", - " Call ID: call_HzLinSecG8ZvN2JY2Ul9GzE3\n", + " upsert_memory (call_f3b7nFwOGZZtCzSfqcRyUqVo)\n", + " Call ID: call_f3b7nFwOGZZtCzSfqcRyUqVo\n", " Args:\n", - " content: User loves staying updated on the latest tech.\n", - " context: User introduced themselves as LangChain and expressed their interest in technology.\n", - "saved_memories: [['Stored memory User loves staying updated on the latest tech.']]\n", - "[{'role': 'tool', 'content': 'Stored memory User loves staying updated on the latest tech.', 'tool_call_id': 'call_HzLinSecG8ZvN2JY2Ul9GzE3'}]\n", + " content: Charlie is a software engineer and passionate about AI.\n", + " context: Charlie introduced themselves and shared their profession and interest.\n", "=================================\u001b[1m Tool Message \u001b[0m=================================\n", "\n", - "Stored memory User loves staying updated on the latest tech.\n", - "system_msg: You are a helpful 
assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\n", - " User context info: \n", - " \n", - " [0056b832-4cfc-42f5-8da6-da0212ed38c1]: {'content': 'User loves staying updated on the latest tech.', 'context': 'User introduced themselves as LangChain and expressed their interest in technology.'}\n", - " \n", - "[{'type': 'system', 'content': \"You are a helpful assistant talking to the user. You must decide whether to store information as memory from list of messages and then answer the user query or directly answer the user query\\n User context info: \\n \\n [0056b832-4cfc-42f5-8da6-da0212ed38c1]: {'content': 'User loves staying updated on the latest tech.', 'context': 'User introduced themselves as LangChain and expressed their interest in technology.'}\\n \"}, HumanMessage(content='Hi! My name is LangChain. I love keep updated on Latest Tech', additional_kwargs={}, response_metadata={}, id='2e3fa22d-fb38-496b-9fc1-355cbb716077'), AIMessage(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_HzLinSecG8ZvN2JY2Ul9GzE3', 'function': {'arguments': '{\"content\":\"User loves staying updated on the latest tech.\",\"context\":\"User introduced themselves as LangChain and expressed their interest in technology.\"}', 'name': 'upsert_memory'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_5f20662549'}, id='run-fc760f0e-afa9-48c0-b64a-48eefc62f4bf-0', tool_calls=[{'name': 'upsert_memory', 'args': {'content': 'User loves staying updated on the latest tech.', 'context': 'User introduced themselves as LangChain and expressed their interest in technology.'}, 'id': 'call_HzLinSecG8ZvN2JY2Ul9GzE3', 'type': 'tool_call'}]), ToolMessage(content='Stored memory User loves staying updated on the latest tech.', id='54ab0feb-5e6f-44db-9609-94ee4100322f', 
tool_call_id='call_HzLinSecG8ZvN2JY2Ul9GzE3')]\n", + "Stored memory 83b21302-d28d-4a7d-b0a9-45f6ab790e37\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! \n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:13.275082\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "Hello Charlie! It's great to meet you. I've noted that you're a software engineer with a passion for AI. What kind of AI projects are you working on or interested in?\n", + "================================\u001b[1m Human Message \u001b[0m=================================\n", + "\n", + "I specialize in machine learning algorithms and I'm currently working on a project involving natural language processing.\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! 
\n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:14.199902\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "Tool Calls:\n", + " upsert_memory (call_Ti9pY6RDSywIrTEw50FX1tBA)\n", + " Call ID: call_Ti9pY6RDSywIrTEw50FX1tBA\n", + " Args:\n", + " content: Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.\n", + " context: Charlie shared their area of specialization and current project focus.\n", + "=================================\u001b[1m Tool Message \u001b[0m=================================\n", + "\n", + "Stored memory 1d086522-0510-43df-b119-96e66bc8f9d0\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! \n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "[1d086522-0510-43df-b119-96e66bc8f9d0]: {'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:15.287356\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "That sounds fascinating! Natural language processing is such an intriguing field. 
Are there any particular challenges or goals you're focusing on in your current project?\n", + "================================\u001b[1m Human Message \u001b[0m=================================\n", + "\n", + "My main goal is to improve sentiment analysis accuracy in multi-lingual texts. It's challenging but exciting.\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! \n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "[1d086522-0510-43df-b119-96e66bc8f9d0]: {'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:16.151716\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "Tool Calls:\n", + " upsert_memory (call_bSldwiluscIOp2RmsiHacmHc)\n", + " Call ID: call_bSldwiluscIOp2RmsiHacmHc\n", + " Args:\n", + " content: Charlie's main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\n", + " context: Charlie described the primary goal and challenge of their current project involving natural language processing.\n", + "=================================\u001b[1m Tool Message \u001b[0m=================================\n", + "\n", + "Stored memory 427e0614-8768-44f4-8a01-f01c3cf933d3\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! 
\n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "[1d086522-0510-43df-b119-96e66bc8f9d0]: {'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'} (similarity: None)\n", + "[427e0614-8768-44f4-8a01-f01c3cf933d3]: {'content': \"Charlie's main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\", 'context': 'Charlie described the primary goal and challenge of their current project involving natural language processing.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:18.056567\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "Improving sentiment analysis accuracy in multi-lingual texts sounds like a challenging yet rewarding task. The nuances of different languages can definitely make it complex. How do you tackle the challenges of working with multiple languages? Are there any particular techniques or tools you find especially helpful?\n", + "================================\u001b[1m Human Message \u001b[0m=================================\n", + "\n", + "We've made some progress using transformer models, but we're still working on handling context and idioms across languages.\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! 
\n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "[1d086522-0510-43df-b119-96e66bc8f9d0]: {'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'} (similarity: None)\n", + "[427e0614-8768-44f4-8a01-f01c3cf933d3]: {'content': \"Charlie's main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\", 'context': 'Charlie described the primary goal and challenge of their current project involving natural language processing.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:19.171699\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", - "Hello LangChain! It's great to meet someone who's passionate about staying updated on the latest tech. How can I assist you today?\n" + "Transformer models are indeed powerful for capturing context, but idioms can be tricky since they often don't translate directly. It's impressive that you're tackling such a complex issue! Are there any specific languages you're focusing on, or are you working with a broad range?\n", + "================================\u001b[1m Human Message \u001b[0m=================================\n", + "\n", + "Chinese and English have been the most challenging pair so far due to their vast differences in structure and cultural contexts.\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! 
\n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "[1d086522-0510-43df-b119-96e66bc8f9d0]: {'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'} (similarity: None)\n", + "[427e0614-8768-44f4-8a01-f01c3cf933d3]: {'content': \"Charlie's main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\", 'context': 'Charlie described the primary goal and challenge of their current project involving natural language processing.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:20.297475\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "Chinese and English do pose a unique set of challenges due to their linguistic and cultural differences. It's intriguing to see how you're addressing these complexities. 
Have you found any particular strategies effective in bridging these differences, or are there cultural nuances that you're still exploring?\n" + ] + } + ], + "source": [ + "# Process each message in the conversation asynchronously using the compiled graph\n", + "for content in conversation:\n", + " async for chunk in graph.astream(\n", + " {\"messages\": [(\"user\", content)]},\n", + " config=config,\n", + " stream_mode=\"values\",\n", + " ):\n", + " chunk[\"messages\"][-1].pretty_print()" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[Item(namespace=['memories', 'test-user'], key='83b21302-d28d-4a7d-b0a9-45f6ab790e37', value={'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'}, created_at='2025-01-12T06:54:13.274005+00:00', updated_at='2025-01-12T06:54:13.274006+00:00', score=None),\n", + " Item(namespace=['memories', 'test-user'], key='1d086522-0510-43df-b119-96e66bc8f9d0', value={'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'}, created_at='2025-01-12T06:54:15.285755+00:00', updated_at='2025-01-12T06:54:15.285763+00:00', score=None),\n", + " Item(namespace=['memories', 'test-user'], key='427e0614-8768-44f4-8a01-f01c3cf933d3', value={'content': \"Charlie's main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\", 'context': 'Charlie described the primary goal and challenge of their current project involving natural language processing.'}, created_at='2025-01-12T06:54:18.054776+00:00', updated_at='2025-01-12T06:54:18.054780+00:00', score=None)]\n" + ] + } + ], + "source": [ + "from pprint import pprint\n", + "\n", + "# Search and 
check stored memories for the user\n", + "namespace = (\"memories\", user_id)\n", + "memories = mem_store.search(namespace)\n", + "pprint(memories)" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "================================\u001b[1m Human Message \u001b[0m=================================\n", + "\n", + "Whats my name?\n", + "system_msg: You are a helpful and friendly chatbot. Get to know the user! Ask questions! Be spontaneous! \n", + "\n", + "\n", + "[83b21302-d28d-4a7d-b0a9-45f6ab790e37]: {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'} (similarity: None)\n", + "[1d086522-0510-43df-b119-96e66bc8f9d0]: {'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'} (similarity: None)\n", + "[427e0614-8768-44f4-8a01-f01c3cf933d3]: {'content': \"Charlie's main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\", 'context': 'Charlie described the primary goal and challenge of their current project involving natural language processing.'} (similarity: None)\n", + "\n", + "\n", + "System Time: 2025-01-12T15:54:21.511105\n", + "==================================\u001b[1m Ai Message \u001b[0m==================================\n", + "\n", + "Your name is Charlie.\n" ] } ], "source": [ - "config = {\"configurable\": {\"thread_id\": \"1\", \"user_id\": \"1\"}}\n", - "input_message = {\n", - " \"type\": \"user\",\n", - " \"content\": \"Hi! My name is LangChain. 
I love keep updated on Latest Tech\",\n", - "}\n", - "for chunk in graph.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", + "# Test memory recall by asking a question. Notice the memory the agent recalls.\n", + "async for chunk in graph.astream(\n", + " {\"messages\": [(\"user\", \"Whats my name?\")]},\n", + " config=config,\n", + " stream_mode=\"values\",\n", + "):\n", " chunk[\"messages\"][-1].pretty_print()" ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[HumanMessage(content=\"Hello, I'm Charlie. I work as a software engineer and I'm passionate about AI. Remember this.\", additional_kwargs={}, response_metadata={}, id='58ecf148-30bf-402e-8554-707cda5df3e5'),\n", + " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f3b7nFwOGZZtCzSfqcRyUqVo', 'function': {'arguments': '{\"content\":\"Charlie is a software engineer and passionate about AI.\",\"context\":\"Charlie introduced themselves and shared their profession and interest.\"}', 'name': 'upsert_memory'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 233, 'total_tokens': 270, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-c10af4fa-75c9-4a8d-8acc-c9ad89343b42-0', tool_calls=[{'name': 'upsert_memory', 'args': {'content': 'Charlie is a software engineer and passionate about AI.', 'context': 'Charlie introduced themselves and shared their profession and interest.'}, 'id': 'call_f3b7nFwOGZZtCzSfqcRyUqVo', 'type': 'tool_call'}], usage_metadata={'input_tokens': 233, 'output_tokens': 37, 'total_tokens': 270, 'input_token_details': 
{'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " ToolMessage(content='Stored memory 83b21302-d28d-4a7d-b0a9-45f6ab790e37', id='e33883d4-617f-4263-8ce5-2f452cec0839', tool_call_id='call_f3b7nFwOGZZtCzSfqcRyUqVo'),\n", + " AIMessage(content=\"Hello Charlie! It's great to meet you. I've noted that you're a software engineer with a passion for AI. What kind of AI projects are you working on or interested in?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 374, 'total_tokens': 411, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'stop', 'logprobs': None}, id='run-71394dc8-9891-4991-a712-4687219040dc-0', usage_metadata={'input_tokens': 374, 'output_tokens': 37, 'total_tokens': 411, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " HumanMessage(content=\"I specialize in machine learning algorithms and I'm currently working on a project involving natural language processing.\", additional_kwargs={}, response_metadata={}, id='8f044596-eeff-40f4-92a8-24fa751af3ba'),\n", + " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ti9pY6RDSywIrTEw50FX1tBA', 'function': {'arguments': '{\"content\":\"Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.\",\"context\":\"Charlie shared their area of specialization and current project focus.\"}', 'name': 'upsert_memory'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 46, 'prompt_tokens': 435, 'total_tokens': 481, 'completion_tokens_details': {'accepted_prediction_tokens': 
0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-8f6e1dab-c3f7-4a06-9af6-819a76d1177f-0', tool_calls=[{'name': 'upsert_memory', 'args': {'content': 'Charlie specializes in machine learning algorithms and is currently working on a project involving natural language processing.', 'context': 'Charlie shared their area of specialization and current project focus.'}, 'id': 'call_Ti9pY6RDSywIrTEw50FX1tBA', 'type': 'tool_call'}], usage_metadata={'input_tokens': 435, 'output_tokens': 46, 'total_tokens': 481, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " ToolMessage(content='Stored memory 1d086522-0510-43df-b119-96e66bc8f9d0', id='563e8710-170b-446e-bee5-a0bdd1c91cce', tool_call_id='call_Ti9pY6RDSywIrTEw50FX1tBA'),\n", + " AIMessage(content=\"That sounds fascinating! Natural language processing is such an intriguing field. 
Are there any particular challenges or goals you're focusing on in your current project?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 30, 'prompt_tokens': 582, 'total_tokens': 612, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'stop', 'logprobs': None}, id='run-5abf4334-5fa2-4dbe-93bd-4cb6b3110e58-0', usage_metadata={'input_tokens': 582, 'output_tokens': 30, 'total_tokens': 612, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " HumanMessage(content=\"My main goal is to improve sentiment analysis accuracy in multi-lingual texts. It's challenging but exciting.\", additional_kwargs={}, response_metadata={}, id='c6a9869a-3d21-491f-9fd4-59eed893ab4c'),\n", + " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_bSldwiluscIOp2RmsiHacmHc', 'function': {'arguments': '{\"content\":\"Charlie\\'s main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\",\"context\":\"Charlie described the primary goal and challenge of their current project involving natural language processing.\"}', 'name': 'upsert_memory'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 639, 'total_tokens': 693, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-4a927456-ea41-4ec6-af8a-936f340bc8e8-0', tool_calls=[{'name': 
'upsert_memory', 'args': {'content': \"Charlie's main goal in their current project is to improve sentiment analysis accuracy in multi-lingual texts.\", 'context': 'Charlie described the primary goal and challenge of their current project involving natural language processing.'}, 'id': 'call_bSldwiluscIOp2RmsiHacmHc', 'type': 'tool_call'}], usage_metadata={'input_tokens': 639, 'output_tokens': 54, 'total_tokens': 693, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " ToolMessage(content='Stored memory 427e0614-8768-44f4-8a01-f01c3cf933d3', id='1cc43a3f-bd34-4cc4-a0d9-e12b241a92be', tool_call_id='call_bSldwiluscIOp2RmsiHacmHc'),\n", + " AIMessage(content='Improving sentiment analysis accuracy in multi-lingual texts sounds like a challenging yet rewarding task. The nuances of different languages can definitely make it complex. How do you tackle the challenges of working with multiple languages? Are there any particular techniques or tools you find especially helpful?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 56, 'prompt_tokens': 804, 'total_tokens': 860, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'stop', 'logprobs': None}, id='run-6d35cedc-258d-4154-b6d3-8f2745e05602-0', usage_metadata={'input_tokens': 804, 'output_tokens': 56, 'total_tokens': 860, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " HumanMessage(content=\"We've made some progress using transformer models, but we're still working on handling context and idioms across languages.\", additional_kwargs={}, response_metadata={}, 
id='b9ac5c3b-ad4d-4e60-bced-ec0de8a71024'),\n", + " AIMessage(content=\"Transformer models are indeed powerful for capturing context, but idioms can be tricky since they often don't translate directly. It's impressive that you're tackling such a complex issue! Are there any specific languages you're focusing on, or are you working with a broad range?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 52, 'prompt_tokens': 887, 'total_tokens': 939, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'stop', 'logprobs': None}, id='run-858ab2a9-9b1c-4a69-a311-3eb59ac4e358-0', usage_metadata={'input_tokens': 887, 'output_tokens': 52, 'total_tokens': 939, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " HumanMessage(content='Chinese and English have been the most challenging pair so far due to their vast differences in structure and cultural contexts.', additional_kwargs={}, response_metadata={}, id='e7975af6-0cef-4fba-b164-b50bfd9a2b9e'),\n", + " AIMessage(content=\"Chinese and English do pose a unique set of challenges due to their linguistic and cultural differences. It's intriguing to see how you're addressing these complexities. 
Have you found any particular strategies effective in bridging these differences, or are there cultural nuances that you're still exploring?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 52, 'prompt_tokens': 967, 'total_tokens': 1019, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'stop', 'logprobs': None}, id='run-a65311a7-e301-4422-b2b0-ceaf5b8f230a-0', usage_metadata={'input_tokens': 967, 'output_tokens': 52, 'total_tokens': 1019, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n", + " HumanMessage(content='Whats my name?', additional_kwargs={}, response_metadata={}, id='21b8072f-ee35-45f3-8558-63a726605f67'),\n", + " AIMessage(content='Your name is Charlie.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 1029, 'total_tokens': 1036, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_703d4ff298', 'finish_reason': 'stop', 'logprobs': None}, id='run-15b1800c-2b57-445f-8d6a-5bfdd2e17424-0', usage_metadata={'input_tokens': 1029, 'output_tokens': 7, 'total_tokens': 1036, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]" + ] + }, + "execution_count": 22, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Retrieve and print the final state of the conversation\n", + "graph.get_state(config).values[\"messages\"]" + ] } ], "metadata": { From 
ec0d89b2323b14e58d87f6770145c9a63993ae7a Mon Sep 17 00:00:00 2001 From: syshin0116 Date: Wed, 15 Jan 2025 22:49:29 +0900 Subject: [PATCH 4/6] Add rationale for using LangGraph's checkpointer MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...9-ConversationMemoryManagementSystem.ipynb | 28 +++++++++++++------ 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb index 3319a9bb1..6765260cf 100644 --- a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb +++ b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb @@ -66,15 +66,6 @@ "- **Relevance Filtering**: Not all past information is useful for every interaction. Filtering out irrelevant data while retaining critical context is key.\n", "- **Data Privacy**: Long-term memory must comply with privacy regulations, ensuring sensitive data is securely handled and stored.\n", "\n", - "#### Example Use Cases\n", - "\n", - "|**Scenario**|**How Long-term Memory Helps**|\n", - "|---|---|\n", - "|**Personalized Assistant**|Storing user preferences for better recommendations over time.|\n", - "|**Customer Support**|Remembering prior issues and solutions to streamline support.|\n", - "|**Learning Systems**|Tracking progress in an educational setting for continuity.|\n", - "\n", - "By incorporating long-term memory, your system can provide **intelligent, context-aware, and adaptive responses** that enhance the overall user experience. 
In the next sections, we will implement this functionality and explore how it integrates into the overall conversation flow.\n", "\n", "### Short-term vs Long-term Memory\n", "\n", @@ -86,6 +77,25 @@ "- **Short-term Memory**: Helps the system focus on the latest messages for immediate context.\n", "- **Long-term Memory**: Enables the agent to recall **past sessions** and user-specific details, creating a more **persistent** experience over time.\n", "\n", + "### Why use LangGraph's checkpointer?\n", + "\n", + "1. Session Memory & Error Recovery\n", + " \n", + " - Lets you roll back to a previous checkpoint if an error occurs or if you want to resume from a saved state\n", + " - Maintains context across conversations for a more seamless user experience\n", + "2. Flexible Database Options & Scalability\n", + " \n", + " - Supports in-memory, SQLite, Postgres, and more, allowing easy scaling as your user base grows\n", + " - Choose the storage method that best fits your project’s needs\n", + "3. Human-in-the-Loop & Time Travel\n", + " \n", + " - Pause workflows for human review, then resume where you left off\n", + " - Go back to earlier states (“time travel”) to debug or create alternative paths\n", + "4. Ecosystem & Customization\n", + " \n", + " - `LangGraph` v0.2 offers separate checkpointer libraries (e.g., `MemorySaver`, `SqliteSaver`, `PostgresSaver`)\n", + " - Easily build or adapt custom solutions for specific databases or workflows\n", + "\n", "## Table of Contents\n", "\n", "- [Overview](#overview)\n", From 50083ef2925dba3a94faee5364a5109e1253459c Mon Sep 17 00:00:00 2001 From: syshin0116 Date: Sat, 18 Jan 2025 22:58:36 +0900 Subject: [PATCH 5/6] add 'You can alternatively set API keys such as ...' 
block --- .../09-ConversationMemoryManagementSystem.ipynb | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb index 6765260cf..d9b67eeec 100644 --- a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb +++ b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb @@ -202,6 +202,15 @@ ")" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can alternatively set API keys such as `OPENAI_API_KEY` in a `.env` file and load them.\n", + "\n", + "[Note] This is not necessary if you've already set the required API keys in previous steps." + ] + }, { "cell_type": "code", "execution_count": 4, From 74b88251273d3ae411697f84830e3834216007e0 Mon Sep 17 00:00:00 2001 From: syshin0116 Date: Sun, 19 Jan 2025 17:04:20 +0900 Subject: [PATCH 6/6] docs: fix links for google colab, github badge --- .../09-ConversationMemoryManagementSystem.ipynb | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb index d9b67eeec..c9f85a3e5 100644 --- a/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb +++ b/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb @@ -11,8 +11,7 @@ "- Peer Review:\n", "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n", "\n", - "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/06-DocumentLoader/13-LlamaParse.ipynb) [![Open in 
GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/06-DocumentLoader/13-LlamaParse.ipynb)\n", - "\n", + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb) [![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/19-Cookbook/05-AIMemoryManagementSystem/09-ConversationMemoryManagementSystem.ipynb)\n", "\n", "## Overview\n", "\n",
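The checkpointer rationale added in PATCH 4/6 above — per-thread session memory, rollback to a previous checkpoint, and "time travel" debugging — can be illustrated with a minimal, self-contained sketch. Note this is a conceptual stand-in only: it does not use LangGraph's actual checkpointer classes (`MemorySaver`, `SqliteSaver`, `PostgresSaver`), and the `InMemoryCheckpointer` name and its methods are hypothetical.

```python
# Conceptual stand-in for a checkpointer: per-thread snapshots of
# conversation state that can be restored later ("time travel").
# This is NOT the LangGraph API -- just the underlying idea.
from copy import deepcopy


class InMemoryCheckpointer:
    def __init__(self):
        # thread_id -> ordered list of state snapshots
        self._checkpoints = {}

    def save(self, thread_id, state):
        # Deep-copy so later mutations don't corrupt saved snapshots
        self._checkpoints.setdefault(thread_id, []).append(deepcopy(state))

    def latest(self, thread_id):
        # Resume a session from its most recent checkpoint
        snaps = self._checkpoints.get(thread_id, [])
        return deepcopy(snaps[-1]) if snaps else None

    def rollback(self, thread_id, steps=1):
        # "Time travel": return the state as it was `steps` saves ago
        snaps = self._checkpoints.get(thread_id, [])
        return deepcopy(snaps[-1 - steps]) if len(snaps) > steps else None


cp = InMemoryCheckpointer()
cp.save("thread-1", {"messages": ["Hi, I'm Charlie."]})
cp.save("thread-1", {"messages": ["Hi, I'm Charlie.", "Nice to meet you!"]})

# Roll back one step, recovering the first snapshot
state = cp.rollback("thread-1", steps=1)
```

Keying snapshots by `thread_id` is what lets short-term memory stay scoped to a single conversation while the same store serves many threads; LangGraph's real checkpointers expose this through the `{"configurable": {"thread_id": ...}}` config seen elsewhere in the notebook.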