From 55e7853ce4345e3edaa9aa4a688d085771ccebbb Mon Sep 17 00:00:00 2001
From: Mikita Makiej <157150795+mmikita95@users.noreply.github.com>
Date: Mon, 14 Oct 2024 10:49:29 +0300
Subject: [PATCH] docs: update ai-module.mdx

Documentation for Knowledge Graph / Function tools usage
---
 docs/framework/ai-module.mdx | 139 +++++++++++++++++++++++++++++++++++
 1 file changed, 139 insertions(+)

diff --git a/docs/framework/ai-module.mdx b/docs/framework/ai-module.mdx
index b156bb340..857d6147a 100644
--- a/docs/framework/ai-module.mdx
+++ b/docs/framework/ai-module.mdx
@@ -99,6 +99,145 @@ Instance-wide configuration parameters can be complemented or overriden on indiv

### Using Graphs with Conversation
A `Graph` is a collection of files meant to provide their contents to the LLM during conversations. Framework allows you to create, retrieve, update, and delete graphs, as well as manage the files within them.

#### Creating and Managing Graphs

To create and manipulate graphs, use the following methods:
```python
from writer.ai import create_graph, retrieve_graph, list_graphs, delete_graph

# Create a new graph
graph = create_graph(name="Financial Data", description="Quarterly reports")

# Retrieve an existing graph by ID
graph = retrieve_graph("d90a632b-5c1f-42b8-8748-5b7f769d9a36")

# Update a graph
graph.update(name="Updated Financial Data", description="Updated description")

# Retrieve a list of created graphs
graphs = list_graphs()
for graph in graphs:
    # Delete a graph
    delete_graph(graph)
```

#### Adding and Removing Files from Graphs

You can upload files, associate them with graphs, and download or remove them.
```python
from writer.ai import upload_file

# Upload a file
file = upload_file(data=b"file content", type="application/pdf", name="Report.pdf")

# Add the file to a graph
graph.add_file(file)

# Remove the file from the graph
graph.remove_file(file)
```

#### Applying Graphs to Conversation Completion

You can use graphs within conversations. For instance, you may want to give the LLM access to a collection of files during an ongoing conversation so that it can query or analyze their content. When you pass a graph to the conversation, the LLM can query the graph to retrieve relevant data.

```python
# Retrieve a graph
graph = retrieve_graph("d90a632b-5c1f-42b8-8748-5b7f769d9a36")

# Pass the graph to the conversation for completion
response = conversation.complete(tools=graph)
```

Alternatively, you can define a graph tool using JSON:

```python
tool = {
    "type": "graph",
    "graph_ids": ["d90a632b-5c1f-42b8-8748-5b7f769d9a36"]
}

response = conversation.complete(tools=tool)
```

### Using Function Calls with Conversations

Function tools are only available with the `palmyra-x-004` model.

Framework allows you to register Python functions that can be called automatically during conversations. When the LLM determines that it needs specific information or processing, it issues a request to use your local code (your function), and Framework handles that request automatically.

#### Defining Function Tools

Function tools are defined using either the `FunctionTool` class or a JSON configuration.

```python
from writer.ai import FunctionTool

# Define a function tool with a Python callable
def calculate_interest(principal: float, rate: float, time: float):
    return principal * rate * time

tool = FunctionTool(
    name="calculate_interest",
    callable=calculate_interest,
    parameters={
        "principal": {"type": "float", "description": "Loan principal"},
        "rate": {"type": "float", "description": "Interest rate"},
        "time": {"type": "float", "description": "Time in years"}
    }
)

response = conversation.complete(tools=tool)
```

Alternatively, you can define a function tool in JSON format, but the callable function must still be passed:
```python
tool = {
    "type": "function",
    "name": "calculate_interest",
    "callable": calculate_interest,
    "parameters": {
        "principal": {"type": "float", "description": "Loan principal"},
        "rate": {"type": "float", "description": "Interest rate"},
        "time": {"type": "float", "description": "Time in years"}
    }
}

response = conversation.complete(tools=tool)
```
Function tools require the following properties:
- **`name: str`**: A string that defines how the function is referenced by the LLM. It should describe the function's purpose.
- **`callable: Callable`**: A Python function that will be called automatically when needed by the LLM.
- **`parameters: dict`**: A dictionary that specifies what input the function expects. The keys should match the function's parameter names, and each parameter should have a `type` and an optional `description`.
  Supported types are: `string`, `number`, `integer`, `float`, `boolean`, `array`, `object`, and `null`.

#### Automated Function Calling

When a conversation involves a tool (either a graph or a function), Framework automatically handles the LLM's requests to use the tools during interactions. If a tool needs multiple steps (for example, querying data and processing it), Framework will handle those steps recursively, calling functions as needed until the final result is returned.
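For example, a single `complete()` call is enough for a full round trip in which the LLM requests a function, Framework executes it, and the final answer comes back. The sketch below uses only the `Conversation`, `FunctionTool`, and `complete()` APIs shown in this guide; the `get_exchange_rate` helper and its sample rates are hypothetical stand-ins for your own code.

```python
from writer.ai import Conversation, FunctionTool

# Hypothetical helper for illustration; the rates here are made up
def get_exchange_rate(currency: str):
    rates = {"EUR": 1.09, "GBP": 1.27}
    return str(rates.get(currency, "unknown currency"))

rate_tool = FunctionTool(
    name="get_exchange_rate",
    callable=get_exchange_rate,
    parameters={
        "currency": {"type": "string", "description": "ISO 4217 currency code"}
    }
)

conversation = Conversation("You are a helpful financial assistant.")
conversation += {"role": "user", "content": "How many USD is one euro worth?"}

# Framework intercepts the LLM's tool request, runs get_exchange_rate,
# feeds the result back to the model, and returns the final message
response = conversation.complete(tools=rate_tool)
print(response["content"])
```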
### Providing a Tool or a List of Tools

You can pass either a single tool or a list of tools to the `complete()` or `stream_complete()` methods. The tools can be any combination of `FunctionTool` instances, `Graph` objects, or JSON-defined tools.

```python
from writer.ai import FunctionTool, retrieve_graph

# Define a function tool
tool1 = FunctionTool(
    name="get_data",
    callable=lambda x: f"Data for {x}",
    parameters={"x": {"type": "string", "description": "Input value"}}
)

# Retrieve a graph
graph = retrieve_graph("d90a632b-5c1f-42b8-8748-5b7f769d9a36")

# Provide both tools in a list
response = conversation.complete(tools=[tool1, graph])
```

## Text generation without a conversation state

These `complete` and `stream_complete` methods are designed for one-off text generation without the need to manage a conversation state. They return the model's response as a string. Each function accepts a `config` dictionary allowing call-specific configurations.
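As a brief sketch of this stateless usage, assuming the module-level `complete` and `stream_complete` helpers take the prompt as their first argument (the prompts and `config` values here are illustrative):

```python
from writer.ai import complete, stream_complete

# One-off completion: the full response is returned as a single string
summary = complete("List three benefits of solar energy.", config={"temperature": 0.3})
print(summary)

# Streaming completion: chunks of the response are yielded as they arrive
for chunk in stream_complete("Write a tagline for a coffee shop.", config={"max_tokens": 100}):
    print(chunk, end="")
```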