KillianLucas committed Oct 9, 2023
2 parents a294b34 + 11200b2 commit 55344ba
Showing 7 changed files with 103 additions and 50 deletions.
19 changes: 0 additions & 19 deletions .vscode/launch.json

This file was deleted.

3 changes: 0 additions & 3 deletions .vscode/settings.json

This file was deleted.

7 changes: 4 additions & 3 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -6,12 +6,13 @@
</a>
<a href="README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"/></a>
<a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
<a href="README_IN.md"><img src="https://img.shields.io/badge/Document-Hindi-white.svg" alt="IN doc"/></a>
<a href="README_IN.md"><img src="https://img.shields.io/badge/Hindi-white.svg" alt="IN doc"/></a>
<img src="https://img.shields.io/static/v1?label=license&message=MIT&color=white&style=flat" alt="License"/>
<br><br>
<br>
<br>
<b>Let language models run code on your computer.</b><br>
An open-source, locally running implementation of OpenAI's Code Interpreter.<br>
<br><a href="https://openinterpreter.com">Get early access to the desktop application.</a><br>
<br><a href="https://openinterpreter.com">Get early access to the desktop app</a>‎ ‎ |‎ ‎ <b><a href="https://docs.openinterpreter.com/">Read our new docs</a></b><br>
</p>

<br>
Expand Down
26 changes: 16 additions & 10 deletions README_ZH.md
Original file line number Diff line number Diff line change
Expand Up @@ -8,7 +8,7 @@
<br>
<br>
<b>Let language models run code on your computer.</b><br>
An open-source OpenAI Code Interpreter implemented to run locally.<br>
A locally running, open-source implementation of OpenAI's Code Interpreter.<br>
<br><a href="https://openinterpreter.com">Sign up for early access to the Open Interpreter desktop application</a><br>
</p>

Expand Down Expand Up @@ -57,7 +57,7 @@ pip install open-interpreter

### Terminal

After installation, simply run `interpreter`:
After installation, run `interpreter`:

```shell
interpreter
Expand Down Expand Up @@ -151,10 +151,14 @@ print(interpreter.system_message)

### Changing the Model

For `gpt-3.5-turbo`, use fast mode:
Open Interpreter uses [LiteLLM](https://docs.litellm.ai/docs/providers/) to connect to language models.

You can change the model by setting the model parameter:

```shell
interpreter --fast
interpreter --model gpt-3.5-turbo
interpreter --model claude-2
interpreter --model command-nightly
```

In Python, you need to set the model manually:
Expand Down Expand Up @@ -202,12 +206,14 @@ interpreter.azure_api_type = "azure"

To help contributors inspect and debug Open Interpreter, `--debug` mode provides verbose logging.

You can activate debug mode with `interpreter --debug`, or enter it directly while chatting:
You can activate debug mode with `interpreter --debug`, or enter it directly in the terminal:

```shell
$ interpreter
...
> %debug # <- enable debug mode
> %debug true <- enable debug mode

> %debug false <- disable debug mode
```
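A chat-side toggle like `%debug true` / `%debug false` amounts to parsing a magic command into a boolean setting. The sketch below is a hypothetical illustration of that pattern only; the function name `apply_magic_command` and the `settings` dict are assumptions, not Open Interpreter's actual implementation.

```python
# Hypothetical sketch: parse a "%debug true"/"%debug false" magic command
# into a settings toggle. Not the real Open Interpreter code.
def apply_magic_command(command: str, settings: dict) -> dict:
    parts = command.strip().split()
    if parts and parts[0] == "%debug":
        # "%debug true" enables verbose logging; anything else disables it
        settings["debug_mode"] = len(parts) > 1 and parts[1].lower() == "true"
    return settings

settings = {"debug_mode": False}
settings = apply_magic_command("%debug true", settings)
print(settings["debug_mode"])   # True
settings = apply_magic_command("%debug false", settings)
print(settings["debug_mode"])   # False
```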

### Configuration with .env
Expand All @@ -230,13 +236,13 @@ INTERPRETER_CLI_USE_AZURE=False

Since the generated code runs in your local environment, it interacts with your files and system settings, which can lead to unexpected results such as local data loss or security risks.

**⚠️ So before executing any code, Open Interpreter asks the user to confirm whether to run it.**
**⚠️ So before executing any code, Open Interpreter asks the user whether to run it.**

You can run `interpreter -y` or set `interpreter.auto_run = True` to bypass this confirmation. If you do:

- Be cautious when running commands that request changes to local files or system settings.
- Watch Open Interpreter as you would a self-driving car, and be ready to end the process by closing your terminal at any time.
- Consider running Open Interpreter in a restricted environment like Google Colab or Replit. These environments are more isolated, reducing the risk that executing arbitrary code causes problems.
- Watch Open Interpreter the way you would keep your hands on the wheel of a self-driving car, and be ready to end the process by closing your terminal at any time.
- The main reason to consider running Open Interpreter in a restricted environment like Google Colab or Replit is that these environments are more isolated, reducing the risk that executing arbitrary code causes problems.

## How Does It Work?

Expand All @@ -258,6 +264,6 @@ Open Interpreter is licensed under the MIT License. You may use, copy, modify,

> Having a junior programmer that works at the speed of your fingertips ... can make new workflows effortless and efficient, while also opening the benefits of programming up to new audiences.
>
> _OpenAI's Code Interpreter release_
> _From OpenAI's Code Interpreter release announcement_
<br>
2 changes: 1 addition & 1 deletion interpreter/llm/setup_local_text_llm.py
Original file line number Diff line number Diff line change
Expand Up @@ -24,7 +24,7 @@ def setup_local_text_llm(interpreter):
DEFAULT_CONTEXT_WINDOW = 2000
DEFAULT_MAX_TOKENS = 1000

repo_id = interpreter.model.split("huggingface/")[1]
repo_id = interpreter.model.replace("huggingface/", "")

if "TheBloke/CodeLlama-" not in repo_id:
# ^ This means it was prob through the old --local, so we have already displayed this message.
Expand Down
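The one-line change above is a robustness fix: `str.replace` degrades gracefully when the `huggingface/` prefix is absent, whereas indexing the result of `str.split` raises `IndexError`. The helper name below is ours, for illustration only:

```python
# Why the diff swaps split("huggingface/")[1] for replace("huggingface/", ""):
# replace() is a no-op when the prefix is missing, while split()[1] raises.
def repo_id_via_replace(model: str) -> str:
    return model.replace("huggingface/", "")

print(repo_id_via_replace("huggingface/TheBloke/CodeLlama-7B-GGUF"))  # TheBloke/CodeLlama-7B-GGUF
print(repo_id_via_replace("gpt-3.5-turbo"))  # unchanged: gpt-3.5-turbo

try:
    _ = "gpt-3.5-turbo".split("huggingface/")[1]  # the old approach
except IndexError:
    print("split()[1] raises IndexError when the prefix is absent")
```

Note that `replace` removes the substring anywhere in the string, not only as a prefix; for the model names handled here that distinction does not matter.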
4 changes: 2 additions & 2 deletions interpreter/llm/setup_text_llm.py
Original file line number Diff line number Diff line change
Expand Up @@ -71,7 +71,7 @@ def base_llm(messages):
system_message += "\n\nTo execute code on the user's machine, write a markdown code block *with a language*, i.e. ```python, ```shell, ```r, ```html, or ```javascript. You will receive the code output."

# TODO swap tt.trim for litellm util

messages = messages[1:]
if interpreter.context_window and interpreter.max_tokens:
trim_to_be_this_many_tokens = interpreter.context_window - interpreter.max_tokens - 25 # arbitrary buffer
messages = tt.trim(messages, system_message=system_message, max_tokens=trim_to_be_this_many_tokens)
Expand Down Expand Up @@ -118,4 +118,4 @@ def base_llm(messages):

return litellm.completion(**params)

return base_llm
return base_llm
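The trimming logic in the hunk above budgets the prompt as the context window minus the reserved completion tokens minus a small buffer. A minimal sketch of that arithmetic (the function name `trim_budget` is ours):

```python
# Sketch of the trim-budget arithmetic used before sending messages:
# leave room in the context window for the model's reply, plus a buffer.
def trim_budget(context_window: int, max_tokens: int, buffer: int = 25) -> int:
    return context_window - max_tokens - buffer

# With the local-model defaults shown in this diff
# (DEFAULT_CONTEXT_WINDOW = 2000, DEFAULT_MAX_TOKENS = 1000):
print(trim_budget(2000, 1000))  # 975 tokens left for the trimmed messages
```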
92 changes: 80 additions & 12 deletions tests/test_interpreter.py
Original file line number Diff line number Diff line change
@@ -1,26 +1,94 @@
from random import randint
import interpreter
import time

interpreter.auto_run = True
interpreter.model = "gpt-3.5-turbo"
interpreter.temperature = 0

def test_hello_world():

# this function will run before each test
# we're clearing out the messages array so we can start fresh and reduce token usage
def setup_function():
interpreter.reset()
messages = interpreter.chat("""Please reply with just the words "Hello, World!" and nothing else. Do not run code.""")
assert messages == [{'role': 'user', 'message': 'Please reply with just the words "Hello, World!" and nothing else. Do not run code.'}, {'role': 'assistant', 'message': 'Hello, World!'}]


# this function will run after each test
# we're introducing some sleep to help avoid timeout issues with the OpenAI API
def teardown_function():
time.sleep(5)


def test_system_message_appending():
ping_system_message = (
"Respond to a `ping` with a `pong`. No code. No explanations. Just `pong`."
)

ping_request = "ping"
pong_response = "pong"

interpreter.system_message += ping_system_message

messages = interpreter.chat(ping_request)

assert messages == [
{"role": "user", "message": ping_request},
{"role": "assistant", "message": pong_response},
]


def test_reset():
    # make sure that interpreter.reset() clears out the messages array
assert interpreter.messages == []


def test_hello_world():
hello_world_response = "Hello, World!"

hello_world_message = f"Please reply with just the words {hello_world_response} and nothing else. Do not run code. No confirmation just the text."

messages = interpreter.chat(hello_world_message)

assert messages == [
{"role": "user", "message": hello_world_message},
{"role": "assistant", "message": hello_world_response},
]


def test_math():
interpreter.reset()
messages = interpreter.chat("""Please perform the calculation 27073*7397 then reply with just the integer answer with no commas or anything, nothing else.""")
assert "200258981" in messages[-1]["message"]
# we'll generate random integers between this min and max in our math tests
min_number = randint(1, 99)
max_number = randint(1001, 9999)

n1 = randint(min_number, max_number)
n2 = randint(min_number, max_number)

test_result = n1 + n2 * (n1 - n2) / (n2 + n1)

order_of_operations_message = f"""
Please perform the calculation `{n1} + {n2} * ({n1} - {n2}) / ({n2} + {n1})` then reply with just the answer, nothing else. No confirmation. No explanation. No words. Do not use commas. Do not show your work. Just return the result of the calculation. Do not introduce the results with a phrase like \"The result of the calculation is...\" or \"The answer is...\"
Round to 2 decimal places.
"""

messages = interpreter.chat(order_of_operations_message)

assert round(float(messages[-1]["message"]), 2) == round(test_result, 2)


def test_delayed_exec():
interpreter.reset()
interpreter.chat("""Can you write a single block of code and run_code it that prints something, then delays 1 second, then prints something else? No talk just code. Thanks!""")
interpreter.chat(
"""Can you write a single block of code and run_code it that prints something, then delays 1 second, then prints something else? No talk just code. Thanks!"""
)


def test_nested_loops_and_multiple_newlines():
interpreter.reset()
interpreter.chat("""Can you write a nested for loop in python and shell and run them? Also put 1-3 newlines between each line in the code. Thanks!""")
interpreter.chat(
"""Can you write a nested for loop in python and shell and run them? Don't forget to properly format your shell script and use semicolons where necessary. Also put 1-3 newlines between each line in the code. Only generate and execute the code. No explanations. Thanks!"""
)


def test_markdown():
interpreter.reset()
interpreter.chat("""Hi, can you test out a bunch of markdown features? Try writing a fenced code block, a table, headers, everything. DO NOT write the markdown inside a markdown code block, just write it raw.""")
interpreter.chat(
"""Hi, can you test out a bunch of markdown features? Try writing a fenced code block, a table, headers, everything. DO NOT write the markdown inside a markdown code block, just write it raw."""
)
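The randomized math test above can be checked by hand once the random values are pinned. With fixed `n1` and `n2`, the expected expression and the rounding comparison from the test look like this (the `model_reply` string is a stand-in for `messages[-1]["message"]`):

```python
# Reproducing the expected-value computation from the randomized math test,
# with fixed numbers so the result can be verified by hand.
n1, n2 = 5, 3
expected = n1 + n2 * (n1 - n2) / (n2 + n1)  # 5 + 3*2/8 = 5.75
print(round(expected, 2))  # 5.75

# The test compares the model's reply to this value after rounding:
model_reply = "5.75"  # stand-in for messages[-1]["message"]
assert round(float(model_reply), 2) == round(expected, 2)
```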
