Releases: InternLM/lagent
v0.5.0rc2
What's Changed
- Update dependencies by @braisedpork1964 in #268
- [Feat] Add `Sequential` and `AsyncSequential` agents by @braisedpork1964 in #270
- [Fix] Alternative for `aioify` by @braisedpork1964 in #274
- [Feat] Support resetting agent memory recursively or by keypath by @braisedpork1964 in #271
- Fix: http agent server by @liujiangning30 in #276
- [Feat] Add streaming agents by @braisedpork1964 in #277
- 【Version】v0.5.0rc2 by @vansin in #278
Full Changelog: v0.5.0rc1...v0.5.0rc2
v0.5.0rc1
Abstract
The current landscape of agent frameworks predominantly focuses on low-code development (using static diagrams or pipelines) or on specific domain tasks, which often leads to difficulties in debugging and rigid workflows. **L**anguage **Agent** (Lagent) addresses these challenges by offering an imperative, Pythonic programming style that treats code as an agent. This approach facilitates easier debugging and streamlines the development of agent workflows. Additionally, Lagent allows for straightforward deployment as an HTTP service, supporting the construction of distributed multi-agent applications through centralized programming. This improves development efficiency while maintaining effectiveness.
In this paper, we detail the principles that drove the implementation of Lagent and how they are reflected in its architecture. We emphasize that every aspect of Lagent is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.
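Conceptually, serving an agent over HTTP means wrapping its call interface behind a request handler, so that a remote client interacts with it exactly like a local object. The sketch below illustrates this idea with nothing but the Python standard library; the `EchoAgent` class and the JSON message shape are illustrative stand-ins, not Lagent's actual API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoAgent:
    """Toy stand-in for an agent; a real one would wrap an LLM."""
    def __call__(self, content: str) -> str:
        return f"processed: {content}"

def make_handler(agent):
    class AgentHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Decode the incoming message, run the agent, send the reply.
            length = int(self.headers["Content-Length"])
            payload = json.loads(self.rfile.read(length))
            body = json.dumps({"content": agent(payload["content"])}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep the demo output quiet

    return AgentHandler

# Serve the agent on an ephemeral local port.
server = HTTPServer(("127.0.0.1", 0), make_handler(EchoAgent()))
threading.Thread(target=server.serve_forever, daemon=True).start()

# A remote client talks to the agent like a local call.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"content": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.loads(urllib.request.urlopen(req).read())
print(resp["content"])  # processed: hi
server.shutdown()
```

Because the transport is plain HTTP with JSON messages, several such services can be composed from one central program, which is the essence of the distributed multi-agent setup described above.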
Usability centric design
Consequently, the agents themselves evolved rapidly from a single Plan-Action-Iteration or Plan-Then-Act agent into incredibly varied programs, often composed of many loops and recursive functions.
To support this growing complexity, Lagent foregoes the potential benefits of a graph-metaprogramming or event-driven approach in order to preserve the imperative programming model of Python. This design was inspired by PyTorch, and Lagent extends it to all aspects of agent workflows: defining LLMs, tools, and memories, deploying an HTTP service, distributing multi-agents, and making the inference process asynchronous are all expressed using the familiar concepts developed for general-purpose programming.
This ensures that any new agent architecture can be easily implemented with Lagent. For instance, agents (commonly understood in agent research as Instruction + LLM + Memory + Plan/Action based on the current state) are typically expressed as Python classes whose constructors create and initialize their parameters, and whose forward methods process an input. Similarly, multi-agents are usually represented as classes that compose single agents, but let us state again that nothing forces the user to structure their code in that way. The listing below demonstrates how ReAct (a commonly used agent) and TranslateAgent (a translation agent pipeline) can be created with Lagent. Note that ReAct is of course part of the library, but we show an example implementation to highlight how simple it is.
```python
class ReAct(Agent):
    def __init__(self, tools, max_turn: int = 4):
        llm = LLM()
        self.tools = tools
        self.max_turn = max_turn
        instruction = react_instruction.format(
            action_info=get_tools_description(self.tools))
        self.select_agent = Agent(
            llm=llm, template=instruction)
        self.finish_condition = (
            lambda m: 'FinalAnswer' in m.content)
        super().__init__()

    def forward(
        self, message: AgentMessage, **kwargs
    ) -> AgentMessage:
        for _ in range(self.max_turn):
            # Let the LLM select the next action or the final answer.
            message = self.select_agent(message)
            if self.finish_condition(message):
                return message
            # Execute the selected tool and feed the result back.
            message = self.tools(message)
        return message
```
```python
class TranslateAgent(Agent):
    def __init__(self):
        llm = LLM()
        self.initial_agent = Agent(
            template=initial_trans_template, llm=llm)
        self.reflection_agent = Agent(
            template=reflection_template, llm=llm)
        self.improve_agent = Agent(
            template=improve_translation_template, llm=llm)
        super().__init__()

    def forward(
        self, message: AgentMessage, **kwargs
    ) -> AgentMessage:
        # Draft, critique, then refine the translation.
        initial_message = self.initial_agent(message)
        reflection_message = self.reflection_agent(
            message, initial_message)
        response_message = self.improve_agent(
            message, initial_message, reflection_message)
        return response_message
```
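The listings above depend on Lagent's `Agent` base class and helpers. To make the ReAct control flow concrete, here is a self-contained sketch of the same select-act loop using stub components; every name here (`StubLLM`, `StubTool`, `MiniReAct`) is an illustrative stand-in, not part of Lagent's API.

```python
from dataclasses import dataclass

@dataclass
class AgentMessage:
    content: str

class StubLLM:
    """Toy model: requests a tool call once, then gives the final answer."""
    def __init__(self):
        self.calls = 0
    def __call__(self, message: AgentMessage) -> AgentMessage:
        self.calls += 1
        if self.calls == 1:
            return AgentMessage("Action: calculate 2+2")
        return AgentMessage("FinalAnswer: 4")

class StubTool:
    """Toy tool: returns a canned observation for any action."""
    def __call__(self, message: AgentMessage) -> AgentMessage:
        return AgentMessage("Observation: 4")

class MiniReAct:
    """Select-act loop mirroring the ReAct listing above."""
    def __init__(self, max_turn: int = 4):
        self.select_agent = StubLLM()
        self.tools = StubTool()
        self.finish_condition = lambda m: 'FinalAnswer' in m.content
        self.max_turn = max_turn
    def __call__(self, message: AgentMessage) -> AgentMessage:
        for _ in range(self.max_turn):
            message = self.select_agent(message)
            if self.finish_condition(message):
                return message
            message = self.tools(message)
        return message

result = MiniReAct()(AgentMessage("What is 2+2?"))
print(result.content)  # FinalAnswer: 4
```

The loop alternates between the model (selecting an action) and the tools (producing an observation) until the finish condition fires or `max_turn` is exhausted, which is exactly the control flow of the `ReAct.forward` method.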
v0.2.4
What's Changed
- Pop invalid gen params for openai api by @liujiangning30 in #217
- Fix: event loop for DuckDuckGoSearch by @liujiangning30 in #220
- Feat: GPTAPI supports qwen by @liujiangning30 in #218
- Ensure completeness of responses of qwen model by @liujiangning30 in #225
- Fix unclosed event loop by @liujiangning30 in #235
- Fix: timeout for ddgs by @liujiangning30 in #236
- [Fix] Fix griffe version by @fanqiNO1 in #237
- Fix KeyError by @liujiangning30 in #226
- [feature] support Brave search API and refactor Google Serper API in BingBrowser by @tackhwa in #233
- Add support for SenseTime's SenseNova series LLMs (tested together with the MindSearch project) by @winer632 in #234
- [docs] fix some bugs in docs.md by @MING-ZCH in #249
- Update requirements by @jamiechoi1995 in #245
- update requirement by @Harold-lkk in #257
- Compatible with lmdeploy by @lvhan028 in #258
- [Version] v0.2.4 by @Harold-lkk in #261
New Contributors
- @winer632 made their first contribution in #234
- @MING-ZCH made their first contribution in #249
- @jamiechoi1995 made their first contribution in #245
- @lvhan028 made their first contribution in #258
Full Changelog: v0.2.3...v0.2.4
v0.2.3
What's Changed
- Fix chat return of `GPTAPI` by @braisedpork1964 in #166
- Fix bug of ppt and googlescholar by @liujiangning30 in #167
- fix typo "ablility " in overview.md by @tackhwa in #175
- Fix errmsg: cast dict to str by @liujiangning30 in #172
- feat: support vllm by @RangiLyu in #177
- support demo with hf by @liujiangning30 in #179
- fix bug of Internlm2Protocol.parse by @liujiangning30 in #180
- Fix generation parameters in API models by @braisedpork1964 in #181
- support batch by @Harold-lkk in #182
- fix deprecated top_k for GPTAPI by @Iiji in #185
- support json mode and proxy by @Harold-lkk in #189
- Allow access to code from interpreter results by @braisedpork1964 in #191
- Fix: typo for lmdeploy_wrapper by @fanqiNO1 in #171
- Feat: stream chat for GPTAPI by @liujiangning30 in #194
- align streaming return format for GPTAPI by @liujiangning30 in #196
- stream chat for GPTAPI by @liujiangning30 in #197
- Mind search by @Harold-lkk in #208
- Fix: update requirements by @Liqu1d-G in #214
- role with name by @liujiangning30 in #215
- Bump to v0.3.0 by @liujiangning30 in #213
- Bump to v0.2.3 by @liujiangning30 in #216
New Contributors
- @tackhwa made their first contribution in #175
- @Iiji made their first contribution in #185
- @fanqiNO1 made their first contribution in #171
- @Liqu1d-G made their first contribution in #214
Full Changelog: v0.2.2...v0.2.3
v0.2.2
What's Changed
- Fix bug of LMDeployClient by @liujiangning30 in #140
- fix bug of TritonClient by @liujiangning30 in #141
- update readme demo by @Harold-lkk in #143
- Fix type annotation by @braisedpork1964 in #144
- [Enhance] lazy import for actions by @Harold-lkk in #146
- Fix: skip start_token by @liujiangning30 in #145
- Fix: filter_suffix in TritonClient by @liujiangning30 in #150
- Fix: gen_config in lmdeploypipeline updated by input gen_params by @liujiangning30 in #151
- max_tokens to max_new_tokens by @liujiangning30 in #149
- support inference for pad_token & chatglm chat by @zehuichen123 in #157
- Feat: no_skip_speicial_token by @liujiangning30 in #148
- fix batch generate by @Harold-lkk in #158
- fix bug caused by static model_name by @liujiangning30 in #156
- update version by @liujiangning30 in #161
Full Changelog: v0.2.1...v0.2.2
v0.2.1
What's Changed
- Fix docstring format of `GoogleScholar` by @braisedpork1964 in #138
- [Version] Bump v0.2.1 by @braisedpork1964 in #139
Full Changelog: v0.2.0...v0.2.1
v0.2.0
What's Changed
- Stream Output: Provides the stream_chat interface for streaming output, allowing cool streaming demos right at your local setup.
- Interfacing is unified, with a comprehensive design upgrade for enhanced extensibility, including:
  - Model: Whether it's the OpenAI API, Transformers, or the LMDeploy inference acceleration framework, you can seamlessly switch between models.
  - Action: Simple inheritance and decoration allow you to create your own personal toolkit, adaptable to both InternLM and GPT.
  - Agent: Consistent with the Model's input interface, the transformation from model to intelligent agent only takes one step, facilitating the exploration and implementation of various agents.
- Documentation has been thoroughly upgraded with full API documentation coverage.
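The idea behind the stream_chat interface is to yield the response incrementally rather than returning it in one piece. A minimal sketch of that pattern with a generator is shown below; the `DummyModel` class and its canned reply are purely illustrative, not Lagent's actual backend.

```python
from typing import Iterator

class DummyModel:
    """Illustrative model that 'generates' a canned reply token by token."""
    def __init__(self, reply: str = "Hello from Lagent!"):
        self._reply = reply.split()

    def stream_chat(self, prompt: str) -> Iterator[str]:
        # Yield the growing partial response after each new token,
        # so a UI can render output as it arrives.
        partial = []
        for token in self._reply:
            partial.append(token)
            yield " ".join(partial)

chunks = list(DummyModel().stream_chat("Hi"))
print(chunks[-1])  # Hello from Lagent!
```

A consumer simply iterates over the generator and repaints the display with each partial string, which is what makes live streaming demos possible on a local setup.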
Welcome to watch our demo at https://www.youtube.com/watch?v=YAelRLi0Zak
Lagent Release v0.1.3
The last release for version 0.1
What's Changed
- update: add citation by @liujiangning30 in #56
- added korean readme since most of the koreans dont at all speak korean by @bhargavshirin in #57
- Update README.md by @Killer2OP in #59
- [ADDED]: Back-to-Top Button in README by @Killer2OP in #65
- changed "Twitter" to "𝕏 (Twitter)" in README.md by @BandhiyaHardik in #67
- Update README.md by @apu52 in #62
- Update README.md by @Aryan4884 in #58
- Update README.md by @VinayKokate22 in #68
- update README.md and header-logo by @liujiangning30 in #70
- update header-logo by @liujiangning30 in #72
- Fixed: Added contributors section to readme.md . by @Kalyanimhala in #63
- [Fix] lmdeploy bc by @Harold-lkk in #74
- 【Bug】Fix templateparser by @Harold-lkk in #77
- [Fix]: fix turbomind by @RangiLyu in #81
- update ReAct example for internlm2 by @liujiangning30 in #85
- [Version] v0.1.3 by @Harold-lkk in #110
New Contributors
- @bhargavshirin made their first contribution in #57
- @Killer2OP made their first contribution in #59
- @BandhiyaHardik made their first contribution in #67
- @apu52 made their first contribution in #62
- @Aryan4884 made their first contribution in #58
- @VinayKokate22 made their first contribution in #68
- @Kalyanimhala made their first contribution in #63
- @RangiLyu made their first contribution in #81
v0.1.2
Bump version v0.1.2 (#55)
Lagent Release v0.1.1
Main Features
- Support multiple kinds of agents out of the box. Lagent now supports ReAct, AutoGPT, and ReWOO, which can drive the large language models (LLMs) through multiple rounds of reasoning and function calling.
- Extremely simple and easy to extend. The framework is quite simple with a clear structure. With only 20 lines of code, you are able to construct your own agent. It also supports three typical tools: Python interpreter, API call, and Google search.
- Support various large language models. We support different LLMs, including API-based (GPT-3.5/4) and open-source (LLaMA 2, InternLM) models.