
Releases: InternLM/lagent

V0.5.0rc2

29 Nov 11:39
4db8ea8

What's Changed

Full Changelog: v0.5.0rc1...v0.5.0rc2

v0.5.0rc1

05 Nov 05:11
248103b

Abstract

The current landscape of agent frameworks predominantly focuses on low-code development (using static diagrams or pipelines) or addressing specific domain tasks, which often leads to difficulties in debugging and rigid workflows. Lagent (Language Agent) addresses these challenges by offering an imperative and Pythonic programming style that treats code as an agent. This approach facilitates easier debugging and streamlines the development of agent workflows. Additionally, Lagent allows for straightforward deployment as an HTTP service, supporting the construction of distributed multi-agent applications through centralized programming. This enhances development efficiency while maintaining effectiveness.

In this paper, we detail the principles that drove the implementation of Lagent and how they are reflected in its architecture. We emphasize that every aspect of Lagent is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.

Usability-centric design

Consequently, the agents themselves evolved rapidly from a single Plan-Action-Iteration agent or Plan-Then-Act agent into incredibly varied programs often composed of many loops and recursive functions.
To support this growing complexity, Lagent foregoes the potential benefits of a graph-metaprogramming-based or event-driven approach to preserve the imperative programming model of Python. This design was inspired by PyTorch, and Lagent extends it to all aspects of agent workflows. Defining LLMs, tools, and memories, deploying an HTTP service, distributing multi-agents, and making the inference process asynchronous are all expressed using the familiar concepts developed for general-purpose programming.
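
As one concrete illustration of the last point, a hedged sketch of the asynchronous case follows. AsyncAgent, AsyncLLM, and chat_template mirror the illustrative naming of the listing further below; they are not guaranteed to match the library's actual class names.

# Hedged sketch: an asynchronous agent written in the same imperative style.
# AsyncAgent / AsyncLLM / chat_template are illustrative names, not verified
# library identifiers.
class AsyncEchoPipeline(AsyncAgent):
    def __init__(self):
        self.responder = AsyncAgent(llm=AsyncLLM(), template=chat_template)
        super().__init__()

    async def forward(
        self, message: AgentMessage, **kwargs
    ) -> AgentMessage:
        # Only async/await is added; the control flow stays plain Python.
        return await self.responder(message)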

This solution ensures that any new potential agent architecture can be easily implemented with Lagent. For instance, agents (commonly understood in agent research as Instruction + LLM + Memory + Plan/Action based on the current state) are typically expressed as Python classes whose constructors create and initialize their components, and whose forward methods process an input. Similarly, multi-agents are usually represented as classes that compose single agents, but let us state again that nothing forces the user to structure their code in that way. The listing below demonstrates how ReAct (a commonly used agent) and TranslateAgent (a translation agent pipeline) can be created with Lagent. Note that ReAct is of course part of the library, but we show an example implementation to highlight how simple it is.

class ReAct(Agent):
    def __init__(self, tools, max_turn=4):
        llm = LLM()
        self.tools = tools
        self.max_turn = max_turn
        # Fill the tool descriptions into the ReAct prompt template.
        instruction = react_instruction.format(
            action_info=get_tools_description(self.tools))
        self.select_agent = Agent(
            llm=llm, template=instruction)
        self.finish_condition = lambda m: 'FinalAnswer' in m.content
        super().__init__()

    def forward(
        self, message: AgentMessage, **kwargs
    ) -> AgentMessage:
        for _ in range(self.max_turn):
            message = self.select_agent(message)
            if self.finish_condition(message):
                return message
            message = self.tools(message)
        return message


class TranslateAgent(Agent):
    def __init__(self):
        llm = LLM()
        self.initial_agent = Agent(
            template=initial_trans_template, llm=llm)
        self.reflection_agent = Agent(
            template=reflection_template, llm=llm)
        self.improve_agent = Agent(
            template=improve_translation_template, llm=llm)
        super().__init__()

    def forward(
        self, message: AgentMessage, **kwargs
    ) -> AgentMessage:
        initial_message = self.initial_agent(message)
        reflection_message = self.reflection_agent(
            message, initial_message)
        response_message = self.improve_agent(
            message, initial_message, reflection_message)
        return response_message
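
For completeness, a minimal usage sketch follows; it assumes that AgentMessage exposes sender and content fields and that an agent instance is invoked by calling it with a message, as in the listing above. The query text is illustrative.

# Minimal usage sketch (assumption: AgentMessage(sender=..., content=...)
# is a valid constructor and calling the agent returns an AgentMessage).
agent = TranslateAgent()
query = AgentMessage(
    sender='user',
    content='Translate "knowledge is power" into French.')
result = agent(query)
print(result.content)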

V0.2.4

21 Oct 08:15
b41ade6

What's Changed

New Contributors

Full Changelog: v0.2.3...v0.2.4

v0.2.3

30 Jul 12:31
47f8661

What's Changed

New Contributors

Full Changelog: v0.2.2...v0.2.3

v0.2.2

26 Feb 03:08
4fd014b

What's Changed

Full Changelog: v0.2.1...v0.2.2

v0.2.1

01 Feb 05:15
e20a768

What's Changed

Full Changelog: v0.2.0...v0.2.1

v0.2.0

31 Jan 13:47
990828c

What's Changed

  • Stream Output: Provides the stream_chat interface for streaming output, allowing cool streaming demos right at your local setup (see the sketch after this list).

  • Interfacing is unified, with a comprehensive design upgrade for enhanced extensibility, including:

    • Model: Whether it's the OpenAI API, Transformers, or LMDeploy inference acceleration framework, you can seamlessly switch between models.
    • Action: Simple inheritance and decoration allow you to create your own personal toolkit, adaptable to both InternLM and GPT.
    • Agent: Consistent with the Model's input interface, the transformation from model to intelligent agent only takes one step, facilitating the exploration and implementation of various agents.
  • Documentation has been thoroughly upgraded with full API documentation coverage.
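
As a rough illustration of the stream_chat interface mentioned above, a hedged sketch follows. It assumes the GPTAPI backend from lagent.llms, an OpenAI-style message list as input, and that stream_chat yields partial text; the exact constructor parameters and the structure of the yielded values may differ in this release.

from lagent.llms import GPTAPI

# Assumption: GPTAPI accepts a model_type argument and stream_chat yields
# incremental chunks of the reply (some backends may yield tuples such as
# (status, text) instead of plain strings).
llm = GPTAPI(model_type='gpt-3.5-turbo')
messages = [{'role': 'user',
             'content': 'Summarize what Lagent is in one sentence.'}]
for chunk in llm.stream_chat(messages):
    # Print partial output as it arrives instead of waiting for the
    # full completion.
    print(chunk, end='', flush=True)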

Welcome to watch our demo at https://www.youtube.com/watch?v=YAelRLi0Zak

Lagent Release v0.1.3

30 Jan 03:34
85b91cc

The last release for version 0.1

What's Changed

New Contributors

v0.1.2

24 Oct 14:48
5061645
Bump version v0.1.2 (#55)

Lagent Release v0.1.1

22 Aug 03:46
3ba9abf

Main Features

  • Support multiple kinds of agents out of the box. Lagent now supports ReAct, AutoGPT, and ReWOO, which can drive the large language models (LLMs) for multiple trials of reasoning and function calling.
  • Extremely simple and easy to extend. The framework is quite simple with a clear structure. With only 20 lines of code, you are able to construct your own agent (see the sketch after this list). It also supports three typical tools: Python interpreter, API call, and Google search.
  • Support various large language models. We support different LLMs, including API-based (GPT-3.5/4) and open-source (LLaMA 2, InternLM) models.
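
As a rough sketch of the "construct your own agent in a few lines" claim, the snippet below wires a ReAct agent to a Python interpreter and a Google search tool. It follows the v0.1-era examples from memory; the exact keyword arguments (model_type, key, api_key, action_executor) and the shape of the returned object are assumptions and may differ from this release.

from lagent.agents import ReAct
from lagent.actions import ActionExecutor, GoogleSearch, PythonInterpreter
from lagent.llms import GPTAPI

# Assumption: GPTAPI/GoogleSearch take API keys as shown and ReAct accepts
# an ActionExecutor holding the available tools.
llm = GPTAPI(model_type='gpt-3.5-turbo', key='YOUR_OPENAI_API_KEY')
agent = ReAct(
    llm=llm,
    action_executor=ActionExecutor(
        actions=[PythonInterpreter(),
                 GoogleSearch(api_key='YOUR_SERPER_API_KEY')]))
reply = agent.chat('What is the sum of the first 10 prime numbers?')
print(reply.response)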