
[Feature Request]: Could custom models support the file-upload feature? #289

Open
baizhougod opened this issue Jun 29, 2024 · 11 comments
Labels
enhancement New feature or request

Comments

@baizhougod

Problem Description

Custom models can only upload images; could uploading files be supported as well?

Solution Description

Custom models can only upload images; could uploading files be supported as well?

Alternatives Considered

No response

Additional Context

No response

@baizhougod baizhougod added the enhancement New feature or request label Jun 29, 2024

@Hk-Gosuto
Owner

That could be done too, but the implementation would differ from the GPT models: for the GPT models, retrieval is done via a plugin, whereas the other models could use the context approach.


@GrayXu

GrayXu commented Jul 7, 2024

That could be done too, but the implementation would differ from the GPT models: for the GPT models, retrieval is done via a plugin, whereas the other models could use the context approach.

I'm curious: for models that already work with LangChain, such as Yi-large, glm-4, and 3.5 sonnet, how much would need to change to add them directly to ChatGPT-Next-Web-LangChain?


@baizhougod
Author

That could be done too, but the implementation would differ from the GPT models: for the GPT models, retrieval is done via a plugin, whereas the other models could use the context approach.

So, like nio, a plugin reads the file and performs the content retrieval, meaning the retrieval is essentially done on their side rather than GPT reading the file itself; whereas the "reverse" approach hands GPT a direct link to download, so GPT itself analyzes the file. Is that understanding correct?


@baizhougod
Author

That could be done too, but the implementation would differ from the GPT models: for the GPT models, retrieval is done via a plugin, whereas the other models could use the context approach.

I'm not sure how nio splits the text; LangChain presumably still calls a model to do the splitting?
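(For reference: LangChain's standard text splitters are rule-based rather than model-based — e.g. its `RecursiveCharacterTextSplitter` cuts on separators subject to a size limit and overlap, without calling an LLM. A minimal sketch of that idea; `chunk_text` is a hypothetical helper, not LangChain's actual API:)

```python
def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into chunks of at most chunk_size characters, with overlap
    between consecutive chunks, preferring to cut at sentence boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        # Prefer ending the chunk at the last sentence boundary in the window.
        cut = text.rfind(". ", start, end)
        if cut != -1:
            end = cut + 1  # include the period
        chunks.append(text[start:end].strip())
        if end >= len(text):
            break
        # Step back by `overlap` so adjacent chunks share context,
        # but always advance by at least one character to terminate.
        start = max(end - overlap, start + 1)
    return chunks
```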


@Hk-Gosuto
Owner

That could be done too, but the implementation would differ from the GPT models: for the GPT models, retrieval is done via a plugin, whereas the other models could use the context approach.

I'm not sure how nio splits the text; LangChain presumably still calls a model to do the splitting?

Currently, for GPT, the retrieval function is exposed as a plugin that GPT calls; OpenAI's official implementation currently works in a similar way.
Other models can only run a retrieval each time the user sends a message, and then pass the retrieved content along as context.
See this LangChain tutorial: https://python.langchain.com/v0.2/docs/tutorials/rag/
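(The context-injection approach described above can be sketched without LangChain: on every user message, score the stored chunks against the query, take the top matches, and prepend them to the prompt sent to the model. The keyword-overlap scoring and prompt format below are illustrative assumptions — a real implementation would use embedding similarity, as in the linked LangChain RAG tutorial:)

```python
def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query and return the top k.
    A stand-in for embedding-based similarity search."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, chunks: list[str], k: int = 2) -> str:
    """Inject the retrieved chunks as context ahead of the user's question,
    producing the prompt that would be sent to a non-plugin model."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks, k))
    return (
        "Use the following context to answer.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```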



4 participants