660 : Support for onRetrieved for TokenStream (langchain4j#1527)
LangChain4j committed Aug 26, 2024
1 parent 3d1c7b8 commit 9842ab6
Showing 2 changed files with 19 additions and 4 deletions.
14 changes: 14 additions & 0 deletions docs/docs/tutorials/7-rag.md
@@ -206,6 +206,20 @@ String answer = result.content();
List<Content> sources = result.sources();
```

When streaming, a `Consumer<List<Content>>` can be specified using the `onRetrieved()` method:
```java
interface Assistant {

TokenStream chat(String userMessage);
}

assistant.chat("How to do Easy RAG with LangChain4j?")
.onRetrieved(sources -> ...)
.onNext(token -> ...)
.onError(error -> ...)
.start();
```

## RAG APIs
LangChain4j offers a rich set of APIs to make it easy for you to build custom RAG pipelines,
ranging from simple ones to advanced ones.
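
The handler bodies in the snippet above are intentionally elided. Purely as an illustration (not part of this commit), a fuller streaming call that wires up all of the `TokenStream` callbacks might look roughly like this; the `assistant` instance is assumed to be built elsewhere, e.g. with `AiServices`, a streaming chat model and a configured content retriever:

```java
import dev.langchain4j.rag.content.Content;
import dev.langchain4j.service.TokenStream;

import java.util.List;

interface Assistant {

    TokenStream chat(String userMessage);
}

class StreamingRagExample {

    // `assistant` is assumed to be created elsewhere, e.g. via AiServices with a
    // streaming chat model and a configured content retriever / retrieval augmentor.
    static void ask(Assistant assistant) {
        assistant.chat("How to do Easy RAG with LangChain4j?")
                // invoked once, before the request to the LLM, with all retrieved contents
                .onRetrieved((List<Content> sources) ->
                        System.out.println("Retrieved " + sources.size() + " contents"))
                // invoked for each streamed token
                .onNext(System.out::print)
                // invoked once when the complete response has been streamed
                .onComplete(response -> System.out.println("\n[done]"))
                // invoked if streaming fails
                .onError(Throwable::printStackTrace)
                // sends the request and starts streaming
                .start();
    }
}
```

`onNext`, `onComplete`, `onError` and `start` are the pre-existing `TokenStream` methods; `onRetrieved()` is the callback this change documents.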
9 changes: 5 additions & 4 deletions .../TokenStream.java
@@ -2,6 +2,7 @@
 
 import dev.langchain4j.data.message.AiMessage;
 import dev.langchain4j.model.output.Response;
+import dev.langchain4j.rag.RetrievalAugmentor;
 import dev.langchain4j.rag.content.Content;
 
 import java.util.List;
@@ -16,10 +17,10 @@ public interface TokenStream {
 
     /**
      * The provided consumer will be invoked when/if contents have been retrieved using {@link RetrievalAugmentor}.
-     *
-     * This method is invoked before any call is made to the language model.
+     * <p>
+     * The invocation happens before any call is made to the language model.
      *
-     * @param contentHandler lambda that consumes all matching contents
+     * @param contentHandler lambda that consumes all retrieved contents
      * @return token stream instance used to configure or start stream processing
      */
     TokenStream onRetrieved(Consumer<List<Content>> contentHandler);
@@ -58,7 +59,7 @@ public interface TokenStream {
 
     /**
      * Completes the current token stream building and starts processing.
-     *
+     * <p>
      * Will send a request to LLM and start response streaming.
      */
     void start();
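
For illustration only (again, not part of this diff), a `contentHandler` passed to `onRetrieved()` could inspect the retrieved contents before the model is called, for example by logging a preview of each underlying text segment. The sketch below assumes that `Content#textSegment()` exposes the retrieved `TextSegment`:

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.rag.content.Content;

import java.util.List;
import java.util.function.Consumer;

class RetrievedContentLogger {

    // A handler suitable for TokenStream.onRetrieved(...): it is called once,
    // before the request is sent to the language model, with all retrieved contents.
    static final Consumer<List<Content>> LOG_SOURCES = contents -> {
        for (Content content : contents) {
            TextSegment segment = content.textSegment(); // assumed accessor for the underlying segment
            // print the length and the first 80 characters of each retrieved segment
            System.out.printf("retrieved %d chars: %.80s%n",
                    segment.text().length(), segment.text());
        }
    };
}
```

Such a handler would be passed as `tokenStream.onRetrieved(RetrievedContentLogger.LOG_SOURCES)` before calling `start()`.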