Commit
chore: Fix typos in proposals (#8837)
triplechecker-com authored Feb 11, 2025
1 parent f9e6e48 commit ad90e10
Showing 6 changed files with 10 additions and 10 deletions.
2 changes: 1 addition & 1 deletion proposals/README.md
@@ -116,5 +116,5 @@ is welcome to post an implementation for review after the proposal has been acce

## Inspiration

- Haystack's proposals design process process owes its inspiration to the [React](https://github.com/reactjs/rfcs) and
+ Haystack's proposals design process owes its inspiration to the [React](https://github.com/reactjs/rfcs) and
[Rust](https://github.com/rust-lang/rfcs) RFC processes. We're open to changing it if needed.
2 changes: 1 addition & 1 deletion proposals/text/4084-agent-demo.md
@@ -233,7 +233,7 @@ sufficient to translate into clinical use. There were other recent demos but the
See https://twitter.com/GlassHealthHQ/status/1620092094034620421 for more details.

- Public Healthcare QA: a bit less risky proposal than the medical QA. We can build a demo that answers questions about
- healthy diet, cooking recipes, vitamines etc. This demo would use almost exactly the same tools as the main demo proposal
+ healthy diet, cooking recipes, vitamins etc. This demo would use almost exactly the same tools as the main demo proposal
and we can potentially switch to this demo if needed.

- Financial Domain (earnings transcript): we can build a demo that answers questions about earnings transcripts. However,
8 changes: 4 additions & 4 deletions proposals/text/4284-drop-basecomponent.md
@@ -17,7 +17,7 @@ Pipelines are the fundamental component of Haystack and one of its most powerful

However, as it currently stands, the `Pipeline` object is also imposing a number of limitations on its use, most of which are likely to be unnecessary. Some of these include:

- - DAGs. DAGs are safe, but loops could enable many more usecases, like `Agents`.
+ - DAGs. DAGs are safe, but loops could enable many more use-cases, like `Agents`.

- `Pipeline` can select among branches, but cannot run such branches in parallel, except for some specific and inconsistent corner cases. For further reference and discussions on the topic, see:
- https://github.com/deepset-ai/haystack/pull/2593
@@ -26,7 +26,7 @@ However, as it currently stands, the `Pipeline` object is also imposing a number

- `Pipeline`s are forced to have one single input and one single output node, and the input node has to be called either `Query` or `Indexing`, which softly forbids any other type of pipeline.

- - The fixed set of allowed inputs (`query`, `file_paths`, `labels`, `documents`, `meta`, `params` and `debug`) blocks several usecases, like summarization pipelines, translation pipelines, even some sort of generative pipelines.
+ - The fixed set of allowed inputs (`query`, `file_paths`, `labels`, `documents`, `meta`, `params` and `debug`) blocks several use-cases, like summarization pipelines, translation pipelines, even some sort of generative pipelines.

- `Pipeline`s are often required to have a `DocumentStore` _somewhere_ (see below), even in situation where it wouldn't be needed.
- For example, `Pipeline` has a `get_document_store()` method which iterates over all nodes looking for a `Retriever`.
@@ -728,7 +728,7 @@ Other features planned for addition are:

Parameters can be passed to nodes at several stages, and they have different priorities. Here they're listed from least priority to top priority.

- 1. **Node's default `__init__` parameters**: nodes's `__init__` can provide defaults. Those are used only if no other parameters are passed at any stage.
+ 1. **Node's default `__init__` parameters**: node's `__init__` can provide defaults. Those are used only if no other parameters are passed at any stage.
2. **Node's `__init__` parameters**: at initialization, nodes might be given values for their parameters. These are stored within the node instance and, if the instance is reused in the pipeline several times, they will be the same on all of them
3. **Pipeline's `add_node()`**: When added to the pipeline, users can specify some parameters that have to be given only to that node specifically. They will override the node instance's parameters, but they will be applied only in that specific location of the pipeline and not be applied to other instances of the same node anywhere else in the graph.
4. **Pipeline's `run()`**: `run()` also accepts a dictionary of parameters that will override all conflicting parameters set at any level below, quite like Pipeline does today.
@@ -947,4 +947,4 @@ These changes are going to be release with Haystack 1.x in a hidden internal pac
We will progressively add nodes to this `haystack.v2` package and build a folder structure under it (`haystack.v2.nodes`, `haystack.v2.stores`, ...) version after version, until we believe the content of the package is usable. Documentation will be built in parallel and we will progressively start pushing users towards the 2.0 API.
Power users like dC and other Haystack experts will be able to test out these changes from the start and provide feedback while still in Haystack 1.x.

- Once we're confident that the v2 version covers all of Haystack v1.x usecases, Haystack 2.0 will be released and the packages are going to be switched: the content of `haystack` will be moved into `haystack.v1` and deprecated, and the content of `haystack.v2` will me moved under `haystack`. A few 2.x versions later, `haystack.v1` will then be dropped.
+ Once we're confident that the v2 version covers all of Haystack v1.x use-cases, Haystack 2.0 will be released and the packages are going to be switched: the content of `haystack` will be moved into `haystack.v1` and deprecated, and the content of `haystack.v2` will me moved under `haystack`. A few 2.x versions later, `haystack.v1` will then be dropped.
4 changes: 2 additions & 2 deletions proposals/text/4370-documentstores-and-retrievers.md
@@ -8,7 +8,7 @@

Haystack's Document Stores are a very central component in Haystack and, as the name suggest, they were initially designed around the concept of `Document`.

- As the framework grew, so did the number of Document Stores and their API, until the point where keeping them aligned aligned on the same feature set started to become a serious challenge.
+ As the framework grew, so did the number of Document Stores and their API, until the point where keeping them aligned on the same feature set started to become a serious challenge.

In this proposal we outline a reviewed design of the same concept.

@@ -101,7 +101,7 @@ class MyDocumentStore:

The contract is quite narrow to encourage the use of specialized nodes. `DocumentStore`s' primary focus should be storing documents: the fact that most vector stores also support retrieval should be outside of this abstraction and made available through methods that do not belong to the contract. This allows `Retriever`s to carry out their tasks while avoiding clutter on `DocumentStore`s that do not support some features.

- Note also how the concept of `index` is not present anymore, as it it mostly ES-specific.
+ Note also how the concept of `index` is not present anymore, as it is mostly ES-specific.

For example, a `MemoryDocumentStore` could offer the following API:

2 changes: 1 addition & 1 deletion proposals/text/5540-llm-support-2.0.md
@@ -186,7 +186,7 @@ Such template names **cannot be changed at runtime**.

The design above derives from one Canals limitation: component’s sockets need to be all known the latest at `__init__` time, in order for the connections to be made and validated. Therefore, we need to know all the prompt variables before building the pipelines, because the prompt variables are inputs of the `run()` method.

- However, earlier iterations of Canals did support so-called “true variadic” components: components that do not need to know what they will be connected to, and build the input sockets at need. Such components of course lack input validation, but enable usecases like the above.
+ However, earlier iterations of Canals did support so-called “true variadic” components: components that do not need to know what they will be connected to, and build the input sockets at need. Such components of course lack input validation, but enable use-cases like the above.

If we decide that Canals should support again such components, we would be able to rewrite `PromptBuilder` to take a prompt as its input parameter and just accept any other incoming input, on the assumption that users knows that they’re doing.

2 changes: 1 addition & 1 deletion proposals/text/6784-integrations-for-eval-framworks.md
@@ -41,7 +41,7 @@ Other LLM application frameworks such as LlamaIndex already provide support for

As with evaluation in Haystack 1.x, we reaffirm the core idea of implementing different pipelines for different concerns. We consider evaluation a separate process and consequently separate the execution of RAG and the metric calculation into two different pipelines. This allows for greater flexibility - for instance, the evaluation pipeline could contain an additional component that routes the inputs to different evaluator components based on certain criteria, etc. Another example would be the ability to convert the inputs from/to different formats before passing them to the evaluator.

- A further advantage of this approach is that any tool we develop in the future to facilitate introspection and observability of pipelines can transparently be appled to evaluation as well.
+ A further advantage of this approach is that any tool we develop in the future to facilitate introspection and observability of pipelines can transparently be applied to evaluation as well.

The implementation of the three evaluator components should follow the general guidelines for custom component development. There are two approaches we could take:


0 comments on commit ad90e10