I have looked for existing issues (including closed) about this
Feature Request
Following on from #268, we should try to support sparse text model embeddings.
This would let users implement things like hybrid search for RAG, combining sparse lexical scores with dense semantic scores.
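For context on the hybrid search use case, here is a minimal, self-contained sketch of how a sparse score and a dense score could be fused at query time. Everything in it (the `SparseVector` struct, the `alpha` weighting) is illustrative only and is not part of rig, Fastembed, or bm42-rs.

```rust
/// A sparse vector stored as parallel (index, weight) pairs for the non-zero dimensions.
struct SparseVector {
    indices: Vec<u32>,
    values: Vec<f32>,
}

/// Dot product of two sparse vectors. Assumes `indices` are sorted ascending.
fn sparse_dot(a: &SparseVector, b: &SparseVector) -> f32 {
    let (mut i, mut j, mut score) = (0usize, 0usize, 0.0f32);
    while i < a.indices.len() && j < b.indices.len() {
        match a.indices[i].cmp(&b.indices[j]) {
            std::cmp::Ordering::Less => i += 1,
            std::cmp::Ordering::Greater => j += 1,
            std::cmp::Ordering::Equal => {
                score += a.values[i] * b.values[j];
                i += 1;
                j += 1;
            }
        }
    }
    score
}

/// Cosine similarity of two dense vectors.
fn dense_cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}

/// Weighted fusion of a dense and a sparse score (`alpha` in [0, 1]).
fn hybrid_score(dense: f32, sparse: f32, alpha: f32) -> f32 {
    alpha * dense + (1.0 - alpha) * sparse
}

fn main() {
    let q_sparse = SparseVector { indices: vec![3, 17, 42], values: vec![0.8, 0.5, 1.2] };
    let d_sparse = SparseVector { indices: vec![17, 42, 99], values: vec![0.4, 0.9, 0.3] };
    let q_dense = [0.1, 0.3, 0.5];
    let d_dense = [0.2, 0.1, 0.6];

    let score = hybrid_score(
        dense_cosine(&q_dense, &d_dense),
        sparse_dot(&q_sparse, &d_sparse),
        0.7,
    );
    println!("hybrid score: {score}");
}
```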
Motivation
Sparse embeddings work well for keyword-oriented retrieval and other lexical-matching tasks, and they complement dense embeddings in information search.
Because only the non-zero dimensions need to be stored, they are also compact and cheap storage-wise.
Proposal
Unfortunately, Fastembed does not support sparse text models out of the box, despite apparently having the functionality in the library; you have to create your own.
This issue, however, points to the bm42-rs library, which does support them, so we could implement it that way if we are OK with adding another dependency. (We'll likely need to do some exploratory work to figure this part out; a rough sketch of a possible API shape follows below.)
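To make the exploratory work slightly more concrete, here is a rough sketch of the shape a sparse embedding type and model trait could take. To be clear, all of these names are hypothetical and do not correspond to anything in rig, Fastembed, or bm42-rs today; whichever backend we pick would sit behind something like this.

```rust
/// A sparse embedding keeps only its non-zero dimensions.
/// Hypothetical type; not part of any existing rig API.
pub struct SparseEmbedding {
    /// Token / vocabulary indices of the non-zero dimensions.
    pub indices: Vec<u32>,
    /// Weights corresponding to each index.
    pub values: Vec<f32>,
}

/// Hypothetical sparse counterpart to a dense embedding-model trait.
pub trait SparseEmbeddingModel {
    type Error: std::error::Error + Send + Sync;

    /// Embed a batch of documents into sparse vectors.
    fn embed_sparse(
        &self,
        documents: Vec<String>,
    ) -> impl std::future::Future<Output = Result<Vec<SparseEmbedding>, Self::Error>> + Send;
}
```

A provider backed by bm42-rs (or a hand-rolled Fastembed sparse model) would then implement this trait, and hybrid search could consume `SparseEmbedding` values alongside the existing dense embeddings.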
Alternatives
Not sure?