Vector embeddings for each word in the word list, generated with OpenAI's text-embedding-ada-002 model.
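As a rough sketch, a single word can be embedded with the OpenAI Node SDK (the model name matches the one used here; the surrounding setup is an assumption):

```js
// Sketch: embed one word with the OpenAI Node SDK (v4-style client).
// Assumes OPENAI_API_KEY is set in the environment.
import OpenAI from 'openai';

const openai = new OpenAI();

async function embedWord(word) {
  const res = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: word,
  });
  // ada-002 returns a 1536-dimensional vector.
  return res.data[0].embedding;
}
```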
lookup.json is a simple mapping from word to embeddings filename and index. This is useful if you want to compare an embedding against specific words.
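A minimal sketch of using lookup.json to pull a stored vector, assuming each entry looks like `{ file, index }` (check lookup.json itself for the real field names):

```js
// Sketch: find the stored embedding for a specific word via lookup.json.
// The { file, index } field names are an assumption; inspect lookup.json for the real shape.
import fs from 'node:fs';

const lookup = JSON.parse(fs.readFileSync('lookup.json', 'utf8'));

function getEmbedding(word) {
  const entry = lookup[word]; // e.g. { file: 'embeddings/0.json', index: 123 } (assumed)
  if (!entry) return null;
  const chunk = JSON.parse(fs.readFileSync(entry.file, 'utf8'));
  return chunk[entry.index].embedding;
}
```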
You can use a vector DB like Chroma to handle search. I would not recommend vectra for an embeddings list this large, as it must all be held in memory at once.
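A sketch of loading the word vectors into Chroma and querying them, assuming a local Chroma server and the `chromadb` JS client (collection name and helper shape are assumptions):

```js
// Sketch: index word vectors in Chroma and run a nearest-neighbour query.
// Assumes a Chroma server is running locally and `chromadb` is installed.
import { ChromaClient } from 'chromadb';

const client = new ChromaClient();
const collection = await client.getOrCreateCollection({ name: 'words' });

// `words` is an array of { word, embedding } objects loaded from /embeddings.
async function indexWords(words) {
  await collection.add({
    ids: words.map((w) => w.word),
    embeddings: words.map((w) => w.embedding),
    documents: words.map((w) => w.word),
  });
}

async function nearest(embedding, n = 10) {
  const res = await collection.query({ queryEmbeddings: [embedding], nResults: n });
  return res.ids[0]; // words closest to the query vector
}
```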
To compare two embeddings you can calculate their cosine similarity, either by hand or with a library such as compute-cosine-similarity.
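Cosine similarity is just the dot product of the two vectors divided by the product of their magnitudes, so a plain implementation is short:

```js
// Cosine similarity: dot(a, b) / (|a| * |b|).
// 1 means the vectors point the same way, 0 means they are orthogonal.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```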
For more efficient comparisons across many vectors you can use approximate nearest-neighbour (ANN) algorithms.
For visualising parts of the space you can use t-SNE or related dimensionality-reduction algorithms.
You can interpolate between embeddings to find words which lie between other words, and take the mean of several embeddings to try to isolate a shared feature.
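As a sketch, interpolation and averaging are simple element-wise operations on the vectors; the result can then be compared against every word with cosine similarity to find the closest words:

```js
// Sketch: linear interpolation between two embeddings (t = 0..1),
// and the element-wise mean of several embeddings.
function lerp(a, b, t) {
  return a.map((v, i) => v * (1 - t) + b[i] * t);
}

function mean(vectors) {
  const out = new Array(vectors[0].length).fill(0);
  for (const vec of vectors) {
    for (let i = 0; i < vec.length; i++) out[i] += vec[i];
  }
  return out.map((v) => v / vectors.length);
}
```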
There are also Python implementations of reversing embeddings back into text.
There is a series of embeddings files in /embeddings which are in order and contain an array of objects in the following format:
```js
[
  { word: '', embedding: [] },
]
```
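A minimal sketch of loading all of the files into one array, assuming they are plain JSON; the sort may need adjusting if the filenames are numbered:

```js
// Sketch: load every embeddings file in /embeddings into a single array,
// preserving file order. Assumes each file is a JSON array in the format above.
import fs from 'node:fs';
import path from 'node:path';

function loadAllEmbeddings(dir = 'embeddings') {
  const files = fs
    .readdirSync(dir)
    .filter((f) => f.endsWith('.json'))
    .sort(); // lexicographic; adjust if filenames are numeric
  return files.flatMap((f) =>
    JSON.parse(fs.readFileSync(path.join(dir, f), 'utf8'))
  );
}
```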
wordpos can be used to attach metadata such as part of speech (noun, verb, adjective, adverb), synonyms, definitions, etc.
In the /metadata folder are noun, verb, adjective and adverb lists and the word lookup. Not all words are known by wordpos.
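A rough sketch of pulling this metadata for a word with wordpos (the exact result field names, such as `def` and `synonyms`, may differ slightly between wordpos versions):

```js
// Sketch: fetch part-of-speech flags, definitions and synonyms for a word with wordpos.
// Result field names (def, synonyms) may vary between wordpos versions.
import WordPOS from 'wordpos';

const wordpos = new WordPOS();

async function describe(word) {
  const [isNoun, isVerb, isAdjective, isAdverb, entries] = await Promise.all([
    wordpos.isNoun(word),
    wordpos.isVerb(word),
    wordpos.isAdjective(word),
    wordpos.isAdverb(word),
    wordpos.lookup(word),
  ]);
  return {
    word,
    isNoun, isVerb, isAdjective, isAdverb,
    definitions: entries.map((e) => e.def),
    synonyms: entries.flatMap((e) => e.synonyms),
  };
}
```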
I think some interesting things can be done by filtering words based on their properties.
knn (k-nearest neighbours) can be used for classification.
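As a sketch, a k-nearest-neighbours classifier over these embeddings just ranks labelled example vectors by cosine similarity and takes a majority vote (using the cosineSimilarity helper shown earlier; the `labelled` input shape is an assumption):

```js
// Sketch: classify an embedding by majority vote among its k nearest labelled neighbours.
// `labelled` is an array of { embedding, label }; cosineSimilarity is defined above.
function knnClassify(embedding, labelled, k = 5) {
  const votes = {};
  labelled
    .map((ex) => ({ label: ex.label, score: cosineSimilarity(embedding, ex.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .forEach((ex) => { votes[ex.label] = (votes[ex.label] || 0) + 1; });
  // Return the label with the most votes among the k nearest neighbours.
  return Object.entries(votes).sort((a, b) => b[1] - a[1])[0][0];
}
```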