Revised VSS RFC #15

Open · wants to merge 6 commits into base: main
Conversation

allenss-amazon (Member)
Replaces and extends the previous PR for VSS module.

Signed-off-by: Allen Samuels <[email protected]>
@madolson (Member) left a comment
Other open questions that we should document for completeness:

  1. Will ACLs be supported?
  2. Is active defragmentation supported?
  3. Is the RDB compatible with redis search?
  4. I assume this is a subset of redis search functionality, but it would be good to document that along with what is missing.
  5. How will slot migration work, if at all? Will we also copy indexes between nodes when using the cli?
  6. Are any core changes required? We talked about memory sharing and some new keyspace notifications.
  7. How does redirecting read requests to the primary work in cluster mode? Can you force strongly consistent reads? By the consistency model, do we care?
  8. Are we allowing search in Lua and multi-exec? Will it cause latency issues? Does redis allow it?

VSS.md Outdated

The command returns either an array if successful or an error.

If `NOCONTENT` is specified, then the output is .....
Member
It's a mystery?


Suggested changes:

On success, the first entry in the response array is the total number of matching elements, followed by one array entry for each returned element. Note that the number of returned entries may be smaller than the total captured in the first entry when the LIMIT option is specified.

When NOCONTENT is specified, each entry in the response contains only the matching keys. Otherwise, each entry includes the matching key, followed by an array of the returned fields.

VSS.md Outdated

If `NOCONTENT` is specified, then the output is .....

If `NOCONTENT` is not specified, then the output is an array of (2\*N)+1 entries, where N is the number of keys output from the search. The first entry in the array is the value N which is followed by N pairs of entries, one per key found. Each pair of entries consists of the key name followed by an array which is the result value for that key.
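As an illustration of the reply shape described above, here is a small parsing sketch (a hypothetical helper, assuming a RESP client that hands back the reply as a nested Python list):

```python
def parse_ft_search_reply(reply, nocontent=False):
    """Parse an FT.SEARCH reply shaped as described above:
    [N, key1, fields1, key2, fields2, ...], or [N, key1, key2, ...]
    when NOCONTENT was specified."""
    total = reply[0]  # first entry: the number of keys found (N)
    if nocontent:
        return total, list(reply[1:])
    results = {}
    for i in range(1, len(reply), 2):
        key, fields = reply[i], reply[i + 1]
        # fields is a flat [name, value, ...] array; fold it into a dict
        results[key] = dict(zip(fields[::2], fields[1::2]))
    return total, results

total, results = parse_ft_search_reply(
    [2, "doc:1", ["score", "0.9"], "doc:2", ["score", "0.7"]])
```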
Member
The protocol has n built in, why is it returned as the first argument?

Right, see my suggested changes above.

Member Author
Compatibility with RediSearch.

Member
So, we have no idea why RedisSearch added it?


1\. **reader-threads:** (Integer) Controls the number of threads executing queries.
2\. **writer-threads:** (Integer) Controls the number of threads processing index mutations.
3\. **use-coordinator:** (Boolean) Enables cluster mode.
@madolson (Member) commented Dec 2, 2024

There isn't any discussion of the coordinator here, is that in scope for the initial version?

Yes, it's part of the initial version and it's the way to enable cluster mode.

Member Author
Yes, will add.

Member
Followup, is there a reason to disable the coordinator for cluster mode? Should it just automatically get enabled for cluster mode?

VSS.md Outdated
- **search\_total\_indexed\_hash\_keys** (Integer) Total count of HASH keys for all indexes
- **search\_number\_of\_indexes** (Integer) Index schema total count
- **search\_number\_of\_attributes** (Integer) Total count of attributes for all indexes
- **search\_failure\_requests\_count** (Integer) A count of all failed requests
Member
What is the definition of a failed request? One that errors?

This captures any ft.search runtime errors including invalid syntax.

Member
Yeah, I think typically those are called errors throughout the codebase.

Right, but this one is specific to ft.search.

Member
That's not my point. My point is that, at least in Valkey, we call errors the equivalent of 400s (you sent a bad request) and failures the equivalent of 500s (we couldn't handle it). Are these errors or failures? Secondly, do we really need error counts for each search request? Why are these requests special?

I ask because none of the other modules do this (nor does the core). If we are going to be a singular project, we should have some consistency between core commands and module commands.

- **search\_add\_subscription\_successful\_count** (Integer) Count of successfully added subscriptions
- **search\_add\_subscription\_failure\_count** (Integer) Count of failures of adding subscriptions
- **search\_add\_subscription\_skipped\_count** (Integer) Count of skipped subscription adding processes
- **search\_modify\_subscription\_failure\_count** (Integer) Count of failed subscription modifications
Member
Just making one comment, I don't really understand what most of these info fields are supposed to tell end users. Maybe document them like it's for our public documentation?

Member
Still not sure what these subscriptions are, they are only mentioned here.


- **\<index\>** (required): The name of the index you want to query.
- **\<query\>** (required): The query string, see below for details.
- **NOCONTENT** (optional): When present, only the resulting key names are returned, no key values are included.

Missing:
RETURN (optional): Specifies the fields you want to retrieve from your documents, along with any aliases for the returned values. By default, all fields are returned unless the NOCONTENT option is set, in which case no fields are returned. If num is set to 0, it behaves the same as NOCONTENT.





@yairgott commented Dec 3, 2024

Other open questions that we should document for completeness:

  1. Will ACLs be supported?

ACLs are not supported in the current version.

  2. Is active defragmentation supported?

Memory allocated by the module is not subject to active defragmentation.

  3. Is the RDB compatible with redis search?

No, the RDB format is not compatible with RediSearch. The module utilizes a proprietary RDB format based on protobuf, which serializes both index metadata and content. In contrast, RediSearch's RDB format only serializes the index metadata.

  4. I assume this is a subset of redis search functionality, but it would be good to document that along with what is missing.

Yes, we should have a section which enumerates key missing functionality as bullet points to provide clarity.

  5. How will slot migration work, if at all? Will we also copy indexes between nodes when using the cli?

Slot migration is supported. Indexes are cluster level concepts so they are already replicated across the cluster, and not contained within a slot. Keys from the migrated slot trigger keyspace mutation events, which are processed in the same manner as client-originated mutations, except that they do not block the client during execution.

  6. Are any core changes required? We talked about memory sharing and some new keyspace notifications.

While the module can function without engine changes, two engine changes have been introduced to enhance user experience:

  1. Blocking client on keyspace mutation events: Ensures visibility of mutations in consecutive queries on the same connection. This change extends the behavior of the module API RedisModule_BlockClient.
  2. Memory deduplication: Adds new module APIs to enable memory sharing between the module and the engine, reducing duplication.
  7. How does redirecting read requests to the primary work in cluster mode? Can you force strongly consistent reads? By the consistency model, do we care?

Valkey doesn't support distributed transactions so I don't see any consistency guarantee with scatter-gather across multiple shards.
Additional context: In cluster mode, a query can be received by any node, which fans out the request to all shards. Each shard processes the query locally and returns results to the initiating node, which selects the top responses across shards. Currently, forcing reads from primary nodes only is not supported. The current instance-selection logic is designed to improve performance, increasing throughput and lowering latency by load balancing between the instances.
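The fan-out flow described here can be sketched roughly as follows (illustrative only; `StubShard` stands in for the per-shard gRPC call, and a smaller distance is assumed to mean a better match):

```python
from concurrent.futures import ThreadPoolExecutor

class StubShard:
    """Stand-in for a remote shard reached over gRPC."""
    def __init__(self, pairs):
        self.pairs = pairs  # the shard's local (key, distance) contents

    def local_topk(self, query, k):
        # Each shard answers with its own local top-k for the query.
        return sorted(self.pairs, key=lambda kv: kv[1])[:k]

def cluster_query(shards, query, k):
    # The receiving node fans the query out to every shard in parallel...
    with ThreadPoolExecutor() as pool:
        per_shard = list(pool.map(lambda s: s.local_topk(query, k), shards))
    # ...then selects the global top responses across all shard replies.
    merged = [pair for results in per_shard for pair in results]
    return sorted(merged, key=lambda kv: kv[1])[:k]

shards = [StubShard([("doc:a", 0.3), ("doc:b", 0.1)]),
          StubShard([("doc:c", 0.2)])]
top2 = cluster_query(shards, query=None, k=2)
```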

  8. Are we allowing search in Lua and multi-exec? Will it cause latency issues? Does redis allow it?

  • Multi-exec: Supported, but it introduces significant performance impacts. Mutation processing cannot be parallelized, and queries inside a multi-exec context must wait for preceding mutations to complete.
  • Lua scripts: The main constraint is that the engine doesn't allow the client to be blocked when a command is issued from a Lua script. Long term, it makes sense to address this in the engine, but in the meantime the current state is as follows:
    - Mutation processing: mutations are processed but clients are not blocked.
    - Queries: issuing ft.search in the context of a Lua script results in an error because the client cannot be blocked.

@yairgott commented Dec 3, 2024

Made some inline edits to bullet #8 regarding Lua.

allenss-amazon and others added 3 commits December 5, 2024 06:39
Adding gaps relative to RediSearch section

Signed-off-by: yairgott <[email protected]>
Fixing heading of the 'Unsupported knobs and control' section

Signed-off-by: yairgott <[email protected]>
**Search Operations CUJ** \- As a user, I want to perform search operations using commonly available clients.

- Requirement: support index insertion/mutation/deletion/query operations
- Requirement: maintain compatibility with RediSearch VSS APIs, Memorystore APIs, and MemoryDB APIs as much as possible
RedisSearch API is weird. Pinecone/Milvus API is more natural

# create
pinecone.create_index(name=index_name, dimension=dimension)
# insert
index = pinecone.Index(index_name)
index.upsert(vectors)
# search
results = index.query(queries=[query_vector], top_k=5)

I believe developers prefer the latter.

Member
So to clarify the difference:

  1. In the current implementation, you insert keys with other commands and then create an index on top of a prefix structure.
  2. In the pinecone world, you create an index and then insert objects into the index explicitly.

?

Member Author
In the proposed implementation, once an index is created, it always reflects the current state of the keys that the index covers. This means that the order of key insertion vs index creation is arbitrary, i.e., you can do them in any order.

I agree that the API is not intuitive and has room for improvement. However, we've opted for RediSearch compatibility to facilitate easier adoption. In the long run, though, we should explore designing a more user-friendly API.


The following metrics are added to the INFO command.

- **search\_total\_indexed\_hash\_keys** (Integer) Total count of HASH keys for all indexes
Member
Should there be a corresponding JSON variant?

Comment on lines +416 to +420
- **search\_hnsw\_create\_exceptions\_count** (Integer) Count of HNSW creation exceptions.
- **search\_hnsw\_search\_exceptions\_count** (Integer) Count of HNSW search exceptions.
- **search\_hnsw\_remove\_exceptions\_count** (Integer) Count of HNSW removal exceptions.
- **search\_hnsw\_add\_exceptions\_count** (Integer) Count of HNSW addition exceptions.
- **search\_hnsw\_modify\_exceptions\_count** (Integer) Count of HNSW modification exceptions.
Member
Still unclear, what are exceptions? How do these fail, and what are end users supposed to do about them? Are these just syntax errors?




This means that for data mutation operations, no cross-cluster communication is required; each node simply updates its local index to reflect the mutation of the local key.

Query operations are accepted by any node in the cluster, and that node is responsible for broadcasting the query across the cluster and merging the results for delivery to the client.
Cross-node communication uses gRPC and protobufs and does not require mainthread interaction.
Member
Is this on a separate port? Does this need to be configurable so that end users can slot it into their system?

Contributor
This communication needs to be explained in some more depth. How do the nodes exchange the grpc-port with each other, over the cluster bus?

Is there already some module API for modules to use the cluster bus to exchange this kind of information or do we have to add it?

removal of links to redis.io

Signed-off-by: yairgott <[email protected]>
@zuiderkwast (Contributor) left a comment
I've read the RFC and it's interesting. I don't know much about vector similarity search, so the introduction and motivation was very useful.

I didn't review the individual commands. I assume those are similar to the redis and other APIs and I don't know enough about it, but I put some comments about the specification, especially the communication with other nodes and data persistence, etc. I think it needs some more details.

I'm also missing some discussion about what happens in case of failover, corner cases like failover during a distributed search, outdated data, etc. whatever can happen. Are there race conditions regarding if keys are modified at the same time as a search is started? What about consistency in a distributed search, are there any guarantees?

Is a global cross-slot search possible or is everything per cluster slot or per shard? Implicitly, it seems to be global search, but I think this should be mentioned because with most of the existing commands, like SUNION, cross-slot operations is not allowed, while in other commands like SCAN, keys from multiple slots can be returned in the same call.

Regarding the gRPC communication, can you explain the RPC calls you're using, if not in detail then at least in some pseudo-code style?

Comment on lines +1 to +3
## RFC: 8

## Status: Proposed
Contributor
This should be markdown metadata (frontmatter) and the RFC number should be the PR number.

Suggested change, from:

## RFC: 8
## Status: Proposed

to:

---
RFC: 15
Status: Proposed
---


The creation of an index initiates a backfill process in the background that will scan the keyspace and insert all covered keys into the newly created index. Mutations of a key are automatically reflected in the index. This process is automatic after the creation of the index. Mutation of keys is explicitly allowed during the backfill process. The current state of a backfill may be interrogated by a client using the FT.INFO command.

A query operation is a two step process. In the first step, a query is executed that generates a set of keys and distances. In the second step, that list of keys and distances is used to generate a response.
Contributor
This section is about how the data is represented and stored. I'm missing information about how all of this is stored in RDB files, AOF files, replicated to replicas, and handled in slot migration.

What are "keys and distances"?

  • Are they temporary data used only to produce one response and then discarded?
  • Are they stored as actual keys in the database? If not, it should be clarified, because when the word "keys" is used, it can be confusing what it refers to.
  • Are these keys and distances stored in RDB, AOF and replicated?
  • Are they sharded or local to one node?


Comment on lines +102 to +103
In cluster mode, indexes are sharded just like data, meaning that each node maintains an index only for the keys that are present on that node.
This means that for data mutation operations, no cross-cluster communication is required; each node simply updates its local index to reflect the mutation of the local key.
Contributor
Does this mean that indexes don't need to be replicated? Please elaborate and mention something about these cases:

  • Are indexes replicated and stored in RDB and AOF files?
  • Do the indexes not need to be replicated, because the replica builds its own index from the keys it receives from the primary?
  • The same questions about slot migration: Do the indexes need to be moved with slot migration or does the importing node just build its own index with the keys it receives from the migrating node?

Member Author
Commands that manipulate the index metadata (FT.CREATE, FT.DROP, ...) are replicated like any other mutation command. The contents of the index are not replicated; each node is expected to maintain its own indexes derived from its own local copy of the keys. Save/Restore of the per-field indexes is optional, as they can be rebuilt at load time. The current implementation elects to save/restore vector indexes because the cost of rebuilding those is so high.

Indexes are not built at the per-slot level, but rather at the per-shard level. Thus slot migration cannot move indexes. Rather, slot migration is simply treated as a per-key index update operation, i.e., delete from one shard and insert into another shard, just like standard slot migration.

Because query operations are broadcast to all shards, individual shards will perform their portion of the query at different points in time. This creates the situation where any key that's in the process of being migrated while the shards are being interrogated could be present in 0, 1 or 2 shard results (depending on the timing). The duplication case is easily handled during the merging of the per-shard results. The 0 occurrence case presents as a degradation of recall, not a correctness issue.
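The merge-time handling of duplicated keys described above might look like this sketch (hypothetical names, not the module's actual code; assumes a smaller distance is better):

```python
def merge_shard_results(shard_results, k):
    """Merge per-shard (key, distance) lists into a global top-k.

    A key whose slot is migrating while the shards are queried may
    appear in more than one shard's results; keep one occurrence,
    preferring the smaller distance."""
    best = {}
    for results in shard_results:
        for key, distance in results:
            if key not in best or distance < best[key]:
                best[key] = distance
    # Rank the deduplicated candidates and keep the top k.
    return sorted(best.items(), key=lambda kv: kv[1])[:k]

merged = merge_shard_results(
    [[("doc:1", 0.12), ("doc:2", 0.50)],
     [("doc:1", 0.12), ("doc:3", 0.25)]],  # doc:1 reported by two shards
    k=3)
```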

Comment on lines +77 to +79
**ACLs CUJ** \- As a user of Access Control Lists (ACLs), I want to manage user-based permissions, specific command permissions, etc. to manage vector search access

- Requirement: Existing ACLs functionality should extend to vector search commands
Contributor
I'm missing some details about ACL.

  • ACL per command automatically extends to module commands, so we get these automatically, right?
  • Does the module create a new ACL category for the VSS commands? (There's module API for that, IIRC.)
  • ACL rules per key pattern, do these extend to searches? How/why not?

Member Author
None of the search commands actually have keys on the command, so the existing built-in ACL support for modules doesn't apply. Separately AWS has implemented module extensions for ACLs for search. This will be contributed, but under a separate RFC.

Contributor
ACL is not only about keys. You can allow and disallow commands:

   • +<command>: Add the command to the list of commands the user can call.  Can be used with | for allowing subcommands (e.g “+config|get”).

   • -<command>: Remove the command from the list of commands the user can call. Starting with Valkey 7.0, it can be used with | for blocking subcommands (e.g. “-config|set”).

Right, module-added commands are already supported by ACL. We will flesh out the details of how ACL will work in a separate RFC.

Member
Can you flesh it out here? We want a single RFC for VSS, this is our main point of alignment and it's called out in the template as a section we would like people to address.

Member Author
I think this is covered in the next comment. An extension to the existing module interface is required, should that be proposed here or in a separate RFC for the core?

FT.SEARCH index_name "@price:[min max] @tags:{sport} @title:basketball"
```

- ACLs: allowing query and index operations to respect the permissions configured for each user.
Contributor
Is this about the ACL rules for key patterns?

What's the idea here? Should users be able to search only for keys they have access to read?

Technically, how could this work? Search all keys first and then filter before returning the results to the client?

Member Author
The security model that AWS implemented directly supports the prefix-based definition of indexes. This implementation operates at the start of a command and checks if the current user has read access to any of the keys that MIGHT be in the index, i.e., current user must have read access to 100% of the keyspace covered by the PREFIX clause of the creation of the index. Once that check passes, no additional security check would be performed.

hmm, I think that we should discuss this. I do have some reservations about this approach because ACLs might be changed after an index is created and ft.search should still honor the ACLs.

Member Author
Perhaps my explanation was poorly worded. The ACL checks operate on every command. My comment about "no additional security check would be performed" meant to say that there's no key-level access checking. An individual user either has 100% or 0% access to an index which is determined right at the start of the command.

@hwware (Member) commented Dec 13, 2024

Can we close RFC #8?

A query operation is a two step process. In the first step, a query is executed that generates a set of keys and distances. In the second step, that list of keys and distances is used to generate a response.

The first step is performed in a background thread, while the second step is performed by the mainthread. If a key produced by step 1 is mutated before step 2 is executed by the mainthread, the result of the query may not be consistent with the latest state of the keys.

Member
Can we tolerate this inconsistency?

I believe the term "consistent" might be misleading in this context. It's important to clarify that there is no consistency issue. Rather, the situation involves search results being omitted or excluded from the search response. This occurs only when a query includes unindexed fields in the specified returned fields, and some search result entries are modified during the background thread's execution, causing them to no longer match the query filter.

Member Author
In CMD mode, the application could use Lua or MULTI/EXEC to avoid this issue, with the attendant reduction in throughput.

For CME, the question is more interesting, as there is no attempt to synchronize the per-shard queries. Thus, while it may be possible to avoid this issue in the originating shard (e.g., with the Lua or MULTI/EXEC approach described above), the application will still have to deal with the fact that the returned results might be out of date for all of the non-originating shards.


- **NUMERIC**: Field contains a number.

- **VECTOR**: Field contains a vector. Two vector indexing algorithms are currently supported: HNSW (Hierarchical Navigable Small World) and FLAT (brute force). Each algorithm has a set of additional attributes, some required and others optional.
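As an illustrative sketch of an index combining both field types (index name, prefix, and field names are invented; the attribute count `6` counts the name/value tokens that follow, per the RediSearch-style grammar the RFC adopts — treat the exact syntax as an assumption):

```
FT.CREATE myIndex ON HASH PREFIX 1 doc: SCHEMA
    price NUMERIC
    embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 128 DISTANCE_METRIC COSINE
```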
Member

Do we have a chance to support GRAPH_PQ (a combination of the HNSW and PQ algorithms), IVF_GRAPH (a combination of IVF and HNSW), and IVF_GRAPH_PQ (a combination of the PQ algorithm with IVF or HNSW)? These algorithms can handle huge numbers of records (from hundreds of millions to more than 1 billion per shard).

Member Author

We've taken the stance of putting into the RFC the functionality that's being contributed -- which doesn't include these particular algorithms. But with that said, there's no reason that this query architecture can't easily be extended to support these algorithms in the future.

- **FLAT:** The Flat algorithm provides exact answers, but has runtime proportional to the number of indexed vectors and thus may not be appropriate for large data sets.
- **DIM \<number\>** (required): Specifies the number of dimensions in a vector.
- **TYPE FLOAT32** (required): Data type, currently only FLOAT32 is supported.
- **DISTANCE\_METRIC \[L2 | IP | COSINE\]** (required): Specifies the distance metric.
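For intuition, here is a plain-Python sketch of what the three metrics compute (illustration only; the module's exact score conventions — e.g. squared vs. plain L2, or reporting 1 minus IP/cosine similarity — are assumptions, not confirmed by the RFC):

```python
import math

# Plain-Python sketch of the three distance metrics.

def l2(a, b):
    # Euclidean (L2) distance: smaller means closer.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ip(a, b):
    # Inner product: larger means closer, so engines often report 1 - IP.
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine similarity: 1.0 for parallel vectors; engines often report
    # 1 - similarity as a distance.
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return ip(a, b) / (na * nb)

u, v = [1.0, 0.0], [0.0, 1.0]
print(l2(u, v))      # sqrt(2)
print(ip(u, v))      # 0.0
print(cosine(u, v))  # 0.0 (orthogonal)
```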
Member

Do we have a chance to support the Hamming distance?

Member Author

Currently, there's only support for the 32-bit floating point datatype. Hamming distance has no meaning for this datatype. As described above, when binary-based vectors get supported in some future implementation I would assume that Hamming distance would be supported for those algorithms.


```
<filtering>=>[ KNN <K> @<vector_field_name> $<vector_parameter_name> <query-modifiers> ]
```
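A concrete instance of the grammar above might look as follows (index, field, and parameter names are invented; the `PARAMS` clause and `DIALECT 2` follow the RediSearch-style convention and are assumptions here):

```
FT.SEARCH myIndex "*=>[KNN 10 @embedding $query_vec]" NOCONTENT
    PARAMS 2 query_vec "<128 float32 values as a binary blob>" DIALECT 2
```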
Member

Do we have a chance to implement the Boolean query, ScriptScore query, and Re-Score query (if the GRAPH_PQ or IVF_GRAPH_PQ index is used)?

Member Author

Same answer as above.

Member

Thanks for addressing this.

@hwware

hwware commented Dec 18, 2024

I just added comments to this RFC. I am not sure whether we can implement more index algorithms and query methods, but the article "Using Vector Indexes for Data Search in an Elasticsearch Cluster" gives me more hints: https://support.huaweicloud.com/intl/en-us/usermanual-css/css_01_0123.html

Member

@madolson madolson left a comment

Core team discussion: high-level consensus on the design of just copying the APIs. In the future we will consider adding our own APIs with our own semantics.

1. Should be able to run on the existing LangChain.
   1. We should be compatible with Redis' LangChain code, but long term we should support our own.
2. ACLs?
   1. We should figure out what to do before version 1.0; add it to the open discussions.
   2. Align with SCAN behavior (require `*` permission).
   3. Prefixes.
   4. Do the check on each result.
3. Release the source code on January 15th. (Yair, can I post this?)
4. CRC hash is questionable? Why not xxHash or some modern hashing algorithm? Maybe we don't really care.
5. Module APIs needed in the core. (These need to be done for 8.1.)
   1. API for reducing the overhead of cluster bus messages.
   2. API so that modules can share memory. Is memory duplication a requirement for V1?
   3. API for blocking a client from a keyspace notification.

Signed-off-by: Allen Samuels <[email protected]>