Memorystore Vector Search For Redis Cluster And Valkey


With the release of vector search earlier this year, Memorystore for Redis Cluster became an ideal platform for gen AI use cases such as recommendation systems, semantic search, retrieval-augmented generation (RAG), and more. Why? Because of its very fast vector search: a single Memorystore for Redis instance can search across tens of millions of vectors with single-digit millisecond latency. But what happens when you need to store more vectors than can fit on a single virtual machine?

Vector search on the new Memorystore for Redis Cluster and Memorystore for Valkey combines three compelling capabilities:

1) Zero-downtime scalability (in or out);

2) Ultra-low-latency, in-memory vector search;

3) Powerful, efficient vector search over millions or even billions of vectors.

With vector support now in preview for these Memorystore products, you can scale your cluster up to 250 shards and store billions of vectors in a single instance. In fact, a single Memorystore for Redis Cluster instance can perform vector search on over a billion vectors at greater than 99% recall with single-digit millisecond latency. That scale unlocks demanding enterprise applications, such as semantic search across a global corpus of data.
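To make this concrete, here is a minimal sketch of creating a vector index and running a K-nearest-neighbor (KNN) query with the redis-py client. Memorystore's vector search uses RediSearch-style FT.* commands; the endpoint address, index name, key prefix, field names, and dimensions below are placeholder assumptions, not values from this post.

# Minimal sketch: create an HNSW vector index, store an embedding, and
# run a KNN query. Host, names, and sizes are placeholders.
import numpy as np
from redis.cluster import RedisCluster

# Connect to the cluster's discovery endpoint (placeholder address).
r = RedisCluster(host="10.0.0.1", port=6379)

# Create an HNSW index over hashes whose keys start with "doc:".
r.execute_command(
    "FT.CREATE", "doc_idx",
    "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA", "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", "768", "DISTANCE_METRIC", "COSINE",
)

# Store one document; the embedding is packed as float32 bytes.
vec = np.random.rand(768).astype(np.float32)
r.hset("doc:1", mapping={"embedding": vec.tobytes(), "title": "hello"})

# Retrieve the 10 nearest neighbors of a query embedding.
query = np.random.rand(768).astype(np.float32)
results = r.execute_command(
    "FT.SEARCH", "doc_idx",
    "*=>[KNN 10 @embedding $q AS score]",
    "PARAMS", "2", "q", query.tobytes(),
    "DIALECT", "2",
)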

Scalable in-memory vector search

How the vector index is partitioned across the cluster's nodes determines its performance and scalability. Memorystore uses a local index partitioning scheme: each node holds an index partition covering the portion of the keyspace stored locally. Because the OSS cluster protocol already shards the keyspace evenly, the index partitions end up roughly equal in size.

With this design, index build times improve linearly as nodes are added. And for a fixed number of vectors, adding nodes also speeds up searches, both brute-force searches (whose cost scales linearly with partition size) and Hierarchical Navigable Small World (HNSW) searches (whose cost scales roughly logarithmically). The result is that a single cluster can index and search billions of vectors while maintaining fast index build times and low search latencies at high recall. A sketch of the two index types follows.
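As a hedged illustration of that trade-off, the sketch below creates both index types in RediSearch-style syntax. The index names, key prefixes, dimension, and HNSW tuning values (M, EF_CONSTRUCTION) are illustrative assumptions, not tuned recommendations.

# FLAT vs. HNSW: the two index types discussed above. Placeholder
# host, names, and parameters throughout.
from redis.cluster import RedisCluster

r = RedisCluster(host="10.0.0.1", port=6379)

# FLAT performs exact brute-force search: per-node cost grows linearly
# with the number of vectors in that node's partition.
r.execute_command(
    "FT.CREATE", "flat_idx", "ON", "HASH", "PREFIX", "1", "docf:",
    "SCHEMA", "embedding", "VECTOR", "FLAT", "6",
    "TYPE", "FLOAT32", "DIM", "768", "DISTANCE_METRIC", "L2",
)

# HNSW performs approximate graph search: per-node cost grows roughly
# logarithmically, trading a small amount of recall for much lower
# latency. M and EF_CONSTRUCTION control graph density and build effort.
r.execute_command(
    "FT.CREATE", "hnsw_idx", "ON", "HASH", "PREFIX", "1", "doch:",
    "SCHEMA", "embedding", "VECTOR", "HNSW", "10",
    "TYPE", "FLOAT32", "DIM", "768", "DISTANCE_METRIC", "L2",
    "M", "16", "EF_CONSTRUCTION", "200",
)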

Hybrid queries

Along with improved scalability, Google is adding support for hybrid queries to Memorystore for Redis Cluster and Memorystore for Valkey. Hybrid queries let you combine vector searches with filters on numeric and tag fields, so Memorystore can answer sophisticated queries that blend vector, tag, and numeric search.

Filter expressions combine tag and numeric predicates with boolean logic, narrowing search results to only the relevant data. With this new functionality, applications can tailor vector search queries to their needs and get far richer results than before, as the sketch below shows.
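Here is a hedged sketch of such a hybrid query: a TAG filter and a NUMERIC range are combined with boolean AND, and KNN runs only over the matching subset. The index, field names, and values are hypothetical.

# Hybrid query sketch: filter by tag and numeric range, then run KNN
# over the filtered subset. All names and values are hypothetical.
import numpy as np
from redis.cluster import RedisCluster

r = RedisCluster(host="10.0.0.1", port=6379)

# Index with a TAG field, a NUMERIC field, and a vector field.
r.execute_command(
    "FT.CREATE", "product_idx", "ON", "HASH", "PREFIX", "1", "product:",
    "SCHEMA",
    "category", "TAG",
    "price", "NUMERIC",
    "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", "768", "DISTANCE_METRIC", "COSINE",
)

# Only products tagged "shoes" and priced between 25 and 100 are
# considered; the 5 nearest neighbors are returned from that subset.
query = np.random.rand(768).astype(np.float32)
results = r.execute_command(
    "FT.SEARCH", "product_idx",
    "(@category:{shoes} @price:[25 100])=>[KNN 5 @embedding $q AS score]",
    "PARAMS", "2", "q", query.tobytes(),
    "DIALECT", "2",
)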

OSS Valkey on the horizon

The Valkey key-value datastore has generated strong interest in the open-source community. As part of its commitment to making Valkey great, Google coauthored a Request for Comments (RFC) and is working with the open-source community to bring these vector search capabilities to Valkey. An RFC kicks off the community alignment process and invites feedback on the concept and implementation. The ultimate goal is to enable Valkey developers around the globe to build amazing next-generation AI applications with Valkey vector search.

The search is over for fast, scalable vector search

Fast and scalable vector search is now available on Memorystore for Redis Cluster and Memorystore for Valkey, joining the capabilities already available on Memorystore for Redis and bringing ultra-low-latency vector search to all of Memorystore's most popular engines. For building generative AI applications that need reliable, consistently low-latency vector search, Memorystore is hard to beat. Create a Memorystore for Valkey or Memorystore for Redis Cluster instance today and start enjoying the speed of in-memory search.


