Couchbase, the cloud database platform company, has recently introduced vector search, a new feature designed to help businesses develop AI-powered adaptive applications that engage users in a hyper-personalised and contextualised way. This new offering is part of Couchbase Capella Database-as-a-Service (DBaaS) and Couchbase Server. The company claims to be the first database platform to offer vector search optimised for running on site, across multiple clouds, and on mobile and IoT devices at the edge.
The Senior Vice President of product management and business operations at Couchbase, Scott Anderson, emphasised that the addition of vector search to their platform was critical in enabling their customers to create a new range of adaptive applications. He said, "Couchbase is seizing this moment, bringing together vector search and real-time data analysis on the same platform. Our approach provides customers a safe, fast and simplified database architecture that's multipurpose, real-time and ready for AI."
Vector search has become a key part of the development of AI-powered applications, as businesses strive to offer hyper-personalised, high-performing and adaptive user experiences. These applications, such as chatbots, recommendation systems and semantic search, utilise AI to respond to user requests quickly and accurately. A distinctive element of Couchbase's vector search is the ability to execute a hybrid search, combining vectors, text, numerical ranges, operational inventory queries and geospatial matching in a single request.
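To make the hybrid-search idea concrete, here is a minimal, self-contained Python sketch that ranks documents by vector similarity after applying a keyword filter and a numerical-range filter. It is an illustration of the concept only, not Couchbase's API; the document fields, the `hybrid_search` function and its parameters are invented for this example.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def hybrid_search(docs, query_vec, text_term=None, price_range=None, top_k=3):
    """Rank documents by vector similarity, after applying scalar filters.

    A hybrid search narrows candidates with exact predicates (text match,
    numeric range) before ranking the survivors by semantic closeness.
    """
    candidates = []
    for doc in docs:
        if text_term and text_term.lower() not in doc["text"].lower():
            continue  # text filter: keyword must appear in the document
        if price_range and not (price_range[0] <= doc["price"] <= price_range[1]):
            continue  # numerical-range filter
        candidates.append((cosine_similarity(query_vec, doc["vector"]), doc))
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in candidates[:top_k]]

# Toy two-dimensional "embeddings" stand in for real model output.
docs = [
    {"id": 1, "text": "waterproof hiking boots", "price": 120, "vector": [0.9, 0.1]},
    {"id": 2, "text": "trail running shoes", "price": 80, "vector": [0.8, 0.3]},
    {"id": 3, "text": "leather office shoes", "price": 150, "vector": [0.1, 0.9]},
]

results = hybrid_search(docs, query_vec=[1.0, 0.2],
                        text_term="shoes", price_range=(50, 100))
```

Only document 2 survives both filters here: document 1 fails the keyword match and document 3 falls outside the price range, after which the remaining candidates are ordered by similarity to the query vector.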
There is growing demand from organisations looking to build intelligence into applications that communicate with large language models (LLMs). Semantic search capabilities powered by vector search and augmented by retrieval-augmented generation (RAG) are crucial for improving response accuracy and reducing errors. Standalone single-purpose solutions often complicate a company's IT architecture and hamper application performance. In contrast, Couchbase's multifunctional capabilities bypass this friction, delivering a simplified architecture that enhances the accuracy of LLM results. Couchbase also supports developers by allowing a single SQL++ query to be used with the vector index, eliminating the need for multiple indexes or products.
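The RAG pattern mentioned above can be sketched in a few lines of plain Python: embed the question, retrieve the closest stored documents, and assemble them into the prompt sent to an LLM. This is a toy illustration under stated assumptions, not Couchbase's or any vendor's API; the `embed` function is a deliberately crude stand-in for a real embedding model so the sketch stays runnable.

```python
def embed(text):
    # Toy "embedding": counts of a few topic words. A real system would
    # call an embedding model; this stand-in keeps the sketch self-contained.
    topics = ["refund", "shipping", "warranty"]
    return [text.lower().count(t) for t in topics]

def retrieve(store, question, k=2):
    # Score each stored document by dot product with the question embedding
    # and return the k closest, i.e. a nearest-neighbour lookup.
    q = embed(question)
    scored = sorted(store,
                    key=lambda d: sum(a * b for a, b in zip(q, embed(d))),
                    reverse=True)
    return scored[:k]

def build_prompt(question, store):
    # Ground the LLM in retrieved context rather than its training data alone.
    context = "\n".join(retrieve(store, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = [
    "Refunds are issued within 5 business days.",
    "Standard shipping takes 3-7 days.",
    "All products carry a 2-year warranty.",
]
prompt = build_prompt("How long do refunds take?", store)
```

In a production system the retrieval step is exactly where a database's vector index does the work, so that the nearest-neighbour search scales beyond an in-memory list.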
In alignment with its AI strategy, Couchbase is extending its AI partner ecosystem with LangChain and LlamaIndex support. This extension will facilitate developer productivity by providing them with more choices when building adaptive applications, speeding up query prompt assembly, improving response validation and contributing to RAG applications. "Many LLM-driven applications demand user-specific data beyond the model's training dataset, relying on robust databases to feed in supplementary data and context from different sources. Our integration with Couchbase provides customers another powerful database option for vector store so they can more easily build AI applications," said Harrison Chase, CEO and co-founder of LangChain.
The newly announced capabilities are expected to be available in Capella and Couchbase Server in the first quarter of Couchbase's fiscal year 2025, with mobile and edge support in beta.