Redis today made a unified release of its namesake data store available for consistent deployment across all supported platforms. Previously, upgrades rolled out on a staggered schedule, reaching each offering only as it became available.
At the same time, Redis is moving to unify the client libraries used to access its open source data store by working directly with the community maintainers of the five most popular ones—Jedis (Java), node-redis (Node.js), redis-py (Python), NRedisStack (.NET), and Go-Redis (Go).
Other capabilities added to version 7.2 of the platform include support for auto-tiering, which makes it easier to migrate less frequently accessed data to less expensive storage devices. Auto-tiering is managed via an enhanced interface for the Redis cluster management and data integration tools, both available in preview. These tools make it possible to consume data in real time from another data source, such as a SQL database. Developers can also take advantage of triggers and functions to build those types of applications.
Redis is also previewing a search tool for vector data used to create generative artificial intelligence (AI) applications. Rather than employing a separate vector database to build these applications, the Redis platform supports this data type to enable generative AI applications to run alongside other classes of applications accessing the same data.
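At its core, the vector search capability described above is a k-nearest-neighbor lookup over embedding vectors. The brute-force sketch below illustrates the idea in plain Python using cosine similarity; the corpus names and embedding values are invented for illustration, and a real deployment would store embeddings in Redis and query its vector index rather than loop over every entry.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def knn(query, corpus, k=1):
    """Return the names of the k corpus vectors most similar to the query."""
    ranked = sorted(corpus.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy embeddings (illustrative values, not output of a real model).
corpus = {
    "caching article":  [0.9, 0.1, 0.0],
    "vector db primer": [0.1, 0.9, 0.2],
    "k8s runbook":      [0.0, 0.2, 0.9],
}
print(knn([0.2, 0.8, 0.1], corpus, k=1))  # -> ['vector db primer']
```

In a generative AI pipeline, the query vector would be the embedding of a user prompt and the top-k results would be retrieved documents fed to the LLM, which is why keeping vectors next to the application's other data in one store is attractive.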
Redis CEO Rowan Trollope said collectively these efforts will help DevOps teams reduce costs and streamline the management of upgrades to the platform and updates to applications accessing it.
Most organizations initially adopt Redis to provide a caching layer using a data store that runs in-memory. However, as more applications are developed, many organizations wind up using Redis as their primary database for applications that require a high degree of interactivity, noted Trollope.
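The caching layer described above is typically implemented as a cache-aside pattern: check the in-memory store first and fall back to the primary database on a miss. A minimal sketch in plain Python, where a dict stands in for Redis and the `loader` callback stands in for the backing database (both are illustrative assumptions, not Redis APIs):

```python
import time

class CacheAside:
    """Toy cache-aside layer: an in-memory dict stands in for Redis."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader     # fetches from the "database" on a miss
        self.ttl = ttl_seconds
        self._store = {}         # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, expires = entry
            if time.monotonic() < expires:
                return value     # cache hit: no database round trip
            del self._store[key] # entry expired; evict it
        value = self.loader(key)                 # cache miss
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

# Usage: the loader simulates a slow database query.
calls = []
def load_user(key):
    calls.append(key)
    return {"id": key, "name": f"user-{key}"}

cache = CacheAside(load_user, ttl_seconds=60)
cache.get(1)
cache.get(1)          # second call is served from the cache
print(len(calls))     # -> 1 (loader ran only once)
```

The shift Trollope describes is from this pattern, where Redis holds disposable copies, to Redis acting as the system of record itself.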
While developers primarily make that architectural decision, the overall management of the Redis data store falls to DevOps teams that will now find it easier to streamline workflows across multiple instances of Redis running in the cloud, or on a Kubernetes cluster, added Trollope.
Of course, in some cases, organizations still rely on database administrators and data engineers to manage those workflows, but as DevOps continues to evolve, more instances of data stores are being incorporated into workflows managed by a DevOps team. The ultimate goal should be to align DevOps and DataOps best practices to increase the productivity of developers and data science teams building generative AI applications using vector data to customize an existing large language model (LLM).
Regardless of the motivation, what’s clear is the volume of data that needs to run in-memory continues to increase substantially. As that shift continues to occur, more latency-sensitive applications are being deployed. Maintaining the performance of those applications across a highly distributed enterprise environment with multiple service dependencies requires a level of DevOps expertise that is still difficult to find and retain.