Valkey™ (Previously Redis™)

For Welkin Managed Customers

You can order Managed Ephemeral Valkey™ by filing a service ticket. Here are the highlights:

  • Business continuity: Replicated across three dedicated Nodes.
  • Disaster recovery: none -- we recommend against using Valkey as a primary database.
  • Monitoring, security patching and incident management: included.

For more information, please read ToS Appendix 3 Managed Additional Service Specification.

Valkey Deployment Model
Figure: Valkey on Welkin Deployment Model
This helps you build a mental model of how to access Valkey as an Application Developer and how to connect your application to it.

This page will help you connect your application to Valkey, a low-latency in-memory cache that meets your security and compliance requirements.

Important: Access Control with NetworkPolicies

Please note the following information about Valkey access control from the upstream documentation:

Valkey is designed to be accessed by trusted clients inside trusted environments.

Valkey access is protected by NetworkPolicies. To allow your applications access to a Valkey cluster, the Pods need to be labeled with elastisys.io/valkey-<cluster_name>-access: allow.

Important: No Disaster Recovery

We do not recommend using Valkey as a primary database. Valkey should be used to store:

  • Cached data: If this is lost, the data can quickly be retrieved from the primary database, such as the PostgreSQL cluster.
  • Session state: If this is lost, the user experience might be impacted -- e.g., the user needs to re-login -- but no data should be lost.

Important: Sentinel support

We recommend a highly available setup with at least three instances. The Valkey client library that you use in your application needs to support Valkey Sentinel. Note that Sentinel-aware clients perform an extra step to discover the current Valkey primary.
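
As an illustration of that discovery step, here is a minimal sketch using redis-cli from a Pod inside the cluster; the master set name mymaster is an assumption, so ask your administrator for the actual name:

# Ask Sentinel which address currently serves as the Valkey primary.
# "mymaster" is an assumed master set name and may differ in your environment.
redis-cli -h $VALKEY_SENTINEL_HOST -p $VALKEY_SENTINEL_PORT SENTINEL get-master-addr-by-name mymaster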

Install Prerequisites

Before continuing, make sure you have access to the Kubernetes API, as described here.

Make sure to install the Valkey client on your workstation. On Ubuntu, this can be achieved as follows:

sudo apt install redis-tools
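
You can verify the installation as follows:

redis-cli --version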

Getting Access

Your administrator will set up a ConfigMap inside Welkin, which contains all information you need to access your Valkey Cluster. The ConfigMap has the following shape:

apiVersion: v1
kind: ConfigMap
metadata:
    name: $CONFIG_MAP
    namespace: $NAMESPACE
data:
    # VALKEY_CLUSTER_NAME is the name of the Valkey Cluster. You need to know the name to label your Pods correctly for network access.
    VALKEY_CLUSTER_NAME: $VALKEY_CLUSTER_NAME

    # VALKEY_SENTINEL_HOST represents a cluster-scoped Valkey Sentinel host, which only makes sense inside the Kubernetes cluster.
    # E.g., rfs-valkey-cluster.valkey-system
    VALKEY_SENTINEL_HOST: $VALKEY_SENTINEL_HOST

    # VALKEY_SENTINEL_PORT represents a cluster-scoped Valkey Sentinel port, which only makes sense inside the Kubernetes cluster.
    # E.g., 26379
    VALKEY_SENTINEL_PORT: "$VALKEY_SENTINEL_PORT"

To extract this information, proceed as follows:

export CONFIG_MAP=            # Get this from your administrator
export NAMESPACE=         # Get this from your administrator

export VALKEY_SENTINEL_HOST=$(kubectl -n $NAMESPACE get configmap $CONFIG_MAP -o 'jsonpath={.data.VALKEY_SENTINEL_HOST}')
export VALKEY_SENTINEL_PORT=$(kubectl -n $NAMESPACE get configmap $CONFIG_MAP -o 'jsonpath={.data.VALKEY_SENTINEL_PORT}')
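
As a quick sanity check, you can print the extracted values:

echo "$VALKEY_SENTINEL_HOST:$VALKEY_SENTINEL_PORT"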

Important

At the time of this writing, we do not recommend using a Valkey Cluster in a multi-tenant fashion. One Valkey Cluster should have only one purpose.

Create a ConfigMap

First, check that you are on the right Welkin Cluster, in the right application namespace:

kubectl get nodes
kubectl config view --minify --output 'jsonpath={..namespace}'; echo

Now, create a Kubernetes ConfigMap in your application namespace to store the Valkey Sentinel connection parameters:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
    name: app-valkey-config
data:
    VALKEY_SENTINEL_HOST: $VALKEY_SENTINEL_HOST
    VALKEY_SENTINEL_PORT: "$VALKEY_SENTINEL_PORT"
EOF
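
You can verify that the ConfigMap was created in your application namespace:

kubectl get configmap app-valkey-config -o yaml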

Allow your Pods to communicate with the Valkey Cluster

The Valkey Cluster is protected by Network Policies. Add the following label to your Pods: elastisys.io/valkey-<cluster_name>-access: allow

The <cluster_name> can be retrieved from the ConfigMap provided by your administrator:

kubectl -n $NAMESPACE get configmap $CONFIG_MAP -o 'jsonpath={.data.VALKEY_CLUSTER_NAME}'
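
For illustration, here is a minimal sketch of a Deployment whose Pod template carries the access label; the Deployment name my-app, its image, and the cluster name my-valkey are placeholders, so substitute your own values:

apiVersion: apps/v1
kind: Deployment
metadata:
    name: my-app
spec:
    replicas: 1
    selector:
        matchLabels:
            app: my-app
    template:
        metadata:
            labels:
                app: my-app
                # Replace "my-valkey" with the value of VALKEY_CLUSTER_NAME.
                elastisys.io/valkey-my-valkey-access: allow
        spec:
            containers:
                - name: my-app
                  image: my-app:latest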

Expose Valkey Connection Parameters to Your Application

To expose the Valkey Cluster to your application, follow the relevant upstream documentation on consuming ConfigMap values in your Pods.
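
As a rough sketch, assuming the app-valkey-config ConfigMap created above and a hypothetical container image, the connection parameters can be injected as environment variables with envFrom:

apiVersion: v1
kind: Pod
metadata:
    name: my-app
    labels:
        # Replace "my-valkey" with the value of VALKEY_CLUSTER_NAME.
        elastisys.io/valkey-my-valkey-access: allow
spec:
    containers:
        - name: my-app
          image: my-app:latest
          # Makes VALKEY_SENTINEL_HOST and VALKEY_SENTINEL_PORT available as environment variables.
          envFrom:
              - configMapRef:
                    name: app-valkey-config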

Important

Make sure to use a Valkey client library with Sentinel support. For example:

If your client library doesn't support Sentinel, you could use the Redis sentinel proxy project. Note that the default configuration in this repository will not ensure high availability for Valkey; for that, you will either need to run multiple replicas or use it as a sidecar for your applications.

Follow the Go-Live Checklist

You should be all set. Before going into production, don't forget to go through the go-live checklist.

Welkin Valkey Release Notes

Check out the release notes for the Valkey Cluster that runs in Welkin environments!

  • Eviction Policy: Choose the eviction policy that works for your application. The default eviction policy for our Managed Valkey is allkeys-lru, which means any key can be evicted under memory pressure, regardless of whether it has expired. It keeps the most recently used keys and evicts the least recently used (LRU) keys.

    Note

    Since this is a server setting, it cannot be changed by users themselves; it needs to be set by the administrators. Please send a support ticket with the values you would like to set.

  • Set TTL: If possible, take advantage of expiring keys, such as temporary OAuth authentication keys. When you set a key, give it an expiration and Valkey will clean it up for you (see the sketch after this list). Refer to TTL.

  • Avoid expensive or blocking operations: Since Valkey command processing is single-threaded, operations like the KEYS command are expensive and should be avoided. Use SCAN instead of KEYS to reduce CPU spikes (see the sketch after this list).
  • Monitor memory usage: Monitor usage in the Grafana dashboard to ensure that you don't run out of memory and can scale your cache before issues appear.
  • Manage idle connections: The number of connections to Valkey increases if connections are not properly terminated, which can lead to poor performance. We therefore recommend setting timeout, which controls the number of seconds before idle client connections are automatically terminated. The default timeout for our Managed Valkey is 0, which means idle clients never time out and stay connected until the client terminates the connection.

    Note

    Since this is a server setting, it cannot be changed by users themselves; it needs to be set by the administrators. Please send a support ticket with the values you would like to set.

  • Cache-hit ratio: Regularly monitor your cache-hit ratio so that you know what percentage of key lookups are successfully answered by keys in your Valkey instance. The INFO stats command provides the keyspace_hits and keyspace_misses metrics, from which you can calculate the cache-hit ratio of a running Valkey instance (see the sketch after this list).
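
As a rough illustration of the TTL, SCAN, and cache-hit-ratio tips above, here is a sketch of the corresponding redis-cli commands; the key names, the 3600-second expiry, and VALKEY_HOST (the address of the current primary, e.g. as discovered via Sentinel) are placeholders:

# Set a key with a one-hour expiry so Valkey cleans it up automatically.
redis-cli -h $VALKEY_HOST SET session:1234 "some-session-state" EX 3600
# Check how many seconds remain before the key expires.
redis-cli -h $VALKEY_HOST TTL session:1234

# Iterate over keys incrementally with SCAN instead of the blocking KEYS command.
redis-cli -h $VALKEY_HOST SCAN 0 MATCH 'session:*' COUNT 100

# Read keyspace_hits and keyspace_misses to calculate the cache-hit ratio:
# hit ratio = keyspace_hits / (keyspace_hits + keyspace_misses)
redis-cli -h $VALKEY_HOST INFO stats | grep -E 'keyspace_(hits|misses)'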

Further Reading