mc

package module
v0.0.10
Published: Dec 4, 2025 License: MIT Imports: 24 Imported by: 0

README

Memcache Client for Go

A high-performance memcache client library for Go that uses memcache's Meta Text Protocol and supports namespacing, compression, and context cancellation out of the box.

Features

  • Meta Text Protocol - Full support for memcache's modern protocol
  • Context Support - All operations support context.Context for cancellation and timeouts
  • Namespacing - Built-in namespace support for cache invalidation
  • Compression - Optional compression with configurable thresholds
  • Connection Pooling - Efficient connection pooling with automatic management
  • Binary Key Encoding - Optional binary key encoding for better performance
  • Type Safety - Strong typing with comprehensive error handling
  • Atomic Operations - Safe append, prepend, and replace operations
  • Optimistic Locking - CAS (Compare-And-Swap) for concurrent updates
  • Batch Operations - Efficient multi-key retrieval with GetMulti

Installation

go get github.com/kinescope/mc

Quick Start

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	// Create a new client
	client, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := context.Background()

	// Set a value
	err = client.Set(ctx, &mc.Item{
		Key:   "user:123",
		Value: []byte("John Doe"),
		Flags: 0,
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Get a value
	item, err := client.Get(ctx, "user:123")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Value: %s\n", item.Value)
}

Basic Operations

Set, Get, Delete
ctx := context.Background()

// Set
err := client.Set(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("value"),
}, mc.WithExpiration(60))

// Get
item, err := client.Get(ctx, "key")
if err != nil {
	if err == mc.ErrCacheMiss {
		// Key not found
	}
}

// Delete
err = client.Del(ctx, "key")
Add, Replace, Append, and Prepend
// Add (only if key doesn't exist)
err := client.Add(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("value"),
})

// Replace (only if key exists)
err = client.Replace(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("new value"),
})

// Append data to existing item
err = client.Append(ctx, &mc.Item{
	Key:   "key",
	Value: []byte(" appended"),
})

// Prepend data to existing item
err = client.Prepend(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("prepended "),
})

// Append with autovivify (create if doesn't exist)
// WithExpiration enables autovivify for append/prepend operations
err = client.Append(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("value"),
}, mc.WithExpiration(3600)) // Create with 1 hour TTL if missing

// Prepend with autovivify (create if doesn't exist)
err = client.Prepend(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("prefix"),
}, mc.WithExpiration(3600)) // Create with 1 hour TTL if missing
Increment and Decrement
// Increment
newValue, err := client.Inc(ctx, "counter", 1, 3600, mc.WithInitialValue(0))

// Decrement
newValue, err = client.Dec(ctx, "counter", 1, 3600)
Get Multiple Keys
keys := []string{"key1", "key2", "key3"}
items, err := client.GetMulti(ctx, keys)
for key, item := range items {
	fmt.Printf("%s: %s\n", key, item.Value)
}

Advanced Features

Cache Management
Namespacing

Namespacing allows you to invalidate all cache entries for a namespace with a single operation. This is useful for cache invalidation by user, tenant, or any logical grouping.

ctx := context.Background()
userID := "123"

// Store items with namespace
err := client.Set(ctx, &mc.Item{
	Key:   "name",
	Value: []byte("John"),
}, mc.WithNamespace("user:"+userID))

err = client.Set(ctx, &mc.Item{
	Key:   "email",
	Value: []byte("[email protected]"),
}, mc.WithNamespace("user:"+userID))

// Invalidate all items for this user
err = client.PurgeNamespace(ctx, "user:"+userID)
// Both "name" and "email" keys are now invalidated

For more information, see the memcache namespacing documentation.

Compression

Enable compression for values larger than a specified threshold:

err := client.Set(ctx, &mc.Item{
	Key:   "large_data",
	Value: largeData,
}, mc.WithCompression(1024)) // Compress if value > 1KB

You can also provide custom compression functions:

client, err := mc.New(&mc.Options{
	Addrs: []string{"127.0.0.1:11211"},
	Compression: struct {
		Compress   func([]byte) ([]byte, error)
		Decompress func([]byte) ([]byte, error)
	}{
		Compress:   myCompress,
		Decompress: myDecompress,
	},
})
Min Uses

Require an item to be set a minimum number of times before it can be retrieved:

// Set with min uses
err := client.Set(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("value"),
}, mc.WithMinUses(3))

// The item becomes retrievable only after it has been set 3 times;
// until then, Get returns ErrCacheMiss
Concurrency & Consistency
Compare-and-Swap (Optimistic Locking)

Use CAS for safe concurrent updates:

// Get item with CAS value
item, err := client.Get(ctx, "key", mc.WithCAS())
if err == nil {
	// Modify the value
	item.Value = []byte("new value")
	// Update only if CAS matches (no concurrent modification)
	err = client.CompareAndSwap(ctx, item)
	if err == mc.ErrCASConflict {
		// Another client modified the item, retry
	}
}
CAS Override

For forced updates, use Set instead of CompareAndSwap:

// Set bypasses CAS check for forced updates
err := client.Set(ctx, &mc.Item{
	Key:   "key",
	Value: []byte("forced value"),
}, mc.WithExpiration(3600))
Performance Optimization
Early Recache & Stampeding Herd Prevention

Prevent cache stampede using early recache:

item, err := client.Get(ctx, "key", mc.WithEarlyRecache(60))
if item.Won() {
	// This client won the recache - refresh in background
	go refreshCache(ctx, client, "key")
} else {
	// Other clients get cached data immediately
	useCachedData(item.Value)
}
Serve Stale Data

Serve stale data when items have expired to prevent cache misses:

item, err := client.Get(ctx, "key", mc.WithEarlyRecache(60))
if item.Stale() {
	// Serve stale data while refreshing in background
	if item.Won() {
		go refreshCache(ctx, client, "key")
	}
	useData(item.Value)
}
Hot Key Detection & Management

Identify and manage frequently accessed (hot) cache keys. For hot keys, avoid deletion as it causes cache misses. Instead, update values in place or use early recache:

// Track hot keys using hit status and last access time
item, err := client.Get(ctx, "key", mc.WithHit(), mc.WithLastAccess())
if item.Hit() && item.LastAccess() < 10 {
	// Key is hot - update value without deletion to avoid cache misses
	// This keeps the key available while refreshing data
	client.Set(ctx, &mc.Item{
		Key:   "key",
		Value: freshData,
	}, mc.WithExpiration(3600))
	
	// Alternative: Use early recache for background refresh
	// The key remains available while being refreshed
}
Last Access Time

Get the time since last access for cache optimization:

item, err := client.Get(ctx, "key", mc.WithLastAccess())
fmt.Printf("Last accessed %d seconds ago\n", item.LastAccess())
Hit Tracking

Track hit status to identify frequently accessed items:

item, err := client.Get(ctx, "key", mc.WithHit())
if item.Hit() {
	// Key has been accessed before - consider it hot
	// Extend TTL, pre-warm, or monitor for optimization
}
Context and Timeouts

All operations support context.Context for cancellation and timeouts:

// With timeout
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

item, err := client.Get(ctx, "key")
if err != nil {
	if err == context.DeadlineExceeded {
		// Operation timed out
	}
}

// With cancellation
ctx, cancel := context.WithCancel(context.Background())
go func() {
	time.Sleep(1 * time.Second)
	cancel() // Cancel the operation
}()
item, err := client.Get(ctx, "key")

Configuration

Client Options
client, err := mc.New(&mc.Options{
	// Server addresses
	Addrs: []string{"127.0.0.1:11211", "127.0.0.1:11212"},

	// Connection settings
	DialTimeout:         200 * time.Millisecond,
	ConnMaxLifetime:     time.Minute,
	MaxIdleConnsPerAddr: 10,

	// Disable binary key encoding (use plain text keys)
	DisableBinaryEncodedKeys: false,

	// Custom server selection function
	PickServer: func(key string) []string {
		// Custom logic to select servers
		return []string{"127.0.0.1:11211"}
	},

	// Custom compression
	Compression: struct {
		Compress   func([]byte) ([]byte, error)
		Decompress func([]byte) ([]byte, error)
	}{
		Compress:   gzipCompress,   // your own gzip-based helpers;
		Decompress: gzipDecompress, // the standard library has no gzip.Compress/Decompress
	},
})

Error Handling

The library provides typed errors for common scenarios:

item, err := client.Get(ctx, "key")
switch err {
case mc.ErrCacheMiss:
	// Key not found
case mc.ErrNotStored:
	// Item not stored (e.g., Add failed because key exists)
case mc.ErrCASConflict:
	// Compare-and-swap conflict
case mc.ErrMalformedKey:
	// Invalid key format
case mc.ErrNoServers:
	// No servers available
case nil:
	// Success
default:
	// Other error
}

Graceful Shutdown

Always close the client when done to release resources:

client, err := mc.New(&mc.Options{
	Addrs: []string{"127.0.0.1:11211"},
})
if err != nil {
	log.Fatal(err)
}
defer client.Close() // Closes all connections in the pool

Meta Text Protocol

This library uses memcache's Meta Text Protocol, a modern, efficient protocol that solves several problems with the traditional ASCII protocol:

Key Advantages
  1. Reduced Network Overhead - More compact command format, fewer round trips
  2. Atomic Operations - Safe append, prepend, and replace operations
  3. Rich Metadata - CAS values, flags, TTL, last access time, hit status in a single request
  4. Conditional Operations - Compare-and-Swap (CAS) for optimistic locking
  5. Efficient Multi-Get - Batch key retrieval with pipelined mg commands terminated by mn
  6. Early Recaching - Built-in support for preventing cache stampedes
  7. Binary Key Support - Efficient binary-encoded keys
  8. Opaque Values - Request/response correlation for async operations
  9. Flexible Expiration - Update TTL without retrieving the value
  10. Better Error Handling - Detailed error responses and status codes
  11. Hot Key Detection - Hit tracking and last access time
  12. Stale Data Serving - Serve expired data to prevent cache misses
Protocol Commands Used
  • mg - Meta Get (retrieve items)
  • ms - Meta Set (store items)
  • ma - Meta Arithmetic (increment/decrement)
  • md - Meta Delete (delete items)
  • mn - Meta No-op (end of multi-get batch)
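
For illustration, a raw exchange using these commands looks roughly like the sketch below. It speaks the meta protocol by hand over TCP rather than through this library, and the command and reply tokens (ms/mg, HD, VA, EN, the T and v flags) are taken from the memcached meta protocol documentation; treat it as an assumption-laden sketch of the wire format, not part of this package's API.

package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"strings"
)

func main() {
	// Illustration only: talk to a local memcached directly.
	conn, err := net.Dial("tcp", "127.0.0.1:11211")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	r := bufio.NewReader(conn)

	// ms <key> <datalen> T<ttl>, followed by the data line; "HD" means stored.
	fmt.Fprint(conn, "ms greeting 5 T60\r\nhello\r\n")
	reply, _ := r.ReadString('\n')
	fmt.Print("ms reply: ", reply)

	// mg <key> v asks for the value; a hit answers "VA <size>" plus a data
	// line, a miss answers "EN".
	fmt.Fprint(conn, "mg greeting v\r\n")
	header, _ := r.ReadString('\n')
	fmt.Print("mg reply: ", header)
	if strings.HasPrefix(header, "VA") {
		data, _ := r.ReadString('\n')
		fmt.Print("data: ", data)
	}
}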

Development

Running Tests
# Run all tests
make test

# Run tests with race detector
make test-race

# Run tests with coverage
make test-coverage

# Run benchmarks
make test-bench
Code Quality
# Format code
make fmt

# Run linter
make lint

# Run all checks
make check
Generating Protobuf Code
make proto

Examples

See the example_test.go file for comprehensive examples including:

  • Basic operations (Set, Get, Delete, Add, Replace, Append, Prepend)
  • Compare-and-Swap operations
  • Increment/Decrement with initial values
  • Multi-key retrieval
  • Namespace management
  • Compression
  • Early recache and stampeding herd prevention
  • Serve stale data
  • Hot key cache invalidation
  • CAS consistency patterns
  • Context and timeout handling

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Documentation

Overview

Example (CasConsistency)

ExampleCASConsistency demonstrates using CAS for optimistic locking to ensure data consistency in concurrent scenarios.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "counter",
		Value: []byte("100"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Get with CAS value
	item, err := memcache.Get(ctx, "counter", mc.WithCAS())
	if err != nil {
		log.Fatal(err)
	}

	// Simulate concurrent modification
	// In real scenario, another client might modify the value here

	// Try to update with CAS
	newValue := []byte("101")
	item.Value = newValue
	err = memcache.CompareAndSwap(ctx, item)
	if err != nil {
		if err == mc.ErrCASConflict {
			fmt.Println("CAS conflict: value was modified by another client")
			// Retry: get fresh value and try again
			item, _ := memcache.Get(ctx, "counter", mc.WithCAS())
			if item != nil {
				item.Value = newValue
				err = memcache.CompareAndSwap(ctx, item)
				if err == nil {
					fmt.Println("Successfully updated on retry")
				}
			}
		} else {
			log.Fatal(err)
		}
	} else {
		fmt.Println("Successfully updated with CAS")
	}
}
Output:

Successfully updated with CAS
Example (CasOverride)

ExampleCASOverride demonstrates CAS override pattern for forced updates when you need to update regardless of current CAS value.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "config",
		Value: []byte("old_config"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Normal update with CAS check
	item, err := memcache.Get(ctx, "config", mc.WithCAS())
	if err != nil {
		log.Fatal(err)
	}

	// Update value
	item.Value = []byte("new_config")
	err = memcache.CompareAndSwap(ctx, item)
	if err != nil {
		log.Fatal(err)
	}

	// For forced update (override), use Set instead of CompareAndSwap
	// This bypasses CAS check
	err = memcache.Set(ctx, &mc.Item{
		Key:   "config",
		Value: []byte("forced_config"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	item, err = memcache.Get(ctx, "config")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Config: %s\n", item.Value)
}
Output:

Config: forced_config
Example (HotKeyCacheInvalidation)

ExampleHotKeyCacheInvalidation demonstrates using hit tracking and last access time to identify and manage hot cache keys that are frequently accessed. Important: For hot keys, avoid deletion as it causes cache misses and potential cache stampedes. Instead, update values in place or use early recache.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set multiple keys
	keys := []string{"hot_key1", "hot_key2", "cold_key"}
	for i, key := range keys {
		err = memcache.Set(ctx, &mc.Item{
			Key:   key,
			Value: []byte(fmt.Sprintf("value%d", i+1)),
		}, mc.WithExpiration(3600))
		if err != nil {
			log.Fatal(err)
		}
	}

	// Simulate frequent access to hot keys
	for i := 0; i < 5; i++ {
		memcache.Get(ctx, "hot_key1", mc.WithHit(), mc.WithLastAccess())
		memcache.Get(ctx, "hot_key2", mc.WithHit(), mc.WithLastAccess())
	}

	// Check which keys are hot (frequently accessed)
	hotKeys := []string{}
	for _, key := range keys {
		item, err := memcache.Get(ctx, key, mc.WithHit(), mc.WithLastAccess())
		if err != nil {
			continue
		}
		// Consider key "hot" if it has been hit and accessed recently
		if item.Hit() && item.LastAccess() < 10 {
			hotKeys = append(hotKeys, key)
			fmt.Printf("Hot key detected: %s (last accessed %d seconds ago)\n", key, item.LastAccess())
		}
	}

	// Update hot keys in place - avoids cache misses
	// This is better than deletion as it keeps keys available
	for _, key := range hotKeys {
		err = memcache.Set(ctx, &mc.Item{
			Key:   key,
			Value: []byte("fresh_value"),
		}, mc.WithExpiration(3600))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("Updated hot key: %s\n", key)
	}

	fmt.Println("Hot key management completed")
}
Output:

Hot key detected: hot_key1 (last accessed 0 seconds ago)
Hot key detected: hot_key2 (last accessed 0 seconds ago)
Updated hot key: hot_key1
Updated hot key: hot_key2
Hot key management completed
Example (OpaqueValues)

ExampleOpaqueValues demonstrates using opaque values for request correlation in GetMulti operations. Useful for pipelining and async operations.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set multiple values
	keys := []string{"key1", "key2", "key3"}
	for i, key := range keys {
		err = memcache.Set(ctx, &mc.Item{
			Key:   key,
			Value: []byte(fmt.Sprintf("value%d", i+1)),
		}, mc.WithExpiration(60))
		if err != nil {
			log.Fatal(err)
		}
	}

	// GetMulti automatically uses opaque values internally
	// to correlate responses with requests
	items, err := memcache.GetMulti(ctx, keys)
	if err != nil {
		log.Fatal(err)
	}

	for _, key := range keys {
		if item, ok := items[key]; ok {
			fmt.Printf("%s: %s\n", key, item.Value)
		}
	}
}
Output:

key1: value1
key2: value2
key3: value3
Example (ProbabilisticHotCache)

ExampleProbabilisticHotCache demonstrates using hit tracking to identify frequently accessed (hot) cache items for optimization.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set a value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "hot_item",
		Value: []byte("data"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// First access - item hasn't been hit before
	item, err := memcache.Get(ctx, "hot_item", mc.WithHit())
	if err != nil {
		log.Fatal(err)
	}
	if !item.Hit() {
		fmt.Println("First access - not hit before")
	}

	// Second access - item has been hit
	item, err = memcache.Get(ctx, "hot_item", mc.WithHit())
	if err != nil {
		log.Fatal(err)
	}
	if item.Hit() {
		fmt.Println("Hot cache item - has been accessed before")
		// In real application, you might:
		// - Extend TTL for hot items
		// - Pre-warm cache with hot items
		// - Monitor hit rates for optimization
	}
}
Output:

First access - not hit before
Hot cache item - has been accessed before
Example (ServeStale)

ExampleServeStale demonstrates serving stale data when an item has expired but is still in cache. This prevents cache misses during refresh.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set a value with short expiration
	err = memcache.Set(ctx, &mc.Item{
		Key:   "stale_data",
		Value: []byte("original_value"),
	}, mc.WithExpiration(1)) // 1 second TTL
	if err != nil {
		log.Fatal(err)
	}

	// Wait for expiration
	time.Sleep(2 * time.Second)

	// Get with early recache - will return stale data if available
	// Note: Stale data serving depends on memcached configuration
	item, err := memcache.Get(ctx, "stale_data", mc.WithEarlyRecache(60))
	if err == nil {
		if item.Stale() {
			fmt.Printf("Serving stale data: %s\n", item.Value)
			if item.Won() {
				fmt.Println("Refreshing stale data...")
			}
		} else {
			fmt.Printf("Fresh data: %s\n", item.Value)
		}
	} else if err == mc.ErrCacheMiss {
		fmt.Println("Cache miss - need to fetch from source")
	}
}
Output:

Cache miss - need to fetch from source
Example (StampedingHerdHandling)

ExampleStampedingHerdHandling demonstrates how to prevent cache stampede using early recache. Only one client "wins" the recache and refreshes the data.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set initial value with expiration
	err = memcache.Set(ctx, &mc.Item{
		Key:   "popular_data",
		Value: []byte("cached_value"),
	}, mc.WithExpiration(60)) // 60 seconds TTL
	if err != nil {
		log.Fatal(err)
	}

	// Multiple clients can request with early recache
	// Only one will "win" and should refresh the cache
	// Note: In practice, won flag may not always be set depending on TTL
	item, err := memcache.Get(ctx, "popular_data", mc.WithEarlyRecache(10))
	if err != nil {
		log.Fatal(err)
	}

	// Check if this client won the recache
	if item.Won() {
		// This client won the recache - should refresh the data in background
		fmt.Println("Won recache, refreshing data...")
	} else {
		// Other clients get cached data immediately
		fmt.Printf("Using cached data: %s\n", item.Value)
	}
}
Output:

Using cached data: cached_value

Constants

const (
	DefaultTimeout             = 200 * time.Millisecond
	DefaultConnMaxLifetime     = time.Minute
	DefaultMaxIdleConnsPerAddr = 10
)

Variables

var (
	ErrCacheMiss                = errors.New("memcache: cache miss")
	ErrNotStored                = errors.New("memcache: item not stored")
	ErrEmptyValue               = errors.New("memcache: empty value")
	ErrCASConflict              = errors.New("memcache: compare-and-swap conflict")
	ErrMalformedKey             = errors.New("memcache: key is too long or contains invalid characters")
	ErrNoServers                = errors.New("memcache: no servers configured or available")
	ErrNonexistentCommandName   = errors.New("memcache: nonexistent command name")
	ErrUnsupportedServerVersion = errors.New("memcache: unsupported server version ( < 1.6.14 )")
	ErrBadIncrDec               = errors.New("memcache: cannot increment or decrement non-numeric value")
	ErrCorruptGetResultRead     = errors.New("memcache: corrupt get result read")
)

Functions

This section is empty.

Types

type Client

type Client struct {
	// contains filtered or unexported fields
}

func New

func New(opts *Options) (*Client, error)

New creates a new memcache client with the provided options. It initializes connection pools for each server address and sets up key encoding based on the configuration. Returns an error if no servers are provided or if server selection function setup fails.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	err = memcache.Set(ctx, &mc.Item{
		Key:   "key",
		Value: []byte("value"),
		Flags: 42,
	}, mc.WithExpiration(5))
	if err != nil {
		log.Fatal(err)
	}

	i, err := memcache.Get(ctx, "key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("key=%q, value=%q\n", i.Key, i.Value)
}
Output:

key="key", value="value"

func (*Client) Add

func (c *Client) Add(ctx context.Context, i *Item, o ...MsOption) error

Add stores an item only if the key doesn't already exist. If the key exists, it returns ErrNotStored. This is useful for initializing cache entries or implementing distributed locks.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Delete key if it exists to ensure clean state
	memcache.Del(ctx, "newkey")

	// Add only works if key doesn't exist
	err = memcache.Add(ctx, &mc.Item{
		Key:   "newkey",
		Value: []byte("value"),
	}, mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	// Try to add again - will fail because key exists
	err = memcache.Add(ctx, &mc.Item{
		Key:   "newkey",
		Value: []byte("another value"),
	})
	if err == mc.ErrNotStored {
		fmt.Println("Key already exists, Add failed")
	}
}
Output:

Key already exists, Add failed
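
Because Add returns ErrNotStored when the key already exists, it can also act as a simple best-effort lock. A sketch of that pattern (the lock key name and TTL are arbitrary illustrative choices, not part of the API):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Try to acquire a lock that expires on its own after 30 seconds.
	err = memcache.Add(ctx, &mc.Item{
		Key:   "lock:report-job",
		Value: []byte("worker-1"),
	}, mc.WithExpiration(30))
	switch err {
	case nil:
		fmt.Println("lock acquired, doing the work")
		// Release early once finished; otherwise the TTL releases it.
		memcache.Del(ctx, "lock:report-job")
	case mc.ErrNotStored:
		fmt.Println("another worker holds the lock")
	default:
		log.Fatal(err)
	}
}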

func (*Client) Append added in v0.0.9

func (c *Client) Append(ctx context.Context, i *Item, o ...MsOption) error

Append appends data to an existing item. If the item doesn't exist and WithExpiration is provided, the item will be created with that TTL (autovivify). Otherwise returns ErrNotStored.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// First, set an initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "message",
		Value: []byte("Hello"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Append data to the existing value
	err = memcache.Append(ctx, &mc.Item{
		Key:   "message",
		Value: []byte(" World"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Retrieve the updated value
	item, err := memcache.Get(ctx, "message")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Message: %s\n", item.Value)
}
Output:

Message: Hello World
Example (Autovivify)
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Delete the key if it exists to demonstrate autovivify
	memcache.Del(ctx, "log")

	// Append with expiration - creates the key if it doesn't exist (autovivify)
	// WithExpiration enables autovivify for append/prepend operations
	// Note: autovivify may not work in all memcached versions/configurations
	err = memcache.Append(ctx, &mc.Item{
		Key:   "log",
		Value: []byte("Entry 1"),
	}, mc.WithExpiration(3600)) // Create with 1 hour TTL if missing
	if err != nil {
		// If autovivify doesn't work, create the key first
		err = memcache.Set(ctx, &mc.Item{
			Key:   "log",
			Value: []byte("Entry 1"),
		}, mc.WithExpiration(3600))
		if err != nil {
			log.Fatal(err)
		}
	}

	// Append more data (expiration not needed for existing keys)
	err = memcache.Append(ctx, &mc.Item{
		Key:   "log",
		Value: []byte("\nEntry 2"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Retrieve the log
	item, err := memcache.Get(ctx, "log")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Log:\n%s\n", item.Value)
}
Output:

Log:
Entry 1
Entry 2

func (*Client) Close added in v0.0.7

func (c *Client) Close() error

Close closes all connections in the pool and releases resources. After Close is called, the Client should not be used.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Always close the client when done
	defer memcache.Close()

	ctx := context.Background()
	err = memcache.Set(ctx, &mc.Item{
		Key:   "key",
		Value: []byte("value"),
	}, mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Client operations completed")
}
Output:

Client operations completed

func (*Client) CompareAndSwap

func (c *Client) CompareAndSwap(ctx context.Context, i *Item, o ...MsOption) error

CompareAndSwap updates an item only if its CAS (Compare-And-Swap) value matches the CAS value stored in the item. This provides optimistic locking to prevent race conditions in concurrent applications. Returns ErrCASConflict if the CAS value doesn't match (item was modified by another client). The item must be retrieved with WithCAS() to get the CAS value.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "counter",
		Value: []byte("100"),
	}, mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	// Get with CAS value
	item, err := memcache.Get(ctx, "counter", mc.WithCAS())
	if err != nil {
		log.Fatal(err)
	}

	// Modify and update using CAS
	item.Value = []byte("101")
	err = memcache.CompareAndSwap(ctx, item)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Updated successfully using CAS")
}
Output:

Updated successfully using CAS

func (*Client) Dec

func (c *Client) Dec(ctx context.Context, k string, delta uint64, expiration uint32, o ...MaOption) (new uint64, _ error)

Dec decrements a numeric value stored at the given key by delta. If the key doesn't exist, it can be created with an initial value using WithInitialValue(). Returns the new value after decrement. The value must be numeric (stored as a string representation of a number).

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "stock",
		Value: []byte("100"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Decrement
	newValue, err := memcache.Dec(ctx, "stock", 10, 3600)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Stock after decrement: %d\n", newValue)
}
Output:

Stock after decrement: 90

func (*Client) Del added in v0.0.2

func (c *Client) Del(ctx context.Context, k string, o ...MdOption) (retErr error)

Del deletes an item from the cache by its key. Returns ErrCacheMiss if the key doesn't exist. Supports WithInvalidate() to mark the item as stale instead of deleting it, which allows serving stale data while refreshing.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set a value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "temp",
		Value: []byte("data"),
	}, mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	// Delete the key
	err = memcache.Del(ctx, "temp")
	if err != nil {
		log.Fatal(err)
	}

	// Try to get deleted key
	_, err = memcache.Get(ctx, "temp")
	if err == mc.ErrCacheMiss {
		fmt.Println("Key successfully deleted")
	}
}
Output:

Key successfully deleted

func (*Client) Get

func (c *Client) Get(ctx context.Context, k string, o ...MgOption) (_ *Item, retErr error)

Get retrieves an item from the cache by its key. Returns ErrCacheMiss if the key doesn't exist. Supports various options like WithCAS(), WithEarlyRecache(), WithHit(), and WithLastAccess() to get additional metadata or modify behavior.
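
A minimal usage sketch (an illustration, not one of the package's own examples), assuming a memcached instance on 127.0.0.1:11211:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	item, err := memcache.Get(context.Background(), "user:123")
	switch err {
	case nil:
		fmt.Printf("cached: %s\n", item.Value)
	case mc.ErrCacheMiss:
		fmt.Println("not cached, fall back to the source of truth")
	default:
		log.Fatal(err)
	}
}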

func (*Client) GetMulti

func (c *Client) GetMulti(ctx context.Context, keys []string, o ...MgOption) (_ map[string]*Item, retErr error)

GetMulti retrieves multiple items from the cache in a single operation. Keys are broadcast to all servers from PickServer, and responses are validated to ensure keys come from a valid server (by hash). This handles cases where Set wrote to alternative servers when primary was unavailable. Only keys from valid servers (according to PickServer) are accepted to avoid stale data. Returns a map of found items (keys that don't exist are not included). This is more efficient than multiple Get() calls, especially when keys are on different servers.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set multiple keys
	keys := []string{"key1", "key2", "key3"}
	for i, key := range keys {
		err = memcache.Set(ctx, &mc.Item{
			Key:   key,
			Value: []byte(fmt.Sprintf("value%d", i+1)),
		}, mc.WithExpiration(60))
		if err != nil {
			log.Fatal(err)
		}
	}

	// Get multiple keys at once
	items, err := memcache.GetMulti(ctx, keys)
	if err != nil {
		log.Fatal(err)
	}

	for _, key := range keys {
		if item, ok := items[key]; ok {
			fmt.Printf("%s: %s\n", key, item.Value)
		}
	}
}
Output:

key1: value1
key2: value2
key3: value3

func (*Client) Inc

func (c *Client) Inc(ctx context.Context, k string, delta uint64, expiration uint32, o ...MaOption) (new uint64, _ error)

Inc increments a numeric value stored at the given key by delta. If the key doesn't exist, it can be created with an initial value using WithInitialValue(). Returns the new value after increment. The value must be numeric (stored as a string representation of a number).

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Use a unique key to avoid conflicts with other tests
	key := "example_counter_inc"

	// Delete key first to ensure clean state
	memcache.Del(ctx, key)

	// Increment counter, create with initial value 1 if doesn't exist
	// WithInitialValue(1) creates the key with value 1 if it doesn't exist
	newValue, err := memcache.Inc(ctx, key, 1, 3600, mc.WithInitialValue(1))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Counter after first increment: %d\n", newValue)

	// Increment again
	newValue, err = memcache.Inc(ctx, key, 5, 3600)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Counter after increment by 5: %d\n", newValue)
}
Output:

Counter after first increment: 1
Counter after increment by 5: 6

func (*Client) Prepend added in v0.0.9

func (c *Client) Prepend(ctx context.Context, i *Item, o ...MsOption) error

Prepend prepends data to an existing item. If the item doesn't exist and WithExpiration is provided, the item will be created with that TTL (autovivify). Otherwise returns ErrNotStored.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// First, set an initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "message",
		Value: []byte("World"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Prepend data to the existing value
	err = memcache.Prepend(ctx, &mc.Item{
		Key:   "message",
		Value: []byte("Hello "),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Retrieve the updated value
	item, err := memcache.Get(ctx, "message")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Message: %s\n", item.Value)
}
Output:

Message: Hello World

func (*Client) PurgeNamespace

func (c *Client) PurgeNamespace(ctx context.Context, ns string) error

PurgeNamespace invalidates all cache entries for the given namespace. This is done by incrementing the namespace version, which causes all items stored with that namespace to become invalid on the next access. This is useful for cache invalidation by user, tenant, or any logical grouping.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	err = memcache.Set(ctx, &mc.Item{
		Key:   "key",
		Value: []byte("value"),
		Flags: 42,
	}, mc.WithExpiration(5), mc.WithNamespace("namespace"))
	if err != nil {
		log.Fatal(err)
	}

	i, err := memcache.Get(ctx, "key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("key=%q, value=%q\n", i.Key, i.Value)

	memcache.PurgeNamespace(ctx, "namespace")

	if _, err = memcache.Get(ctx, "key"); err == nil {
		log.Fatal("ns bug")
	}
	fmt.Println(err)
}
Output:

key="key", value="value"
memcache: cache miss

func (*Client) Replace added in v0.0.9

func (c *Client) Replace(ctx context.Context, i *Item, o ...MsOption) error

Replace replaces an existing item. If the item doesn't exist, it returns ErrNotStored.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// First, set an initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "status",
		Value: []byte("old"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Replace the value (only works if key exists)
	err = memcache.Replace(ctx, &mc.Item{
		Key:   "status",
		Value: []byte("new"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Retrieve the updated value
	item, err := memcache.Get(ctx, "status")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Status: %s\n", item.Value)
}
Output:

Status: new

func (*Client) Set

func (c *Client) Set(ctx context.Context, i *Item, o ...MsOption) error

Set stores an item, overwriting any existing value for the key. This is the most common operation for storing cache entries.
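
A minimal sketch of a plain Set (an illustration, not one of the package's own examples), assuming a local memcached:

package main

import (
	"context"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	// Overwrite whatever is stored under "greeting" with a 60-second TTL.
	err = memcache.Set(context.Background(), &mc.Item{
		Key:   "greeting",
		Value: []byte("hello"),
	}, mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}
}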

type ClientError added in v0.0.2

type ClientError struct {
	Message string
}

func (*ClientError) Error added in v0.0.2

func (c *ClientError) Error() string

type Item

type Item struct {
	Key   string
	Value Value
	Flags uint16
	// contains filtered or unexported fields
}

func (*Item) CompressionRatio added in v0.0.2

func (i *Item) CompressionRatio() float32

CompressionRatio returns the compression ratio if the item was compressed. A value greater than 1.0 indicates the data was compressed. Returns 0.0 if the item was not compressed.
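
A sketch of checking the ratio after round-tripping a compressible value with WithCompression; it assumes the retrieved item carries the ratio of the stored data, so adjust to your setup:

package main

import (
	"bytes"
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Highly repetitive data compresses well.
	payload := bytes.Repeat([]byte("abcd"), 1024)
	err = memcache.Set(ctx, &mc.Item{
		Key:   "compressed_blob",
		Value: payload,
	}, mc.WithCompression(1024), mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	item, err := memcache.Get(ctx, "compressed_blob")
	if err != nil {
		log.Fatal(err)
	}
	// Assumption: a ratio above 1.0 means the stored value was compressed.
	if item.CompressionRatio() > 1.0 {
		fmt.Printf("stored compressed, ratio %.1f\n", item.CompressionRatio())
	}
}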

func (*Item) Hit added in v0.0.2

func (i *Item) Hit() bool

Hit returns true if the item has been accessed before. This is useful for identifying hot cache items that are frequently accessed.

func (*Item) LastAccess added in v0.0.2

func (i *Item) LastAccess() int

LastAccess returns the time in seconds since the item was last accessed. This is useful for cache optimization and identifying frequently used items.

func (*Item) Stale added in v0.0.2

func (i *Item) Stale() bool

Stale returns true if the item has expired but is still being served. Stale items are served to prevent cache misses during refresh operations.

func (*Item) Won added in v0.0.2

func (i *Item) Won() bool

Won returns true if this client "won" the early recache flag. When multiple clients request an item with WithEarlyRecache(), only one client wins and should refresh the cache in the background.

type MaOption added in v0.0.2

type MaOption func(c *maOpts)

func WithDeadlineArithmetic added in v0.0.7

func WithDeadlineArithmetic(t time.Time) MaOption

WithDeadlineArithmetic sets a deadline for the arithmetic operation. The operation will timeout if not completed by the deadline. If context also has a deadline, the minimum of both is used.

func WithInitialValue added in v0.0.2

func WithInitialValue(v uint64) MaOption

WithInitialValue sets the initial value for Inc/Dec operations if the key doesn't exist. Without this, Inc/Dec will fail if the key is missing.

type MdOption added in v0.0.2

type MdOption func(c *mdOpts)

func WithDeadlineDel added in v0.0.7

func WithDeadlineDel(t time.Time) MdOption

WithDeadlineDel sets a deadline for the Delete operation. The operation will timeout if not completed by the deadline. If context also has a deadline, the minimum of both is used.

func WithInvalidate added in v0.0.2

func WithInvalidate(seconds uint32) MdOption

WithInvalidate marks the item as stale instead of deleting it. The item will be served as stale data for the specified number of seconds, allowing clients to serve stale data while refreshing in the background.
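
A sketch of invalidating instead of deleting, so readers can keep serving the old value while it is refreshed. The 30-second window is arbitrary, and pairing it with WithEarlyRecache on reads follows the Serve Stale section above and is an assumption here:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	err = memcache.Set(ctx, &mc.Item{
		Key:   "profile:123",
		Value: []byte("old profile"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Mark the entry stale for 30 seconds instead of removing it outright.
	err = memcache.Del(ctx, "profile:123", mc.WithInvalidate(30))
	if err != nil {
		log.Fatal(err)
	}

	// Readers still see the old value; Stale()/Won() tell them to refresh.
	item, err := memcache.Get(ctx, "profile:123", mc.WithEarlyRecache(30))
	if err == nil && item.Stale() {
		fmt.Printf("serving stale value: %s\n", item.Value)
	}
}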

type MgOption added in v0.0.2

type MgOption func(c *mgOpts)

func WithCAS added in v0.0.2

func WithCAS() MgOption

WithCAS requests the CAS (Compare-And-Swap) value to be returned with the item. The CAS value can be used with CompareAndSwap() for optimistic locking.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set initial value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "data",
		Value: []byte("initial"),
	}, mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	// Get with CAS to retrieve CAS value for optimistic locking
	item, err := memcache.Get(ctx, "data", mc.WithCAS())
	if err != nil {
		log.Fatal(err)
	}

	// Use CAS value for CompareAndSwap operation
	item.Value = []byte("updated")
	err = memcache.CompareAndSwap(ctx, item)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Updated using CAS")
}
Output:

Updated using CAS

func WithDeadline added in v0.0.2

func WithDeadline(t time.Time) MgOption

WithDeadline sets a deadline for the Get operation. The operation will timeout if not completed by the deadline. If context also has a deadline, the minimum of both is used.
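
A sketch of a per-call deadline; WithDeadlineSet, WithDeadlineDel, and WithDeadlineArithmetic follow the same pattern for their respective operations:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Give this single Get at most 50ms; if ctx also carries a deadline,
	// the earlier of the two applies.
	item, err := memcache.Get(ctx, "key", mc.WithDeadline(time.Now().Add(50*time.Millisecond)))
	if err != nil {
		fmt.Println("lookup failed or timed out:", err)
		return
	}
	fmt.Printf("value: %s\n", item.Value)
}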

func WithEarlyRecache added in v0.0.2

func WithEarlyRecache(seconds int) MgOption

WithEarlyRecache enables early recaching to prevent cache stampede. If the remaining TTL is less than the specified seconds, this client may "win" the recache flag and should refresh the cache in the background. Other clients will get stale data immediately.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set a value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "cacheable",
		Value: []byte("data"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Get with early recache - will return stale data if within recache window
	item, err := memcache.Get(ctx, "cacheable", mc.WithEarlyRecache(300))
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Value: %s\n", item.Value)
}
Output:

Value: data

func WithHit added in v0.0.2

func WithHit() MgOption

WithHit requests the hit status to be returned with the item. Hit status indicates whether the item has been accessed before, useful for identifying hot cache items.

func WithLastAccess added in v0.0.2

func WithLastAccess() MgOption

WithLastAccess requests the last access time to be returned with the item. Returns the time in seconds since the item was last accessed.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set a value
	err = memcache.Set(ctx, &mc.Item{
		Key:   "tracked",
		Value: []byte("data"),
	}, mc.WithExpiration(3600))
	if err != nil {
		log.Fatal(err)
	}

	// Get with last access time
	item, err := memcache.Get(ctx, "tracked", mc.WithLastAccess())
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Retrieved: %s\n", item.Value)
}
Output:

Retrieved: data

type MsOption added in v0.0.2

type MsOption func(c *msOpts)

func WithCompression added in v0.0.2

func WithCompression(minLen int) MsOption

WithCompression enables compression for values larger than minLen bytes. Uses the compression functions provided in Options.Compression.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Create large value (will be compressed if > 1024 bytes)
	largeData := make([]byte, 2048)
	for i := range largeData {
		largeData[i] = byte(i % 256)
	}

	err = memcache.Set(ctx, &mc.Item{
		Key:   "large_data",
		Value: largeData,
	}, mc.WithCompression(1024), mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	// Get and verify
	item, err := memcache.Get(ctx, "large_data")
	if err != nil {
		log.Fatal(err)
	}

	if len(item.Value) == len(largeData) {
		fmt.Println("Data retrieved and decompressed successfully")
	}
}
Output:

Data retrieved and decompressed successfully

func WithDeadlineSet added in v0.0.7

func WithDeadlineSet(t time.Time) MsOption

WithDeadlineSet sets a deadline for the Set operation. The operation will timeout if not completed by the deadline. If context also has a deadline, the minimum of both is used.

func WithExpiration

func WithExpiration(seconds uint32) MsOption

WithExpiration sets the Time-To-Live (TTL) for the item in seconds. For append/prepend operations, this also enables autovivify (automatic creation) if the key doesn't exist.

func WithMinUses

func WithMinUses(number uint64) MsOption

WithMinUses requires an item to be set a minimum number of times before it can be retrieved. This is useful for preventing race conditions where an item might be read before it's fully initialized.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{
		Addrs: []string{"127.0.0.1:11211"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Set with min uses - item will only be stored if accessed at least N times
	err = memcache.Set(ctx, &mc.Item{
		Key:   "popular",
		Value: []byte("data"),
	}, mc.WithExpiration(60), mc.WithMinUses(3))
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("Item set with min uses requirement")
}
Output:

Item set with min uses requirement

func WithNamespace

func WithNamespace(ns string) MsOption

WithNamespace stores the item with a namespace, allowing all items in a namespace to be invalidated at once using PurgeNamespace(). This is useful for cache invalidation by user, tenant, or logical grouping.

type Options

type Options struct {
	Addrs                    []string
	PickServer               func(key string) []string
	DialTimeout              time.Duration
	ConnMaxLifetime          time.Duration
	MaxIdleConnsPerAddr      int
	DisableBinaryEncodedKeys bool
	Compression              struct {
		Compress   func([]byte) ([]byte, error)
		Decompress func([]byte) ([]byte, error)
	}
}

type ServerError added in v0.0.2

type ServerError struct {
	Message string
}

func (*ServerError) Error added in v0.0.2

func (s *ServerError) Error() string
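
ClientError and ServerError separate client-side failures from errors reported by the server. A sketch of telling them apart; whether returned errors can be matched with errors.As depends on how the library wraps them, which is assumed here:

package main

import (
	"context"
	"errors"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	_, err = memcache.Get(context.Background(), "some_key")
	var ce *mc.ClientError
	var se *mc.ServerError
	switch {
	case err == nil || errors.Is(err, mc.ErrCacheMiss):
		fmt.Println("normal outcome: hit or miss")
	case errors.As(err, &ce):
		fmt.Println("client-side error:", ce.Message)
	case errors.As(err, &se):
		fmt.Println("server reported an error:", se.Message)
	default:
		fmt.Println("other error:", err)
	}
}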

type Value

type Value []byte

func (*Value) Marshal

func (val *Value) Marshal(v any) error

Marshal serializes the given value into the Value using the appropriate encoding based on the value's type. Supports protobuf, JSON, MessagePack, and binary marshaling interfaces, falling back to JSON encoding.

func (Value) Unmarshal

func (val Value) Unmarshal(v any) error

Unmarshal deserializes the Value into the given value using the appropriate decoding based on the value's type. Supports protobuf, JSON, MessagePack, and binary unmarshaling interfaces, falling back to JSON decoding.
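
A sketch of caching a struct through Marshal and Unmarshal; for a plain struct with no protobuf, MessagePack, or binary marshaler, both fall back to JSON as described above:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kinescope/mc"
)

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

func main() {
	memcache, err := mc.New(&mc.Options{Addrs: []string{"127.0.0.1:11211"}})
	if err != nil {
		log.Fatal(err)
	}
	defer memcache.Close()

	ctx := context.Background()

	// Encode the struct into a Value (JSON fallback for a plain struct).
	var v mc.Value
	if err := v.Marshal(User{ID: 123, Name: "John"}); err != nil {
		log.Fatal(err)
	}
	err = memcache.Set(ctx, &mc.Item{Key: "user:123", Value: v}, mc.WithExpiration(60))
	if err != nil {
		log.Fatal(err)
	}

	// Decode it back on retrieval.
	item, err := memcache.Get(ctx, "user:123")
	if err != nil {
		log.Fatal(err)
	}
	var u User
	if err := item.Value.Unmarshal(&u); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("user %d: %s\n", u.ID, u.Name)
}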

Directories

Path Synopsis
proto
