Geche (generic cache)


Collection of generic cache implementations in Go focused on simplicity. No clever tricks, no dependencies, no gazillion configuration options. Implementations are as simple as possible to be predictable in max latency, memory allocation and concurrency impact (writes lock reads and are serialized with other writes).

  • MapCache is a very simple map-based thread-safe cache that is not limited in growth. Use it when you have a relatively small number of distinct keys that does not grow significantly, and you do not need values to expire automatically. E.g. if your keys are country codes, timezones, etc., this cache type is fine.
  • MapTTLCache is a map-based thread-safe cache with TTL support (values expire automatically). If you don't want to read values from the cache that are older than some threshold (e.g. 1 sec), you set this TTL when initializing the cache object, and obsolete rows are removed from the cache automatically.
  • RingBuffer is a fixed-size cache that allocates all memory up front and will not grow above it. It keeps a constant size by overwriting the oldest values in the cache with new ones. Use this cache when you need speed and a fixed memory footprint, and your key cardinality is predictable (or you are OK with cache misses if cardinality suddenly grows above your cache size).
  • KVCache is a specialized cache designed for efficient prefix-based key lookups. It uses a trie data structure to store keys, enabling lexicographical ordering and fast retrieval of all values whose keys start with a given prefix.

Examples

The interface is quite simple, with seven methods: Set, Get, Del, SetIfPresent, SetIfAbsent, Snapshot and Len. Here's a quick example for a ring buffer holding 10k records.

package main

import (
    "fmt"

    "github.com/c-pro/geche"
)

func main() {
    // Create ring buffer FIFO cache with size 10k.
    c := geche.NewRingBuffer[int, string](10000)

    c.Set(1, "one")
    c.Set(2, "dua")
    c.Del(2)
    v, err := c.Get(1)
    if err != nil {
        fmt.Println(err)
        return
    }

    // Updates the value associated with key 1.
    previousVal, updated := c.SetIfPresent(1, "two")
    // Will print "one".
    fmt.Println(previousVal)
    // Will print "true".
    fmt.Println(updated)

    // Has no effect: key 2 is not in the cache.
    c.SetIfPresent(2, "dua")

    // Will print "one": v was read before the update.
    fmt.Println(v)
}

Sometimes it is useful to get a snapshot of the whole cache (e.g. to avoid a cold cache on service restart). All cache implementations have a Snapshot() map[K]V function that acquires a read lock, copies the cache content to a map and returns it. Note that maps in Go do not guarantee key order, so if you need to iterate over the cache in some specific order, see the NewKV wrapper section below.

Please be aware that it is a shallow copy, so if your cache value type contains reference types, it may be unsafe to modify the returned copy.

    c := geche.NewMapCache[int, string]()
    c.Set(1, "one")
    c.Set(2, "dua")

    fmt.Println(c.Snapshot())

Wrappers

There are several wrappers that you can use to add extra features to your cache of choice. You can nest wrappers, but be aware that the order in which you wrap changes the behaviour of the resulting cache.

For example, if you wrap NewUpdater with NewSharded, your updater poolSize will effectively be multiplied by the number of shards, because each shard will be a separate Updater instance, as sketched below.
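
A minimal sketch of this nesting, assuming an update function updateFn of type func(string) (string, error) is defined elsewhere:

c := NewSharded[string](
    func() Geche[string, string] {
        // Each shard is a MapCache wrapped with its own Updater.
        return NewCacheUpdater[string, string](
            NewMapCache[string, string](),
            updateFn, // assumed to be defined elsewhere
            10,       // poolSize per shard: 4 shards allow up to 40 concurrent updates
        )
    },
    4, // number of shards
    &StringMapper{},
)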

CacheUpdater

Another useful wrapper is the CacheUpdater. It implements a common scenario where, on a cache miss, some function is called to get the value from an external resource (database, API, some lengthy calculation). This scenario sounds deceptively simple, but a straightforward implementation can lead to nasty problems like cache stampede. To avoid these types of problems CacheUpdater has two limiting mechanisms:

  • Pool size limits the number of keys that can be updated concurrently. E.g. if you set a pool size of 10, the cache updater will never run more than 10 simultaneous queries to fetch values that were not found in the cache.
  • The in-flight mechanism prevents calling the update function for a key while an update for that same key is already running. E.g. if 10 requests for the same key suddenly hit the cache and miss (the key is not in the cache), only the first one will trigger the update and the others will just wait for the result. See the sketch after the example below.
updateFn := func(key string) (string, error) {
    return DoSomeDatabaseQuery(key)
}

u := NewCacheUpdater[string, string](
    NewMapCache[string, string](),
    updateFn,
    10,
)

v, err := c.Get("1") // Updater will get value from db here.
if err != nil {
    fmt.Println(err)
    return
}

fmt.Println(v)

v, err = c.Get("1") // And now the value will be returned from the cache.
if err != nil {
    fmt.Println(err)
    return
}

fmt.Println(v)
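
To see the in-flight mechanism in action, here is a small sketch (the slow query is hypothetical, simulated with time.Sleep; uses sync and sync/atomic from the standard library). 10 concurrent misses on the same key run the update function only once:

var calls atomic.Int64
slowFn := func(key string) (string, error) {
    calls.Add(1)                       // count calls to the "database"
    time.Sleep(100 * time.Millisecond) // simulate a slow query
    return "value-for-" + key, nil
}

u := NewCacheUpdater[string, string](NewMapCache[string, string](), slowFn, 10)

// Fire 10 concurrent Gets that all miss on the same key.
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        _, _ = u.Get("hot-key")
    }()
}
wg.Wait()

fmt.Println(calls.Load()) // prints 1: only the first miss ran slowFn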

The Updater provides a ListByPrefix function, but it can be used only if the underlying cache supports it. Otherwise it will panic.

Sharding

If you intend to use the cache in a highly concurrent manner (16+ cores and 100k+ RPS), it may make sense to shard it. To shard the cache, wrap it using NewSharded. The sharded cache determines which shard a value should go to using a mapper that implements the Mapper interface with a Map(key K, numShards int) int function. The point of this function is to map keys uniformly over the provided number of shards.

func main() {
    ctx := context.Background()

    // Create sharded TTL cache with the number of shards defined
    // automatically based on the number of available CPUs.
    c := NewSharded[string](
        func() Geche[string, string] {
            return NewMapTTLCache[string, string](ctx, time.Second, time.Second)
        },
        0,
        &StringMapper{},
    )

    c.Set("1", "one")
    c.Set("2", "dua")
    c.Del("2")
    v, err := c.Get("1")
    if err != nil {
        fmt.Println(err)
        return
    }

    fmt.Println(v)
}
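
If neither StringMapper nor NumberMapper fits your keys, implementing the Mapper interface yourself takes a few lines. A minimal sketch, assuming FNV-1a hashing (hash/fnv from the standard library) over string keys; any reasonably uniform hash works:

type fnvMapper struct{}

// Map hashes the key with FNV-1a and reduces it modulo the shard count.
func (fnvMapper) Map(key string, numShards int) int {
    h := fnv.New32a()
    h.Write([]byte(key))
    return int(h.Sum32() % uint32(numShards))
}

Pass &fnvMapper{} to NewSharded in place of &StringMapper{}.
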
KV

If your use case requires not only random but also sequential access to values in the cache, you can wrap it using the NewKV wrapper. It provides an extra ListByPrefix function that returns all values in the cache whose keys start with the provided prefix. Values are returned in lexicographical order of the keys (ordered by key).

Another useful trick is to use ListByPrefix("") to get all values in the cache as a slice ordered by key.

Internally KV maintains a trie structure over the keys to be able to quickly find all keys with the same prefix. This trie is updated on every Set and Del operation, so it is not free in terms of either CPU or memory consumption. If you don't need ListByPrefix functionality, don't use this wrapper.

This wrapper has some limitations:

  • KV only supports keys of type string.
  • Lexicographical order is maintained on the byte level, so it will work as expected for ASCII strings, but may not work for other encodings.
  • Updater and Locker wrappers provide a ListByPrefix function that calls the underlying KV implementation. But if you wrap KV with the Sharded wrapper, you will lose this functionality. In other words, it does not make sense to wrap KV with the Sharded wrapper.
	cache := NewMapCache[string, string]()
	kv := NewKV[string](cache)

	kv.Set("foo", "bar")
	kv.Set("foo2", "bar2")
	kv.Set("foo3", "bar3")
	kv.Set("foo1", "bar1")

	res, _ := kv.ListByPrefix("foo")
	fmt.Println(res)
	// Output: [bar bar1 bar2 bar3]
Locker

This wrapper is useful when you need to perform several operations on the cache atomically. For example, you store account balances in the cache and want to transfer some amount from one account to another:

	locker := NewLocker[int, int](NewMapCache[int, int]())

	// Hypothetical account IDs.
	accA, accB := 1, 2

	// Acquire RW lock "transaction".
	tx := locker.Lock()

	balA, _ := tx.Get(accA)
	balB, _ := tx.Get(accB)

	// Move amount from account B to account A.
	amount := 100

	balA += amount
	balB -= amount

	tx.Set(accA, balA)
	tx.Set(accB, balB)

	// Unlock the cache.
	tx.Unlock()

The Locker itself does not implement the Geche interface, but the Tx object returned by the Lock or RLock method does. Be careful to follow these rules (violating them leads to panics or a stuck lock):

  • do not use Set and Del on a read-only Tx acquired with RLock.
  • do not use a Tx after the Unlock call.
  • do not Unlock a Tx that was already unlocked. And do not forget to Unlock the Tx object, otherwise the lock will be held forever.

The returned Tx object is not a transaction in the sense that it does not allow rollback, but it does provide atomicity and isolation guarantees.
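
For read-only multi-key operations, RLock gives the same consistent view without blocking other readers. A short sketch reusing the locker and account keys from the example above:

	// Read-only "transaction": several RLock Tx objects can be held concurrently.
	rtx := locker.RLock()
	balA, _ := rtx.Get(accA)
	balB, _ := rtx.Get(accB)
	fmt.Println(balA + balB) // a consistent view of both balances
	rtx.Unlock()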

Locker provides a ListByPrefix function, but it can only be used if the underlying cache implementation supports it (is a KV wrapper). Otherwise it will panic.

Benchmarks

Benchmarks are designed to compare basic operations of different cache implementations in this library.

 $ go test -bench=. -benchmem -benchtime=10s .
goos: linux
goarch: amd64
pkg: github.com/c-pro/geche
cpu: AMD Ryzen 7 PRO 8840U w/ Radeon 780M Graphics
BenchmarkSet/MapCache-16                65541076               182.1 ns/op             0 B/op          0 allocs/op
BenchmarkSet/MapTTLCache-16             19751806               754.7 ns/op            10 B/op          0 allocs/op
BenchmarkSet/RingBuffer-16              51921265               365.9 ns/op             0 B/op          0 allocs/op
BenchmarkSet/KVMapCache-16               3876873              3461 ns/op             804 B/op         11 allocs/op
BenchmarkSet/KVCache-16                 14983084              1025 ns/op              54 B/op          1 allocs/op
BenchmarkSetIfPresentOnlyHits/MapCache-16               79179759               187.6 ns/op             0 B/op          0 allocs/op
BenchmarkSetIfPresentOnlyHits/MapTTLCache-16            37620368               371.8 ns/op             0 B/op          0 allocs/op
BenchmarkSetIfPresentOnlyHits/RingBuffer-16             100000000              110.3 ns/op             0 B/op          0 allocs/op
BenchmarkSetIfPresentOnlyHits/KVMapCache-16             39745081               345.1 ns/op             8 B/op          1 allocs/op
BenchmarkSetIfPresentOnlyMisses/MapCache-16             786898237               15.04 ns/op            0 B/op          0 allocs/op
BenchmarkSetIfPresentOnlyMisses/MapTTLCache-16          648632726               18.43 ns/op            0 B/op          0 allocs/op
BenchmarkSetIfPresentOnlyMisses/RingBuffer-16           746030799               15.92 ns/op            0 B/op          0 allocs/op
BenchmarkSetIfPresentOnlyMisses/KVMapCache-16           625973469               19.00 ns/op            0 B/op          0 allocs/op
BenchmarkSetIfPresentOnlyMisses/KVCache-16              972807471               12.02 ns/op            0 B/op          0 allocs/op
BenchmarkGetHit/MapCache-16                             100000000              104.8 ns/op             0 B/op          0 allocs/op
BenchmarkGetHit/MapTTLCache-16                          57810127               261.9 ns/op             0 B/op          0 allocs/op
BenchmarkGetHit/RingBuffer-16                           121727826               98.63 ns/op            0 B/op          0 allocs/op
BenchmarkGetHit/KVMapCache-16                           100000000              106.3 ns/op             0 B/op          0 allocs/op
BenchmarkGetHit/KVCache-16                              158599485               78.32 ns/op            0 B/op          0 allocs/op
BenchmarkGetMiss/MapCache-16                            1000000000              11.01 ns/op            0 B/op          0 allocs/op
BenchmarkGetMiss/MapTTLCache-16                         749231084               15.85 ns/op            0 B/op          0 allocs/op
BenchmarkGetMiss/RingBuffer-16                          676585886               17.73 ns/op            0 B/op          0 allocs/op
BenchmarkGetMiss/KVMapCache-16                          1000000000              11.64 ns/op            0 B/op          0 allocs/op
BenchmarkGetMiss/KVCache-16                             297815424               39.80 ns/op            0 B/op          0 allocs/op
BenchmarkDelHit/MapCache-16                             1000000000              10.84 ns/op            0 B/op          0 allocs/op
BenchmarkDelHit/MapTTLCache-16                          756901813               14.37 ns/op            0 B/op          0 allocs/op
BenchmarkDelHit/RingBuffer-16                           1000000000              10.28 ns/op            0 B/op          0 allocs/op
BenchmarkDelHit/KVMapCache-16                           358719861               28.27 ns/op            1 B/op          0 allocs/op
BenchmarkDelHit/KVCache-16                              366528763               31.60 ns/op           17 B/op          1 allocs/op
BenchmarkDelMiss/MapCache-16                            792498559               15.05 ns/op            0 B/op          0 allocs/op
BenchmarkDelMiss/MapTTLCache-16                         735312480               16.18 ns/op            0 B/op          0 allocs/op
BenchmarkDelMiss/RingBuffer-16                          364969610               32.75 ns/op            0 B/op          0 allocs/op
BenchmarkDelMiss/KVMapCache-16                          78108807               153.3 ns/op            64 B/op          1 allocs/op
BenchmarkDelMiss/KVCache-16                             49184259               233.0 ns/op           352 B/op          4 allocs/op
BenchmarkEverything/MapCache-16                         67129406               175.8 ns/op             0 B/op          0 allocs/op
BenchmarkEverything/MapTTLCache-16                      24496364               650.0 ns/op             8 B/op          0 allocs/op
BenchmarkEverything/RingBuffer-16                       60798320               304.7 ns/op             0 B/op          0 allocs/op
BenchmarkEverything/ShardedRingBufferUpdater-16         44071453               427.5 ns/op            19 B/op          0 allocs/op
BenchmarkEverything/KVMapCache-16                        5465926              2695 ns/op             745 B/op         10 allocs/op
BenchmarkEverything/KVCache-16                          17036607              1020 ns/op              60 B/op          0 allocs/op
BenchmarkEverything/LockerMapCache-16                   48808772               209.0 ns/op             1 B/op          0 allocs/op
BenchmarkKVListByPrefix-16                               3052804              3905 ns/op            1008 B/op         15 allocs/op
BenchmarkKVCacheListByPrefix-16                          8222977              1438 ns/op             580 B/op          8 allocs/op
PASS
ok      github.com/c-pro/geche  956.205s

Parallel benchmarks

I considered sharding the cache into several buckets to ease lock contention, but after comparing test results with several cache libraries that have sharding, I do not see a clear need for it. Maybe with 96 CPUs I would reconsider, but with 10 CPUs I do not see a significant bottleneck in the mutex.

I implemented sharding anyway, because why not. But it is a separate wrapper, so it does not complicate the existing codebase.

go test -benchtime=10s -benchmem -bench .
goos: linux
goarch: amd64
pkg: cache_bench
cpu: AMD Ryzen 7 PRO 8840U w/ Radeon 780M Graphics
BenchmarkEverythingParallel/MapCache-16         	100000000	       173.9 ns/op	       1 B/op	       0 allocs/op
BenchmarkEverythingParallel/MapTTLCache-16      	65010415	       382.5 ns/op	       2 B/op	       0 allocs/op
BenchmarkEverythingParallel/RingBuffer-16       	100000000	       225.9 ns/op	       0 B/op	       0 allocs/op
BenchmarkEverythingParallel/ShardedMapCache-16  	198813898	        56.77 ns/op	       0 B/op	       0 allocs/op
BenchmarkEverythingParallel/ShardedMapTTLCache-16         	122482419	        97.60 ns/op	       0 B/op	       0 allocs/op
BenchmarkEverythingParallel/ShardedRingBuffer-16          	188570131	        63.23 ns/op	       0 B/op	       0 allocs/op
BenchmarkEverythingParallel/github.com/Code-Hex/go-generics-cache-16         	55945956	       474.2 ns/op	      91 B/op	       2 allocs/op
BenchmarkEverythingParallel/github.com/Yiling-J/theine-go-16                 	71100289	       172.2 ns/op	       1 B/op	       0 allocs/op
BenchmarkEverythingParallel/github.com/jellydator/ttlcache-16                	58265994	       924.8 ns/op	      30 B/op	       0 allocs/op
BenchmarkEverythingParallel/github.com/erni27/imcache-16                     	236973852	        45.84 ns/op	      33 B/op	       0 allocs/op
BenchmarkEverythingParallel/github.com/dgraph-io/ristretto-16                	88618468	       141.9 ns/op	      94 B/op	       2 allocs/op
BenchmarkEverythingParallel/github.com/hashicorp/golang-lru/v2-16            	78454165	       399.1 ns/op	       2 B/op	       0 allocs/op
BenchmarkEverythingParallel/github.com/egregors/kesh-16                      	68416022	       337.8 ns/op	       7 B/op	       0 allocs/op
BenchmarkEverythingParallel/KVMapCache-16                                    	 7607014	      2050 ns/op	     254 B/op	       3 allocs/op
BenchmarkEverythingParallel/KVCache-16                                       	52397652	       902.8 ns/op	      53 B/op	       0 allocs/op
BenchmarkEverythingParallel/ShardedKVCache-16                                	100000000	       150.9 ns/op	      45 B/op	       0 allocs/op
PASS
ok  	cache_bench	390.130s

KV and KVCache results are worse than those of the other caches, but this is expected: they maintain a key index that allows prefix search in deterministic order, which the other caches do not support. They update the trie structure on Set and do extra work to clean up the key on Del.

The concurrent comparison benchmark lives in a separate repository to avoid pulling unnecessary dependencies into the library.

Documentation

Overview

Package geche (GEneric Cache) implements several types of caches using Go generics (requires go 1.18+).


Constants

This section is empty.

Variables

var ErrNotFound = errors.New("not found")
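
Get implementations return this sentinel on a cache miss, so a miss check is a plain errors.Is comparison. A sketch, assuming c is any cache from this package:

v, err := c.Get(42)
if errors.Is(err, ErrNotFound) {
	// Cache miss: v holds the zero value.
}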

Functions

This section is empty.

Types

type BufferRec added in v1.4.0

type BufferRec[K comparable, V any] struct {
	K K
	V V
	// contains filtered or unexported fields
}

type Geche

type Geche[K comparable, V any] interface {
	Set(K, V)
	// SetIfPresent sets the kv pair only if the key was already present, and returns the previous value (if any) and whether the update was performed.
	SetIfPresent(K, V) (V, bool)
	// SetIfAbsent sets the kv pair only if the key did not exist yet, and returns the existing value (if any) and whether the insertion was performed.
	SetIfAbsent(K, V) (V, bool)
	Get(K) (V, error)
	Del(K) error
	Snapshot() map[K]V
	Len() int
}

Geche interface is a common interface for all cache implementations.
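
Since every cache and wrapper in the package satisfies this interface, helper code can stay generic over the concrete implementation. A hypothetical sketch (warmUp is not part of the package):

// warmUp preloads any Geche implementation from a plain map.
func warmUp[K comparable, V any](c Geche[K, V], data map[K]V) {
	for k, v := range data {
		c.Set(k, v)
	}
}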

type KV

type KV[V any] struct {
	// contains filtered or unexported fields
}

func NewKV

func NewKV[V any](
	cache Geche[string, V],
) *KV[V]
Example
cache := NewMapCache[string, string]()
kv := NewKV[string](cache)

kv.Set("foo", "bar")
kv.Set("foo2", "bar2")
kv.Set("foo3", "bar3")
kv.Set("foo1", "bar1")

res, _ := kv.ListByPrefix("foo")
fmt.Println(res)
Output:

[bar bar1 bar2 bar3]

func (*KV[V]) Del

func (kv *KV[V]) Del(key string) error

Del key from the underlying cache.

func (*KV[V]) Get

func (kv *KV[V]) Get(key string) (V, error)

Get value by key from the underlying cache.

func (*KV[V]) Len

func (kv *KV[V]) Len() int

Len returns the total number of elements in the underlying cache.

func (*KV[V]) ListByPrefix

func (kv *KV[V]) ListByPrefix(prefix string) ([]V, error)

func (*KV[V]) Set

func (kv *KV[V]) Set(key string, value V)

Set key-value pair while updating the trie. Panics if key is empty.

func (*KV[V]) SetIfAbsent added in v1.5.2

func (kv *KV[V]) SetIfAbsent(key string, value V) (V, bool)

func (*KV[V]) SetIfPresent added in v1.3.0

func (kv *KV[V]) SetIfPresent(key string, value V) (V, bool)

func (*KV[V]) Snapshot

func (kv *KV[V]) Snapshot() map[string]V

Snapshot returns a shallow copy of the cache data. Locks the underlying cache from modification for the duration of the copy.

type KVCache added in v1.5.0

type KVCache[K byteSlice, V any] struct {
	// contains filtered or unexported fields
}

KVCache is a container that stores values ordered by their keys using a trie index. It allows in-order listing of values by prefix.

func NewKVCache added in v1.5.0

func NewKVCache[K byteSlice, V any]() *KVCache[K, V]

NewKVCache creates a new KVCache.

Example
cache := NewKVCache[string, string]()

cache.Set("foo", "bar")
cache.Set("foo2", "bar2")
cache.Set("foo3", "bar3")
cache.Set("foo1", "bar1")

res, _ := cache.ListByPrefix("foo")
fmt.Println(res)
Output:

[bar bar1 bar2 bar3]

func (*KVCache[K, V]) AllByPrefix added in v1.5.0

func (kv *KVCache[K, V]) AllByPrefix(prefix string) iter.Seq2[string, V]

AllByPrefix returns a (read-only) iterator over values with keys starting with the given prefix. The iterator yields key-value pairs. Attempting to modify the cache while iterating will lead to a deadlock.

Example
cache := NewKVCache[string, string]()

cache.Set("foo", "bar")
cache.Set("foo2", "bar2")
cache.Set("foo3", "bar3")
cache.Set("foo1", "bar1")

for k, v := range cache.AllByPrefix("foo") {
	fmt.Println(k, v)
}
Output:

foo bar
foo1 bar1
foo2 bar2
foo3 bar3

func (*KVCache[K, V]) Del added in v1.5.0

func (kv *KVCache[K, V]) Del(key string) error

Del removes the record by key. Return value is always nil.

func (*KVCache[K, V]) Get added in v1.5.0

func (kv *KVCache[K, V]) Get(key K) (V, error)

Get retrieves a value by key.

func (*KVCache[K, V]) Len added in v1.5.0

func (kv *KVCache[K, V]) Len() int

Len returns the number of values in the cache.

func (*KVCache[K, V]) ListByPrefix added in v1.5.0

func (kv *KVCache[K, V]) ListByPrefix(prefix string) ([]V, error)

ListByPrefix returns all values with keys starting with the given prefix.

func (*KVCache[K, V]) Set added in v1.5.0

func (kv *KVCache[K, V]) Set(key K, value V)

Set sets the value for the key.

func (*KVCache[K, V]) SetIfAbsent added in v1.5.2

func (kv *KVCache[K, V]) SetIfAbsent(key K, value V) (V, bool)

func (*KVCache[K, V]) SetIfPresent added in v1.5.0

func (kv *KVCache[K, V]) SetIfPresent(key K, value V) (V, bool)

SetIfPresent sets the value only if the key already exists.

func (*KVCache[K, V]) Snapshot added in v1.5.0

func (kv *KVCache[K, V]) Snapshot() map[string]V

Snapshot returns a copy of the cache.

type Locker added in v1.1.0

type Locker[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Locker is a wrapper for any Geche interface implementation, that provides Lock() and RLock() methods that return Tx object implementing Geche interface. Returned object is not a transaction in a sense that it does not allow commit/rollback or isolation level higher than READ COMMITTED. It only provides a way to do multiple cache operations atomically.

func NewLocker added in v1.1.0

func NewLocker[K comparable, V any](
	cache Geche[K, V],
) *Locker[K, V]

NewLocker creates a new Locker instance.

func (*Locker[K, V]) Lock added in v1.1.0

func (t *Locker[K, V]) Lock() *Tx[K, V]

Returns a read/write-locked cache object.

func (*Locker[K, V]) RLock added in v1.1.0

func (t *Locker[K, V]) RLock() *Tx[K, V]

Returns a read-only locked cache object.

type MapCache

type MapCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

MapCache is the simplest thread-safe map-based cache implementation. It does not have any limits or TTL and can grow indefinitely. It should be used when the number of distinct keys in the cache is fixed or grows very slowly.

func NewMapCache

func NewMapCache[K comparable, V any]() *MapCache[K, V]

func (*MapCache[K, V]) Del

func (c *MapCache[K, V]) Del(key K) error

Del removes key from the cache. Return value is always nil.

func (*MapCache[K, V]) Get

func (c *MapCache[K, V]) Get(key K) (V, error)

Get returns ErrNotFound if key does not exist in the cache.

func (*MapCache[K, V]) Len

func (c *MapCache[K, V]) Len() int

Len returns number of items in the cache.

func (*MapCache[K, V]) Set

func (c *MapCache[K, V]) Set(key K, value V)

func (*MapCache[K, V]) SetIfAbsent added in v1.5.2

func (c *MapCache[K, V]) SetIfAbsent(key K, value V) (V, bool)

func (*MapCache[K, V]) SetIfPresent added in v1.3.0

func (c *MapCache[K, V]) SetIfPresent(key K, value V) (V, bool)

func (*MapCache[K, V]) Snapshot

func (c *MapCache[K, V]) Snapshot() map[K]V

Snapshot returns a shallow copy of the cache data. Locks the cache from modification for the duration of the copy.

type MapTTLCache

type MapTTLCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

MapTTLCache is a thread-safe map-based cache with TTL invalidation support. MapTTLCache uses a doubly linked list to maintain FIFO order of inserted values.

func NewMapTTLCache

func NewMapTTLCache[K comparable, V any](
	ctx context.Context,
	ttl time.Duration,
	cleanupInterval time.Duration,
) *MapTTLCache[K, V]

NewMapTTLCache creates a MapTTLCache instance and spawns a background cleanup goroutine that periodically removes outdated records. The cleanup goroutine runs once per cleanupInterval until ctx is canceled. Each record in the cache is valid for the ttl duration since it was Set.

func (*MapTTLCache[K, V]) Del

func (c *MapTTLCache[K, V]) Del(key K) error

func (*MapTTLCache[K, V]) Get

func (c *MapTTLCache[K, V]) Get(key K) (V, error)

Get returns ErrNotFound if key is not found in the cache or record is outdated.
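
A small sketch of the expiry behaviour (the 50ms TTL is chosen for illustration):

ctx := context.Background()
c := NewMapTTLCache[string, string](ctx, 50*time.Millisecond, time.Minute)
c.Set("k", "v")
time.Sleep(100 * time.Millisecond)
_, err := c.Get("k")
fmt.Println(errors.Is(err, ErrNotFound)) // true: the record is outdated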

func (*MapTTLCache[K, V]) Len

func (c *MapTTLCache[K, V]) Len() int

Len returns the number of records in the cache.

func (*MapTTLCache[K, V]) OnEvict added in v1.6.0

func (c *MapTTLCache[K, V]) OnEvict(f onEvictFunc[K, V])

OnEvict sets a callback function that will be called when an entry is evicted from the cache due to TTL expiration. The callback receives the key and value of the evicted entry. Note that the eviction callback is not called for the Del operation.
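
A usage sketch with a logging callback; the func(key K, value V) callback signature is inferred from the description above, and ctx is assumed to come from the caller:

c := NewMapTTLCache[string, string](ctx, time.Second, time.Second)
c.OnEvict(func(key, value string) {
	log.Printf("evicted %q=%q", key, value)
})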

func (*MapTTLCache[K, V]) Set

func (c *MapTTLCache[K, V]) Set(key K, value V)

func (*MapTTLCache[K, V]) SetIfAbsent added in v1.5.2

func (c *MapTTLCache[K, V]) SetIfAbsent(key K, value V) (V, bool)

func (*MapTTLCache[K, V]) SetIfPresent added in v1.3.0

func (c *MapTTLCache[K, V]) SetIfPresent(key K, value V) (V, bool)

SetIfPresent sets the given key to the given value if the key was already present, and resets the TTL.

func (*MapTTLCache[K, V]) Snapshot

func (c *MapTTLCache[K, V]) Snapshot() map[K]V

Snapshot returns a shallow copy of the cache data. Locks the cache from modification for the duration of the copy.

type Mapper

type Mapper[K any] interface {
	Map(key K, numShards int) int
}

Mapper maps keys to shards. A good mapper maps them uniformly.

type NumberMapper

type NumberMapper[K integer] struct{}

NumberMapper maps integer keys to N shards using modulo operation.

func (*NumberMapper[K]) Map

func (nm *NumberMapper[K]) Map(key K, numShards int) int

Map key to shard number.

type RingBuffer

type RingBuffer[K comparable, V any] struct {
	// contains filtered or unexported fields
}

RingBuffer cache preallocates a fixed number of elements and starts overwriting oldest values when this number is reached. The idea is to reduce allocations and GC pressure while having fixed memory footprint (does not grow).

func NewRingBuffer

func NewRingBuffer[K comparable, V any](size int) *RingBuffer[K, V]

NewRingBuffer creates a RingBuffer instance with a predefined size (number of records). This number of records is preallocated immediately. The RingBuffer cache can't hold more than size values.

func (*RingBuffer[K, V]) All added in v1.4.0

func (c *RingBuffer[K, V]) All() iter.Seq2[K, V]

All is a (read-only) iterator over all key-value pairs in the cache. Attempting to modify the cache (Set/Del, etc.) while iterating will lead to a deadlock.
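
A minimal usage sketch (requires Go 1.23+ for ranging over iter.Seq2):

c := NewRingBuffer[int, string](3)
c.Set(1, "a")
c.Set(2, "b")
for k, v := range c.All() {
	fmt.Println(k, v)
}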

func (*RingBuffer[K, V]) Del

func (c *RingBuffer[K, V]) Del(key K) error

Del removes key from the cache. Return value is always nil.

func (*RingBuffer[K, V]) Get

func (c *RingBuffer[K, V]) Get(key K) (V, error)

Get returns cached value for the key, or ErrNotFound if the key does not exist.

func (*RingBuffer[K, V]) Len

func (c *RingBuffer[K, V]) Len() int

Len returns number of items in the cache.

func (*RingBuffer[K, V]) ListAll added in v1.4.0

func (c *RingBuffer[K, V]) ListAll() []BufferRec[K, V]

ListAll returns all key-value pairs in the cache in the order they were added.

func (*RingBuffer[K, V]) ListAllKeys added in v1.4.0

func (c *RingBuffer[K, V]) ListAllKeys() []K

ListAllKeys returns all keys in the cache in the order they were added.

func (*RingBuffer[K, V]) ListAllValues added in v1.4.0

func (c *RingBuffer[K, V]) ListAllValues() []V

ListAllValues returns all values in the cache in the order they were added.

func (*RingBuffer[K, V]) Set

func (c *RingBuffer[K, V]) Set(key K, value V)

Set adds value to the ring buffer and key index.

func (*RingBuffer[K, V]) SetIfAbsent added in v1.5.2

func (c *RingBuffer[K, V]) SetIfAbsent(key K, value V) (V, bool)

func (*RingBuffer[K, V]) SetIfPresent added in v1.3.0

func (c *RingBuffer[K, V]) SetIfPresent(key K, value V) (V, bool)

func (*RingBuffer[K, V]) Snapshot

func (c *RingBuffer[K, V]) Snapshot() map[K]V

Snapshot returns a shallow copy of the cache data. Locks the cache from modification for the duration of the copy.

type Sharded

type Sharded[K comparable, V any] struct {
	N int
	// contains filtered or unexported fields
}

Sharded is a wrapper for any Geche interface implementation that provides the same interface itself. The idea is to better utilize CPUs using several thread safe shards.

func NewSharded

func NewSharded[K comparable, V any](
	shardFactory func() Geche[K, V],
	numShards int,
	keyMapper Mapper[K],
) *Sharded[K, V]

NewSharded creates numShards underlying cache containers, using the shardFactory function to initialize each, and returns a Sharded instance that implements the Geche interface and routes operations to shards using the keyMapper.

Example
numShards := 4
c := NewSharded[int](
	func() Geche[int, string] {
		return NewMapCache[int, string]()
	},
	numShards,
	&NumberMapper[int]{},
)

// Set 4000 records in 4 parallel goroutines.
wg := sync.WaitGroup{}
wg.Add(numShards)
for i := 0; i < numShards; i++ {
	go func(i int) {
		defer wg.Done()
		for j := i * 1000; j < i*1000+1000; j++ {
			c.Set(j, strconv.Itoa(j))
		}
	}(i)
}

wg.Wait()

for i := 0; i < 10; i++ {
	v, _ := c.Get(i*1000 + 500)
	fmt.Println(v)
}
Output:

500
1500
2500
3500

func (*Sharded[K, V]) Del

func (s *Sharded[K, V]) Del(key K) error

Del key from the underlying sharded cache.

func (*Sharded[K, V]) Get

func (s *Sharded[K, V]) Get(key K) (V, error)

Get value by key from the underlying sharded cache.

func (*Sharded[K, V]) Len

func (s *Sharded[K, V]) Len() int

Len returns total number of elements in the underlying sharded caches.

func (*Sharded[K, V]) Set

func (s *Sharded[K, V]) Set(key K, value V)

Set key-value pair in the underlying sharded cache.

func (*Sharded[K, V]) SetIfAbsent added in v1.5.2

func (s *Sharded[K, V]) SetIfAbsent(key K, value V) (V, bool)

func (*Sharded[K, V]) SetIfPresent added in v1.3.0

func (s *Sharded[K, V]) SetIfPresent(key K, value V) (V, bool)

func (*Sharded[K, V]) Snapshot

func (s *Sharded[K, V]) Snapshot() map[K]V

Snapshot returns a shallow copy of the cache data. Sequentially locks each of the underlying shards from modification for the duration of the copy.

type StringMapper

type StringMapper struct{}

StringMapper is a simple implementation mapping string keys to N shards. It works best with a number of shards that is a power of 2, and it works for up to 256 shards.

func (*StringMapper) Map

func (sm *StringMapper) Map(key string, numShards int) int

Map key to shard number. Should be uniform enough 🤣

type Tx added in v1.1.0

type Tx[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Tx is a "transaction" object returned by Locker.Lock() and Locker.RLock() methods. See Locker for more details.

func (*Tx[K, V]) Del added in v1.1.0

func (tx *Tx[K, V]) Del(key K) error

Del key from the underlying locked cache. Will panic if called on RLocked Tx.

func (*Tx[K, V]) Get added in v1.1.0

func (tx *Tx[K, V]) Get(key K) (V, error)

Get value by key from the underlying locked cache.

func (*Tx[K, V]) Len added in v1.1.0

func (tx *Tx[K, V]) Len() int

Len returns total number of elements in the cache.

func (*Tx[K, V]) ListByPrefix added in v1.1.0

func (tx *Tx[K, V]) ListByPrefix(prefix string) ([]V, error)

ListByPrefix should only be called if underlying cache supports ListByPrefix.

func (*Tx[K, V]) Set added in v1.1.0

func (tx *Tx[K, V]) Set(key K, value V)

Set key-value pair in the underlying locked cache. Will panic if called on RLocked Tx.

func (*Tx[K, V]) SetIfAbsent added in v1.5.2

func (tx *Tx[K, V]) SetIfAbsent(key K, value V) (V, bool)

func (*Tx[K, V]) SetIfPresent added in v1.3.0

func (tx *Tx[K, V]) SetIfPresent(key K, value V) (V, bool)

func (*Tx[K, V]) Snapshot added in v1.1.0

func (tx *Tx[K, V]) Snapshot() map[K]V

Snapshot returns a shallow copy of the cache data.

func (*Tx[K, V]) Unlock added in v1.1.0

func (tx *Tx[K, V]) Unlock()

Unlock underlying cache.

type UpdateFn

type UpdateFn[K comparable, V any] func(key K) (V, error)

UpdateFn is a type for a function to be called to get updated value when Updater has a cache miss.

type Updater

type Updater[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Updater is a wrapper for any Geche interface implementation that calls a cache update function if the key does not exist in the cache. It only allows one update function per key to be running at a single point in time, reducing the odds of a "cache stampede" situation.

func NewCacheUpdater

func NewCacheUpdater[K comparable, V any](
	cache Geche[K, V],
	updateFn UpdateFn[K, V],
	poolSize int,
) *Updater[K, V]

NewCacheUpdater returns a cache wrapped with Updater. It calls updateFn to update the cache key whenever Get returns ErrNotFound. Only one updateFn for a given key can run at the same time, and only poolSize updateFn calls with different keys can run simultaneously.

Example
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// Create a cache updater with an update function
// that sets the cache value equal to the key.
updateFn := func(key string) (string, error) {
	return key, nil
}

u := NewCacheUpdater[string, string](
	NewMapTTLCache[string, string](ctx, time.Minute, time.Minute),
	updateFn,
	2,
)

// Value is not in the cache yet.
// But it will be set by the updater function.
v, _ := u.Get("nonexistent")
fmt.Println(v)
Output:

nonexistent

func (*Updater[K, V]) Del

func (u *Updater[K, V]) Del(key K) error

Del deletes key from the cache.

func (*Updater[K, V]) Get

func (u *Updater[K, V]) Get(key K) (V, error)

Get returns the value from the cache. If the value is not in the cache, it calls updateFn to get the value and update the cache first. Since updateFn can return an error, Get is not guaranteed to always return the value. When the cache update fails, Get returns the error that updateFn returned, not ErrNotFound.
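
A sketch of this error propagation, with a hypothetical always-failing update function:

failFn := func(key string) (string, error) {
	return "", fmt.Errorf("backend down")
}
u := NewCacheUpdater[string, string](NewMapCache[string, string](), failFn, 1)
_, err := u.Get("k")
fmt.Println(err) // prints "backend down", not "not found"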

func (*Updater[K, V]) Len

func (u *Updater[K, V]) Len() int

Len returns the number of items in the cache.

func (*Updater[K, V]) ListByPrefix added in v1.1.0

func (u *Updater[K, V]) ListByPrefix(prefix string) ([]V, error)

ListByPrefix should only be called if underlying cache supports ListByPrefix. Otherwise it will panic.

func (*Updater[K, V]) Set

func (u *Updater[K, V]) Set(key K, value V)

func (*Updater[K, V]) SetIfAbsent added in v1.5.2

func (u *Updater[K, V]) SetIfAbsent(key K, value V) (V, bool)

func (*Updater[K, V]) SetIfPresent added in v1.3.0

func (u *Updater[K, V]) SetIfPresent(key K, value V) (V, bool)

func (*Updater[K, V]) Snapshot

func (u *Updater[K, V]) Snapshot() map[K]V

Snapshot returns a shallow copy of the cache. It locks the cache from modification for the duration of the copy.
