package store
import "k8s.io/apiserver/pkg/storage/cacher/store"
Index ¶
- func ElementIndexFunc(objIndexFunc cache.IndexFunc) cache.IndexFunc
- func ElementIndexers(indexers *cache.Indexers) cache.Indexers
- func ElementKey(obj interface{}) (string, error)
- func ElementObject(obj interface{}) (runtime.Object, error)
- type Element
- type Indexer
- type OrderedLister
- type Snapshotter
Functions ¶
func ElementIndexFunc ¶
func ElementIndexers ¶
func ElementKey ¶
func ElementObject ¶
Types ¶
type Element ¶
Computing the key of an object is generally non-trivial (it performs e.g. validation underneath), and the same holds for computing the object's labels and fields. To avoid computing them multiple times (e.g. to serve the same event in different List/Watch requests), the underlying store keeps structs of (key, object, labels, fields).
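The idea can be sketched as a small struct that caches these values once at storage time; the field names below are illustrative, not the package's actual definition:

```go
package main

import "fmt"

// Element caches values that are expensive to recompute for each
// List/Watch request: the storage key plus the object's labels and
// fields, computed once when the event is stored.
// (Field names are illustrative; see store.go for the real layout.)
type Element struct {
	Key    string
	Object any // would be runtime.Object in the real package
	Labels map[string]string
	Fields map[string]string
}

// elementKey mirrors the idea behind ElementKey: return the
// precomputed key instead of recomputing it from the object.
func elementKey(obj any) (string, error) {
	elem, ok := obj.(*Element)
	if !ok {
		return "", fmt.Errorf("not an *Element: %T", obj)
	}
	return elem.Key, nil
}

func main() {
	e := &Element{Key: "/pods/default/pod-1", Labels: map[string]string{"app": "web"}}
	k, err := elementKey(e)
	fmt.Println(k, err) // /pods/default/pod-1 <nil>
}
```

ElementIndexFunc and ElementIndexers follow the same pattern: they wrap index functions so that indexing operates on the cached object inside the Element rather than on the Element itself.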
type Indexer ¶
type Indexer interface {
Add(obj interface{}) error
Update(obj interface{}) error
Delete(obj interface{}) error
List() []interface{}
ListKeys() []string
Get(obj interface{}) (item interface{}, exists bool, err error)
GetByKey(key string) (item interface{}, exists bool, err error)
Replace([]interface{}, string) error
ByIndex(indexName, indexedValue string) ([]interface{}, error)
}
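The interface mirrors client-go's cache.Indexer. As a rough illustration of the ByIndex contract, here is a toy in-memory implementation of a subset of the interface (illustrative only, not what NewIndexer returns; the pod type and helper names are invented for the example):

```go
package main

import "fmt"

// toyIndexer implements a subset of the Indexer interface above:
// objects keyed by a key function, plus a secondary index mapping
// an index value to the keys of matching objects.
type toyIndexer struct {
	keyFunc   func(obj any) string
	indexFunc func(obj any) []string
	items     map[string]any
	index     map[string]map[string]bool // index value -> set of keys
}

func newToyIndexer(keyFunc func(any) string, indexFunc func(any) []string) *toyIndexer {
	return &toyIndexer{
		keyFunc:   keyFunc,
		indexFunc: indexFunc,
		items:     map[string]any{},
		index:     map[string]map[string]bool{},
	}
}

// Add stores the object under its key and records it in the index
// under every value the index function emits for it.
func (t *toyIndexer) Add(obj any) {
	key := t.keyFunc(obj)
	t.items[key] = obj
	for _, v := range t.indexFunc(obj) {
		if t.index[v] == nil {
			t.index[v] = map[string]bool{}
		}
		t.index[v][key] = true
	}
}

func (t *toyIndexer) GetByKey(key string) (any, bool) {
	obj, ok := t.items[key]
	return obj, ok
}

// ByIndex returns every object whose index function emitted indexedValue.
func (t *toyIndexer) ByIndex(indexedValue string) []any {
	var out []any
	for key := range t.index[indexedValue] {
		out = append(out, t.items[key])
	}
	return out
}

type pod struct{ name, node string }

func main() {
	idx := newToyIndexer(
		func(o any) string { return o.(pod).name },
		func(o any) []string { return []string{o.(pod).node} },
	)
	idx.Add(pod{"pod-1", "node-a"})
	idx.Add(pod{"pod-2", "node-a"})
	fmt.Println(len(idx.ByIndex("node-a"))) // 2
}
```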
func NewIndexer ¶
type OrderedLister ¶
type OrderedLister interface {
ListPrefix(prefix, continueKey string) []interface{}
Count(prefix, continueKey string) (count int)
Clone() OrderedLister
}
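As a rough sketch of the ListPrefix/Count/Clone contract, assuming keys are kept in sorted order (the real implementation is backed by a B-tree in store_btree.go; this slice-backed version is illustrative only):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// sliceLister is a toy OrderedLister over a sorted slice of keys.
type sliceLister struct {
	keys []string // kept sorted, ascending
}

// ListPrefix returns all keys with the given prefix, starting at
// continueKey (used for pagination); an empty continueKey starts
// from the beginning of the prefix range.
func (s *sliceLister) ListPrefix(prefix, continueKey string) []string {
	start := prefix
	if continueKey > prefix {
		start = continueKey
	}
	i := sort.SearchStrings(s.keys, start)
	var out []string
	for ; i < len(s.keys) && strings.HasPrefix(s.keys[i], prefix); i++ {
		out = append(out, s.keys[i])
	}
	return out
}

// Count returns the number of keys ListPrefix would return.
func (s *sliceLister) Count(prefix, continueKey string) int {
	return len(s.ListPrefix(prefix, continueKey))
}

// Clone copies the lister. The B-tree version makes this cheap via
// lazy (copy-on-write) cloning, which a full slice copy only mimics.
func (s *sliceLister) Clone() *sliceLister {
	return &sliceLister{keys: append([]string(nil), s.keys...)}
}

func main() {
	l := &sliceLister{keys: []string{"/pods/a/1", "/pods/a/2", "/pods/b/1"}}
	fmt.Println(l.ListPrefix("/pods/a/", ""))          // [/pods/a/1 /pods/a/2]
	fmt.Println(l.ListPrefix("/pods/a/", "/pods/a/2")) // [/pods/a/2]
	fmt.Println(l.Count("/pods/", ""))                 // 3
}
```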
type Snapshotter ¶
type Snapshotter interface {
Reset()
GetLessOrEqual(rv uint64) (OrderedLister, bool)
Add(rv uint64, indexer OrderedLister)
RemoveLess(rv uint64)
Len() int
}
func NewSnapshotter ¶
func NewSnapshotter() Snapshotter
NewSnapshotter returns a Snapshotter that stores snapshots for serving read requests with exact resource versions (RV) and pagination.
Snapshots are created by calling the Clone method on an orderedLister, which is expected to be fast and efficient thanks to the use of B-trees: a B-tree can create a lazy copy of the tree structure, minimizing overhead.
Assuming the watch cache observes all events and snapshots the cache after each of them, a request for a specific resource version can be served by retrieving the snapshot with the greatest RV less than or equal to the requested RV. To make snapshot retrieval efficient we need an ordered data structure, such as a tree.
The initial implementation uses a B-tree to achieve the following performance characteristics (n - number of snapshots stored):
- `Add`: Adds a new snapshot. Complexity: O(log n). Executed for each watch event observed by the cache.
- `GetLessOrEqual`: Retrieves the snapshot with the greatest RV less than or equal to the requested RV. Complexity: O(log n). Executed for each LIST request with match=Exact or continuation.
- `RemoveLess`: Cleans up snapshots outside the watch history window. Complexity: O(k log n), where k is the number of snapshots to remove (usually one, unless the watch capacity was reduced). Executed per watch event observed when the cache is full.
- `Reset`: Cleans up all snapshots. Complexity: O(1). Executed when the watch cache is reinitialized.
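The operations above can be sketched with a sorted slice and binary search standing in for the B-tree (a minimal sketch; Reset and Len are omitted, and all names are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// snapshot pairs a resource version with the cloned lister taken
// after the event at that RV. The lister is left abstract here.
type snapshot struct {
	rv     uint64
	lister any // OrderedLister in the real package
}

// snapshotter keeps snapshots sorted by RV ascending.
type snapshotter struct {
	snaps []snapshot
}

// Add appends a snapshot; adds always carry the maximum RV seen so
// far, so the slice stays sorted without re-sorting.
func (s *snapshotter) Add(rv uint64, lister any) {
	s.snaps = append(s.snaps, snapshot{rv: rv, lister: lister})
}

// GetLessOrEqual returns the snapshot with the greatest RV <= rv,
// or false if every stored snapshot is newer than rv.
func (s *snapshotter) GetLessOrEqual(rv uint64) (any, bool) {
	// First index whose RV is strictly greater than rv.
	i := sort.Search(len(s.snaps), func(i int) bool { return s.snaps[i].rv > rv })
	if i == 0 {
		return nil, false
	}
	return s.snaps[i-1].lister, true
}

// RemoveLess drops snapshots with RV < rv, i.e. snapshots that fell
// outside the watch history window.
func (s *snapshotter) RemoveLess(rv uint64) {
	i := sort.Search(len(s.snaps), func(i int) bool { return s.snaps[i].rv >= rv })
	s.snaps = s.snaps[i:]
}

func main() {
	s := &snapshotter{}
	s.Add(10, "snap@10")
	s.Add(20, "snap@20")
	got, ok := s.GetLessOrEqual(15)
	fmt.Println(got, ok) // snap@10 true
	s.RemoveLess(20)
	_, ok = s.GetLessOrEqual(15)
	fmt.Println(ok) // false
}
```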
Further optimization is possible by leveraging the property that adds always increase the maximum RV and removals only increase the minimum RV. For example, a binary search over a cyclic buffer of (RV, snapshot) pairs should reduce the number of allocations and improve removal complexity. However, this solution is more complex and is deferred for future implementation.
TODO: Rewrite to use a cyclic buffer
Source Files ¶
store.go store_btree.go
- Version
- v0.36.0 (latest)
- Published
- Apr 22, 2026