-------
### What does it do?
You can perform caching operations in Python easily and at high speed.
It can make your application significantly faster, and it is **ideal for optimizing large-scale applications** with efficient, low-overhead caching.
**Key Features:**
- 🚀 Extremely fast (10-50x faster than other caching libraries -- [*benchmarks*](https://github.com/awolverp/cachebox-benchmark))
- 📊 Minimal memory footprint (50% of standard dictionary memory usage)
- 🔥 Full-featured and user-friendly
- 🧶 Completely thread-safe
- 🧠 Tested and correct
- **\[R\]** written in Rust for maximum performance
- 🤝 Compatible with Python 3.8+ (PyPy and CPython)
- 📦 Supports 7 advanced caching algorithms
### Page Contents
- ❓ [**When do I need caching and cachebox?**](#when-do-i-need-caching-and-cachebox)
- 🚀 [**Why `cachebox`?**](#why-cachebox)
- 🔧 [**Installation**](#installation)
- 💡 [**Examples**](#examples)
- 🎓 [**Getting started**](#getting-started)
- ⚠️ [**Incompatible changes**](#incompatible-changes)
- 📝 [**Tips & Notes**](#tips-and-notes)
### When do I need caching and cachebox?
- 📈 **Frequent Data Access** \
  If you need to access the same data multiple times, caching can help reduce the number of database queries or API calls, improving performance.
- 💎 **Expensive Operations** \
  If you have operations that are computationally expensive, caching can help reduce the number of times these operations need to be performed.
- 🌐 **High Traffic Scenarios** \
  If your application has high user traffic, caching can help reduce the load on your server by reducing the number of requests that need to be processed.
- #️⃣ **Web Page Rendering** \
  If you are rendering web pages, caching the results of expensive operations can reduce the time it takes to generate each page, and caching whole HTML pages can speed up the delivery of static content.
- 🚧 **Rate Limiting** \
  If you have a rate-limiting system in place, caching can reduce the number of requests the rate limiter has to process. Caching can also help you manage rate limits imposed by third-party APIs by reducing the number of requests sent.
- 🤖 **Machine Learning Models** \
  If your application frequently makes predictions using the same input data, caching the results can save computation time.
### Why cachebox?
- **⚡ Rust** \
  It is written in *Rust*, which gives it high performance.
- **🧮 SwissTable** \
  It uses Google's high-performance SwissTable hash map, thanks to [hashbrown](https://github.com/rust-lang/hashbrown).
- **✨ Low memory usage** \
  It has very low memory usage.
- **⭐ Zero Dependency** \
  As mentioned above, `cachebox` is written in Rust, so you don't have to install any other dependencies.
- **🧶 Thread safe** \
  It's completely thread-safe, using locks to prevent race conditions.
- **🔑 Easy To Use** \
  You only need to import it, choose your implementation, and use it like a dictionary.
- **🚫 Avoids Cache Stampede** \
  It avoids [cache stampede](https://en.wikipedia.org/wiki/Cache_stampede) by using a distributed lock system (see the sketch below).
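
As a quick illustration of the thread-safety and stampede-avoidance claims, here is a minimal sketch (it assumes the documented behavior; without per-key locking, `calls` could end up larger than 1):
```python
import threading
import time
import cachebox

calls = 0

@cachebox.cached(cachebox.LRUCache(maxsize=16))
def slow_lookup(key: str) -> str:
    global calls
    calls += 1
    time.sleep(0.1)  # simulate an expensive computation
    return key.upper()

# eight threads request the same key at the same time
threads = [threading.Thread(target=slow_lookup, args=("k",)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# with stampede protection, the expensive body ran only once
assert calls == 1
```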

## Installation
cachebox is installable via `pip`:
```
pip3 install -U cachebox
```
> [!WARNING]\
> The new version v5 has some incompatibilities with v4; for more info, please see [Incompatible changes](#incompatible-changes).

## Examples
The simplest example of **cachebox** could look like this:
```python
import cachebox

@cachebox.cached(cachebox.FIFOCache(maxsize=128))
def factorial(number: int) -> int:
fact = 1
    for num in range(2, number + 1):
fact *= num
return fact

assert factorial(5) == 120

# Async functions are also supported
@cachebox.cached(cachebox.LRUCache(maxsize=128))  # any cachebox cache works here
async def make_request(method: str, url: str) -> dict:
    # assumes `client` is an async HTTP client (e.g. httpx.AsyncClient) created elsewhere
    response = await client.request(method, url)
return response.json()
```
Also, unlike functools.lru_cache and other caching libraries, cachebox copies `dict`, `list`, and `set` results, so mutating a returned object does not corrupt the cache:
```python
@cachebox.cached(cachebox.LRUCache(maxsize=128))
def make_dict(name: str, age: int) -> dict:
    return {"name": name, "age": age}

d = make_dict("cachebox", 10)
assert d == {"name": "cachebox", "age": 10}
d["new-key"] = "new-value"

d2 = make_dict("cachebox", 10)
# `d2` would be `{"name": "cachebox", "age": 10, "new-key": "new-value"}` with other libraries
assert d2 == {"name": "cachebox", "age": 10}
```

You can also use the cache algorithms without the `cached` decorator -- just import the cache you want and use it like a dictionary.
```python
from cachebox import FIFOCache

cache = FIFOCache(maxsize=128)
cache["key"] = "value"
assert cache["key"] == "value"

# You can also use `cache.get(key, default)`
assert cache.get("key") == "value"
```

## Getting started
There are 3 useful functions:
- [**cached**](#cached--decorator): a decorator that helps you cache your functions and calculations with a lot of options.
- [**cachedmethod**](#cachedmethod--decorator): works exactly like `cached()`, but ignores the `self` parameter when hashing and making keys.
- [**is_cached**](#is_cached--function): checks whether a function/method is cached by cachebox.

And 9 classes:
- [**BaseCacheImpl**](#basecacheimpl-️-class): the base class of all cache classes.
- [**Cache**](#cache-️-class): a simple cache with no algorithm; this is only a hashmap.
- [**FIFOCache**](#fifocache-️-class): the FIFO cache removes the element that has been in the cache the longest.
- [**RRCache**](#rrcache-️-class): the RR cache randomly chooses an element to remove when space is needed.
- [**LRUCache**](#lrucache-️-class): the LRU cache removes the element that has not been accessed in the longest time.
- [**LFUCache**](#lfucache-️-class): the LFU cache removes the element that has been accessed the least, regardless of time.
- [**TTLCache**](#ttlcache-️-class): the TTL cache automatically removes elements that have expired.
- [**VTTLCache**](#vttlcache-️-class): like TTLCache, but each element has its own time-to-live; expired elements are removed when needed.
- [**Frozen**](#frozen-️-class): you can use this class to freeze your caches.

You only need to import the class you want and use it like a dictionary (except for [VTTLCache](#vttlcache-️-class), which has some differences).

The examples below introduce those classes and their methods.
**All the methods you will see in the examples are common across all classes (except for a few of them).**
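
For a quick feel of that shared interface, here is a minimal sketch (it uses `LRUCache`, but any of the classes above works the same way):
```python
from cachebox import LRUCache

cache = LRUCache(maxsize=3)
cache["a"] = 1                   # dictionary-style insert
cache.insert("b", 2)             # insert() returns the old value, if any
assert cache.get("missing", 0) == 0
assert cache.pop("a") == 1       # pop() removes the key and returns its value
assert len(cache) == 1

for key in cache:                # iterating yields keys
    print(key, cache[key])
```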
* * *
### `cached` (🔗 decorator)
Decorator to wrap a function with a memoizing callable that saves results in a cache.
**Parameters:**
- `cache`: the cache instance to use (any cachebox cache, e.g. `LRUCache(maxsize=128)`).
- `key_maker`: an optional callable that builds the cache key from the function's arguments.
- `copy_level`: controls how cached results are copied before being returned:
`0` means "never copy", `1` means "only copy `dict`, `list`, and `set` results" and
`2` means "always copy the results".
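
For instance, a minimal sketch of what `copy_level=0` implies (mirroring the `make_dict` example earlier in this README):
```python
import cachebox

# copy_level=0 disables copying, so a returned dict is the cached object itself
@cachebox.cached(cachebox.LRUCache(maxsize=128), copy_level=0)
def make_dict(name: str, age: int) -> dict:
    return {"name": name, "age": age}

d = make_dict("cachebox", 10)
d["new-key"] = "new-value"  # this mutates the value stored in the cache

# the next call returns the mutated cached object
assert make_dict("cachebox", 10) == {"name": "cachebox", "age": 10, "new-key": "new-value"}
```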

**A simple example:**
```python
import cachebox

@cachebox.cached(cachebox.LRUCache(maxsize=128))
def sum_as_string(a: int, b: int) -> str:
    return str(a + b)

assert sum_as_string(1, 2) == "3"
assert len(sum_as_string.cache) == 1

sum_as_string.cache_clear()
assert len(sum_as_string.cache) == 0
```

**A key_maker example:**
```python
import cachebox

def simple_key_maker(args: tuple, kwds: dict):
    # a key_maker receives the call's positional and keyword arguments
    return args[0].path

@cachebox.cached(cachebox.LRUCache(maxsize=128), key_maker=simple_key_maker)
async def request_handler(request: Request):
return Response("hello man")
```

**A typed key_maker example:**
```python
import cachebox

# a typed key_maker treats arguments of different types as distinct keys
def typed_key_maker(args: tuple, kwds: dict):
    return tuple((v, type(v)) for v in args) + tuple(sorted(kwds.items()))

@cachebox.cached(cachebox.LRUCache(maxsize=0), key_maker=typed_key_maker)
def sum_as_string(a, b):
    return str(a + b)

# the `.cache` attribute gives you access to the underlying cache,
# and `.cache_info()` reports its statistics
print(sum_as_string.cache)
# LRUCache(0 / 9223372036854775807, capacity=0)
print(sum_as_string.cache_info())
# CacheInfo(hits=0, misses=0, maxsize=9223372036854775807, length=0, memory=8)
# `.cache_clear()` clears the cache
sum_as_string.cache_clear()
```

**A callback example:** *(added in v4.2.0)*
```python
import cachebox

def callback_func(event: int, key, value):
    # `event` is either cachebox.EVENT_MISS or cachebox.EVENT_HIT
    if event == cachebox.EVENT_MISS:
        print("callback_func: miss event", key, value)
    else:
        print("callback_func: hit event", key, value)

@cachebox.cached(cachebox.LRUCache(maxsize=128), callback=callback_func)
def func(a: int, b: int) -> int:
    return a + b

assert func(5, 4) == 9
# callback_func: miss event (5, 4) 9
```

> [!NOTE]\
> It is recommended to use `cached` for **@staticmethod**s and [`cachedmethod`](#cachedmethod--decorator) for **@classmethod**s,
> and to set the `copy_level` parameter to `2` on **@classmethod**s.
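
A sketch of that recommendation (the class and method names here are illustrative):
```python
import cachebox

class Config:
    @classmethod
    @cachebox.cachedmethod(cachebox.LRUCache(maxsize=64), copy_level=2)
    def defaults(cls) -> dict:
        return {"retries": 3, "timeout": 10.0}

assert Config.defaults() == {"retries": 3, "timeout": 10.0}
```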

> [!TIP]\
> You can tell a cached function to skip the cache for one call by passing the `cachebox__ignore=True` parameter:
> ```python
> sum_as_string(10, 20, cachebox__ignore=True)
> ```
* * *
### `cachedmethod` (🔗 decorator)
This works exactly like `cached()`, but ignores the `self` parameter when hashing and making keys.

**Example:**
```python
import cachebox

class MyClass:
    @cachebox.cachedmethod(cachebox.TTLCache(0, ttl=10))
    def my_method(self):
        ...

c.my_method()
```
```

* * *
### `is_cached` (📦 function)
Checks whether a function/method is cached by cachebox.

**Parameters:**
- `func`: the function/method to check.

**Example:**
```python
import cachebox

@cachebox.cached(cachebox.FIFOCache(maxsize=128))
def func():
    ...

assert cachebox.is_cached(func)
```
* * *
### `BaseCacheImpl` (🏗️ class)
Base implementation for cache classes in the cachebox library.

This abstract base class defines the generic structure for cache implementations,
supporting different key and value types through generic type parameters.
It serves as a foundation for specific cache variants like `Cache` and `FIFOCache`.

**Example:**
```python
import cachebox

# subclass
class ClassName(cachebox.BaseCacheImpl):
    ...

# type-hint
def func(cache: cachebox.BaseCacheImpl):
    ...

# isinstance
cache = cachebox.LFUCache(0)
assert isinstance(cache, cachebox.BaseCacheImpl)
```
* * *
### `Cache` (🏗️ class)
A thread-safe, memory-efficient hashmap-like cache with configurable maximum size.

It provides a flexible key-value storage mechanism with:
- a configurable maximum size (zero means unlimited)
- lower memory usage than the standard dict
- thread-safe operations
- useful memory-management methods

It supports initialization with optional initial data and capacity,
and provides dictionary-like access with additional cache-specific operations.
> [!TIP]\
> `Cache` differs from a standard `dict` in several ways:
> - it is thread-safe and unordered, while `dict` isn't thread-safe and is ordered (Python 3.6+);
> - it uses much less memory than `dict`;
> - it supports useful methods for managing memory, while `dict` does not;
> - it does not support `popitem`, while `dict` does;
> - you can limit the size of `Cache`, but you cannot limit a `dict`.
| | get | insert | delete | popitem |
| ------------ | ----- | ------- | ------ | ------- |
| Worst-case   | O(1)  | O(1)    | O(1)   | N/A     |

**Example:**
```python
from cachebox import Cache

# maxsize=100 bounds the cache; it has no eviction algorithm,
# so it raises OverflowError instead of evicting when full
cache = Cache(100)
cache.insert("key", "value")
assert cache["key"] == "value"

cache.update({i:i for i in range(200)})
# OverflowError: The cache has reached the bound.
```
* * *
### `FIFOCache` (🏗️ class)
A First-In-First-Out (FIFO) cache implementation with configurable maximum size and optional initial capacity.

This cache provides a fixed-size container that automatically removes the oldest items when the maximum size is reached.
**Key features**:
- Deterministic item eviction order (oldest items removed first)
- Efficient key-value storage and retrieval
- Supports dictionary-like operations
- Allows optional initial data population

|              | get   | insert  | delete(i)      | popitem |
| ------------ | ----- | ------- | -------------- | ------- |
| Worst-case   | O(1)  | O(1)    | O(min(i, n-i)) | O(1)    |

**Example:**
```python
from cachebox import FIFOCache

cache = FIFOCache(5, {i: i * 2 for i in range(5)})
print(len(cache)) # 5

# `.insert()` is like `cache[key] = value`, but returns the old value if the key was present
print(cache.insert(1, "val")) # 2
print(cache.insert("new-key", "val")) # None

# `.first()` returns the key that `popitem()` would remove first
print(cache.first())
```
* * *
### `RRCache` (🏗️ class)
A thread-safe cache implementation with a Random Replacement (RR) policy.

This cache randomly selects and removes elements when it reaches its maximum size,
providing a simple and efficient caching mechanism with configurable capacity.

It supports operations like insertion, retrieval, deletion, and iteration with O(1) complexity.
| | get | insert | delete | popitem |
| ------------ | ----- | ------- | ------ | ------- |
| Worst-case   | O(1)  | O(1)    | O(1)   | O(1)    |

**Example:**
```python
from cachebox import RRCache

cache = RRCache(10, {i: i for i in range(10)})
print(cache.is_full())  # True
print(cache.is_empty()) # False

# `.capacity()` returns how many elements the cache can hold without reallocating
print(cache.capacity()) # 28
cache.shrink_to_fit()
print(cache.capacity()) # 10

# Returns a random key
print(cache.random_key()) # 4
```

* * *
### `LRUCache` (🏗️ class)
A thread-safe Least Recently Used (LRU) cache implementation.

It automatically removes the least recently used items when the cache
reaches its maximum size, and supports various operations for inserting,
retrieving, and managing cached items with a configurable maximum size
and initial capacity.
| | get | insert | delete(i) | popitem |
| ------------ | ----- | ------- | --------- | ------- |
| Worst-case   | O(1)~ | O(1)~   | O(1)~     | O(1)~   |

**Example:**
```python
from cachebox import LRUCache

cache = LRUCache(0, {i: i * 2 for i in range(10)})

# access `0`
print(cache[0]) # 0
print(cache.least_recently_used()) # 1
print(cache.popitem()) # (1, 2)
# .peek() searches for a key-value in the cache and returns it without moving the key to recently used.
print(cache.peek(2)) # 4

# a normal access moves the key to recently used
print(cache[2]) # 4
print(cache.popitem()) # (3, 6)

# `.drain(n)` pops up to `n` items and returns the number removed
print(cache.drain(5)) # 5
```
* * *
### `LFUCache` (🏗️ class)
A thread-safe Least Frequently Used (LFU) cache implementation.

This cache removes the elements that have been accessed the least number of times,
regardless of their access time. It provides methods for inserting, retrieving,
and managing cache entries with a configurable maximum size and initial capacity.
| | get | insert | delete(i) | popitem |
| ------------ | ----- | ------- | --------- | ------- |
| Worst-case   | O(1)~ | O(1)~   | O(min(i, n-i)) | O(1)~   |

**Example:**
```python
from cachebox import LFUCache
cache = LFUCache(5)
cache.insert('first', 'A')
cache.insert('second', 'B')

# access 'first' twice
cache['first']
cache['first']

# access 'second' once
cache['second']

assert cache.least_frequently_used() == 'second'
assert cache.least_frequently_used(2) is None # 2 is out of range

for item in cache.items_with_frequency():
print(item)
# ('second', 'B', 1)
# ('first', 'A', 2)
```

* * *
### `TTLCache` (🏗️ class)
A thread-safe Time-To-Live (TTL) cache implementation with configurable maximum size and expiration.

This cache automatically removes elements that have expired based on their time-to-live setting.
It supports various operations like insertion, retrieval, and iteration.

|              | get   | insert  | delete(i)      | popitem |
| ------------ | ----- | ------- | -------------- | ------- |
| Worst-case   | O(1)~ | O(1)~   | O(min(i, n-i)) | O(n)    |

**Example:**
```python
from cachebox import TTLCache
import time

# The `ttl` param specifies the time-to-live for each element (in seconds); it cannot be zero or negative.
cache = TTLCache(0, ttl=2)
cache.update({i: str(i) for i in range(10)})

print(cache.get_with_expire(2)) # ('2', 1.99)

# Returns the oldest key in the cache; this is the one which will be removed by `popitem()`
print(cache.first()) # 0

cache["mykey"] = "value"
time.sleep(2)
cache["mykey"] # KeyError
```
* * *
### `VTTLCache` (🏗️ class)
A thread-safe, time-to-live (TTL) cache implementation with a per-key expiration policy.

This cache stores key-value pairs with optional expiration times. When an item expires,
it is automatically removed. The cache supports a maximum size and provides
various methods for inserting, retrieving, and managing cached items.
Key features:
- Per-key time-to-live (TTL) support
- Configurable maximum cache size
- Thread-safe operations
- Automatic expiration of items

It supports dictionary-like operations such as get, insert, update, and iteration.
| | get | insert | delete(i) | popitem |
| ------------ | ----- | ------- | --------- | ------- |
| Worst-case   | O(1)~ | O(1)~   | O(min(i, n-i)) | O(1)~   |

> [!TIP]\
> `VTTLCache` vs `TTLCache`:
> - In `VTTLCache` each item has its own unique time-to-live, unlike `TTLCache`.
> - `VTTLCache` is generally slower than `TTLCache`.

**Example:**
```python
from cachebox import VTTLCache

import time

cache = VTTLCache(100)

# `.insert()` accepts a per-key `ttl` (in seconds); `ttl=None` means the key never expires
cache.insert("key1", "value", ttl=None)
cache.insert("key2", "value", ttl=2)

time.sleep(2)

# "key2" has expired, while "key1" has not
print(cache.get("key1")) # value
print(cache.get("key2")) # None
```
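
Continuing the example above, a short sketch of the other mutating methods, which also accept a `ttl` argument (signatures as in the class reference):
```python
cache.update({"a": 1, "b": 2}, ttl=5)   # one ttl for the whole batch
cache.setdefault("c", 3, ttl=None)      # inserted without an expiration
```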
* * *
### `Frozen` (🏗️ class)
**This is not a cache**; it is a wrapper class that prevents modifications to an underlying cache implementation.

It provides a read-only view of a cache, optionally allowing silent
suppression of modification attempts instead of raising exceptions.

**Example:**
```python
from cachebox import Frozen, FIFOCache

cache = FIFOCache(10)

# with `ignore=False`, attempts to modify the frozen cache raise an error
frozen = Frozen(cache, ignore=False)

frozen.insert("key", "value")
# TypeError: This cache is frozen.
```
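
And a sketch of the silent mode, where modification attempts are suppressed instead of raising:
```python
from cachebox import Frozen, FIFOCache

frozen = Frozen(FIFOCache(10), ignore=True)
frozen.insert("key", "value")  # silently ignored; no error raised
assert len(frozen) == 0
```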
> [!NOTE]\
> The **Frozen** class can't prevent expiring in [TTLCache](#ttlcache-️-class) or [VTTLCache](#vttlcache-️-class).
>
> For example:
> ```python
> cache = TTLCache(0, ttl=2)
> cache.insert("key", "value")
> frozen = Frozen(cache)
>
> time.sleep(2)  # the item expires even though the cache is frozen
> print(len(frozen)) # 0
> ```

## Incompatible Changes
These are changes that are not compatible with the previous version:
**You can see more info about changes in [Changelog](CHANGELOG.md).**

#### CacheInfo's cachememory attribute renamed!
`CacheInfo.cachememory` has been renamed to `CacheInfo.memory`.
```python
@cachebox.cached({})
def func(a: int, b: int) -> str:
    ...

info = func.cache_info()

# Older versions
print(info.cachememory)

# New version
print(info.memory)
```

#### Errors in the `__eq__` method are no longer ignored!
Errors that occur during `__eq__` operations are no longer ignored.

```python
class A:
    def __hash__(self):
        return 1

    def __eq__(self, other):
        raise NotImplementedError("not implemented")

cache = cachebox.FIFOCache(0, {A(): 10})

# Older versions:
cache[A()] # => KeyError

# New version:
cache[A()]
# Traceback (most recent call last):
#   File "script.py", line 11, in <module>
#     cache[A()]
#     ~~~~~^^^^^
#   File "script.py", line 7, in __eq__
#     raise NotImplementedError("not implemented")
# NotImplementedError: not implemented
```

#### Cache comparisons are no longer strict!
In older versions, cache comparisons depended on the caching algorithm. Now they work just like dictionary comparisons.

```python
cache1 = cachebox.FIFOCache(10)
cache2 = cachebox.FIFOCache(10)
-- If the cache did not have this key present, **None is returned**.
-- If the cache did have this key present, the value is updated,
-and **the old value is returned**. The key is not updated, though;

cache1.insert(1, 'first')
cache1.insert(2, 'second')

cache2.insert(2, 'second')
cache2.insert(1, 'first')

# Older versions:
cache1 == cache2 # False

# New version:
cache1 == cache2 # True
```
## Tips and Notes

#### How to save caches in files?
There is no built-in file-based implementation, but you can use `pickle` to save caches to files:

```python
import cachebox, pickle

c = cachebox.LRUCache(100, {i: i for i in range(78)})

with open("file", "wb") as fd:
    pickle.dump(c, fd)

with open("file", "rb") as fd:
    loaded = pickle.load(fd)

assert c == loaded
assert c.capacity() == loaded.capacity()
```
> [!TIP]\
> For more, see this [issue](https://github.com/awolverp/cachebox/issues/8).
* * *
#### How to copy the caches?
You can use `cache.copy()` for a shallow copy, or `copy.deepcopy` for a deep copy. For example:
```python
import cachebox

cache = cachebox.LRUCache(100, {i: i for i in range(78)})

# shallow copy
shallow = cache.copy()

# deep copy
import copy
deep = copy.deepcopy(cache)
```
## License
This repository is licensed under the [MIT License](LICENSE).
diff --git a/cachebox/_cachebox.pyi b/cachebox/_cachebox.pyi
deleted file mode 100644
index 80a9796..0000000
--- a/cachebox/_cachebox.pyi
+++ /dev/null
@@ -1,1300 +0,0 @@
-"""
-cachebox core ( written in Rust )
-"""
-
-import typing
-
-__version__: str
-__author__: str
-
-version_info: typing.Tuple[int, int, int, bool]
-""" (major, minor, patch, is_beta) """
-
-KT = typing.TypeVar("KT")
-VT = typing.TypeVar("VT")
-DT = typing.TypeVar("DT")
-
-class BaseCacheImpl(typing.Generic[KT, VT]):
- """
- This is the base class of all cache classes such as Cache, FIFOCache, ...
-
- Do not try to call its constructor, this is only for type-hint.
- """
-
- def __init__(
- self,
- maxsize: int,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- *,
- capacity: int = ...,
- ) -> None: ...
- @staticmethod
- def __class_getitem__(*args) -> None: ...
- @property
- def maxsize(self) -> int: ...
- def _state(self) -> int: ...
- def __len__(self) -> int: ...
- def __sizeof__(self) -> int: ...
- def __bool__(self) -> bool: ...
- def __contains__(self, key: KT) -> bool: ...
- def __setitem__(self, key: KT, value: VT) -> None: ...
- def __getitem__(self, key: KT) -> VT: ...
- def __delitem__(self, key: KT) -> VT: ...
- def __str__(self) -> str: ...
- def __iter__(self) -> typing.Iterator[KT]: ...
- def __richcmp__(self, other, op: int) -> bool: ...
- def __getstate__(self) -> object: ...
- def __getnewargs__(self) -> tuple: ...
- def __setstate__(self, state: object) -> None: ...
- def capacity(self) -> int: ...
- def is_full(self) -> bool: ...
- def is_empty(self) -> bool: ...
- def insert(self, key: KT, value: VT) -> typing.Optional[VT]: ...
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]: ...
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]: ...
- def setdefault(
- self, key: KT, default: typing.Optional[DT] = None
- ) -> typing.Optional[VT | DT]: ...
- def popitem(self) -> typing.Tuple[KT, VT]: ...
- def drain(self, n: int) -> int: ...
- def clear(self, *, reuse: bool = False) -> None: ...
- def shrink_to_fit(self) -> None: ...
- def update(
- self, iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]]
- ) -> None: ...
- def keys(self) -> typing.Iterable[KT]: ...
- def values(self) -> typing.Iterable[VT]: ...
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]: ...
-
-class Cache(BaseCacheImpl[KT, VT]):
- """
- A simple cache that has no algorithm; this is only a hashmap.
-
- `Cache` vs `dict`:
- - it is thread-safe and unordered, while `dict` isn't thread-safe and ordered (Python 3.6+).
- - it uses very lower memory than `dict`.
- - it supports useful and new methods for managing memory, while `dict` does not.
- - it does not support `popitem`, while `dict` does.
- - You can limit the size of `Cache`, but you cannot for `dict`.
- """
-
- def __init__(
- self,
- maxsize: int,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- *,
- capacity: int = ...,
- ) -> None:
- """
- A simple cache that has no algorithm; this is only a hashmap.
-
- :param maxsize: you can specify the limit size of the cache ( zero means infinity ); this is unchangable.
-
- :param iterable: you can create cache from a dict or an iterable.
-
- :param capacity: If `capacity` param is given, cache attempts to allocate a new hash table with at
- least enough capacity for inserting the given number of elements without reallocating.
- """
- ...
-
- def __setitem__(self, key: KT, value: VT) -> None:
- """
- Set self[key] to value.
-
- Note: raises `OverflowError` if the cache reached the maxsize limit,
- because this class does not have any algorithm.
- """
- ...
-
- def __getitem__(self, key: KT) -> VT:
- """
- Returns self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def __delitem__(self, key: KT) -> VT:
- """
- Deletes self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def capacity(self) -> int:
- """
- Returns the number of elements the map can hold without reallocating.
- """
- ...
-
- def is_full(self) -> bool:
- """
- Equivalent directly to `len(self) == self.maxsize`
- """
- ...
-
- def is_empty(self) -> bool:
- """
- Equivalent directly to `len(self) == 0`
- """
- ...
-
- def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
- """
- Equals to `self[key] = value`, but returns a value:
-
- - If the cache did not have this key present, None is returned.
- - If the cache did have this key present, the value is updated,
- and the old value is returned. The key is not updated, though;
-
- Note: raises `OverflowError` if the cache reached the maxsize limit,
- because this class does not have any algorithm.
- """
- ...
-
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Equals to `self[key]`, but returns `default` if the cache don't have this key present.
- """
- ...
-
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Removes specified key and return the corresponding value.
-
- If the key is not found, returns the `default`.
- """
- ...
-
- def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Optional[VT | DT]:
- """
- Inserts key with a value of default if key is not in the cache.
-
- Return the value for key if key is in the cache, else default.
- """
- ...
-
- def popitem(self) -> typing.NoReturn: ... # not implemented for this class
- def drain(self, n: int) -> typing.NoReturn: ... # not implemented for this class
- def clear(self, *, reuse: bool = False) -> None:
- """
- Removes all items from cache.
-
- If reuse is True, will not free the memory for reusing in the future.
- """
- ...
-
- def shrink_to_fit(self) -> None:
- """
- Shrinks the cache to fit len(self) elements.
- """
- ...
-
- def update(self, iterable: typing.Iterable[typing.Tuple[KT, VT]] | typing.Dict[KT, VT]) -> None:
- """
- Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
-
- Note: raises `OverflowError` if the cache reached the maxsize limit.
- """
- ...
-
- def keys(self) -> typing.Iterable[KT]:
- """
- Returns an iterable object of the cache's keys.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Keys are not ordered.
- """
- ...
-
- def values(self) -> typing.Iterable[VT]:
- """
- Returns an iterable object of the cache's values.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Values are not ordered.
- """
- ...
-
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
- """
- Returns an iterable object of the cache's items (key-value pairs).
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Items are not ordered.
- """
- ...
-
-class FIFOCache(BaseCacheImpl[KT, VT]):
- """
- FIFO Cache implementation - First-In First-Out Policy (thread-safe).
-
- In simple terms, the FIFO cache will remove the element that has been in the cache the longest
- """
- def __init__(
- self,
- maxsize: int,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- *,
- capacity: int = ...,
- ) -> None:
- """
- FIFO Cache implementation - First-In First-Out Policy (thread-safe).
-
- :param maxsize: you can specify the limit size of the cache ( zero means infinity ); this is unchangable.
-
- :param iterable: you can create cache from a dict or an iterable.
-
- :param capacity: If `capacity` param is given, cache attempts to allocate a new hash table with at
- least enough capacity for inserting the given number of elements without reallocating.
- """
- ...
-
- def __setitem__(self, key: KT, value: VT) -> None:
- """
- Set self[key] to value.
- """
- ...
-
- def __getitem__(self, key: KT) -> VT:
- """
- Returns self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def __delitem__(self, key: KT) -> VT:
- """
- Deletes self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def capacity(self) -> int:
- """
- Returns the number of elements the map can hold without reallocating.
- """
- ...
-
- def is_full(self) -> bool:
- """
- Equivalent directly to `len(self) == self.maxsize`
- """
- ...
-
- def is_empty(self) -> bool:
- """
- Equivalent directly to `len(self) == 0`
- """
- ...
-
- def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
- """
- Equals to `self[key] = value`, but returns a value:
-
- - If the cache did not have this key present, None is returned.
- - If the cache did have this key present, the value is updated,
- and the old value is returned. The key is not updated, though;
- """
- ...
-
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Equals to `self[key]`, but returns `default` if the cache don't have this key present.
- """
- ...
-
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Removes specified key and return the corresponding value.
-
- If the key is not found, returns the `default`.
- """
- ...
-
- def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Optional[VT | DT]:
- """
- Inserts key with a value of default if key is not in the cache.
-
- Return the value for key if key is in the cache, else default.
- """
- ...
-
- def popitem(self) -> typing.Tuple[KT, VT]:
- """
- Removes the element that has been in the cache the longest
- """
- ...
-
- def drain(self, n: int) -> int:
- """
- Does the `popitem()` `n` times and returns count of removed items.
- """
- ...
-
- def clear(self, *, reuse: bool = False) -> None:
- """
- Removes all items from cache.
-
- If reuse is True, will not free the memory for reusing in the future.
- """
- ...
-
- def update(self, iterable: typing.Iterable[typing.Tuple[KT, VT]] | typing.Dict[KT, VT]) -> None:
- """
- Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
- """
- ...
-
- def keys(self) -> typing.Iterable[KT]:
- """
- Returns an iterable object of the cache's keys.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def values(self) -> typing.Iterable[VT]:
- """
- Returns an iterable object of the cache's values.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
- """
- Returns an iterable object of the cache's items (key-value pairs).
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def first(self, n: int = 0) -> typing.Optional[KT]:
- """
- Returns the first key in cache; this is the one which will be removed by `popitem()` (if n == 0).
-
- By using `n` parameter, you can browse order index by index.
- """
- ...
-
- def last(self) -> typing.Optional[KT]:
- """
- Returns the last key in cache.
- """
- ...
-
-class RRCache(BaseCacheImpl[KT, VT]):
- """
- RRCache implementation - Random Replacement policy (thread-safe).
-
- In simple terms, the RR cache will choice randomly element to remove it to make space when necessary.
- """
-
- def __init__(
- self,
- maxsize: int,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- *,
- capacity: int = ...,
- ) -> None:
- """
- RRCache implementation - Random Replacement policy (thread-safe).
-
- :param maxsize: you can specify the limit size of the cache ( zero means infinity ); this is unchangable.
-
- :param iterable: you can create cache from a dict or an iterable.
-
- :param capacity: If `capacity` param is given, cache attempts to allocate a new hash table with at
- least enough capacity for inserting the given number of elements without reallocating.
- """
- ...
-
- def __setitem__(self, key: KT, value: VT) -> None:
- """
- Set self[key] to value.
- """
- ...
-
- def __getitem__(self, key: KT) -> VT:
- """
- Returns self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def __delitem__(self, key: KT) -> VT:
- """
- Deletes self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def capacity(self) -> int:
- """
- Returns the number of elements the map can hold without reallocating.
- """
- ...
-
- def is_full(self) -> bool:
- """
- Equivalent directly to `len(self) == self.maxsize`
- """
- ...
-
- def is_empty(self) -> bool:
- """
- Equivalent directly to `len(self) == 0`
- """
- ...
-
- def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
- """
- Equals to `self[key] = value`, but returns a value:
-
- - If the cache did not have this key present, None is returned.
- - If the cache did have this key present, the value is updated,
- and the old value is returned. The key is not updated, though;
- """
- ...
-
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Equals to `self[key]`, but returns `default` if the cache don't have this key present.
- """
- ...
-
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Removes specified key and return the corresponding value.
-
- If the key is not found, returns the `default`.
- """
- ...
-
- def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Optional[VT | DT]:
- """
- Inserts key with a value of default if key is not in the cache.
-
- Return the value for key if key is in the cache, else default.
- """
- ...
-
- def clear(self, *, reuse: bool = False) -> None:
- """
- Removes all items from cache.
-
- If reuse is True, will not free the memory for reusing in the future.
- """
- ...
-
- def shrink_to_fit(self) -> None:
- """
- Shrinks the cache to fit len(self) elements.
- """
- ...
-
- def update(self, iterable: typing.Iterable[typing.Tuple[KT, VT]] | typing.Dict[KT, VT]) -> None:
- """
- Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
-
- Note: raises `OverflowError` if the cache reached the maxsize limit.
- """
- ...
-
- def keys(self) -> typing.Iterable[KT]:
- """
- Returns an iterable object of the cache's keys.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Keys are not ordered.
- """
- ...
-
- def values(self) -> typing.Iterable[VT]:
- """
- Returns an iterable object of the cache's values.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Values are not ordered.
- """
- ...
-
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
- """
- Returns an iterable object of the cache's items (key-value pairs).
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Items are not ordered.
- """
- ...
-
-class TTLCache(BaseCacheImpl[KT, VT]):
- """
- TTL Cache implementation - Time-To-Live Policy (thread-safe).
-
- In simple terms, the TTL cache will automatically remove the element in the cache that has expired.
- """
-
- def __init__(
- self,
- maxsize: int,
- ttl: float,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- *,
- capacity: int = ...,
- ) -> None:
- """
- TTL Cache implementation - Time-To-Live Policy (thread-safe).
-
- :param maxsize: you can specify the limit size of the cache ( zero means infinity ); this is unchangable.
-
- :param ttl: specifies the time-to-live value for each element in cache (in seconds); cannot be zero or negative.
-
- :param iterable: you can create cache from a dict or an iterable.
-
- :param capacity: If `capacity` param is given, cache attempts to allocate a new hash table with at
- least enough capacity for inserting the given number of elements without reallocating.
- """
- ...
-
- def __setitem__(self, key: KT, value: VT) -> None:
- """
- Set self[key] to value.
- """
- ...
-
- def __getitem__(self, key: KT) -> VT:
- """
- Returns self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def __delitem__(self, key: KT) -> VT:
- """
- Deletes self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def capacity(self) -> int:
- """
- Returns the number of elements the map can hold without reallocating.
- """
- ...
-
- def is_full(self) -> bool:
- """
- Equivalent directly to `len(self) == self.maxsize`
- """
- ...
-
- def is_empty(self) -> bool:
- """
- Equivalent directly to `len(self) == 0`
- """
- ...
-
- def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
- """
- Equals to `self[key] = value`, but returns a value:
-
- - If the cache did not have this key present, None is returned.
- - If the cache did have this key present, the value is updated,
- and the old value is returned. The key is not updated, though;
- """
- ...
-
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Equals to `self[key]`, but returns `default` if the cache don't have this key present.
- """
- ...
-
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Removes specified key and return the corresponding value.
-
- If the key is not found, returns the `default`.
- """
- ...
-
- def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Optional[VT | DT]:
- """
- Inserts key with a value of default if key is not in the cache.
-
- Return the value for key if key is in the cache, else default.
- """
- ...
-
- def popitem(self) -> typing.Tuple[KT, VT]:
- """
- Removes the element that has been in the cache the longest
- """
- ...
-
- def drain(self, n: int) -> int:
- """
- Does the `popitem()` `n` times and returns count of removed items.
- """
- ...
-
- def clear(self, *, reuse: bool = False) -> None:
- """
- Removes all items from cache.
-
- If reuse is True, will not free the memory for reusing in the future.
- """
- ...
-
- def update(self, iterable: typing.Iterable[typing.Tuple[KT, VT]] | typing.Dict[KT, VT]) -> None:
- """
- Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
- """
- ...
-
- def keys(self) -> typing.Iterable[KT]:
- """
- Returns an iterable object of the cache's keys.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Don't call `len(cache)`, `bool(cache)`, `cache.is_full()` or `cache.is_empty()` while using this iterable object.
- """
- ...
-
- def values(self) -> typing.Iterable[VT]:
- """
- Returns an iterable object of the cache's values.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Don't call `len(cache)`, `bool(cache)`, `cache.is_full()` or `cache.is_empty()` while using this iterable object.
- """
- ...
-
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
- """
- Returns an iterable object of the cache's items (key-value pairs).
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Don't call `len(cache)`, `bool(cache)`, `cache.is_full()` or `cache.is_empty()` while using this iterable object.
- """
- ...
-
- def first(self, n: int = 0) -> typing.Optional[KT]:
- """
- Returns the oldest key in cache; this is the one which will be removed by `popitem()` (if n == 0).
-
- By using `n` parameter, you can browse order index by index.
- """
- ...
-
- def last(self) -> typing.Optional[KT]:
- """
- Returns the newest key in cache.
- """
- ...
-
- def get_with_expire(
- self, key: KT, default: DT = None
- ) -> typing.Tuple[typing.Union[VT, DT], float]:
- """
- Works like `.get()`, but also returns the remaining time-to-live.
- """
- ...
-
- def pop_with_expire(
- self, key: KT, default: DT = None
- ) -> typing.Tuple[typing.Union[VT, DT], float]:
- """
- Works like `.pop()`, but also returns the remaining time-to-live.
- """
- ...
-
- def popitem_with_expire(self) -> typing.Tuple[KT, VT, float]:
- """
- Works like `.popitem()`, but also returns the remaining time-to-live.
- """
- ...
-
-class LRUCache(BaseCacheImpl[KT, VT]):
- """
- LRU Cache implementation - Least recently used policy (thread-safe).
-
- In simple terms, the LRU cache will remove the element in the cache that has not been accessed in the longest time.
- """
-
- def __init__(
- self,
- maxsize: int,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- *,
- capacity: int = ...,
- ) -> None:
- """
- LRU Cache implementation - Least recently used policy (thread-safe).
-
- :param maxsize: you can specify the limit size of the cache ( zero means infinity ); this is unchangable.
-
- :param iterable: you can create cache from a dict or an iterable.
-
- :param capacity: If `capacity` param is given, cache attempts to allocate a new hash table with at
- least enough capacity for inserting the given number of elements without reallocating.
- """
- ...
-
- def __setitem__(self, key: KT, value: VT) -> None:
- """
- Set self[key] to value.
- """
- ...
-
- def __getitem__(self, key: KT) -> VT:
- """
- Returns self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def __delitem__(self, key: KT) -> VT:
- """
- Deletes self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def capacity(self) -> int:
- """
- Returns the number of elements the map can hold without reallocating.
- """
- ...
-
- def is_full(self) -> bool:
- """
- Equivalent directly to `len(self) == self.maxsize`
- """
- ...
-
- def is_empty(self) -> bool:
- """
- Equivalent directly to `len(self) == 0`
- """
- ...
-
- def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
- """
- Equals to `self[key] = value`, but returns a value:
-
- - If the cache did not have this key present, None is returned.
- - If the cache did have this key present, the value is updated,
- and the old value is returned. The key is not updated, though;
- """
- ...
-
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Equals to `self[key]`, but returns `default` if the cache don't have this key present.
- """
- ...
-
- def peek(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Searches for a key-value in the cache and returns it (without moving the key to recently used).
- """
- ...
-
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Removes specified key and return the corresponding value.
-
- If the key is not found, returns the `default`.
- """
- ...
-
- def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Optional[VT | DT]:
- """
- Inserts key with a value of default if key is not in the cache.
-
- Return the value for key if key is in the cache, else default.
- """
- ...
-
- def popitem(self) -> typing.Tuple[KT, VT]:
- """
- Removes the element that has been in the cache the longest
- """
- ...
-
- def drain(self, n: int) -> int:
- """
- Does the `popitem()` `n` times and returns count of removed items.
- """
- ...
-
- def clear(self, *, reuse: bool = False) -> None:
- """
- Removes all items from cache.
-
- If reuse is True, will not free the memory for reusing in the future.
- """
- ...
-
- def update(self, iterable: typing.Iterable[typing.Tuple[KT, VT]] | typing.Dict[KT, VT]) -> None:
- """
- Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
- """
- ...
-
- def keys(self) -> typing.Iterable[KT]:
- """
- Returns an iterable object of the cache's keys.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def values(self) -> typing.Iterable[VT]:
- """
- Returns an iterable object of the cache's values.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
- """
- Returns an iterable object of the cache's items (key-value pairs).
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def least_recently_used(self, n: int = 0) -> typing.Optional[KT]:
- """
- Returns the key in the cache that has not been accessed in the longest time.
- """
- ...
-
- def most_recently_used(self) -> typing.Optional[KT]:
- """
- Returns the key in the cache that has been accessed in the shortest time.
- """
- ...
-
-class LFUCache(BaseCacheImpl[KT, VT]):
- """
- LFU Cache implementation - Least frequantly used policy (thread-safe).
-
- In simple terms, the LFU cache will remove the element in the cache that has been accessed the least, regardless of time
- """
-
- def __init__(
- self,
- maxsize: int,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- *,
- capacity: int = ...,
- ) -> None:
- """
- LFU Cache implementation - Least frequantly used policy (thread-safe).
-
- :param maxsize: you can specify the limit size of the cache ( zero means infinity ); this is unchangable.
-
- :param iterable: you can create cache from a dict or an iterable.
-
- :param capacity: If `capacity` param is given, cache attempts to allocate a new hash table with at
- least enough capacity for inserting the given number of elements without reallocating.
- """
- ...
-
- def __setitem__(self, key: KT, value: VT) -> None:
- """
- Set self[key] to value.
- """
- ...
-
- def __getitem__(self, key: KT) -> VT:
- """
- Returns self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def __delitem__(self, key: KT) -> VT:
- """
- Deletes self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def capacity(self) -> int:
- """
- Returns the number of elements the map can hold without reallocating.
- """
- ...
-
- def is_full(self) -> bool:
- """
- Equivalent directly to `len(self) == self.maxsize`
- """
- ...
-
- def is_empty(self) -> bool:
- """
- Equivalent directly to `len(self) == 0`
- """
- ...
-
- def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
- """
- Equals to `self[key] = value`, but returns a value:
-
- - If the cache did not have this key present, None is returned.
- - If the cache did have this key present, the value is updated,
- and the old value is returned. The key is not updated, though;
- """
- ...
-
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Equals to `self[key]`, but returns `default` if the cache don't have this key present.
- """
- ...
-
- def peek(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Searches for a key-value in the cache and returns it (without increasing frequenctly counter).
- """
- ...
-
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Removes specified key and return the corresponding value.
-
- If the key is not found, returns the `default`.
- """
- ...
-
- def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Optional[VT | DT]:
- """
- Inserts key with a value of default if key is not in the cache.
-
- Return the value for key if key is in the cache, else default.
- """
- ...
-
- def popitem(self) -> typing.Tuple[KT, VT]:
- """
- Removes the element that has been in the cache the longest
- """
- ...
-
- def drain(self, n: int) -> int:
- """
- Does the `popitem()` `n` times and returns count of removed items.
- """
- ...
-
- def clear(self, *, reuse: bool = False) -> None:
- """
- Removes all items from cache.
-
- If reuse is True, will not free the memory for reusing in the future.
- """
- ...
-
- def update(self, iterable: typing.Iterable[typing.Tuple[KT, VT]] | typing.Dict[KT, VT]) -> None:
- """
- Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
- """
- ...
-
- def keys(self) -> typing.Iterable[KT]:
- """
- Returns an iterable object of the cache's keys.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def values(self) -> typing.Iterable[VT]:
- """
- Returns an iterable object of the cache's values.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
- """
- Returns an iterable object of the cache's items (key-value pairs).
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- """
- ...
-
- def least_frequently_used(self, n: int = 0) -> typing.Optional[KT]:
- """
- Returns the key in the cache that has been accessed the least, regardless of time.
- """
- ...
-
-class VTTLCache(BaseCacheImpl[KT, VT]):
- """
- VTTL Cache implementation - Time-To-Live Per-Key Policy (thread-safe).
-
- In simple terms, the TTL cache will automatically remove the element in the cache that has expired when need.
- """
-
- def __init__(
- self,
- maxsize: int,
- iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
- ttl: typing.Optional[float] = 0.0,
- *,
- capacity: int = ...,
- ) -> None:
- """
- VTTL Cache implementation - Time-To-Live Per-Key Policy (thread-safe).
-
- :param maxsize: you can specify the maximum size of the cache (zero means unlimited); this cannot be changed later.
-
- :param iterable: you can create cache from a dict or an iterable.
-
- :param ttl: specifies the time-to-live value for each element in cache (in seconds); cannot be zero or negative.
-
- :param capacity: If `capacity` param is given, cache attempts to allocate a new hash table with at
- least enough capacity for inserting the given number of elements without reallocating.
- """
- ...
-
- def __setitem__(self, key: KT, value: VT) -> None:
- """
- Set self[key] to value.
-
- Recommended to use `.insert()` method here.
- """
- ...
-
- def __getitem__(self, key: KT) -> VT:
- """
- Returns self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def __delitem__(self, key: KT) -> VT:
- """
- Deletes self[key].
-
- Note: raises `KeyError` if key not found.
- """
- ...
-
- def capacity(self) -> int:
- """
- Returns the number of elements the map can hold without reallocating.
- """
- ...
-
- def is_full(self) -> bool:
- """
- Equivalent to `len(self) == self.maxsize`
- """
- ...
-
- def is_empty(self) -> bool:
- """
- Equivalent to `len(self) == 0`
- """
- ...
-
- def insert(self, key: KT, value: VT, ttl: typing.Optional[float] = None) -> typing.Optional[VT]:
- """
- Equivalent to `self[key] = value`, but:
- - Here you can set a TTL for the key-value pair (with `self[key] = value` you can't)
- - If the cache did not have this key present, None is returned.
- - If the cache did have this key present, the value is updated,
- and the old value is returned. The key is not updated, though;
- """
- ...
-
- def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Equivalent to `self[key]`, but returns `default` if the cache doesn't have this key present.
- """
- ...
-
- def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]:
- """
- Removes the specified key and returns the corresponding value.
-
- If the key is not found, returns the `default`.
- """
- ...
-
- def setdefault(
- self, key: KT, default: typing.Optional[DT] = None, ttl: typing.Optional[float] = None
- ) -> typing.Optional[VT | DT]:
- """
- Inserts key with a value of default if key is not in the cache.
-
- Return the value for key if key is in the cache, else default.
- """
- ...
-
- def popitem(self) -> typing.Tuple[KT, VT]:
- """
- Removes the element that has been in the cache the longest.
- """
- ...
-
- def drain(self, n: int) -> int:
- """
- Calls `popitem()` `n` times and returns the count of removed items.
- """
- ...
-
- def clear(self, *, reuse: bool = False) -> None:
- """
- Removes all items from cache.
-
- If `reuse` is True, the underlying memory is not freed, so it can be reused in the future.
- """
- ...
-
- def update(
- self,
- iterable: typing.Iterable[typing.Tuple[KT, VT]] | typing.Dict[KT, VT],
- ttl: typing.Optional[float] = None,
- ) -> None:
- """
- Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
- """
- ...
-
- def keys(self) -> typing.Iterable[KT]:
- """
- Returns an iterable object of the cache's keys.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Don't call `len(cache)`, `bool(cache)`, `cache.is_full()` or `cache.is_empty()` while using this iterable object.
- """
- ...
-
- def values(self) -> typing.Iterable[VT]:
- """
- Returns an iterable object of the cache's values.
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Don't call `len(cache)`, `bool(cache)`, `cache.is_full()` or `cache.is_empty()` while using this iterable object.
- """
- ...
-
- def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
- """
- Returns an iterable object of the cache's items (key-value pairs).
-
- Notes:
- - You should not make any changes in cache while using this iterable object.
- - Don't call `len(cache)`, `bool(cache)`, `cache.is_full()` or `cache.is_empty()` while using this iterable object.
- """
- ...
-
- def first(self, n: int = 0) -> typing.Optional[KT]:
- """
- Returns the oldest key in the cache; this is the one that will be removed by `popitem()` (if n == 0).
-
- Using the `n` parameter, you can browse the order index by index.
- """
- ...
-
- def last(self) -> typing.Optional[KT]:
- """
- Returns the newest key in the cache.
- """
- ...
-
- def get_with_expire(
- self, key: KT, default: DT = None
- ) -> typing.Tuple[typing.Union[VT, DT], float]:
- """
- Works like `.get()`, but also returns the remaining time-to-live.
- """
- ...
-
- def pop_with_expire(
- self, key: KT, default: DT = None
- ) -> typing.Tuple[typing.Union[VT, DT], float]:
- """
- Works like `.pop()`, but also returns the remaining time-to-live.
- """
- ...
-
- def popitem_with_expire(self) -> typing.Tuple[KT, VT, float]:
- """
- Works like `.popitem()`, but also returns the remaining time-to-live.
- """
- ...
-
-class cache_iterator:
- def __len__(self) -> int: ...
- def __iter__(self) -> typing.Iterator: ...
- def __next__(self) -> typing.Any: ...
-
-class fifocache_iterator:
- def __len__(self) -> int: ...
- def __iter__(self) -> typing.Iterator: ...
- def __next__(self) -> typing.Any: ...
-
-class ttlcache_iterator:
- def __len__(self) -> int: ...
- def __iter__(self) -> typing.Iterator: ...
- def __next__(self) -> typing.Any: ...
-
-class lrucache_iterator:
- def __len__(self) -> int: ...
- def __iter__(self) -> typing.Iterator: ...
- def __next__(self) -> typing.Any: ...
-
-class lfucache_iterator:
- def __len__(self) -> int: ...
- def __iter__(self) -> typing.Iterator: ...
- def __next__(self) -> typing.Any: ...
-
-class vttlcache_iterator:
- def __len__(self) -> int: ...
- def __iter__(self) -> typing.Iterator: ...
- def __next__(self) -> typing.Any: ...
diff --git a/pyproject.toml b/pyproject.toml
index 503566e..5fab6cb 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,5 +1,5 @@
[build-system]
-requires = ["maturin>=1.6,<2.0"]
+requires = ["maturin>=1.8,<2.0"]
build-backend = "maturin"
[project]
@@ -41,7 +41,12 @@ dynamic = [
[project.urls]
Homepage = 'https://github.com/awolverp/cachebox'
+[project.optional-dependencies]
+
+[tool.pytest.ini_options]
+asyncio_default_fixture_loop_scope = "function"
+
[tool.maturin]
+python-source = "python"
features = ["pyo3/extension-module"]
-bindings = 'pyo3'
-module-name = "cachebox._cachebox"
+module-name = "cachebox._core"
diff --git a/cachebox/__init__.py b/python/cachebox/__init__.py
similarity index 53%
rename from cachebox/__init__.py
rename to python/cachebox/__init__.py
index 488bf16..3438d0c 100644
--- a/cachebox/__init__.py
+++ b/python/cachebox/__init__.py
@@ -1,37 +1,18 @@
-"""
-The fastest caching library written in Rust.
-
-Example::
-
- from cachebox import TTLCache
- import time
-
- cache = TTLCache(1000, ttl=2)
- cache[0] = 1
- time.sleep(2)
- cache.get(0, None) # None
-"""
-
+from ._core import (
+ __author__ as __author__,
+ __version__ as __version__,
+)
from ._cachebox import (
BaseCacheImpl as BaseCacheImpl,
Cache as Cache,
FIFOCache as FIFOCache,
RRCache as RRCache,
- TTLCache as TTLCache,
LRUCache as LRUCache,
LFUCache as LFUCache,
+ TTLCache as TTLCache,
VTTLCache as VTTLCache,
- cache_iterator as cache_iterator,
- fifocache_iterator as fifocache_iterator,
- ttlcache_iterator as ttlcache_iterator,
- lrucache_iterator as lrucache_iterator,
- lfucache_iterator as lfucache_iterator,
- vttlcache_iterator as vttlcache_iterator,
- __version__ as __version__,
- __author__ as __author__,
- version_info as version_info,
+ IteratorView as IteratorView,
)
-
from .utils import (
Frozen as Frozen,
cached as cached,
diff --git a/python/cachebox/_cachebox.py b/python/cachebox/_cachebox.py
new file mode 100644
index 0000000..c3cc796
--- /dev/null
+++ b/python/cachebox/_cachebox.py
@@ -0,0 +1,2110 @@
+from . import _core
+from ._core import BaseCacheImpl
+from datetime import timedelta, datetime
+import copy as _std_copy
+import typing
+
+
+KT = typing.TypeVar("KT")
+VT = typing.TypeVar("VT")
+DT = typing.TypeVar("DT")
+
+
+def _items_to_str(items, length):
+ if length <= 50:
+ return "{" + ", ".join(f"{k!r}: {v!r}" for k, v in items) + "}"
+
+ c = 0
+ left = []
+
+ # Take the first 50 items, then summarize how many were left out.
+ while c < length:
+ k, v = next(items)
+
+ if c < 50:
+ left.append(f"{k!r}: {v!r}")
+
+ else:
+ break
+
+ c += 1
+
+ return "{%s, ... %d more ...}" % (", ".join(left), length - c)
+
+
+class IteratorView(typing.Generic[VT]):
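+ """A lazy view over a raw cache iterator that maps each raw element through `func` (e.g. to extract keys or values)."""
+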
+ __slots__ = ("iterator", "func")
+
+ def __init__(self, iterator, func: typing.Callable[[tuple], typing.Any]):
+ self.iterator = iterator
+ self.func = func
+
+ def __iter__(self):
+ self.iterator = self.iterator.__iter__()
+ return self
+
+ def __next__(self) -> VT:
+ return self.func(self.iterator.__next__())
+
+
+class Cache(BaseCacheImpl[KT, VT]):
+ """
+ A thread-safe, memory-efficient hashmap-like cache with configurable maximum size.
+
+ Provides a flexible key-value storage mechanism with:
+ - Configurable maximum size (zero means unlimited)
+ - Lower memory usage compared to standard dict
+ - Thread-safe operations
+ - Useful memory management methods
+
+ Differs from standard dict by:
+ - Being thread-safe
+ - Unordered storage
+ - Size limitation
+ - Memory efficiency
+ - Additional cache management methods
+
+ Supports initialization with optional initial data and capacity,
+ and provides dictionary-like access with additional cache-specific operations.
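+
+ Example (an illustrative sketch; `Cache` has no eviction algorithm, so
+ inserting past `maxsize` raises `OverflowError`)::
+
+ cache = Cache(2)
+ cache["a"] = 1
+ cache["b"] = 2
+ cache.get("a") # 1
+ cache["c"] = 3 # raises OverflowError: the cache is full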
+ """
+
+ __slots__ = ("_raw",)
+
+ def __init__(
+ self,
+ maxsize: int,
+ iterable: typing.Union[typing.Union[dict, typing.Iterable[tuple]], None] = None,
+ *,
+ capacity: int = 0,
+ ) -> None:
+ """
+ Initialize a new Cache instance.
+
+ Args:
+ maxsize (int): Maximum number of elements the cache can hold. Zero means unlimited.
+ iterable (Union[Cache, dict, tuple, Generator, None], optional): Initial data to populate the cache. Defaults to None.
+ capacity (int, optional): Pre-allocate hash table capacity to minimize reallocations. Defaults to 0.
+
+ Creates a new cache with specified size constraints and optional initial data. The cache can be pre-sized
+ to improve performance when the number of expected elements is known in advance.
+ """
+ self._raw = _core.Cache(maxsize, capacity=capacity)
+
+ if iterable is not None:
+ self.update(iterable)
+
+ @property
+ def maxsize(self) -> int:
+ return self._raw.maxsize()
+
+ def capacity(self) -> int:
+ """Returns the number of elements the map can hold without reallocating."""
+ return self._raw.capacity()
+
+ def __len__(self) -> int:
+ return len(self._raw)
+
+ def __sizeof__(self): # pragma: no cover
+ return self._raw.__sizeof__()
+
+ def __contains__(self, key: KT) -> bool:
+ return key in self._raw
+
+ def __bool__(self) -> bool:
+ return not self.is_empty()
+
+ def is_empty(self) -> bool:
+ return self._raw.is_empty()
+
+ def is_full(self) -> bool:
+ return self._raw.is_full()
+
+ def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
+ """
+ Equivalent to `self[key] = value`, but returns a value:
+
+ - If the cache did not have this key present, None is returned.
+ - If the cache did have this key present, the value is updated,
+ and the old value is returned. The key is not updated, though.
+
+ Note: raises `OverflowError` if the cache has reached the maxsize limit,
+ because this class has no eviction algorithm.
+ """
+ return self._raw.insert(key, value)
+
+ def get(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Retrieves the value for a given key from the cache.
+
+ Returns the value associated with the key if present, otherwise returns the specified default value.
+ Equivalent to `self[key]`, but provides a fallback default if the key is not found.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ The value associated with the key, or the default value if the key is not found.
+ """
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ return default
+
+ def pop(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Removes the specified key and returns the corresponding value. If the key is not found, returns the `default`.
+ """
+ try:
+ return self._raw.remove(key)
+ except _core.CoreKeyError:
+ return default
+
+ def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Inserts key with a value of default if key is not in the cache. Return the value for key if key is
+ in the cache, else `default`.
+ """
+ return self._raw.setdefault(key, default)
+
+ def popitem(self) -> typing.NoReturn: # pragma: no cover
+ raise NotImplementedError()
+
+ def drain(self) -> typing.NoReturn: # pragma: no cover
+ raise NotImplementedError()
+
+ def update(self, iterable: typing.Union[dict, typing.Iterable[tuple]]) -> None:
+ """
+ Updates the cache with elements from a dictionary or an iterable object of key/value pairs.
+
+ Note: raises `OverflowError` if the cache has reached the maxsize limit.
+ """
+ if hasattr(iterable, "items"):
+ iterable = iterable.items()
+
+ self._raw.update(iterable)
+
+ def __setitem__(self, key: KT, value: VT) -> None:
+ self.insert(key, value)
+
+ def __getitem__(self, key: KT) -> VT:
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __delitem__(self, key: KT) -> None:
+ try:
+ self._raw.remove(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __eq__(self, other) -> bool:
+ if not isinstance(other, Cache):
+ return False # pragma: no cover
+
+ return self._raw == other._raw
+
+ def __ne__(self, other) -> bool:
+ if not isinstance(other, Cache):
+ return False # pragma: no cover
+
+ return self._raw != other._raw
+
+ def shrink_to_fit(self) -> None:
+ """Shrinks the cache to fit len(self) elements."""
+ self._raw.shrink_to_fit()
+
+ def clear(self, *, reuse: bool = False) -> None:
+ """
+ Removes all items from cache.
+
+ If `reuse` is True, the underlying memory is not freed, so it can be reused in the future.
+ """
+ self._raw.clear(reuse)
+
+ def items(self) -> IteratorView[typing.Tuple[KT, VT]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs).
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ - Items are not ordered.
+ """
+ return IteratorView(self._raw.items(), lambda x: x)
+
+ def keys(self) -> IteratorView[KT]:
+ """
+ Returns an iterable object of the cache's keys.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ - Keys are not ordered.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[0])
+
+ def values(self) -> IteratorView[VT]:
+ """
+ Returns an iterable object of the cache's values.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ - Values are not ordered.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[1])
+
+ def copy(self) -> "Cache[KT, VT]":
+ """Returns a shallow copy of the cache"""
+ return self.__copy__()
+
+ def __copy__(self) -> "Cache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.copy(self._raw)
+ return copied
+
+ def __deepcopy__(self, memo) -> "Cache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.deepcopy(self._raw, memo)
+ return copied
+
+ def __iter__(self) -> IteratorView[KT]:
+ return self.keys()
+
+ def __repr__(self) -> str:
+ cls = type(self)
+
+ return "%s.%s[%d/%d](%s)" % (
+ cls.__module__,
+ cls.__name__,
+ len(self._raw),
+ self._raw.maxsize(),
+ _items_to_str(self._raw.items(), len(self._raw)),
+ )
+
+
+class FIFOCache(BaseCacheImpl[KT, VT]):
+ """
+ A First-In-First-Out (FIFO) cache implementation with configurable maximum size and optional initial capacity.
+
+ This cache provides a fixed-size container that automatically removes the oldest items when the maximum size is reached.
+ Supports various operations like insertion, retrieval, deletion, and iteration.
+
+ Attributes:
+ maxsize: The maximum number of items the cache can hold.
+ capacity: The initial capacity of the cache before resizing.
+
+ Key features:
+ - Deterministic item eviction order (oldest items removed first)
+ - Efficient key-value storage and retrieval
+ - Supports dictionary-like operations
+ - Allows optional initial data population
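+
+ Example (an illustrative sketch of the FIFO eviction order)::
+
+ cache = FIFOCache(2)
+ cache["a"] = 1
+ cache["b"] = 2
+ cache["c"] = 3 # evicts "a", the oldest entry
+ assert "a" not in cache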
+ """
+
+ __slots__ = ("_raw",)
+
+ def __init__(
+ self,
+ maxsize: int,
+ iterable: typing.Union[typing.Union[dict, typing.Iterable[tuple]], None] = None,
+ *,
+ capacity: int = 0,
+ ) -> None:
+ """
+ Initialize a new FIFOCache instance.
+
+ Args:
+ maxsize: The maximum number of items the cache can hold.
+ iterable: Optional initial data to populate the cache. Can be another FIFOCache,
+ a dictionary, tuple, generator, or None.
+ capacity: Optional initial capacity of the cache before resizing. Defaults to 0.
+ """
+ self._raw = _core.FIFOCache(maxsize, capacity=capacity)
+
+ if iterable is not None:
+ self.update(iterable)
+
+ @property
+ def maxsize(self) -> int:
+ return self._raw.maxsize()
+
+ def capacity(self) -> int:
+ """Returns the number of elements the map can hold without reallocating."""
+ return self._raw.capacity()
+
+ def __len__(self) -> int:
+ return len(self._raw)
+
+ def __sizeof__(self): # pragma: no cover
+ return self._raw.__sizeof__()
+
+ def __contains__(self, key: KT) -> bool:
+ return key in self._raw
+
+ def __bool__(self) -> bool:
+ return not self.is_empty()
+
+ def is_empty(self) -> bool:
+ return self._raw.is_empty()
+
+ def is_full(self) -> bool:
+ return self._raw.is_full()
+
+ def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
+ """
+ Inserts a key-value pair into the cache, returning the previous value if the key existed.
+
+ Equivalent to `self[key] = value`, but with additional return value semantics:
+
+ - If the key was not previously in the cache, returns None.
+ - If the key was already present, updates the value and returns the old value.
+ The key itself is not modified.
+
+ Args:
+ key: The key to insert.
+ value: The value to associate with the key.
+
+ Returns:
+ The previous value associated with the key, or None if the key was not present.
+ """
+ return self._raw.insert(key, value)
+
+ def get(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """ "
+ Retrieves the value for a given key from the cache.
+
+ Returns the value associated with the key if present, otherwise returns the specified default value.
+ Equivalent to `self[key]`, but provides a fallback default if the key is not found.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ The value associated with the key, or the default value if the key is not found.
+ """
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ return default
+
+ def pop(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Removes the specified key and returns the corresponding value. If the key is not found, returns the `default`.
+ """
+ try:
+ return self._raw.remove(key)
+ except _core.CoreKeyError:
+ return default
+
+ def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Inserts key with a value of default if key is not in the cache.
+
+ Return the value for key if key is in the cache, else default.
+ """
+ return self._raw.setdefault(key, default)
+
+ def popitem(self) -> typing.Tuple[KT, VT]:
+ """Removes the element that has been in the cache the longest."""
+ try:
+ return self._raw.popitem()
+ except _core.CoreKeyError:
+ raise KeyError() from None
+
+ def drain(self, n: int) -> int: # pragma: no cover
+ """Calls `popitem()` `n` times and returns the number of items actually removed."""
+ if n <= 0:
+ return 0
+
+ for i in range(n):
+ try:
+ self._raw.popitem()
+ except _core.CoreKeyError:
+ # popitem failed at iteration i, so exactly i items were removed
+ return i
+
+ return n
+
+ def update(self, iterable: typing.Union[dict, typing.Iterable[tuple]]) -> None:
+ """Updates the cache with elements from a dictionary or an iterable object of key/value pairs."""
+ if hasattr(iterable, "items"):
+ iterable = iterable.items()
+
+ self._raw.update(iterable)
+
+ def __setitem__(self, key: KT, value: VT) -> None:
+ self.insert(key, value)
+
+ def __getitem__(self, key: KT) -> VT:
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __delitem__(self, key: KT) -> None:
+ try:
+ self._raw.remove(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __eq__(self, other) -> bool:
+ if not isinstance(other, FIFOCache):
+ return False # pragma: no cover
+
+ return self._raw == other._raw
+
+ def __ne__(self, other) -> bool:
+ if not isinstance(other, FIFOCache):
+ return False # pragma: no cover
+
+ return self._raw != other._raw
+
+ def shrink_to_fit(self) -> None:
+ """Shrinks the cache to fit len(self) elements."""
+ self._raw.shrink_to_fit()
+
+ def clear(self, *, reuse: bool = False) -> None:
+ """
+ Removes all items from cache.
+
+ If `reuse` is True, the underlying memory is not freed, so it can be reused in the future.
+ """
+ self._raw.clear(reuse)
+
+ def items(self) -> IteratorView[typing.Tuple[KT, VT]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs).
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x)
+
+ def keys(self) -> IteratorView[KT]:
+ """
+ Returns an iterable object of the cache's keys.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[0])
+
+ def values(self) -> IteratorView[VT]:
+ """
+ Returns an iterable object of the cache's values.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[1])
+
+ def first(self, n: int = 0) -> typing.Optional[KT]:
+ """
+ Returns the first key in the cache; this is the one that will be removed by `popitem()` (if n == 0).
+
+ Using the `n` parameter, you can browse the order index by index.
+ """
+ if n < 0:
+ n = len(self._raw) + n
+
+ if n < 0:
+ return None
+
+ return self._raw.get_index(n)
+
+ def last(self) -> typing.Optional[KT]:
+ """
+ Returns the last key in the cache. Equivalent to `self.first(-1)`.
+ """
+ return self._raw.get_index(len(self._raw) - 1)
+
+ def copy(self) -> "FIFOCache[KT, VT]":
+ """Returns a shallow copy of the cache"""
+ return self.__copy__()
+
+ def __copy__(self) -> "FIFOCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.copy(self._raw)
+ return copied
+
+ def __deepcopy__(self, memo) -> "FIFOCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.deepcopy(self._raw, memo)
+ return copied
+
+ def __iter__(self) -> IteratorView[KT]:
+ return self.keys()
+
+ def __repr__(self) -> str:
+ cls = type(self)
+
+ return "%s.%s[%d/%d](%s)" % (
+ cls.__module__,
+ cls.__name__,
+ len(self._raw),
+ self._raw.maxsize(),
+ _items_to_str(self._raw.items(), len(self._raw)),
+ )
+
+
+class RRCache(BaseCacheImpl[KT, VT]):
+ """
+ A thread-safe cache implementation with Random Replacement (RR) policy.
+
+ This cache randomly selects and removes elements when the cache reaches its maximum size,
+ ensuring a simple and efficient caching mechanism with configurable capacity.
+
+ Supports operations like insertion, retrieval, deletion, and iteration.
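+
+ Example (an illustrative sketch; the evicted key is chosen at random)::
+
+ cache = RRCache(2)
+ cache["a"] = 1
+ cache["b"] = 2
+ cache["c"] = 3 # one randomly chosen key is evicted
+ assert len(cache) == 2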
+ """
+
+ __slots__ = ("_raw",)
+
+ def __init__(
+ self,
+ maxsize: int,
+ iterable: typing.Union[typing.Union[dict, typing.Iterable[tuple]], None] = None,
+ *,
+ capacity: int = 0,
+ ) -> None:
+ """
+ Initialize a new RRCache instance.
+
+ Args:
+ maxsize (int): Maximum size of the cache. A value of zero means unlimited capacity.
+ iterable (dict or Iterable[tuple], optional): Initial data to populate the cache. Defaults to None.
+ capacity (int, optional): Preallocated capacity for the cache to minimize reallocations. Defaults to 0.
+
+ Note:
+ - The cache size limit is immutable after initialization.
+ - If an iterable is provided, the cache will be populated using the update method.
+ """
+ self._raw = _core.RRCache(maxsize, capacity=capacity)
+
+ if iterable is not None:
+ self.update(iterable)
+
+ @property
+ def maxsize(self) -> int:
+ return self._raw.maxsize()
+
+ def capacity(self) -> int:
+ """Returns the number of elements the map can hold without reallocating."""
+ return self._raw.capacity()
+
+ def __len__(self) -> int:
+ return len(self._raw)
+
+ def __sizeof__(self): # pragma: no cover
+ return self._raw.__sizeof__()
+
+ def __contains__(self, key: KT) -> bool:
+ return key in self._raw
+
+ def __bool__(self) -> bool:
+ return not self.is_empty()
+
+ def is_empty(self) -> bool:
+ return self._raw.is_empty()
+
+ def is_full(self) -> bool:
+ return self._raw.is_full()
+
+ def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
+ """
+ Inserts a key-value pair into the cache, returning the previous value if the key existed.
+
+ Equivalent to `self[key] = value`, but with additional return value semantics:
+
+ - If the key was not previously in the cache, returns None.
+ - If the key was already present, updates the value and returns the old value.
+ The key itself is not modified.
+
+ Args:
+ key: The key to insert.
+ value: The value to associate with the key.
+
+ Returns:
+ The previous value associated with the key, or None if the key was not present.
+ """
+ return self._raw.insert(key, value)
+
+ def get(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Retrieves the value for a given key from the cache.
+
+ Returns the value associated with the key if present, otherwise returns the specified default value.
+ Equivalent to `self[key]`, but provides a fallback default if the key is not found.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ The value associated with the key, or the default value if the key is not found.
+ """
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ return default
+
+ def pop(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Removes specified key and return the corresponding value. If the key is not found, returns the `default`.
+ """
+ try:
+ return self._raw.remove(key)
+ except _core.CoreKeyError:
+ return default
+
+ def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Inserts key with a value of default if key is not in the cache.
+
+ Return the value for key if key is in the cache, else default.
+ """
+ return self._raw.setdefault(key, default)
+
+ def popitem(self) -> typing.Tuple[KT, VT]:
+ """Randomly selects and removes a (key, value) pair from the cache."""
+ try:
+ return self._raw.popitem()
+ except _core.CoreKeyError:
+ raise KeyError() from None
+
+ def drain(self, n: int) -> int: # pragma: no cover
+ """Calls `popitem()` `n` times and returns the number of items actually removed."""
+ if n <= 0:
+ return 0
+
+ for i in range(n):
+ try:
+ self._raw.popitem()
+ except _core.CoreKeyError:
+ # popitem failed at iteration i, so exactly i items were removed
+ return i
+
+ return n
+
+ def update(self, iterable: typing.Union[dict, typing.Iterable[tuple]]) -> None:
+ """Updates the cache with elements from a dictionary or an iterable object of key/value pairs."""
+ if hasattr(iterable, "items"):
+ iterable = iterable.items()
+
+ self._raw.update(iterable)
+
+ def random_key(self) -> KT:
+ """
+ Randomly selects and returns a key from the cache.
+ Raises `KeyError` if the cache is empty.
+ """
+ try:
+ return self._raw.random_key()
+ except _core.CoreKeyError:
+ raise KeyError() from None
+
+ def __setitem__(self, key: KT, value: VT) -> None:
+ self.insert(key, value)
+
+ def __getitem__(self, key: KT) -> VT:
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __delitem__(self, key: KT) -> None:
+ try:
+ self._raw.remove(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __eq__(self, other) -> bool:
+ if not isinstance(other, RRCache):
+ return False # pragma: no cover
+
+ return self._raw == other._raw
+
+ def __ne__(self, other) -> bool:
+ if not isinstance(other, RRCache):
+ return False # pragma: no cover
+
+ return self._raw != other._raw
+
+ def shrink_to_fit(self) -> None:
+ """Shrinks the cache to fit len(self) elements."""
+ self._raw.shrink_to_fit()
+
+ def clear(self, *, reuse: bool = False) -> None:
+ """
+ Removes all items from cache.
+
+ If `reuse` is True, the underlying memory is not freed, so it can be reused in the future.
+ """
+ self._raw.clear(reuse)
+
+ def items(self) -> IteratorView[typing.Tuple[KT, VT]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs).
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ - Items are not ordered.
+ """
+ return IteratorView(self._raw.items(), lambda x: x)
+
+ def keys(self) -> IteratorView[KT]:
+ """
+ Returns an iterable object of the cache's keys.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ - Keys are not ordered.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[0])
+
+ def values(self) -> IteratorView[VT]:
+ """
+ Returns an iterable object of the cache's values.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ - Values are not ordered.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[1])
+
+ def copy(self) -> "RRCache[KT, VT]":
+ """Returns a shallow copy of the cache"""
+ return self.__copy__()
+
+ def __copy__(self) -> "RRCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.copy(self._raw)
+ return copied
+
+ def __deepcopy__(self, memo) -> "RRCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.deepcopy(self._raw, memo)
+ return copied
+
+ def __iter__(self) -> IteratorView[KT]:
+ return self.keys()
+
+ def __repr__(self) -> str:
+ cls = type(self)
+
+ return "%s.%s[%d/%d](%s)" % (
+ cls.__module__,
+ cls.__name__,
+ len(self._raw),
+ self._raw.maxsize(),
+ _items_to_str(self._raw.items(), len(self._raw)),
+ )
+
+
+class LRUCache(BaseCacheImpl[KT, VT]):
+ """
+ Thread-safe Least Recently Used (LRU) cache implementation.
+
+ Provides a cache that automatically removes the least recently used items when
+ the cache reaches its maximum size. Supports various operations like insertion,
+ retrieval, and management of cached items with configurable maximum size and
+ initial capacity.
+
+ Key features:
+ - Configurable maximum cache size
+ - Optional initial capacity allocation
+ - Thread-safe operations
+ - Efficient key-value pair management
+ - Supports initialization from dictionaries or iterables
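+
+ Example (an illustrative sketch of the LRU eviction order)::
+
+ cache = LRUCache(2)
+ cache["a"] = 1
+ cache["b"] = 2
+ cache.get("a") # marks "a" as recently used
+ cache["c"] = 3 # evicts "b", the least recently used key
+ assert "b" not in cache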
+ """
+
+ __slots__ = ("_raw",)
+
+ def __init__(
+ self,
+ maxsize: int,
+ iterable: typing.Union[typing.Union[dict, typing.Iterable[tuple]], None] = None,
+ *,
+ capacity: int = 0,
+ ) -> None:
+ """
+ Initialize a new LRU Cache instance.
+
+ Args:
+ maxsize (int): Maximum size of the cache. Zero indicates unlimited size.
+ iterable (dict | Iterable[tuple], optional): Initial data to populate the cache.
+ capacity (int, optional): Pre-allocated capacity for the cache to minimize reallocations.
+
+ Notes:
+ - The cache size is immutable after initialization.
+ - If an iterable is provided, it will be used to populate the cache.
+ """
+ self._raw = _core.LRUCache(maxsize, capacity=capacity)
+
+ if iterable is not None:
+ self.update(iterable)
+
+ @property
+ def maxsize(self) -> int:
+ return self._raw.maxsize()
+
+ def capacity(self) -> int:
+ """Returns the number of elements the map can hold without reallocating."""
+ return self._raw.capacity()
+
+ def __len__(self) -> int:
+ return len(self._raw)
+
+ def __sizeof__(self): # pragma: no cover
+ return self._raw.__sizeof__()
+
+ def __contains__(self, key: KT) -> bool:
+ return key in self._raw
+
+ def __bool__(self) -> bool:
+ return not self.is_empty()
+
+ def is_empty(self) -> bool:
+ return self._raw.is_empty()
+
+ def is_full(self) -> bool:
+ return self._raw.is_full()
+
+ def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
+ """
+ Inserts a key-value pair into the cache, returning the previous value if the key existed.
+
+ Equivalent to `self[key] = value`, but with additional return value semantics:
+
+ - If the key was not previously in the cache, returns None.
+ - If the key was already present, updates the value and returns the old value.
+ The key itself is not modified.
+
+ Args:
+ key: The key to insert.
+ value: The value to associate with the key.
+
+ Returns:
+ The previous value associated with the key, or None if the key was not present.
+ """
+ return self._raw.insert(key, value)
+
+ def peek(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Searches for a key-value pair in the cache and returns it without marking the key as recently used.
+ """
+ try:
+ return self._raw.peek(key)
+ except _core.CoreKeyError:
+ return default
+
+ def get(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Retrieves the value for a given key from the cache.
+
+ Returns the value associated with the key if present, otherwise returns the specified default value.
+ Equivalent to `self[key]`, but provides a fallback default if the key is not found.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ The value associated with the key, or the default value if the key is not found.
+ """
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ return default
+
+ def pop(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Removes the specified key and returns the corresponding value. If the key is not found, returns the `default`.
+ """
+ try:
+ return self._raw.remove(key)
+ except _core.CoreKeyError:
+ return default
+
+ def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Inserts key with a value of default if key is not in the cache.
+
+ Return the value for key if key is in the cache, else default.
+ """
+ return self._raw.setdefault(key, default)
+
+ def popitem(self) -> typing.Tuple[KT, VT]:
+ """
+ Removes the least recently used item from the cache and returns it as a (key, value) tuple.
+ Raises KeyError if the cache is empty.
+ """
+ try:
+ return self._raw.popitem()
+ except _core.CoreKeyError: # pragma: no cover
+ raise KeyError() from None
+
+ def drain(self, n: int) -> int: # pragma: no cover
+ """Calls `popitem()` `n` times and returns the number of items actually removed."""
+ if n <= 0:
+ return 0
+
+ for i in range(n):
+ try:
+ self._raw.popitem()
+ except _core.CoreKeyError:
+ # popitem failed at iteration i, so exactly i items were removed
+ return i
+
+ return n
+
+ def update(self, iterable: typing.Union[dict, typing.Iterable[tuple]]) -> None:
+ """Updates the cache with elements from a dictionary or an iterable object of key/value pairs."""
+ if hasattr(iterable, "items"):
+ iterable = iterable.items()
+
+ self._raw.update(iterable)
+
+ def __setitem__(self, key: KT, value: VT) -> None:
+ self.insert(key, value)
+
+ def __getitem__(self, key: KT) -> VT:
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __delitem__(self, key: KT) -> None:
+ try:
+ self._raw.remove(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __eq__(self, other) -> bool:
+ if not isinstance(other, LRUCache):
+ return False # pragma: no cover
+
+ return self._raw == other._raw
+
+ def __ne__(self, other) -> bool:
+ if not isinstance(other, LRUCache):
+ return False # pragma: no cover
+
+ return self._raw != other._raw
+
+ def shrink_to_fit(self) -> None:
+ """Shrinks the cache to fit len(self) elements."""
+ self._raw.shrink_to_fit()
+
+ def clear(self, *, reuse: bool = False) -> None:
+ """
+ Removes all items from cache.
+
+ If `reuse` is True, the underlying memory is not freed, so it can be reused in the future.
+ """
+ self._raw.clear(reuse)
+
+ def items(self) -> IteratorView[typing.Tuple[KT, VT]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs).
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x)
+
+ def keys(self) -> IteratorView[KT]:
+ """
+ Returns an iterable object of the cache's keys.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[0])
+
+ def values(self) -> IteratorView[VT]:
+ """
+ Returns an iterable object of the cache's values.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[1])
+
+ def least_recently_used(self) -> typing.Optional[KT]:
+ """
+ Returns the key in the cache that has gone the longest without being accessed.
+ """
+ return self._raw.least_recently_used()
+
+ def most_recently_used(self) -> typing.Optional[KT]:
+ """
+ Returns the most recently accessed key in the cache.
+ """
+ return self._raw.most_recently_used()
+
+ def copy(self) -> "LRUCache[KT, VT]":
+ """Returns a shallow copy of the cache"""
+ return self.__copy__()
+
+ def __copy__(self) -> "LRUCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.copy(self._raw)
+ return copied
+
+ def __deepcopy__(self, memo) -> "LRUCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.deepcopy(self._raw, memo)
+ return copied
+
+ def __iter__(self) -> IteratorView[KT]:
+ return self.keys()
+
+ def __repr__(self) -> str:
+ cls = type(self)
+
+ return "%s.%s[%d/%d](%s)" % (
+ cls.__module__,
+ cls.__name__,
+ len(self._raw),
+ self._raw.maxsize(),
+ _items_to_str(self._raw.items(), len(self._raw)),
+ )
+
+
+class LFUCache(BaseCacheImpl[KT, VT]):
+ """
+ A thread-safe Least Frequently Used (LFU) cache implementation.
+
+ This cache removes elements that have been accessed the least number of times,
+ regardless of their access time. It provides methods for inserting, retrieving,
+ and managing cache entries with configurable maximum size and initial capacity.
+
+ Key features:
+ - Thread-safe cache with LFU eviction policy
+ - Configurable maximum size and initial capacity
+ - Supports initialization from dictionaries or iterables
+ - Provides methods for key-value management similar to dict
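+
+ Example (an illustrative sketch of the LFU eviction order)::
+
+ cache = LFUCache(2)
+ cache["a"] = 1
+ cache["b"] = 2
+ cache.get("a") # "a" now has a higher access frequency
+ cache["c"] = 3 # evicts "b", the least frequently used key
+ assert "b" not in cache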
+ """
+
+ __slots__ = ("_raw",)
+
+ def __init__(
+ self,
+ maxsize: int,
+ iterable: typing.Union[typing.Union[dict, typing.Iterable[tuple]], None] = None,
+ *,
+ capacity: int = 0,
+ ) -> None:
+ """
+ Initialize a new Least Frequently Used (LFU) cache.
+
+ Args:
+ maxsize (int): Maximum size of the cache. A value of zero means unlimited size.
+ iterable (dict or Iterable[tuple], optional): Initial data to populate the cache.
+ capacity (int, optional): Initial hash table capacity to minimize reallocations. Defaults to 0.
+
+ The cache uses a thread-safe LFU eviction policy, removing least frequently accessed items when the cache reaches its maximum size.
+ """
+ self._raw = _core.LFUCache(maxsize, capacity=capacity)
+
+ if iterable is not None:
+ self.update(iterable)
+
+ @property
+ def maxsize(self) -> int:
+ return self._raw.maxsize()
+
+ def capacity(self) -> int:
+ """Returns the number of elements the map can hold without reallocating."""
+ return self._raw.capacity()
+
+ def __len__(self) -> int:
+ return len(self._raw)
+
+ def __sizeof__(self): # pragma: no cover
+ return self._raw.__sizeof__()
+
+ def __contains__(self, key: KT) -> bool:
+ return key in self._raw
+
+ def __bool__(self) -> bool:
+ return not self.is_empty()
+
+ def is_empty(self) -> bool:
+ return self._raw.is_empty()
+
+ def is_full(self) -> bool:
+ return self._raw.is_full()
+
+ def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
+ """
+ Inserts a key-value pair into the cache, returning the previous value if the key existed.
+
+ Equivalent to `self[key] = value`, but with additional return value semantics:
+
+ - If the key was not previously in the cache, returns None.
+ - If the key was already present, updates the value and returns the old value.
+ The key itself is not modified.
+
+ Args:
+ key: The key to insert.
+ value: The value to associate with the key.
+
+ Returns:
+ The previous value associated with the key, or None if the key was not present.
+ """
+ return self._raw.insert(key, value)
+
+ def peek(
+ self, key: KT, default: typing.Optional[DT] = None
+ ) -> typing.Union[VT, DT]: # pragma: no cover
+ """
+ Searches for a key-value pair in the cache and returns it without increasing the key's frequency counter.
+ """
+ try:
+ return self._raw.peek(key)
+ except _core.CoreKeyError:
+ return default
+
+ def get(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Retrieves the value for a given key from the cache.
+
+ Returns the value associated with the key if present, otherwise returns the specified default value.
+ Equivalent to `self[key]`, but provides a fallback default if the key is not found.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ The value associated with the key, or the default value if the key is not found.
+ """
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ return default
+
+ def pop(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Removes the specified key and returns the corresponding value. If the key is not found, returns the `default`.
+ """
+ try:
+ return self._raw.remove(key)
+ except _core.CoreKeyError:
+ return default
+
+ def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Inserts key with a value of default if key is not in the cache.
+
+ Return the value for key if key is in the cache, else default.
+ """
+ return self._raw.setdefault(key, default)
+
+ def popitem(self) -> typing.Tuple[KT, VT]:
+ """
+ Removes and returns the least frequently used (LFU) item from the cache.
+ """
+ try:
+ return self._raw.popitem()
+ except _core.CoreKeyError: # pragma: no cover
+ raise KeyError() from None
+
+ def drain(self, n: int) -> int: # pragma: no cover
+ """Calls `popitem()` `n` times and returns the number of items actually removed."""
+ if n <= 0:
+ return 0
+
+ for i in range(n):
+ try:
+ self._raw.popitem()
+ except _core.CoreKeyError:
+ # popitem failed at iteration i, so exactly i items were removed
+ return i
+
+ return n
+
+ def update(self, iterable: typing.Union[dict, typing.Iterable[tuple]]) -> None:
+ """Updates the cache with elements from a dictionary or an iterable object of key/value pairs."""
+ if hasattr(iterable, "items"):
+ iterable = iterable.items()
+
+ self._raw.update(iterable)
+
+ def __setitem__(self, key: KT, value: VT) -> None:
+ self.insert(key, value)
+
+ def __getitem__(self, key: KT) -> VT:
+ try:
+ return self._raw.get(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __delitem__(self, key: KT) -> None:
+ try:
+ self._raw.remove(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __eq__(self, other) -> bool:
+ if not isinstance(other, LFUCache):
+ return False # pragma: no cover
+
+ return self._raw == other._raw
+
+ def __ne__(self, other) -> bool:
+ if not isinstance(other, LFUCache):
+ return False # pragma: no cover
+
+ return self._raw != other._raw
+
+ def shrink_to_fit(self) -> None:
+ """Shrinks the cache to fit len(self) elements."""
+ self._raw.shrink_to_fit()
+
+ def clear(self, *, reuse: bool = False) -> None:
+ """
+ Removes all items from cache.
+
+ If `reuse` is True, the underlying memory is not freed, so it can be reused in the future.
+ """
+ self._raw.clear(reuse)
+
+ def items(self) -> IteratorView[typing.Tuple[KT, VT]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs).
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: (x[0], x[1]))
+
+ def items_with_frequency(self) -> IteratorView[typing.Tuple[KT, VT, int]]:
+ """
+ Returns an iterable view of the cache's items as `(key, value, frequency)` tuples.
+
+ Notes:
+ - The returned iterator should not be used to modify the cache.
+ - Frequency represents how many times the item has been accessed.
+ """
+ return IteratorView(self._raw.items(), lambda x: x)
+
+ def keys(self) -> IteratorView[KT]:
+ """
+ Returns an iterable object of the cache's keys.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[0])
+
+ def values(self) -> IteratorView[VT]:
+ """
+ Returns an iterable object of the cache's values.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x[1])
+
+ def least_frequently_used(self, n: int = 0) -> typing.Optional[KT]:
+ """
+ Returns the key in the cache that has been accessed the least, regardless of time.
+
+ If n is given, returns the nth least frequently used key.
+
+ Notes:
+ - This method may re-sort the cache, which can invalidate active iterators.
+ - Do not use this method while using iterators.
+ """
+ if n < 0:
+ n = len(self._raw) + n
+
+ if n < 0:
+ return None
+
+ return self._raw.least_frequently_used(n)
+
+ def copy(self) -> "LFUCache[KT, VT]":
+ """Returns a shallow copy of the cache"""
+ return self.__copy__()
+
+ def __copy__(self) -> "LFUCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.copy(self._raw)
+ return copied
+
+ def __deepcopy__(self, memo) -> "LFUCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.deepcopy(self._raw, memo)
+ return copied
+
+ def __iter__(self) -> IteratorView[KT]:
+ return self.keys()
+
+ def __repr__(self) -> str:
+ cls = type(self)
+
+ return "%s.%s[%d/%d](%s)" % (
+ cls.__module__,
+ cls.__name__,
+ len(self._raw),
+ self._raw.maxsize(),
+ # NOTE: we cannot use self._raw.items() here because it yields tuples of (key, value, frequency)
+ _items_to_str(self.items(), len(self._raw)),
+ )
+
+
+class TTLCache(BaseCacheImpl[KT, VT]):
+ """
+ A thread-safe Time-To-Live (TTL) cache implementation with configurable maximum size and expiration.
+
+ This cache automatically removes elements that have expired based on their time-to-live setting.
+ Supports various operations like insertion, retrieval, and iteration.
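+
+ Example (an illustrative sketch; entries expire after `ttl` seconds)::
+
+ import time
+
+ cache = TTLCache(10, ttl=0.5)
+ cache["a"] = 1
+ time.sleep(0.6)
+ cache.get("a") # None: the entry has expired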
+ """
+
+ __slots__ = ("_raw",)
+
+ def __init__(
+ self,
+ maxsize: int,
+ ttl: typing.Union[float, timedelta],
+ iterable: typing.Union[typing.Union[dict, typing.Iterable[tuple]], None] = None,
+ *,
+ capacity: int = 0,
+ ) -> None:
+ """
+ Initialize a new TTL cache instance.
+
+ Args:
+ maxsize: Maximum number of elements the cache can hold.
+ ttl: Time-to-live for cache entries, either as seconds or a timedelta.
+ iterable: Optional initial items to populate the cache, can be a dict or iterable of tuples.
+ capacity: Optional initial capacity for the underlying cache storage. Defaults to 0.
+
+ Raises:
+ ValueError: If the time-to-live (ttl) is not a positive number.
+ """
+ if isinstance(ttl, timedelta):
+ ttl = ttl.total_seconds()
+
+ if ttl <= 0:
+ raise ValueError("ttl must be a positive number and non-zero")
+
+ self._raw = _core.TTLCache(maxsize, ttl, capacity=capacity)
+
+ if iterable is not None:
+ self.update(iterable)
+
+ @property
+ def maxsize(self) -> int:
+ return self._raw.maxsize()
+
+ @property
+ def ttl(self) -> float:
+ return self._raw.ttl()
+
+ def capacity(self) -> int:
+ """Returns the number of elements the map can hold without reallocating."""
+ return self._raw.capacity()
+
+ def __len__(self) -> int:
+ return len(self._raw)
+
+ def __sizeof__(self): # pragma: no cover
+ return self._raw.__sizeof__()
+
+ def __contains__(self, key: KT) -> bool:
+ return key in self._raw
+
+ def __bool__(self) -> bool:
+ return not self.is_empty()
+
+ def is_empty(self) -> bool:
+ return self._raw.is_empty()
+
+ def is_full(self) -> bool:
+ return self._raw.is_full()
+
+ def insert(self, key: KT, value: VT) -> typing.Optional[VT]:
+ """
+ Inserts a key-value pair into the cache, returning the previous value if the key existed.
+
+ Equivalent to `self[key] = value`, but with additional return value semantics:
+
+ - If the key was not previously in the cache, returns None.
+ - If the key was already present, updates the value and returns the old value.
+ The key itself is not modified.
+
+ Args:
+ key: The key to insert.
+ value: The value to associate with the key.
+
+ Returns:
+ The previous value associated with the key, or None if the key was not present.
+ """
+ return self._raw.insert(key, value)
+
+ def get(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Retrieves the value for a given key from the cache.
+
+ Returns the value associated with the key if present, otherwise returns the specified default value.
+ Equivalent to `self[key]`, but provides a fallback default if the key is not found.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ The value associated with the key, or the default value if the key is not found.
+ """
+ try:
+ return self._raw.get(key).value()
+ except _core.CoreKeyError:
+ return default
+
+ def get_with_expire(
+ self, key: KT, default: typing.Optional[DT] = None
+ ) -> typing.Tuple[typing.Union[VT, DT], float]:
+ """
+ Retrieves the value and expiration duration for a given key from the cache.
+
+ Returns a tuple containing the value associated with the key and its remaining time-to-live.
+ If the key is not found, returns the default value and 0.0 duration.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ A tuple of (value, duration), where value is the cached value or default,
+ and duration is the time-to-live for the key (or 0.0 if not found).
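+
+ Example (illustrative)::
+
+ value, remaining = cache.get_with_expire("key", None)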
+ """
+ try:
+ pair = self._raw.get(key)
+ except _core.CoreKeyError:
+ return default, 0.0
+ else:
+ return (pair.value(), pair.duration())
+
+ def pop(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Removes the specified key and returns the corresponding value. If the key is not found, returns the `default`.
+ """
+ try:
+ return self._raw.remove(key).value()
+ except _core.CoreKeyError:
+ return default
+
+ def pop_with_expire(
+ self, key: KT, default: typing.Optional[DT] = None
+ ) -> typing.Tuple[typing.Union[VT, DT], float]:
+ """
+ Removes the specified key from the cache and returns its value and expiration duration.
+
+ If the key is not found, returns the default value and 0.0 duration.
+
+ Args:
+ key: The key to remove from the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ A tuple of (value, duration), where value is the cached value or default,
+ and duration is the time-to-live for the key (or 0.0 if not found).
+ """
+ try:
+ pair = self._raw.remove(key)
+ except _core.CoreKeyError:
+ return default, 0.0
+ else:
+ return (pair.value(), pair.duration())
+
+ def setdefault(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Inserts key with a value of default if key is not in the cache.
+
+ Return the value for key if key is in the cache, else default.
+ """
+ return self._raw.setdefault(key, default)
+
+ def popitem(self) -> typing.Tuple[KT, VT]:
+ """Removes the element that has been in the cache the longest."""
+ try:
+ val = self._raw.popitem()
+ except _core.CoreKeyError:
+ raise KeyError() from None
+ else:
+ return val.pack2()
+
+ def popitem_with_expire(self) -> typing.Tuple[KT, VT, float]:
+ """
+ Removes and returns the element that has been in the cache the longest, along with its key and expiration duration.
+
+ If the cache is empty, raises a KeyError.
+
+ Returns:
+ A tuple of (key, value, duration), where:
+ - key is the key of the removed item
+ - value is the value of the removed item
+ - duration is the time-to-live for the removed item
+ """
+ try:
+ val = self._raw.popitem()
+ except _core.CoreKeyError:
+ raise KeyError() from None
+ else:
+ return val.pack3()
+
+ def drain(self, n: int) -> int: # pragma: no cover
+ """Calls `popitem()` `n` times and returns the number of items actually removed."""
+ if n <= 0:
+ return 0
+
+ for i in range(n):
+ try:
+ self._raw.popitem()
+ except _core.CoreKeyError:
+ # popitem failed at iteration i, so exactly i items were removed
+ return i
+
+ return n
+
+ def update(self, iterable: typing.Union[dict, typing.Iterable[tuple]]) -> None:
+ """Updates the cache with elements from a dictionary or an iterable object of key/value pairs."""
+ if hasattr(iterable, "items"):
+ iterable = iterable.items()
+
+ self._raw.update(iterable)
+
+ def __setitem__(self, key: KT, value: VT) -> None:
+ self.insert(key, value)
+
+ def __getitem__(self, key: KT) -> VT:
+ try:
+ return self._raw.get(key).value()
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __delitem__(self, key: KT) -> None:
+ try:
+ self._raw.remove(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __eq__(self, other) -> bool:
+ if not isinstance(other, TTLCache):
+ return False # pragma: no cover
+
+ return self._raw == other._raw
+
+ def __ne__(self, other) -> bool:
+ if not isinstance(other, TTLCache):
+ return False # pragma: no cover
+
+ return self._raw != other._raw
+
+ def shrink_to_fit(self) -> None:
+ """Shrinks the cache to fit len(self) elements."""
+ self._raw.shrink_to_fit()
+
+ def clear(self, *, reuse: bool = False) -> None:
+ """
+ Removes all items from cache.
+
+ If `reuse` is True, the underlying memory is not freed, so it can be reused in the future.
+ """
+ self._raw.clear(reuse)
+
+ def items_with_expire(self) -> IteratorView[typing.Tuple[KT, VT, float]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs along with their expiration duration).
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x.pack3())
+
+ def items(self) -> IteratorView[typing.Tuple[KT, VT]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs).
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x.pack2())
+
+ def keys(self) -> IteratorView[KT]:
+ """
+ Returns an iterable object of the cache's keys.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x.key())
+
+ def values(self) -> IteratorView[VT]:
+ """
+ Returns an iterable object of the cache's values.
+
+ Notes:
+ - You should not make any changes in cache while using this iterable object.
+ """
+ return IteratorView(self._raw.items(), lambda x: x.value())
+
+ def first(self, n: int = 0) -> typing.Optional[KT]: # pragma: no cover
+ """
+ Returns the first key in the cache; this is the one that will be removed by `popitem()` (if n == 0).
+
+ Using the `n` parameter, you can browse the order index by index.
+ """
+ if n < 0:
+ n = len(self._raw) + n
+
+ if n < 0:
+ return None
+
+ return self._raw.get_index(n)
+
+ def last(self) -> typing.Optional[KT]:
+ """
+ Returns the last key in the cache. Equivalent to `self.first(-1)`.
+ """
+ return self._raw.get_index(len(self._raw) - 1)
+
+ def expire(self) -> None: # pragma: no cover
+ """
+ Manually removes expired key-value pairs and releases their memory.
+
+ Notes:
+ - This operation is typically automatic and does not require manual invocation.
+ """
+ self._raw.expire()
+
+ def copy(self) -> "TTLCache[KT, VT]":
+ """Returns a shallow copy of the cache"""
+ return self.__copy__()
+
+ def __copy__(self) -> "TTLCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.copy(self._raw)
+ return copied
+
+ def __deepcopy__(self, memo) -> "TTLCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.deepcopy(self._raw, memo)
+ return copied
+
+ def __iter__(self) -> IteratorView[KT]:
+ return self.keys()
+
+ def __repr__(self) -> str:
+ cls = type(self)
+
+ return "%s.%s[%d/%d, ttl=%f](%s)" % (
+ cls.__module__,
+ cls.__name__,
+ len(self._raw),
+ self._raw.maxsize(),
+ self._raw.ttl(),
+ _items_to_str(self.items(), len(self._raw)),
+ )
+
+
+class VTTLCache(BaseCacheImpl[KT, VT]):
+ """
+    A thread-safe, time-to-live (TTL) cache implementation with a per-key expiration policy.
+
+ This cache allows storing key-value pairs with optional expiration times. When an item expires,
+ it is automatically removed from the cache. The cache supports a maximum size and provides
+ various methods for inserting, retrieving, and managing cached items.
+
+ Key features:
+ - Per-key time-to-live (TTL) support
+ - Configurable maximum cache size
+ - Thread-safe operations
+ - Automatic expiration of items
+
+ Supports dictionary-like operations such as get, insert, update, and iteration.
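+
+    Example (a minimal, illustrative sketch; key names and TTL values are arbitrary)::
+
+        cache = VTTLCache(100)
+        cache.insert("session", "token", ttl=5.0)  # expires after ~5 seconds
+        cache.insert("config", "value")            # no TTL; never expires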
+ """
+
+ __slots__ = ("_raw",)
+
+ def __init__(
+ self,
+ maxsize: int,
+        iterable: typing.Union[dict, typing.Iterable[tuple], None] = None,
+ ttl: typing.Union[float, timedelta, datetime, None] = None, # This is not a global TTL!
+ *,
+ capacity: int = 0,
+ ) -> None:
+ """
+ Initialize a new VTTLCache instance.
+
+ Args:
+ maxsize (int): Maximum size of the cache. Zero indicates unlimited size.
+ iterable (dict or Iterable[tuple], optional): Initial data to populate the cache.
+ ttl (float or timedelta or datetime, optional): Time-to-live duration for `iterable` items.
+ capacity (int, optional): Preallocated capacity for the cache to minimize reallocations.
+
+ Raises:
+ ValueError: If provided TTL is zero or negative.
+ """
+ self._raw = _core.VTTLCache(maxsize, capacity=capacity)
+
+ if iterable is not None:
+ self.update(iterable, ttl)
+
+ @property
+ def maxsize(self) -> int:
+ return self._raw.maxsize()
+
+ def capacity(self) -> int:
+ """Returns the number of elements the map can hold without reallocating."""
+ return self._raw.capacity()
+
+ def __len__(self) -> int:
+ return len(self._raw)
+
+ def __sizeof__(self): # pragma: no cover
+ return self._raw.__sizeof__()
+
+ def __contains__(self, key: KT) -> bool:
+ return key in self._raw
+
+ def __bool__(self) -> bool:
+ return not self.is_empty()
+
+ def is_empty(self) -> bool:
+ return self._raw.is_empty()
+
+ def is_full(self) -> bool:
+ return self._raw.is_full()
+
+ def insert(
+ self, key: KT, value: VT, ttl: typing.Union[float, timedelta, datetime, None] = None
+ ) -> typing.Optional[VT]:
+ """
+ Insert a key-value pair into the cache with an optional time-to-live (TTL).
+ Returns the previous value associated with the key, if it existed.
+
+ Args:
+ key (KT): The key to insert.
+ value (VT): The value to associate with the key.
+ ttl (float or timedelta or datetime, optional): Time-to-live duration for the item.
+ If a timedelta or datetime is provided, it will be converted to seconds.
+
+ Raises:
+ ValueError: If the provided TTL is zero or negative.
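+
+        Example (illustrative; keys and TTL values are arbitrary)::
+
+            from datetime import timedelta
+
+            cache = VTTLCache(10)
+            cache.insert("a", 1, ttl=10.0)                  # expires in ~10 seconds
+            cache.insert("b", 2, ttl=timedelta(minutes=1))  # expires in ~60 seconds
+            cache.insert("c", 3)                            # never expires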
+ """
+ if ttl is not None: # pragma: no cover
+ if isinstance(ttl, timedelta):
+ ttl = ttl.total_seconds()
+
+ elif isinstance(ttl, datetime):
+ ttl = (ttl - datetime.now()).total_seconds()
+
+ if ttl <= 0:
+ raise ValueError("ttl must be positive and non-zero")
+
+ return self._raw.insert(key, value, ttl)
+
+ def get(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+ Retrieves the value for a given key from the cache.
+
+ Returns the value associated with the key if present, otherwise returns the specified default value.
+ Equivalent to `self[key]`, but provides a fallback default if the key is not found.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ The value associated with the key, or the default value if the key is not found.
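+
+        Example (illustrative)::
+
+            cache = VTTLCache(10)
+            cache.insert("a", 1, ttl=10.0)
+            assert cache.get("a") == 1
+            assert cache.get("missing", default=0) == 0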
+ """
+ try:
+ return self._raw.get(key).value()
+ except _core.CoreKeyError:
+ return default
+
+ def get_with_expire(
+ self, key: KT, default: typing.Optional[DT] = None
+ ) -> typing.Tuple[typing.Union[VT, DT], float]:
+ """
+ Retrieves the value and expiration duration for a given key from the cache.
+
+ Returns a tuple containing the value associated with the key and its duration.
+ If the key is not found, returns the default value and 0.0 duration.
+
+ Args:
+ key: The key to look up in the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ A tuple of (value, duration), where value is the cached value or default,
+ and duration is the time-to-live for the key (or 0.0 if not found).
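+
+        Example (illustrative)::
+
+            cache = VTTLCache(10)
+            cache.insert("a", 1, ttl=10.0)
+            value, duration = cache.get_with_expire("a")
+            # value == 1; duration is the item's time-to-live in seconds
+            value, duration = cache.get_with_expire("missing")
+            # value is None; duration == 0.0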
+ """
+ try:
+ pair = self._raw.get(key)
+ except _core.CoreKeyError:
+ return default, 0.0
+ else:
+ return (pair.value(), pair.duration())
+
+ def pop(self, key: KT, default: typing.Optional[DT] = None) -> typing.Union[VT, DT]:
+ """
+        Removes the specified key and returns the corresponding value; if the key is not found, returns `default`.
+ """
+ try:
+ return self._raw.remove(key).value()
+ except _core.CoreKeyError:
+ return default
+
+ def pop_with_expire(
+ self, key: KT, default: typing.Optional[DT] = None
+ ) -> typing.Tuple[typing.Union[VT, DT], float]:
+ """
+ Removes the specified key from the cache and returns its value and expiration duration.
+
+ If the key is not found, returns the default value and 0.0 duration.
+
+ Args:
+ key: The key to remove from the cache.
+ default: The value to return if the key is not present in the cache. Defaults to None.
+
+ Returns:
+ A tuple of (value, duration), where value is the cached value or default,
+ and duration is the time-to-live for the key (or 0.0 if not found).
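+
+        Example (illustrative)::
+
+            cache = VTTLCache(10)
+            cache.insert("a", 1, ttl=10.0)
+            value, duration = cache.pop_with_expire("a")
+            assert value == 1 and "a" not in cache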
+ """
+ try:
+ pair = self._raw.remove(key)
+ except _core.CoreKeyError:
+ return default, 0.0
+ else:
+ return (pair.value(), pair.duration())
+
+ def setdefault(
+ self,
+ key: KT,
+ default: typing.Optional[DT] = None,
+ ttl: typing.Union[float, timedelta, datetime, None] = None,
+ ) -> typing.Union[VT, DT]:
+ """
+        Returns the value for `key` if it is present in the cache.
+
+        If the key is not in the cache, it is inserted with the `default` value and an optional time-to-live (TTL), and the default value is returned.
+        If the key already exists, its current value is returned and the cache is unchanged.
+
+ Args:
+ key: The key to insert or retrieve from the cache.
+ default: The value to insert if the key is not present. Defaults to None.
+ ttl: Optional time-to-live for the key. Can be a float (seconds), timedelta, or datetime.
+ If not specified, the key will not expire.
+
+ Returns:
+ The value associated with the key, either existing or the default value.
+
+ Raises:
+ ValueError: If the provided TTL is not a positive value.
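+
+        Example (illustrative)::
+
+            cache = VTTLCache(10)
+            cache.insert("a", 1)
+            assert cache.setdefault("a", 0) == 1           # existing value is kept
+            assert cache.setdefault("b", 2, ttl=5.0) == 2  # inserted with a ~5s TTL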
+ """
+ if ttl is not None: # pragma: no cover
+ if isinstance(ttl, timedelta):
+ ttl = ttl.total_seconds()
+
+ elif isinstance(ttl, datetime):
+ ttl = (ttl - datetime.now()).total_seconds()
+
+ if ttl <= 0:
+ raise ValueError("ttl must be positive and non-zero")
+
+ return self._raw.setdefault(key, default, ttl)
+
+ def popitem(self) -> typing.Tuple[KT, VT]:
+ """
+ Removes and returns the key-value pair that is closest to expiration.
+
+ Returns:
+ A tuple containing the key and value of the removed item.
+
+ Raises:
+ KeyError: If the cache is empty.
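+
+        Example (illustrative)::
+
+            cache = VTTLCache(10)
+            cache.insert("short", 1, ttl=1.0)
+            cache.insert("long", 2, ttl=60.0)
+            assert cache.popitem() == ("short", 1)  # closest to expiration goes first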
+ """
+ try:
+ val = self._raw.popitem()
+ except _core.CoreKeyError: # pragma: no cover
+ raise KeyError() from None
+ else:
+ return val.pack2()
+
+ def popitem_with_expire(self) -> typing.Tuple[KT, VT, float]:
+ """
+ Removes and returns the key-value pair that is closest to expiration, along with its expiration duration.
+
+ Returns:
+ A tuple containing the key, value, and expiration duration of the removed item.
+
+ Raises:
+ KeyError: If the cache is empty.
+ """
+ try:
+ val = self._raw.popitem()
+ except _core.CoreKeyError:
+ raise KeyError() from None
+ else:
+ return val.pack3()
+
+ def drain(self, n: int) -> int: # pragma: no cover
+ """Does the `popitem()` `n` times and returns count of removed items."""
+ if n <= 0:
+ return 0
+
+ for i in range(n):
+ try:
+ self._raw.popitem()
+ except _core.CoreKeyError:
+ return i
+
+        return n
+
+ def update(
+ self,
+ iterable: typing.Union[dict, typing.Iterable[tuple]],
+ ttl: typing.Union[float, timedelta, datetime, None] = None,
+ ) -> None:
+ """Updates the cache with elements from a dictionary or an iterable object of key/value pairs."""
+ if hasattr(iterable, "items"):
+ iterable = iterable.items()
+
+ if ttl is not None: # pragma: no cover
+ if isinstance(ttl, timedelta):
+ ttl = ttl.total_seconds()
+
+ elif isinstance(ttl, datetime):
+ ttl = (ttl - datetime.now()).total_seconds()
+
+ if ttl <= 0:
+ raise ValueError("ttl must be positive and non-zero")
+
+ self._raw.update(iterable, ttl)
+
+ def __setitem__(self, key: KT, value: VT) -> None:
+ self.insert(key, value, None)
+
+ def __getitem__(self, key: KT) -> VT:
+ try:
+ return self._raw.get(key).value()
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __delitem__(self, key: KT) -> None:
+ try:
+ self._raw.remove(key)
+ except _core.CoreKeyError:
+ raise KeyError(key) from None
+
+ def __eq__(self, other) -> bool:
+ if not isinstance(other, VTTLCache):
+ return False # pragma: no cover
+
+ return self._raw == other._raw
+
+ def __ne__(self, other) -> bool:
+ if not isinstance(other, VTTLCache):
+            return True  # pragma: no cover
+
+ return self._raw != other._raw
+
+ def shrink_to_fit(self) -> None:
+ """Shrinks the cache to fit len(self) elements."""
+ self._raw.shrink_to_fit()
+
+ def clear(self, *, reuse: bool = False) -> None:
+ """
+ Removes all items from cache.
+
+        If reuse is True, the underlying memory is kept (not freed) so it can be reused by future insertions.
+ """
+ self._raw.clear(reuse)
+
+ def items_with_expire(self) -> IteratorView[typing.Tuple[KT, VT, float]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs along with their expiration duration).
+
+ Notes:
+            - You should not modify the cache while using this iterator.
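+
+        Example (illustrative)::
+
+            for key, value, duration in cache.items_with_expire():
+                print(key, value, duration)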
+ """
+ return IteratorView(self._raw.items(), lambda x: x.pack3())
+
+ def items(self) -> IteratorView[typing.Tuple[KT, VT]]:
+ """
+ Returns an iterable object of the cache's items (key-value pairs).
+
+ Notes:
+            - You should not modify the cache while using this iterator.
+ """
+ return IteratorView(self._raw.items(), lambda x: x.pack2())
+
+ def keys(self) -> IteratorView[KT]:
+ """
+ Returns an iterable object of the cache's keys.
+
+ Notes:
+            - You should not modify the cache while using this iterator.
+ """
+ return IteratorView(self._raw.items(), lambda x: x.key())
+
+ def values(self) -> IteratorView[VT]:
+ """
+ Returns an iterable object of the cache's values.
+
+ Notes:
+            - You should not modify the cache while using this iterator.
+ """
+ return IteratorView(self._raw.items(), lambda x: x.value())
+
+ def expire(self) -> None: # pragma: no cover
+ """
+        Manually removes expired key-value pairs and releases the memory they occupy.
+
+ Notes:
+ - This operation is typically automatic and does not require manual invocation.
+ """
+ self._raw.expire()
+
+ def copy(self) -> "VTTLCache[KT, VT]":
+ """Returns a shallow copy of the cache"""
+ return self.__copy__()
+
+ def __copy__(self) -> "VTTLCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.copy(self._raw)
+ return copied
+
+ def __deepcopy__(self, memo) -> "VTTLCache[KT, VT]":
+ cls = type(self)
+ copied = cls.__new__(cls)
+ copied._raw = _std_copy.deepcopy(self._raw, memo)
+ return copied
+
+ def __iter__(self) -> IteratorView[KT]:
+ return self.keys()
+
+ def __repr__(self) -> str:
+ cls = type(self)
+
+ return "%s.%s[%d/%d](%s)" % (
+ cls.__module__,
+ cls.__name__,
+ len(self._raw),
+ self._raw.maxsize(),
+ _items_to_str(self.items(), len(self._raw)),
+ )
diff --git a/python/cachebox/_core.pyi b/python/cachebox/_core.pyi
new file mode 100644
index 0000000..728a3d4
--- /dev/null
+++ b/python/cachebox/_core.pyi
@@ -0,0 +1,73 @@
+import typing
+
+__version__: str
+__author__: str
+
+class CoreKeyError(Exception):
+ """
+    Raised when a key is not found in a cache.
+    This exception is internal to the library core and is never exposed to user code.
+ """
+
+ ...
+
+KT = typing.TypeVar("KT")
+VT = typing.TypeVar("VT")
+DT = typing.TypeVar("DT")
+
+class BaseCacheImpl(typing.Generic[KT, VT]):
+ """
+ Base implementation for cache classes in the cachebox library.
+
+ This abstract base class defines the generic structure for cache implementations,
+ supporting different key and value types through generic type parameters.
+ Serves as a foundation for specific cache variants like Cache and FIFOCache.
+ """
+
+ def __init__(
+ self,
+ maxsize: int,
+ iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]] = ...,
+ *,
+ capacity: int = ...,
+ ) -> None: ...
+ @staticmethod
+ def __class_getitem__(*args) -> None: ...
+ @property
+ def maxsize(self) -> int: ...
+ def __len__(self) -> int: ...
+ def __sizeof__(self) -> int: ...
+ def __bool__(self) -> bool: ...
+ def __contains__(self, key: KT) -> bool: ...
+ def __setitem__(self, key: KT, value: VT) -> None: ...
+ def __getitem__(self, key: KT) -> VT: ...
+    def __delitem__(self, key: KT) -> None: ...
+ def __str__(self) -> str: ...
+ def __iter__(self) -> typing.Iterator[KT]: ...
+ def __eq__(self, other) -> bool: ...
+ def __ne__(self, other) -> bool: ...
+ def capacity(self) -> int: ...
+ def is_full(self) -> bool: ...
+ def is_empty(self) -> bool: ...
+ def insert(self, key: KT, value: VT, *args, **kwargs) -> typing.Optional[VT]: ...
+ def get(self, key: KT, default: DT = None) -> typing.Union[VT, DT]: ...
+ def pop(self, key: KT, default: DT = None) -> typing.Union[VT, DT]: ...
+ def setdefault(
+ self, key: KT, default: typing.Optional[DT] = None, *args, **kwargs
+    ) -> typing.Optional[typing.Union[VT, DT]]: ...
+ def popitem(self) -> typing.Tuple[KT, VT]: ...
+ def drain(self, n: int) -> int: ...
+ def clear(self, *, reuse: bool = False) -> None: ...
+ def shrink_to_fit(self) -> None: ...
+ def update(
+ self,
+ iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]],
+ *args,
+ **kwargs,
+ ) -> None: ...
+ def keys(self) -> typing.Iterable[KT]: ...
+ def values(self) -> typing.Iterable[VT]: ...
+ def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]: ...
+ def __copy__(self) -> "BaseCacheImpl[KT, VT]": ...
+ def __deepcopy__(self, memo) -> "BaseCacheImpl[KT, VT]": ...
+ def copy(self) -> "BaseCacheImpl[KT, VT]": ...
diff --git a/cachebox/py.typed b/python/cachebox/py.typed
similarity index 100%
rename from cachebox/py.typed
rename to python/cachebox/py.typed
diff --git a/cachebox/utils.py b/python/cachebox/utils.py
similarity index 77%
rename from cachebox/utils.py
rename to python/cachebox/utils.py
index 43e9290..56e8f73 100644
--- a/cachebox/utils.py
+++ b/python/cachebox/utils.py
@@ -1,7 +1,6 @@
from ._cachebox import BaseCacheImpl, FIFOCache
from collections import namedtuple, defaultdict
import functools
-import warnings
import asyncio
import _thread
import inspect
@@ -13,16 +12,24 @@
DT = typing.TypeVar("DT")
-class Frozen(BaseCacheImpl, typing.Generic[KT, VT]):
+class Frozen(BaseCacheImpl, typing.Generic[KT, VT]): # pragma: no cover
+ """
+ A wrapper class that prevents modifications to an underlying cache implementation.
+
+ This class provides a read-only view of a cache, optionally allowing silent
+ suppression of modification attempts instead of raising exceptions.
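+
+    Example (an illustrative sketch)::
+
+        cache = FIFOCache(10, {1: 1})
+        frozen = Frozen(cache)
+        frozen[2] = 2  # raises TypeError("This cache is frozen.")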
+ """
+
__slots__ = ("__cache", "ignore")
def __init__(self, cls: BaseCacheImpl[KT, VT], ignore: bool = False) -> None:
"""
- **This is not a cache.** this class can freeze your caches and prevents changes.
+ Initialize a frozen cache wrapper.
- :param cls: your cache
-
- :param ignore: If False, will raise TypeError if anyone try to change cache. will do nothing otherwise.
+ :param cls: The underlying cache implementation to be frozen
+ :type cls: BaseCacheImpl[KT, VT]
+ :param ignore: If True, silently ignores modification attempts; if False, raises TypeError when modification is attempted
+ :type ignore: bool, optional
"""
assert isinstance(cls, BaseCacheImpl)
assert type(cls) is not Frozen
@@ -131,7 +138,10 @@ def shrink_to_fit(self) -> None:
raise TypeError("This cache is frozen.")
def update(
- self, iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]]
+ self,
+ iterable: typing.Union[typing.Iterable[typing.Tuple[KT, VT]], typing.Dict[KT, VT]],
+ *args,
+ **kwargs,
) -> None:
if self.ignore:
return
@@ -150,7 +160,10 @@ def items(self) -> typing.Iterable[typing.Tuple[KT, VT]]:
class _LockWithCounter:
"""
- A threading/asyncio lock which count the waiters
+ A lock with a counter to track the number of waiters.
+
+ This class provides a lock mechanism that supports both synchronous and asynchronous contexts,
+ with the ability to track the number of threads or coroutines waiting to acquire the lock.
"""
__slots__ = ("lock", "waiters")
@@ -189,6 +202,17 @@ def _copy_if_need(obj, tocopy=(dict, list, set), level: int = 1):
def make_key(args: tuple, kwds: dict, fasttype=(int, str)):
+ """
+ Create a hashable key from function arguments for caching purposes.
+
+ Args:
+ args (tuple): Positional arguments to be used in key generation.
+ kwds (dict): Keyword arguments to be used in key generation.
+ fasttype (tuple, optional): Types that can be directly used as keys. Defaults to (int, str).
+
+ Returns:
+ A hashable key representing the function arguments, optimized for simple single-argument cases.
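+
+    Example (illustrative; shows the single-argument fast path)::
+
+        make_key((1,), {})          # -> 1 (int is in fasttype, returned directly)
+        make_key((1, 2), {"x": 3})  # -> a tuple combining args and kwargs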
+ """
key = args
if kwds:
key += (object,)
@@ -202,10 +226,30 @@ def make_key(args: tuple, kwds: dict, fasttype=(int, str)):
def make_hash_key(args: tuple, kwds: dict):
+ """
+    Create a hash value from function arguments for caching purposes.
+
+ Args:
+ args (tuple): Positional arguments to be used in key generation.
+ kwds (dict): Keyword arguments to be used in key generation.
+
+ Returns:
+ int: A hash value representing the function arguments.
+ """
return hash(make_key(args, kwds))
def make_typed_key(args: tuple, kwds: dict):
+ """
+ Create a hashable key from function arguments that includes type information.
+
+ Args:
+ args (tuple): Positional arguments to be used in key generation.
+ kwds (dict): Keyword arguments to be used in key generation.
+
+ Returns:
+ A hashable key representing the function arguments, including the types of the arguments.
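+
+    Example (illustrative)::
+
+        # unlike make_key, equal values of different types yield distinct keys
+        assert make_typed_key((1,), {}) != make_typed_key((1.0,), {})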
+ """
key = make_key(args, kwds, fasttype=())
key += tuple(type(v) for v in args) # type: ignore
@@ -215,7 +259,7 @@ def make_typed_key(args: tuple, kwds: dict):
return key
-CacheInfo = namedtuple("CacheInfo", ["hits", "misses", "maxsize", "length", "cachememory"])
+CacheInfo = namedtuple("CacheInfo", ["hits", "misses", "maxsize", "length", "memory"])
EVENT_MISS = 1
EVENT_HIT = 2
@@ -402,27 +446,21 @@ def cached(
clear_reuse: bool = False,
callback: typing.Optional[typing.Callable[[int, typing.Any, typing.Any], typing.Any]] = None,
copy_level: int = 1,
- always_copy: typing.Optional[bool] = None,
):
"""
- Decorator to wrap a function with a memoizing callable that saves results in a cache.
-
- :param cache: Specifies a cache that handles and stores the results. if `None` or `dict`, `FIFOCache` will be used.
-
- :param key_maker: Specifies a function that will be called with the same positional and keyword
- arguments as the wrapped function itself, and which has to return a suitable
- cache key (must be hashable).
+    Decorator that wraps a function and memoizes its results in a cache.
- :param clear_reuse: The wrapped function has a function named `clear_cache` that uses `cache.clear`
- method to clear the cache. This parameter will be passed to cache's `clear` method.
+ Wraps a function to automatically cache and retrieve its results based on input parameters.
- :param callback: Every time the `cache` is used, callback is also called.
- The callback arguments are: event number (see `EVENT_MISS` or `EVENT_HIT` variables), key, and then result.
+ Args:
+ cache (BaseCacheImpl, dict, optional): Cache implementation to store results. Defaults to FIFOCache.
+ key_maker (Callable, optional): Function to generate cache keys from function arguments. Defaults to make_key.
+        clear_reuse (bool, optional): Passed to the cache's `clear` method when the wrapped function's `cache_clear` is called; if True, allocated memory is kept for reuse. Defaults to False.
+        callback (Callable, optional): Called on every cache event with (event, key, result), where event is EVENT_HIT or EVENT_MISS. Defaults to None.
+        copy_level (int, optional): Which results are copied before being returned: 0 never copies, 1 copies `dict`, `list`, and `set` results, 2 always copies. Defaults to 1.
- :param copy_level: The wrapped function always copies the result of your function and then returns it.
- This parameter specifies that the wrapped function has to copy which type of results.
- `0` means "never copy", `1` means "only copy `dict`, `list`, and `set` results" and
- `2` means "always copy the results".
+ Returns:
+ Callable: Decorated function with caching capabilities.
Example::
@@ -435,8 +473,6 @@ def sum_as_string(a, b):
assert len(sum_as_string.cache) == 1
sum_as_string.cache_clear()
assert len(sum_as_string.cache) == 0
-
- See more: [documentation](https://github.com/awolverp/cachebox#function-cached)
"""
if cache is None:
cache = FIFOCache(0)
@@ -447,14 +483,6 @@ def sum_as_string(a, b):
if not isinstance(cache, BaseCacheImpl):
raise TypeError("we expected cachebox caches, got %r" % (cache,))
- if always_copy is not None:
- warnings.warn(
- "'always_copy' parameter is deprecated and will be removed in future; use 'copy_level' instead",
- category=DeprecationWarning,
- )
- if always_copy is True:
- copy_level = 2
-
def decorator(func):
if inspect.iscoroutinefunction(func):
wrapper = _async_cached_wrapper(
@@ -476,10 +504,21 @@ def cachedmethod(
clear_reuse: bool = False,
callback: typing.Optional[typing.Callable[[int, typing.Any, typing.Any], typing.Any]] = None,
copy_level: int = 1,
- always_copy: typing.Optional[bool] = None,
):
"""
- this is excatly works like `cached()`, but ignores `self` parameters in hashing and key making.
+    Decorator that wraps a method and memoizes its results in a cache.
+
+    Similar to `cached()`, but ignores the `self` parameter when generating cache keys.
+
+ Args:
+ cache (BaseCacheImpl, dict, optional): Cache implementation to store results. Defaults to FIFOCache.
+ key_maker (Callable, optional): Function to generate cache keys from function arguments. Defaults to make_key.
+        clear_reuse (bool, optional): Passed to the cache's `clear` method when the wrapped method's `cache_clear` is called; if True, allocated memory is kept for reuse. Defaults to False.
+        callback (Callable, optional): Called on every cache event with (event, key, result), where event is EVENT_HIT or EVENT_MISS. Defaults to None.
+        copy_level (int, optional): Which results are copied before being returned: 0 never copies, 1 copies `dict`, `list`, and `set` results, 2 always copies. Defaults to 1.
+
+ Returns:
+ Callable: Decorated method with method-specific caching capabilities.
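+
+    Example (an illustrative sketch; assumes `LRUCache` is imported from cachebox)::
+
+        class API:
+            @cachedmethod(LRUCache(128))
+            def get_user(self, user_id: int):
+                ...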
"""
if cache is None:
cache = FIFOCache(0)
@@ -490,14 +529,6 @@ def cachedmethod(
if not isinstance(cache, BaseCacheImpl):
raise TypeError("we expected cachebox caches, got %r" % (cache,))
- if always_copy is not None:
- warnings.warn(
- "'always_copy' parameter is deprecated and will be removed in future; use 'copy_level' instead",
- category=DeprecationWarning,
- )
- if always_copy is True:
- copy_level = 2
-
def decorator(func):
if inspect.iscoroutinefunction(func):
wrapper = _async_cached_wrapper(
diff --git a/tests/__init__.py b/python/tests/__init__.py
similarity index 100%
rename from tests/__init__.py
rename to python/tests/__init__.py
diff --git a/tests/mixin.py b/python/tests/mixin.py
similarity index 84%
rename from tests/mixin.py
rename to python/tests/mixin.py
index 3e5a80f..cc45177 100644
--- a/tests/mixin.py
+++ b/python/tests/mixin.py
@@ -1,4 +1,4 @@
-from cachebox import BaseCacheImpl, LRUCache, LFUCache
+from cachebox import BaseCacheImpl, TTLCache
import dataclasses
import pytest
import typing
@@ -26,7 +26,7 @@ def __hash__(self) -> int:
return self.val
-def getsizeof(obj, use_sys=True):
+def getsizeof(obj, use_sys=True): # pragma: no cover
try:
if use_sys:
return sys.getsizeof(obj)
@@ -36,12 +36,11 @@ def getsizeof(obj, use_sys=True):
return len(obj)
-class _TestMixin:
+class _TestMixin: # pragma: no cover
CACHE: typing.Type[BaseCacheImpl]
KWARGS: dict = {}
NO_POLICY: bool = False
- ITERATOR_CLASS: typing.Optional[type] = None
def test__new__(self):
cache = self.CACHE(10, **self.KWARGS, capacity=8)
@@ -60,10 +59,6 @@ def test__new__(self):
assert cache.maxsize == sys.maxsize
assert 20 > cache.capacity() >= 8
- cache = self.CACHE(0, **self.KWARGS, capacity=0)
- assert cache.maxsize == sys.maxsize
- assert 2 >= cache.capacity() >= 0 # This is depends on platform
-
def test_overflow(self):
if not self.NO_POLICY:
return
@@ -80,7 +75,7 @@ def test___len__(self):
cache = self.CACHE(10, **self.KWARGS, capacity=10)
assert len(cache) == 0
- assert cache.is_empty()
+ assert cache.is_empty() ^ bool(cache)
cache[0] = 0
assert len(cache) == 1
@@ -100,25 +95,6 @@ def test___len__(self):
assert len(cache) == 10
assert cache.is_full()
- def test___sizeof__(self):
- cache = self.CACHE(10, **self.KWARGS, capacity=10)
-
- # all classes have to implement __sizeof__
- # __sizeof__ returns exactly allocated memory size by cache
- # but sys.getsizeof add also garbage collector overhead to that, so sometimes
- # sys.getsizeof is greater than __sizeof__
- getsizeof(cache, False)
-
- def test___bool__(self):
- cache = self.CACHE(1, **self.KWARGS, capacity=1)
-
- if cache:
- pytest.fail("bool(cache) returns invalid response")
-
- cache[1] = 1
- if not cache:
- pytest.fail("not bool(cache) returns invalid response")
-
def test___contains__(self):
cache = self.CACHE(1, **self.KWARGS, capacity=1)
@@ -146,15 +122,20 @@ def test___setitem__(self):
del cache[2]
del cache[3]
+ with pytest.raises(KeyError):
+ del cache["error"]
+
cache[0]
with pytest.raises(KeyError):
cache[2]
def test___repr__(self):
- cache = self.CACHE(2, **self.KWARGS, capacity=2)
+ cache = self.CACHE(1000, **self.KWARGS, capacity=2)
+ assert repr(cache).startswith(self.CACHE.__module__ + "." + self.CACHE.__name__)
+
+ cache.update((i, i) for i in range(1000))
assert str(cache) == repr(cache)
- assert repr(cache).startswith(self.CACHE.__name__)
def test_insert(self):
cache = self.CACHE(5, **self.KWARGS, capacity=5)
@@ -225,8 +206,8 @@ def test_clear(self):
try:
assert getsizeof(obj, False) >= cap
except AssertionError as e:
- if not isinstance(obj, (LRUCache, LFUCache)):
- raise e
+                raise e
obj[1] = 1
obj[2] = 2
@@ -240,8 +221,8 @@ def test_clear(self):
try:
assert cap != getsizeof(obj, False)
except AssertionError as e:
- if not isinstance(obj, (LRUCache, LFUCache)):
- raise e
+                raise e
def test_update(self):
obj = self.CACHE(2, **self.KWARGS, capacity=2)
@@ -299,9 +280,6 @@ def test_eq_implemetation(self):
def test_iterators(self):
obj = self.CACHE(100, **self.KWARGS, capacity=100)
- if self.ITERATOR_CLASS:
- assert isinstance(iter(obj), self.ITERATOR_CLASS)
-
for i in range(6):
obj[i] = i * 2
@@ -335,8 +313,12 @@ def test_iterators(self):
for key, value in obj.items():
assert obj[key] == value
- for key, value in obj.items():
- obj[key] = value * 2
+ try:
+ for key, value in obj.items():
+ obj[key] = value * 2
+ except RuntimeError:
+ if not isinstance(obj, TTLCache):
+ raise
with pytest.raises(RuntimeError):
for key, value in obj.items():
@@ -345,16 +327,16 @@ def test_iterators(self):
def test___eq__(self):
cache = self.CACHE(100, **self.KWARGS, capacity=100)
- with pytest.raises(NotImplementedError):
+ with pytest.raises(TypeError):
cache > cache
- with pytest.raises(NotImplementedError):
+ with pytest.raises(TypeError):
cache < cache
- with pytest.raises(NotImplementedError):
+ with pytest.raises(TypeError):
cache >= cache
- with pytest.raises(NotImplementedError):
+ with pytest.raises(TypeError):
cache <= cache
assert cache == cache
@@ -380,10 +362,6 @@ def test___eq__(self):
assert not cache == c2
assert c2 != cache
- def test_generic(self):
- obj: self.CACHE[int, int] = self.CACHE(maxsize=0, **self.KWARGS)
- _ = obj
-
def _test_pickle(self, check_order: typing.Callable):
import pickle
import tempfile
@@ -418,7 +396,7 @@ def _test_pickle(self, check_order: typing.Callable):
c1[9]
c2 = pickle.loads(pickle.dumps(c1))
- assert c1 == c2
+ assert c1 == c2, f"{c1} - {c2}"
assert c1.capacity() == c2.capacity()
check_order(c1, c2)
@@ -453,3 +431,32 @@ def _test_pickle(self, check_order: typing.Callable):
assert c1 == c2
assert c1.capacity() == c2.capacity()
check_order(c1, c2)
+
+ def test_copy(self):
+ import copy
+
+ # shallow copy
+ c1 = self.CACHE(maxsize=0, **self.KWARGS)
+ c1.insert('dict', {})
+ c2 = c1.copy()
+
+ assert c2 == c1
+ c2['dict'][1] = 1
+
+ assert c1['dict'][1] == 1
+
+ c2.insert(1, 1)
+ assert 1 not in c1
+
+ # deepcopy
+ c1 = self.CACHE(maxsize=0, **self.KWARGS)
+ c1.insert('dict', {})
+ c2 = copy.deepcopy(c1)
+
+ assert c2 == c1
+ c2['dict'][1] = 1
+
+ assert 1 not in c1['dict']
+
+ c2.insert(1, 1)
+ assert 1 not in c1
diff --git a/tests/test_caches.py b/python/tests/test_caches.py
similarity index 86%
rename from tests/test_caches.py
rename to python/tests/test_caches.py
index f374c8a..801cc69 100644
--- a/tests/test_caches.py
+++ b/python/tests/test_caches.py
@@ -1,47 +1,21 @@
from cachebox import (
- BaseCacheImpl,
Cache,
FIFOCache,
RRCache,
- TTLCache,
LRUCache,
LFUCache,
+ TTLCache,
VTTLCache,
- cache_iterator,
- fifocache_iterator,
- ttlcache_iterator,
- lrucache_iterator,
- lfucache_iterator,
)
-
+from datetime import timedelta
import pytest
-import time
-
from .mixin import _TestMixin
-
-
-def test___new__():
- with pytest.raises(NotImplementedError):
- BaseCacheImpl()
-
-
-def test_subclass():
- class _TestSubclass(BaseCacheImpl):
- def __init__(self) -> None:
- self.a = 1
-
- def inc(self, x: int):
- self.a += x
-
- t = _TestSubclass()
- t.inc(10)
- assert t.a == 11
+import time
class TestCache(_TestMixin):
CACHE = Cache
NO_POLICY = True
- ITERATOR_CLASS = cache_iterator
def test_pickle(self):
self._test_pickle(lambda c1, c2: None)
@@ -49,7 +23,6 @@ def test_pickle(self):
class TestFIFOCache(_TestMixin):
CACHE = FIFOCache
- ITERATOR_CLASS = fifocache_iterator
def test_policy(self):
cache = FIFOCache(5)
@@ -122,20 +95,193 @@ def test_first_last(self):
assert obj.first() == 1
assert obj.last() == 10
+ assert obj.first(-1) == obj.last()
+ assert obj.first(-10000) is None
class TestRRCache(_TestMixin):
CACHE = RRCache
- ITERATOR_CLASS = cache_iterator
+
+ def test_popitem(self):
+ obj = RRCache(3)
+ with pytest.raises(KeyError):
+ obj.popitem()
+ with pytest.raises(KeyError):
+ obj.random_key()
+
+ obj[1] = 1
+ assert obj.random_key() == 1
+ assert obj.popitem() == (1, 1)
def test_pickle(self):
self._test_pickle(lambda c1, c2: None)
+class TestLRUCache(_TestMixin):
+ CACHE = LRUCache
+
+ def test_policy(self):
+ obj = self.CACHE(3)
+
+ obj[1] = 1
+ obj[2] = 2
+ obj[3] = 3
+
+ assert (1, 1) == obj.popitem()
+
+ obj[1] = 1
+ obj[2]
+
+ assert (3, 3) == obj.popitem()
+
+ obj[4] = 4
+ assert 1 == obj.get(1)
+
+ obj[5] = 5
+ assert 2 not in obj
+
+ def test_ordered_iterators(self):
+ obj = self.CACHE(20, **self.KWARGS, capacity=20)
+
+ for i in range(6):
+ obj[i] = i * 2
+
+ obj[1]
+ obj[5]
+ obj[3] = 7
+
+ k = [0, 2, 4, 1, 5, 3]
+ v = [0, 4, 8, 2, 10, 7]
+ assert k == list(obj.keys())
+ assert v == list(obj.values())
+ assert list(zip(k, v)) == list(obj.items())
+
+ def test_recently_used_funcs(self):
+ obj = LRUCache(10)
+
+ for i in range(6):
+ obj[i] = i * 2
+
+ obj[1]
+ obj[5]
+ obj[3] = 7
+ obj.peek(4)
+
+ assert obj.peek(6) is None
+
+ assert obj.most_recently_used() == 3
+ assert obj.least_recently_used() == 0
+
+ def test_pickle(self):
+ def inner(c1, c2):
+ assert list(c1.items()) == list(c2.items())
+
+ self._test_pickle(inner)
+
+
+class TestLFUCache(_TestMixin):
+ CACHE = LFUCache
+
+ def test_policy(self):
+ obj = self.CACHE(5, {i: i for i in range(5)})
+
+ for i in range(5):
+ obj[i] = i
+
+ for i in range(10):
+ assert 0 == obj[0]
+ for i in range(7):
+ assert 1 == obj[1]
+ for i in range(3):
+ assert 2 == obj[2]
+ for i in range(4):
+ assert 3 == obj[3]
+ for i in range(6):
+ assert 4 == obj[4]
+
+ assert (2, 2) == obj.popitem()
+ assert (3, 3) == obj.popitem()
+
+ for i in range(10):
+ assert 4 == obj.get(4)
+
+ assert (1, 1) == obj.popitem()
+
+ assert 2 == len(obj)
+ obj.clear()
+
+ for i in range(5):
+ obj[i] = i
+
+ assert [0, 1, 2, 3, 4] == list(obj.keys())
+
+ for i in range(10):
+ obj[0] += 1
+ for i in range(7):
+ obj[1] += 1
+ for i in range(3):
+ obj[2] += 1
+ for i in range(4):
+ obj[3] += 1
+ for i in range(6):
+ obj[4] += 1
+
+ obj[5] = 4
+ assert [5, 3, 4, 1, 0] == list(obj.keys())
+
+ def test_items_with_frequency(self):
+        # no need to test items_with_frequency exhaustively;
+        # its iteration behavior is already covered by test_iterators
+ obj = LFUCache(10, {1: 2, 3: 4})
+ for key, val, freq in obj.items_with_frequency():
+ assert key in obj
+ assert val == obj[key]
+ assert isinstance(freq, int)
+
+ def test_least_frequently_used(self):
+ obj = LFUCache(10)
+
+ for i in range(5):
+ obj[i] = i * 2
+
+ for i in range(10):
+ obj[0] += 1
+ for i in range(7):
+ obj[1] += 1
+ for i in range(3):
+ obj[2] += 1
+ for i in range(4):
+ obj[3] += 1
+ for i in range(6):
+ obj[4] += 1
+
+ assert obj.least_frequently_used() == 2
+ assert obj.least_frequently_used(1) == 3
+ assert obj.least_frequently_used(4) == 0
+ assert obj.least_frequently_used(5) is None
+ assert obj.least_frequently_used(5) is None
+ assert obj.least_frequently_used(-len(obj)) == obj.least_frequently_used()
+ assert obj.least_frequently_used(-1000) is None
+
+ def test_pickle(self):
+ def inner(c1, c2):
+ assert list(c1.items()) == list(c2.items())
+
+ self._test_pickle(inner)
+
+
class TestTTLCache(_TestMixin):
CACHE = TTLCache
KWARGS = {"ttl": 10}
- ITERATOR_CLASS = ttlcache_iterator
+
+ def test__new__(self):
+ super().test__new__()
+
+ cache = TTLCache(0, timedelta(minutes=2, seconds=20))
+ assert cache.ttl == (2 * 60) + 20
+
+ with pytest.raises(ValueError):
+ TTLCache(0, -10)
def test_policy(self):
obj = self.CACHE(2, 0.5)
@@ -284,149 +430,14 @@ def test_popitem_with_expire(self):
with pytest.raises(KeyError):
obj.popitem_with_expire()
-
-class TestLRUCache(_TestMixin):
- CACHE = LRUCache
- ITERATOR_CLASS = lrucache_iterator
-
- def test_policy(self):
- obj = self.CACHE(3)
-
- obj[1] = 1
- obj[2] = 2
- obj[3] = 3
-
- assert (1, 1) == obj.popitem()
-
- obj[1] = 1
- obj[2]
-
- assert (3, 3) == obj.popitem()
-
- obj[4] = 4
- assert 1 == obj.get(1)
-
- obj[5] = 5
- assert 2 not in obj
-
- def test_ordered_iterators(self):
- obj = self.CACHE(20, **self.KWARGS, capacity=20)
-
- for i in range(6):
- obj[i] = i * 2
-
- obj[1]
- obj[5]
- obj[3] = 7
-
- k = [0, 2, 4, 1, 5, 3]
- v = [0, 4, 8, 2, 10, 7]
- assert k == list(obj.keys())
- assert v == list(obj.values())
- assert list(zip(k, v)) == list(obj.items())
-
- def test_recently_used_funcs(self):
- obj = LRUCache(10)
-
- for i in range(6):
- obj[i] = i * 2
-
- obj[1]
- obj[5]
- obj[3] = 7
- obj.peek(4)
-
- assert obj.most_recently_used() == 3
- assert obj.least_recently_used() == 0
- assert obj.least_recently_used(1) == 2
- assert obj.least_recently_used(5) == 3
- assert obj.least_recently_used(6) is None
-
- def test_pickle(self):
- def inner(c1, c2):
- assert list(c1.items()) == list(c2.items())
-
- self._test_pickle(inner)
-
-
-class TestLFUCache(_TestMixin):
- CACHE = LFUCache
- ITERATOR_CLASS = lfucache_iterator
-
- def test_policy(self):
- obj = self.CACHE(5, {i: i for i in range(5)})
-
- for i in range(5):
- obj[i] = i
-
- for i in range(10):
- assert 0 == obj[0]
- for i in range(7):
- assert 1 == obj[1]
- for i in range(3):
- assert 2 == obj[2]
- for i in range(4):
- assert 3 == obj[3]
- for i in range(6):
- assert 4 == obj[4]
-
- assert (2, 2) == obj.popitem()
- assert (3, 3) == obj.popitem()
-
- for i in range(10):
- assert 4 == obj.get(4)
-
- assert (1, 1) == obj.popitem()
-
- assert 2 == len(obj)
- obj.clear()
-
- for i in range(5):
- obj[i] = i
-
- assert [0, 1, 2, 3, 4] == list(obj.keys())
-
- for i in range(10):
- obj[0] += 1
- for i in range(7):
- obj[1] += 1
- for i in range(3):
- obj[2] += 1
- for i in range(4):
- obj[3] += 1
- for i in range(6):
- obj[4] += 1
-
- obj[5] = 4
- assert [5, 3, 4, 1, 0] == list(obj.keys())
-
- def test_least_frequently_used(self):
- obj = LFUCache(10)
-
- for i in range(5):
- obj[i] = i * 2
-
- for i in range(10):
- obj[0] += 1
- for i in range(7):
- obj[1] += 1
- for i in range(3):
- obj[2] += 1
- for i in range(4):
- obj[3] += 1
- for i in range(6):
- obj[4] += 1
-
- assert obj.least_frequently_used() == 2
- assert obj.least_frequently_used(1) == 3
- assert obj.least_frequently_used(4) == 0
- assert obj.least_frequently_used(5) is None
-
- def test_pickle(self):
- def inner(c1, c2):
- assert list(c1.items()) == list(c2.items())
-
- self._test_pickle(inner)
+ def test_items_with_expire(self):
+        # no need to test items_with_expire exhaustively;
+        # its iteration behavior is already covered by test_iterators
+ obj = TTLCache(10, 3, {1: 2, 3: 4})
+ for key, val, ttl in obj.items_with_expire():
+ assert key in obj
+ assert val == obj[key]
+ assert isinstance(ttl, float)
class TestVTTLCache(_TestMixin):
@@ -569,5 +580,14 @@ def inner(c1, c2):
c2 = pickle.loads(pickle.dumps(c1))
assert len(c2) == len(c1)
- assert c1.capacity() == c2.capacity()
+ assert abs(c2.capacity() - c1.capacity()) < 2
inner(c1, c2)
+
+ def test_items_with_expire(self):
+        # no need to test items_with_expire exhaustively;
+        # its iteration behavior is already covered by test_iterators
+ obj = VTTLCache(10, {1: 2, 3: 4}, ttl=10)
+ for key, val, ttl in obj.items_with_expire():
+ assert key in obj
+ assert val == obj[key]
+ assert isinstance(ttl, float)
diff --git a/tests/test_concurrency.py b/python/tests/test_concurrency.py
similarity index 100%
rename from tests/test_concurrency.py
rename to python/tests/test_concurrency.py
diff --git a/tests/test_utils.py b/python/tests/test_utils.py
similarity index 99%
rename from tests/test_utils.py
rename to python/tests/test_utils.py
index a6ba7aa..ffe2d0f 100644
--- a/tests/test_utils.py
+++ b/python/tests/test_utils.py
@@ -35,6 +35,9 @@ def test_frozen():
assert len(f) == 9
assert len(f) == len(cache)
+ f = Frozen(cache, ignore=True)
+ f.popitem()
+
def test_cached():
obj = LRUCache(3) # type: LRUCache[int, int]
diff --git a/src/bridge/baseimpl.rs b/src/bridge/baseimpl.rs
deleted file mode 100644
index 134cb78..0000000
--- a/src/bridge/baseimpl.rs
+++ /dev/null
@@ -1,41 +0,0 @@
-//! implement [`BaseCacheImpl`], the base class of all classes.
-
-use pyo3::types::PyTypeMethods;
-
-/// This is the base class of all cache classes such as Cache, FIFOCache, ...
-///
-/// Do not try to call its constructor, this is only for type-hint.
-#[pyo3::pyclass(module = "cachebox._cachebox", subclass, frozen)]
-pub struct BaseCacheImpl {}
-
-#[pyo3::pymethods]
-impl BaseCacheImpl {
- #[new]
- #[pyo3(signature = (*args, **kwargs))]
- #[classmethod]
- #[allow(unused_variables)]
- pub fn __new__(
- cls: &pyo3::Bound<'_, pyo3::types::PyType>,
- args: &pyo3::Bound<'_, pyo3::PyAny>,
- kwargs: Option<&pyo3::Bound<'_, pyo3::PyAny>>,
-    ) -> pyo3::PyResult<Self> {
- let size = unsafe { pyo3::ffi::PyTuple_Size(cls.mro().as_ptr()) };
-
- // This means BaseCacheImpl is used as subclass
- // So we shouldn't raise NotImplementedError
- if size > 2 {
- Ok(Self {})
- } else {
- Err(err!(pyo3::exceptions::PyNotImplementedError, "do not call this constructor, you can subclass this implementation or use other classes."))
- }
- }
-
- #[allow(unused_variables)]
- #[classmethod]
- pub fn __class_getitem__(
- cls: &pyo3::Bound<'_, pyo3::types::PyType>,
- args: pyo3::PyObject,
- ) -> pyo3::PyObject {
- cls.clone().into()
- }
-}
diff --git a/src/bridge/cache.rs b/src/bridge/cache.rs
index cbe7390..a066567 100644
--- a/src/bridge/cache.rs
+++ b/src/bridge/cache.rs
@@ -1,484 +1,283 @@
-//! implement Cache, our simple cache without any algorithms and policies
-
-use crate::hashedkey::HashedKey;
-use crate::util::_KeepForIter;
-
-/// A simple cache that has no algorithm; this is only a hashmap.
-///
-/// [`Cache`] vs `dict`:
-/// - it is thread-safe and unordered, while `dict` isn't thread-safe and ordered (Python 3.6+).
-/// - it uses very lower memory than `dict`.
-/// - it supports useful and new methods for managing memory, while `dict` does not.
-/// - it does not support `popitem`, while `dict` does.
-/// - You can limit the size of [`Cache`], but you cannot for `dict`.
-#[pyo3::pyclass(module="cachebox._cachebox", extends=crate::bridge::baseimpl::BaseCacheImpl, frozen)]
+use crate::common::Entry;
+use crate::common::ObservedIterator;
+use crate::common::PreHashObject;
+
+#[pyo3::pyclass(module = "cachebox._core", frozen)]
pub struct Cache {
-    raw: crate::mutex::Mutex<crate::internal::NoPolicy>,
+    raw: crate::mutex::Mutex<crate::policies::nopolicy::NoPolicy>,
+}
+
+#[allow(non_camel_case_types)]
+#[pyo3::pyclass(module = "cachebox._core")]
+pub struct cache_items {
+ pub ptr: ObservedIterator,
+ pub iter: crate::mutex::Mutex>,
}
#[pyo3::pymethods]
impl Cache {
- /// A simple cache that has no algorithm; this is only a hashmap.
- ///
- /// By maxsize param, you can specify the limit size of the cache ( zero means infinity ); this is unchangable.
- ///
- /// By iterable param, you can create cache from a dict or an iterable.
- ///
- /// If capacity param is given, cache attempts to allocate a new hash table with at
- /// least enough capacity for inserting the given number of elements without reallocating.
#[new]
- #[pyo3(signature=(maxsize, iterable=None, *, capacity=0))]
- pub fn __new__(
- py: pyo3::Python<'_>,
- maxsize: usize,
-        iterable: Option<pyo3::PyObject>,
- capacity: usize,
- ) -> pyo3::PyResult<(Self, crate::bridge::baseimpl::BaseCacheImpl)> {
- let mut raw = crate::internal::NoPolicy::new(maxsize, capacity)?;
- if iterable.is_some() {
- raw.update(py, unsafe { iterable.unwrap_unchecked() })?;
- }
+ #[pyo3(signature=(maxsize, *, capacity=0))]
+    fn __new__(maxsize: usize, capacity: usize) -> pyo3::PyResult<Self> {
+ let raw = crate::policies::nopolicy::NoPolicy::new(maxsize, capacity)?;
let self_ = Self {
raw: crate::mutex::Mutex::new(raw),
};
- Ok((self_, crate::bridge::baseimpl::BaseCacheImpl {}))
+ Ok(self_)
}
- /// Returns the cache maxsize
- #[getter]
- pub fn maxsize(&self) -> usize {
- let lock = self.raw.lock();
- lock.maxsize.get()
+ fn _state(&self) -> usize {
+ self.raw.lock().observed.get() as usize
}
- pub fn _state(&self) -> usize {
- let lock = self.raw.lock();
- lock.state.get()
+ fn maxsize(&self) -> usize {
+ self.raw.lock().maxsize()
}
- /// Returns the number of elements in the table - len(self)
- pub fn __len__(&self) -> usize {
- let lock = self.raw.lock();
- lock.table.len()
+ fn capacity(&self) -> usize {
+ self.raw.lock().capacity()
}
- /// Returns allocated memory size - sys.getsizeof(self)
- pub fn __sizeof__(&self) -> usize {
- let lock = self.raw.lock();
- let cap = lock.table.capacity();
-
-        core::mem::size_of::<Self>() + cap * (crate::HASHEDKEY_SIZE + crate::PYOBJECT_SIZE)
+ fn __len__(&self) -> usize {
+ self.raw.lock().len()
}
- /// Returns true if cache not empty - bool(self)
- pub fn __bool__(&self) -> bool {
+ fn __sizeof__(&self) -> usize {
let lock = self.raw.lock();
- !lock.table.is_empty()
+ lock.capacity()
+            * (std::mem::size_of::<PreHashObject>() + std::mem::size_of::<pyo3::PyObject>())
}
- /// Returns true if the cache have the key present - key in self
-    pub fn __contains__(&self, py: pyo3::Python<'_>, key: pyo3::PyObject) -> pyo3::PyResult<bool> {
- let hk = HashedKey::from_pyobject(py, key)?;
+    fn __contains__(&self, py: pyo3::Python<'_>, key: pyo3::PyObject) -> pyo3::PyResult<bool> {
+ let key = PreHashObject::from_pyobject(py, key)?;
let lock = self.raw.lock();
- Ok(lock.contains_key(&hk))
+
+ match lock.lookup(py, &key)? {
+ Some(_) => Ok(true),
+ None => Ok(false),
+ }
}
- /// Sets self\[key\] to value.
- ///
- /// Note: raises OverflowError if the cache reached the maxsize limit,
- /// because this class does not have any algorithm.
- pub fn __setitem__(
- &self,
- py: pyo3::Python<'_>,
- key: pyo3::PyObject,
- value: pyo3::PyObject,
- ) -> pyo3::PyResult<()> {
- let hk = HashedKey::from_pyobject(py, key)?;
- let mut lock = self.raw.lock();
- lock.insert(hk, value)?;
- Ok(())
+ fn is_empty(&self) -> bool {
+ self.raw.lock().is_empty()
}
- /// Returns self\[key\]
- ///
- /// Note: raises KeyError if key not found.
- pub fn __getitem__(
+ fn is_full(&self) -> bool {
+ self.raw.lock().is_full()
+ }
+
+ fn insert(
&self,
py: pyo3::Python<'_>,
key: pyo3::PyObject,
-    ) -> pyo3::PyResult<pyo3::PyObject> {
- let hk = HashedKey::from_pyobject(py, key)?;
- let lock = self.raw.lock();
-
- match lock.get(&hk) {
- Some(val) => Ok(val.clone_ref(py)),
- None => Err(err!(pyo3::exceptions::PyKeyError, hk.key)),
- }
- }
-
- /// Deletes self[key].
- ///
- /// Note: raises KeyError if key not found.
- pub fn __delitem__(&self, py: pyo3::Python<'_>, key: pyo3::PyObject) -> pyo3::PyResult<()> {
- let hk = HashedKey::from_pyobject(py, key)?;
+ value: pyo3::PyObject,
+ ) -> pyo3::PyResult