How can I use functools.lru_cache inside classes without leaking memory?
In the following minimal example the foo instance won't be released even though it goes out of scope and has no referrer (other than the lru_cache):
from functools import lru_cache

class BigClass:
    pass

class Foo:
    def __init__(self):
        self.big = BigClass()

    @lru_cache(maxsize=16)
    def cached_method(self, x):
        return x + 5

def fun():
    foo = Foo()
    print(foo.cached_method(10))
    print(foo.cached_method(10))  # use cache
    return 'something'

fun()
But foo, and hence foo.big (a BigClass), are still alive:
import gc; gc.collect()  # collect garbage
len([obj for obj in gc.get_objects() if isinstance(obj, Foo)])  # is 1
That means that Foo/BigClass instances are still residing in memory. Even deleting Foo (del Foo) will not release them.
Why is lru_cache holding on to the instance at all? Doesn't the cache use some hash and not the actual object?
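What I have found so far: the lru_cache wrapper lives on the class and keys every call by its arguments, which include self, so the cache keeps a strong reference to each instance it has seen. Continuing the snippet above, clearing that shared cache does release the instance:

Foo.cached_method.cache_clear()  # drop the cached (foo, 10) entry from the shared cache
gc.collect()
len([obj for obj in gc.get_objects() if isinstance(obj, Foo)])  # now 0 -- the instance is released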
What is the recommended way to use lru_caches inside classes?
I know of two workarounds: use per-instance caches, or make the cache ignore the object (which might lead to wrong results, though). A minimal sketch of the first workaround follows.
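Sketch of the per-instance-cache workaround (the names are only illustrative; the cache is attached to the instance in __init__, so self never becomes part of a cache key):

from functools import lru_cache

class Foo:
    def __init__(self):
        self.big = BigClass()
        # Per-instance cache: the wrapper is stored on the instance and wraps
        # the bound method, so the cache keys only contain x, never self.
        # (This creates a reference cycle self -> wrapper -> bound method -> self,
        # but the cycle collector can break it.)
        self.cached_method = lru_cache(maxsize=16)(self._cached_method)

    def _cached_method(self, x):
        return x + 5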
Answer
This is not the cleanest solution, but it’s entirely transparent to the programmer:
import functools
import weakref

def memoized_method(*lru_args, **lru_kwargs):
    def decorator(func):
        @functools.wraps(func)
        def wrapped_func(self, *args, **kwargs):
            # We're storing the wrapped method inside the instance. If we had
            # a strong reference to self the instance would never die.
            self_weak = weakref.ref(self)

            @functools.wraps(func)
            @functools.lru_cache(*lru_args, **lru_kwargs)
            def cached_method(*args, **kwargs):
                return func(self_weak(), *args, **kwargs)

            setattr(self, func.__name__, cached_method)
            return cached_method(*args, **kwargs)
        return wrapped_func
    return decorator
It takes the exact same parameters as lru_cache, and works exactly the same. However, it never passes self to lru_cache and instead uses a per-instance lru_cache.
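Applied to the Foo class from the question, usage could look like the following sketch (assuming memoized_method is defined as above; the gc check mirrors the one in the question):

import gc

class BigClass:
    pass

class Foo:
    def __init__(self):
        self.big = BigClass()

    @memoized_method(maxsize=16)
    def cached_method(self, x):
        return x + 5

def fun():
    foo = Foo()
    print(foo.cached_method(10))
    print(foo.cached_method(10))  # served from the per-instance cache

fun()
gc.collect()
print(len([obj for obj in gc.get_objects() if isinstance(obj, Foo)]))  # 0 -- the instance was released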