When answering a recent question I repeated my assumption that one reason for using @staticmethod was to save RAM, since a static method is only ever instantiated once. This assertion can be found fairly easily online (e.g. here), and I don't know where I first encountered it.
My reasoning was based on two assumptions, one of them false: a. that Python instantiates all methods when instantiating a class (which is not the case, as a little thought would have shown, oops), and b. that staticmethods are not instantiated on access, but just called directly. Thus I thought that this code:
import asyncio

class Test:
    async def meth1(self):
        await asyncio.sleep(10)
        return 78

t1 = Test()
t2 = Test()

async def main():
    # schedule each instance's coroutine as a task
    asyncio.create_task(t1.meth1())
    asyncio.create_task(t2.meth1())
    for _ in range(10):
        await asyncio.sleep(2)

asyncio.run(main())
would use more RAM than if I defined the class like this:
class Test:
    @staticmethod
    async def meth1():
        await asyncio.sleep(10)
        return 78
Is this the case? Do staticmethods get instantiated on access? Do classmethods get instantiated on access? I know that t1.meth1 is t2.meth1 will return True in the second case and False in the first, but is that because Python instantiates meth1 the first time and then looks it up the second, or because in both cases it merely looks it up, or because in both cases it gets a copy of the static method which is somehow the same object (I presume not that)? The id of a staticmethod appears not to change, but I'm not sure what my access to it is doing.
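A minimal sketch of the check I have in mind (variable names are illustrative, assuming one of the two Test definitions above is in scope):

t1 = Test()
t2 = Test()

# first definition (no @staticmethod): the comparison is False
print(t1.meth1 is t2.meth1)

# second definition (@staticmethod): the comparison is True and id() stays stable
print(t1.meth1 is t2.meth1)
print(id(t1.meth1), id(Test.meth1))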
Is there any real-world reason to care, if so? I've seen an abundance of staticmethods in MicroPython code where multiple instances exist in asynchronous code at once. I assumed this was for RAM saving, but I suspect I'm wrong. I'd be interested to know if there is any difference between the MicroPython and CPython implementations here.
Edit
Am I correct in thinking that calling t1.meth1() and t2.meth1() will bind the method twice in the first case and once in the second?
Answer
Methods do not get "instantiated"; they get bound – that is a fancy word for "their self/cls parameter is filled in", similar to functools.partial parameter binding. The entire point of staticmethod is that there is no self/cls parameter and thus no binding is needed.
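As a rough illustration (a sketch with made-up names, not what CPython literally does under the hood), binding a regular method is comparable to pre-filling its first argument with functools.partial, while a staticmethod is handed back as the bare function:

from functools import partial

class Demo:
    def regular(self):
        return "bound"

    @staticmethod
    def static():
        return "plain function"

d = Demo()

# Each attribute access builds a fresh bound-method wrapper around Demo.regular,
# so two accesses compare unequal even though they wrap the same function.
print(d.regular is d.regular)              # False
print(d.regular.__func__ is Demo.regular)  # True

# Hand-rolled equivalent of binding: pre-fill self with partial.
bound_by_hand = partial(Demo.regular, d)
print(bound_by_hand())                     # bound

# A staticmethod needs no binding, so lookup returns the function itself.
print(d.static is Demo.static)             # True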
In fact, fetching a staticmethod does nothing at all – it just returns the function unchanged:
>>> class Test:
...     @staticmethod
...     async def meth1():
...         await asyncio.sleep(10)
...         return 78
...
>>> Test.meth1
<function __main__.Test.meth1()>
Since methods are bound on demand, they don't usually exist in their bound form. As such, there is no memory cost to pay for just having methods, and nothing for staticmethod to recoup. Since staticmethod is an actual layer during lookup¹ – even if it does nothing – there is no performance gain either way from (not) using staticmethod.
In [40]: class Test:
    ...:     @staticmethod
    ...:     def s_method():
    ...:         pass
    ...:     def i_method(self):
    ...:         pass
    ...:

In [41]: %timeit Test.s_method
42.1 ns ± 0.576 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

In [42]: %timeit Test.i_method
40.9 ns ± 0.202 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
Note that these timings may vary slightly depending on the implementation and test setup. The takeaway is that both approaches are comparably fast, and that performance is not a reason to choose one over the other.
¹ staticmethod works as a descriptor that runs every time the method is looked up.
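For illustration, here is a rough pure-Python sketch of that descriptor behaviour (simplified and hypothetical; the real staticmethod is implemented in C and does more):

class my_staticmethod:
    """Simplified stand-in for the built-in staticmethod descriptor."""

    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # runs on every lookup, but just hands back the wrapped function
        return self.func

class Demo:
    @my_staticmethod
    def hello():
        return "hi"

print(Demo.hello())                # hi -- callable without any self
print(Demo().hello is Demo.hello)  # True -- the same function either way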