Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:
def foo(a=[]):
    a.append(5)
    return a
Python novices would expect this function, called with no parameter, to always return a list with only one element: [5]. The result is instead very different, and very astonishing (for a novice):
>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
A manager of mine once had his first encounter with this feature and called it “a dramatic design flaw” of the language. I replied that the behavior has an underlying explanation, and that it is indeed very puzzling and unexpected if you don’t understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C without breeding bugs?)
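For comparison, here is a minimal sketch of the usual None-sentinel workaround, which simulates binding at call time (the behavior a novice would expect):

def foo(a=None):
    # a fresh list is created on every call, inside the function body
    if a is None:
        a = []
    a.append(5)
    return a

With this version foo() returns [5] on every call, because the mutable list is built at invocation time rather than stored with the function.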
Edit:
Baczek made an interesting example. Together with most of your comments and Utaal’s in particular, I elaborated further:
>>> def a():
...     print("a executed")
...     return []
...
>>>
>>> def b(x=a()):
...     x.append(5)
...     print(x)
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
To me, it seems that the design decision was about where to put the scope of the parameters: inside the function, or “together” with it?
Doing the binding inside the function would mean that x is effectively bound to the specified default when the function is called, not when it is defined, something that would present a deep flaw: the def line would be “hybrid” in the sense that part of the binding (of the function object) would happen at definition, and part (the assignment of default parameters) at function invocation time.
The actual behavior is more consistent: everything on that line gets evaluated when that line is executed, meaning at function definition.
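A quick way to see this is to inspect the function object itself; here is a small sketch, assuming Python 3, where the evaluated defaults are stored in the __defaults__ attribute:

def foo(a=[]):
    a.append(5)
    return a

print(foo.__defaults__)   # ([],) -- the default list exists as soon as foo is defined
foo()
foo()
print(foo.__defaults__)   # ([5, 5],) -- the same list object, mutated by the calls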
Answer
Actually, this is not a design flaw, and it is not because of internals or performance. It comes simply from the fact that functions in Python are first-class objects, and not only a piece of code.
As soon as you think of it this way, it completely makes sense: a function is an object that gets evaluated at its definition; default parameters are a kind of “member data”, and therefore their state may change from one call to the other, exactly as in any other object.
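As an illustration, here is a small sketch of the classic cache-in-a-default idiom, one case where this persistent “member data” is actually what you want:

def fib(n, _cache={}):
    # _cache is created once, at definition time, and shared by all calls
    if n in _cache:
        return _cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _cache[n] = result
    return result

print(fib(30))   # 832040; subsequent calls reuse the values stored in _cache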
In any case, the effbot (Fredrik Lundh) has a very nice explanation of the reasons for this behavior in Default Parameter Values in Python. I found it very clear, and I really suggest reading it for a better understanding of how function objects work.