How to use memory_profiler (python module) with class methods?

I want to profile the time and memory usage of a class method. I didn’t find an out-of-the-box solution for this (are there such modules?), so I decided to use timeit for time profiling and memory_usage from the memory_profiler module for memory profiling.

I ran into a problem profiling methods with memory_profiler. I’ve tried several variants, and none of them worked.

When I try to use partial from functools, I get this error:

File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 126, in memory_usage
  aspec = inspect.getargspec(f)
File "/usr/lib64/python2.7/inspect.py", line 815, in getargspec
  raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x252da48> is not a Python function
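
For reference, here is roughly what the failing call looks like (the class is a simplified stand-in for mine):

import functools
from memory_profiler import memory_usage

class Worker(object):           # simplified example class
    def run(self, n):
        return [0] * n

w = Worker()
# memory_usage inspects the callable with inspect.getargspec,
# which rejects partial objects and raises the TypeError above
mem = memory_usage(functools.partial(w.run, 10 ** 6))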

By the way, exactly the same approach works fine with the timeit function.
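
For comparison, something like this works for timing, since timeit accepts any zero-argument callable:

import timeit
from functools import partial

class Worker(object):           # same simplified class as above
    def run(self, n):
        return [0] * n

w = Worker()
# partial binds the instance and arguments into a no-arg callable
print(timeit.timeit(partial(w.run, 1000), number=1000))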

When I try to use a lambda instead, I get this error:

File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 141, in memory_usage
  ret = parent_conn.recv()
IOError: [Errno 4] Interrupted system call

How can I handle class methods with memory_profiler?

PS: I have memory-profiler 0.26, installed with pip.

UPD: It’s actually a bug. You can check its status here: https://github.com/pythonprofilers/memory_profiler/issues/47
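
For anyone landing here later: on releases where that bug is fixed, memory_usage also accepts a (callable, args, kwargs) tuple, which avoids partial and lambda entirely. A sketch, reusing the simplified Worker class from above:

from memory_profiler import memory_usage

class Worker(object):           # simplified example class
    def run(self, n):
        return [0] * n

w = Worker()
# proc may be a (callable, args, kwargs) tuple; memory_usage samples
# the process's memory while w.run(10 ** 6) executes and returns a list
usage = memory_usage((w.run, (10 ** 6,), {}))
print(max(usage))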


Answer

If you want to see the change in memory allocated to the Python VM, you can use psutil. Here is a simple decorator using psutil that will print the change in resident and virtual memory:

import functools
import os
import psutil


def print_memory(fn):
    @functools.wraps(fn)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        process = psutil.Process(os.getpid())
        start = process.memory_info()   # was get_memory_info() in old psutil
        try:
            return fn(*args, **kwargs)
        finally:
            end = process.memory_info()
            # deltas in resident set size and virtual memory size, in bytes
            print(end.rss - start.rss, end.vms - start.vms)
    return wrapper


@print_memory
def f():
    s = 'a' * 100

f()

In all likelihood, the output will report no change in memory. That is because, for small allocations, CPython can satisfy the request from memory it has already obtained from the OS, so the process footprint does not grow. If you allocate a large array, you will see something different:

import numpy

@print_memory
def f():
    return numpy.zeros((512, 512))   # 512 * 512 float64s, about 2 MB

f()

Here you should see the memory grow by roughly the size of the array.
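
Since the original question was about class methods: the decorator wraps any callable, so it works on methods just as well. A sketch, reusing print_memory and numpy from above (the class is hypothetical):

class Grid(object):             # hypothetical example class
    @print_memory
    def allocate(self, n):
        self.data = numpy.zeros((n, n))

g = Grid()
g.allocate(512)                 # prints the RSS and VMS deltas for this call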

If you want to see how much memory is used by each allocated object, the only tool I know of is heapy (from the guppy package):

In [1]: from guppy import hpy; hp=hpy()

In [2]: h = hp.heap()

In [3]: h
Out[3]: 
Partition of a set of 120931 objects. Total size = 17595552 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  57849  48  6355504  36   6355504  36 str
     1  29117  24  2535608  14   8891112  51 tuple
     2    394   0  1299952   7  10191064  58 dict of module
     3   1476   1  1288416   7  11479480  65 dict (no owner)
     4   7683   6   983424   6  12462904  71 types.CodeType
     5   7560   6   907200   5  13370104  76 function
     6    858   1   770464   4  14140568  80 type
     7    858   1   756336   4  14896904  85 dict of type
     8    272   0   293504   2  15190408  86 dict of class
     9    304   0   215064   1  15405472  88 unicode
<501 more rows. Type e.g. '_.more' to view.>

I have not used it in a long time, so I recommend experimenting and reading the documentation. Note that for an application using a large amount of memory, it can be extremely slow to calculate this information.
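
If I remember the API correctly, one pattern worth trying is hp.setrelheap(), which sets a reference point so that a later hp.heap() only reports objects allocated after that point. That makes it easier to attribute memory to a single call:

from guppy import hpy

hp = hpy()
hp.setrelheap()                 # ignore everything allocated before this point
data = [str(i) for i in range(100000)]
print(hp.heap())                # only objects created since setrelheap()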

User contributions licensed under: CC BY-SA