Tag: performance

Why is statistics.mean() so slow?

I compared the performance of the statistics module’s mean function with the simple sum(l)/len(l) approach and found the mean function to be very slow for some reason. I used timeit with the two code snippets below to compare them; does anyone know what causes the massive difference in execution speed? I’m using Python 3.5. The code above executes…
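
For reference, a minimal sketch of the kind of timing comparison described (the list contents and repetition counts here are illustrative, not the asker’s exact snippets):

    import timeit

    setup = "import statistics; l = list(range(10000))"

    # statistics.mean() pays for exact Fraction-based summation and type
    # checking for numerical accuracy; sum()/len() stays on fast C-level
    # arithmetic, which accounts for most of the gap.
    print(timeit.timeit("statistics.mean(l)", setup=setup, number=100))
    print(timeit.timeit("sum(l) / len(l)", setup=setup, number=100))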

Why are Python’s arrays slow?

I expected array.array to be faster than lists, as arrays seem to be unboxed. However, I get the following result: … What could be the cause of such a difference? Answer The storage is “unboxed”, but every time you access an element Python has to “box” it (embed it in a regular Python object) in order to do anything with it.
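
A minimal sketch of a timing that exposes the boxing cost, assuming the comparison is over summing the elements (names are illustrative):

    import timeit

    setup = "import array; l = list(range(10000)); a = array.array('i', l)"

    # The list already holds boxed int objects; the array must box each
    # machine-level int on every access, so iterating over it is slower.
    print(timeit.timeit("sum(l)", setup=setup, number=1000))
    print(timeit.timeit("sum(a)", setup=setup, number=1000))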

Python program using feedparser slows over time [closed]

Closed. This question needs to be more focused and is not currently accepting answers. Closed 7 years ago. I have a Python program which runs in a loop, downloading 20k RSS feeds using feedparser and inserting…

Fast arbitrary distribution random sampling (inverse transform sampling)

The random module (http://docs.python.org/2/library/random.html) has several fixed functions to sample from randomly. For example, random.gauss will sample a random point from a normal distribution with given mean and sigma values. I’m looking for a way to extract N random samples within a given interval, using my own distribution, as fast as possible in Python. This is what…
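
As a sketch of the usual fast approach the title names, here is inverse transform sampling on a discretized CDF; it assumes NumPy is available and that the custom distribution is supplied as a density function over the interval (all names are illustrative):

    import numpy as np

    def sample(pdf, lo, hi, n, grid=10000):
        # Discretize the density, build its CDF, and invert it by
        # interpolating uniform draws back onto the x grid.
        x = np.linspace(lo, hi, grid)
        cdf = np.cumsum(pdf(x))
        cdf /= cdf[-1]                  # normalize so cdf[-1] == 1
        return np.interp(np.random.random(n), cdf, x)

    # Example: 100k samples from an (unnormalized) Gaussian on [-4, 4].
    samples = sample(lambda x: np.exp(-x**2 / 2), -4.0, 4.0, 100000)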

Efficient calculation of Fibonacci series

I’m working on a Project Euler problem: the one about the sum of the even Fibonacci numbers. My code: … The problem’s solution can be easily found by printing sum(list2). However, I’m guessing that building list2 is what takes so long. Is there any way to make this faster? Or is it okay even this way…
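
Whatever the original list-building code looked like, the standard fix is to keep a running total instead of materializing a list. A minimal sketch, assuming the usual Project Euler limit of four million:

    def even_fib_sum(limit=4000000):
        # Walk the Fibonacci sequence once, summing the even terms;
        # constant memory, no intermediate list.
        total, a, b = 0, 1, 2
        while b < limit:
            if b % 2 == 0:
                total += b
            a, b = b, a + b
        return total

    print(even_fib_sum())  # 4613732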

Updating a python dictionary while adding to existing keys?

I’m looking for the most efficient and Pythonic (mainly efficient) way to update a dictionary but keep the old values when a key already exists. For example… notice how the key ‘2’ exists in both dictionaries and used to have the values (‘3’, ‘1’), but now it has the values from its key in myDict2 (‘5’, ‘4’)? Is there a…
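
A minimal sketch of one way to merge while combining rather than overwriting, assuming (as the question’s example suggests) that the values are tuples and that “keep the old values” means concatenating old and new; the dict names mirror the question, the contents are hypothetical:

    myDict = {'1': ('a', 'b'), '2': ('3', '1')}
    myDict2 = {'2': ('5', '4'), '3': ('6', '7')}

    # Merge myDict2 into myDict, concatenating tuples on key collisions
    # instead of letting dict.update() overwrite.
    for key, value in myDict2.items():
        myDict[key] = myDict.get(key, ()) + value

    print(myDict['2'])  # ('3', '1', '5', '4')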

Faster alternative to Python’s zipfile module?

Is there a noticeably faster alternative to Python 2.7.4’s zipfile module (with ZIP_DEFLATED) for zipping a large number of files into a single zip file? I had a look at czipfile (https://pypi.python.org/pypi/czipfile/1.0.0), but it appears to be focused on faster decryption (not compression). I routinely have to process a large number of image files (~12,000 files of a combination…
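
One hedged observation, sketched below: image formats such as JPEG are already compressed, so if the files really are images, skipping DEFLATE entirely with the stock zipfile module can be much faster than any faster deflate implementation (file names here are hypothetical; works on Python 2.7 and 3.x):

    import zipfile

    # JPEGs barely deflate, so storing them uncompressed (ZIP_STORED)
    # skips the expensive compression pass for little loss in size.
    with zipfile.ZipFile('archive.zip', 'w', zipfile.ZIP_STORED) as zf:
        for name in ['img_0001.jpg', 'img_0002.jpg']:  # hypothetical paths
            zf.write(name)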

A faster strptime?

I have code which reads vast numbers of dates in ‘YYYY-MM-DD’ format. Parsing all these dates so that it can add one, two, or three days, then writing them back in the same format, is slowing things down quite considerably. Any suggestions on how to speed it up a bit (or a lot)? Answer Python 3.7+: fromisoformat() Since Python 3.7, the datetime…
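
Based on the quoted answer, the Python 3.7+ fast path looks roughly like this (the sample date is illustrative):

    from datetime import date, timedelta

    # fromisoformat() is a fast, special-purpose parser for 'YYYY-MM-DD'
    # (Python 3.7+), far cheaper than the general strptime() machinery.
    d = date.fromisoformat('2019-12-04')
    print((d + timedelta(days=2)).isoformat())  # 2019-12-06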

heavy regex – really time consuming

I have the following regex to detect start and end script tags in an HTML file; in short it will catch: <script “NOT THIS</s” > “NOT THIS</s” </script>. It works, but it needs a really long time to match <script>, even minutes or hours for long strings. The lite version works perfectly even for long strings; however, the extended pattern I…
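
The usual cause of such blow-ups is nested unbounded quantifiers that backtrack exponentially. A minimal sketch of a less backtracking-prone alternative, assuming attribute values never contain a bare ‘>’ character (an approximation, not the asker’s exact pattern):

    import re

    # Bounded attribute section ([^>]*) plus a lazy body: no nested
    # unbounded quantifiers, so the engine cannot backtrack exponentially.
    script_re = re.compile(r'<script\b[^>]*>.*?</script>',
                           re.IGNORECASE | re.DOTALL)

    html = '<script type="text/javascript">var x = 1;</script>'
    print(script_re.findall(html))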
