I have a large df (14 × 1,000,000) and I want to subset it. Unsurprisingly, the calculation takes a lot of time, though, and I wonder how to improve the speed.
What I want is to subset, for each Name, the row with the lowest value of Total_time, ignoring zero values and picking only the first row if more than one row has the lowest Total_time. I then want everything appended into a new dataframe, unique.
Is there a general mistake in my code that makes it inefficient?
unique = pd.DataFrame([])
i = 0
for pair in df['Name'].unique():
    i = i + 1
    temp = df[df["Name"] == pair]
    temp2 = temp.loc[df['Total_time'] != 0]
    lowest = temp2['Total_time'].min()
    temp3 = temp2[temp2["Total_time"] == lowest].head(1)
    unique = unique.append(temp3)
    print("finished " + pair + " " + str(i))
Answer
In general, you don't want to iterate over each item.
If you just want the smallest Total_time for each Name:
new_df = df[df["Total_time"] != 0].copy()          # you seem to be throwing away the 0 values
out = new_df.groupby("Name")["Total_time"].min()   # smallest non-zero time per Name
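Note that out is a Series indexed by Name; if you want it back as a regular two-column dataframe, reset_index does that:
out = out.reset_index()   # columns: Name, Total_time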
If you need the rest of the columns:
# idxmin returns the index of the first occurrence of the minimum in each group,
# so ties keep only the first row, as you wanted
new_df.loc[new_df.groupby("Name")["Total_time"].idxmin()]
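If it helps, here is a minimal self-contained sketch on made-up data (reusing the Name and Total_time column names from your question) showing what the idxmin version keeps:
import pandas as pd

df = pd.DataFrame({
    "Name":       ["a", "a", "a", "b", "b"],
    "Total_time": [0, 5, 5, 3, 7],
})

new_df = df[df["Total_time"] != 0].copy()                           # drop the zero rows
unique = new_df.loc[new_df.groupby("Name")["Total_time"].idxmin()]  # first row of each group's minimum
print(unique)
#   Name  Total_time
# 1    a           5
# 3    b           3
# the tie at Total_time == 5 for "a" keeps only the first of the two rows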