
How to set value of first several rows in a Pandas Dataframe for each Group

I am a noob to groupby methods in Pandas and can't seem to get my head wrapped around it. I have data with ~2M records, and my current code would take 4 days to execute due to the inefficient use of `append` inside a loop.

I am analyzing manufacturing data with 2 flags indicating problems with the test specimens. Within each Test_ID group, the first two values of Flag1 and the first three values of Flag2 should be set to False. (Reason: there is not sufficient data to accurately analyze the first few specimens of each group.)

My inefficient attempt (right result, but not fast enough for 2M rows):

import pandas as pd

df = pd.DataFrame({'Test_ID': ['foo', 'foo', 'foo', 'foo',
                               'bar', 'bar', 'bar'],
                   'TEST_Date': ['2020-01-09 09:49:31',
                                 '2020-01-09 12:16:15',
                                 '2020-01-09 12:47:44',
                                 '2020-01-09 14:39:05',
                                 '2020-01-09 17:39:47',
                                 '2020-01-09 20:44:58',
                                 '2020-01-10 18:40:47'],
                   'Flag1': [True, False, True, False, True, False, False],
                   'Flag2': [True, False, False, False, True, False, False],
                   })

# generate a list of unique Test_IDs
Test_IDs = list(df['Test_ID'].unique())

# generate a list of columns in the dataframe
cols = list(df)

# generate a new dataframe with the same columns as the original
df_output = pd.DataFrame(columns=cols)

for i in Test_IDs:
    # split the data into groups, iterate over each group
    df_2 = df[df['Test_ID'] == i].copy()

    # set the first two rows of Flag1 to False for each group
    df_2.iloc[:2, df_2.columns.get_loc('Flag1')] = False

    # set the first three rows of Flag2 to False for each group
    df_2.iloc[:3, df_2.columns.get_loc('Flag2')] = False

    # add the latest group onto the output df (slow: copies df_output every pass)
    df_output = df_output.append(df_2)

print(df_output)

Input:

   Flag1  Flag2            TEST_Date Test_ID
0   True   True  2020-01-09 09:49:31     foo
1  False  False  2020-01-09 12:16:15     foo
2   True  False  2020-01-09 12:47:44     foo
3  False  False  2020-01-09 14:39:05     foo
4   True   True  2020-01-09 17:39:47     bar
5  False  False  2020-01-09 20:44:58     bar
6  False  False  2020-01-10 18:40:47     bar

Output:

   Flag1  Flag2            TEST_Date Test_ID
0  False  False  2020-01-09 09:49:31     foo
1  False  False  2020-01-09 12:16:15     foo
2   True  False  2020-01-09 12:47:44     foo
3  False  False  2020-01-09 14:39:05     foo
4  False  False  2020-01-09 17:39:47     bar
5  False  False  2020-01-09 20:44:58     bar
6  False  False  2020-01-10 18:40:47     bar


Answer

Let's do groupby().cumcount(), which numbers the rows within each group (0, 1, 2, …); a boolean mask on that enumeration then updates every group at once, with no Python-level loop:

# enumeration of rows within each `Test_ID`
enum = df.groupby('Test_ID').cumcount()

# overwrite the Flags
df.loc[enum < 2, 'Flag1'] = False
df.loc[enum < 3, 'Flag2'] = False

Output:

  Test_ID            TEST_Date  Flag1  Flag2
0     foo  2020-01-09 09:49:31  False  False
1     foo  2020-01-09 12:16:15  False  False
2     foo  2020-01-09 12:47:44   True  False
3     foo  2020-01-09 14:39:05  False  False
4     bar  2020-01-09 17:39:47  False  False
5     bar  2020-01-09 20:44:58  False  False
6     bar  2020-01-10 18:40:47  False  False
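The same mask-based idea generalizes if more flag columns are added later. The sketch below (the `first_n` dict and its values are illustrative, not from the answer) computes the per-group enumeration once and reuses it for each column:

```python
import pandas as pd

df = pd.DataFrame({'Test_ID': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar'],
                   'Flag1': [True, False, True, False, True, False, False],
                   'Flag2': [True, False, False, False, True, False, False]})

# number of leading rows per group to force to False, per column
first_n = {'Flag1': 2, 'Flag2': 3}

# enumeration of rows within each Test_ID: 0, 1, 2, ...
enum = df.groupby('Test_ID').cumcount()

for col, n in first_n.items():
    df.loc[enum < n, col] = False

print(df)
```

Because the mask is vectorized, this stays fast on millions of rows regardless of how many Test_ID groups there are.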