Create a new list of dictionaries from the index in a DataFrame in Python, the fastest way

I have ~200 million entries in a dictionary index_data:

index_data = {
    3396623046050748: [0, 1],
    3749192045350356: [2],
    4605074846433127: [3],
    112884719857303: [4],
    507466746864539: [5],
    .....
}

Each key is a CustID value and each value is the list of row positions of that CustID in df_data.
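
Such a mapping can be produced straight from the frame, e.g. (a sketch; df_data.groupby('CustID').indices yields NumPy arrays of row positions, converted to lists here):

# a sketch: positional row indices per CustID
index_data = {cust_id: pos.tolist()
              for cust_id, pos in df_data.groupby('CustID').indices.items()}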

I have a DataFrame df_data:

CustID               Score     Number1     Number2     Phone
3396623046050748     2         2           3           0000
3396623046050748     6         2           3           0000
3749192045350356     1         56          23          2222
4605074846433127     67        532         321         3333
112884719857303      3         11          66          4444
507466746864539      7         22          96          5555

NOTE: When a CustID is duplicated, only the Score column differs between its rows.
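
For reference, the sample frame can be reconstructed like this (a sketch; storing Phone as a string is an assumption, made so the leading zeros survive):

import pandas as pd

df_data = pd.DataFrame({
    'CustID':  [3396623046050748, 3396623046050748, 3749192045350356,
                4605074846433127, 112884719857303, 507466746864539],
    'Score':   [2, 6, 1, 67, 3, 7],
    'Number1': [2, 2, 56, 532, 11, 22],
    'Number2': [3, 3, 23, 321, 66, 96],
    # assumption: strings keep the leading zeros shown above
    'Phone':   ['0000', '0000', '2222', '3333', '4444', '5555'],
})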

I want to create a new list of dicts, where Total_Score is the average Score of each CustID and Number is Number2 divided by Number1 (for the first CustID: Total_Score = (2 + 6) / 2 = 4 and Number = 3 / 2 = 1.5):

result = [
     {'CustID'     :3396623046050748,
      'Total_Score': 4,
      'Number'     : 1.5,
      'Phone'      : 0000
     },
     {'CustID'     :3749192045350356,
      'Total_Score': 1,
      'Number'     : 0.41,
      'Phone'      : 2222
     },
     {'CustID'     :4605074846433127,
      'Total_Score': 67,
      'Number'     : 0.6,
      'Phone'      : 3333
     },
     .........
]

My solution is to loop over the dictionary and use multiprocessing:

from multiprocessing import Process, Manager

def calculateTime(ns, value):
    # get the shared objects inside this process
    df_data2 = ns.df_data
    result2 = ns.result

    # build a new frame from the row positions of this CustID
    df_sampleresult = df_data2.loc[value].reset_index(drop=True)

    # collect the fields to append to the final result
    dict_sample = {}
    dict_sample['CustID'] = df_sampleresult['CustID'][0]
    dict_sample['Total_Score'] = df_sampleresult['Score'].mean()

    # reassign through the namespace so the manager sees the change
    result2.append(dict_sample)
    ns.result = result2


if __name__ == '__main__':
    result = list()
    manager = Manager()
    ns = manager.Namespace()
    ns.df_data = df_data
    ns.result = result

    job = [Process(target=calculateTime, args=(ns, value))
           for key, value in index_data.items()]
    _ = [p.start() for p in job]
    _ = [p.join() for p in job]

But it's not working: performance is slow and memory usage is high. Is my multiprocessing setup right? Is there another way to do this?
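
For comparison, here is a minimal sketch of the same per-CustID loop using multiprocessing.Pool instead of a Manager (assumptions: the function name is illustrative, the fork start method is used so workers inherit the module-level df_data and index_data, and the frame has the default RangeIndex so .loc matches the positions in index_data):

from multiprocessing import Pool

def summarise(item):
    # item is one (CustID, row positions) pair from index_data
    cust_id, pos = item
    rows = df_data.loc[pos]  # df_data: module-level, inherited at fork
    return {'CustID': cust_id, 'Total_Score': rows['Score'].mean()}

if __name__ == '__main__':
    with Pool() as pool:
        result = pool.map(summarise, index_data.items())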


Answer

In [353]: df
Out[353]: 
             CustID  Score  Number1  Number2  Phone
0  3396623046050748      2        2        3   0000
1  3396623046050748      6        2        3   0000
2  3749192045350356      1       56       23   2222
3  4605074846433127     67      532      321   3333
4   112884719857303      3       11       66   4444
5   507466746864539      7       22       96   5555

In [351]: d = (df.groupby(['CustID', 'Phone', round(df.Number2.div(df.Number1), 2)])['Score']
     ...:        .mean()
     ...:        .reset_index(name='Total_Score')
     ...:        .rename(columns={'level_2': 'Number'})
     ...:        .to_dict('records'))

In [352]: d
Out[352]: 
[{'CustID': 112884719857303, 'Phone': 4444, 'Number': 6.0, 'Total_Score': 3},
 {'CustID': 507466746864539, 'Phone': 5555, 'Number': 4.36, 'Total_Score': 7},
 {'CustID': 3396623046050748, 'Phone': 0000, 'Number': 1.5, 'Total_Score': 4},
 {'CustID': 3749192045350356, 'Phone': 2222, 'Number': 0.41, 'Total_Score': 1},
 {'CustID': 4605074846433127, 'Phone': 3333, 'Number': 0.6, 'Total_Score': 67}]
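
The same records can be produced step by step, which may read more clearly (a sketch relying on the question's note that Number1, Number2 and Phone are constant within each CustID):

# precompute the per-row ratio, rounded as in the one-liner above
df['Number'] = (df['Number2'] / df['Number1']).round(2)

d = (df.groupby(['CustID', 'Phone', 'Number'], as_index=False)['Score']
       .mean()
       .rename(columns={'Score': 'Total_Score'})
       .to_dict('records'))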