I have a pandas DataFrame and want to write it as a parquet file to Azure File Storage.
So far I have not been able to transform the DataFrame directly into a bytes object that I can upload to Azure. My current workaround is to save it as a parquet file on the local drive, then read it back as a bytes object and upload that to Azure.
Can anyone tell me how to transform a pandas DataFrame directly into a "parquet file" bytes object without writing it to disk? The disk I/O is really slowing things down, and it feels like really ugly code…
# Transform the data_frame into a parquet file on the local drive
data_frame.to_parquet('temp_p.parquet', engine='auto', compression='snappy')

# Read the parquet file back as bytes.
with open("temp_p.parquet", mode='rb') as f:
    fileContent = f.read()

# Upload the bytes object to Azure
service.create_file_from_bytes(share_name, file_path, file_name, fileContent, index=0, count=len(fileContent))
I’m looking to implement something like this, where transform_functionality returns a bytes object:
my_bytes = data_frame.transform_functionality()
service.create_file_from_bytes(share_name, file_path, file_name, my_bytes, index=0, count=len(my_bytes))
Answer
I have found a solution; I will post it here in case anyone needs to do the same task. After writing the DataFrame to an in-memory buffer with to_parquet, I get the bytes object out of the buffer with .getvalue() as follows:
from io import BytesIO

# Write the parquet output to an in-memory buffer instead of a file on disk.
buffer = BytesIO()
data_frame.to_parquet(buffer, engine='auto', compression='snappy')

# Upload the buffer's contents directly to Azure.
service.create_file_from_bytes(share_name, file_path, file_name, buffer.getvalue(), index=0, count=buffer.getbuffer().nbytes)
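As an aside, on pandas 1.0 and later you can skip the buffer entirely: to_parquet returns the parquet bytes directly when you omit the path argument. A minimal sketch, reusing the data_frame, service, share_name, file_path, and file_name objects from the question:

# On pandas 1.0+, to_parquet with no path returns the parquet
# content as a bytes object, so no intermediate BytesIO is needed.
my_bytes = data_frame.to_parquet(engine='auto', compression='snappy')
service.create_file_from_bytes(share_name, file_path, file_name, my_bytes, index=0, count=len(my_bytes))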