I was trying to save my dataset in a CSV file with the following script:
with open(data_path + 'Furough.csv', 'w', encoding="utf-8") as f0:
    df = pd.DataFrame(columns=['title', 'poem', 'year'])
    for f in onlyfiles:
        poem = []
        title = ""
        year = 0
        with open(mypath + f, "r", encoding="utf-8") as f1:
            for line in f1:
                if line.__contains__("TIMESTAMP"):
                    year = int(line[12:15])
                    continue
                if line.__contains__('TITLE'):
                    title = line[7:]
                if line != "":
                    poem.append(line)
        df = df.append({
            'title': title,
            'poem': poem,
            'year': int(float(year))
        }, ignore_index=True)
    df.to_csv(f0, index=False, encoding='utf-8-sig')
but the result is confusing: the script writes unknown characters to the CSV file instead of the Farsi characters. Can anyone help me?
I want to write all of these files into one CSV. Here is an example of what one of them contains and what I want to write:
[V_START] بر پردههای درهم امیال سرکشم [HEM] نقش عجیب چهرۀ یک ناشناس بود [V_END] [V_START] نقشی ز چهرهای که چو میجستمش به شوق [HEM] پیوسته میرمید و بمن رخ نمینمود [V_END] [V_START] یک شب نگاه خستۀ مردی به روی من [HEM] لغزید و سست گشت و همان جا خموش ماند [V_END] [V_START] تا خواستم که بگسلم این رشتۀ نگاه [HEM] قلبم تپید و باز مرا سوی او کشاند [V_END]
but the result in the CSV file is garbled characters instead of the Farsi text.
Answer
To add to Cimbali's answer, another way to add a UTF-8 BOM is to use the encoding "utf-8-sig" instead of "utf-8"; it will take care of writing the BOM for you automatically.
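As a minimal sketch of that change, applied to the snippet from the question (the DataFrame below is a hypothetical stand-in for the one built in your loop): when you pass a text-mode file handle to to_csv, it is the handle's own encoding that ends up on disk, so "utf-8-sig" has to go on the open() call, or you can skip the explicit open() entirely and let to_csv open the file itself.

    import pandas as pd

    # Hypothetical stand-in for the DataFrame built in the question's loop.
    df = pd.DataFrame(columns=['title', 'poem', 'year'])

    # Option 1: let to_csv open the file itself; encoding='utf-8-sig' writes the BOM.
    df.to_csv(data_path + 'Furough.csv', index=False, encoding='utf-8-sig')

    # Option 2: keep the explicit open(), but put utf-8-sig on the open() call,
    # since that is the encoding the text-mode handle actually writes with.
    with open(data_path + 'Furough.csv', 'w', encoding='utf-8-sig', newline='') as f0:
        df.to_csv(f0, index=False)

Either way the file starts with a BOM, which is typically what makes Excel detect UTF-8 and display the Farsi text correctly.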
Further information is in this question: Unable to Save Arabic Decoded Unicode to CSV File Using Python