
How can I drop duplicates in pandas without dropping NaN values

I have a dataframe which I query and I want to get only unique values out of a certain column.
I tried to do that executing this code:

    database = pd.read_csv(db_file, sep='\t')
    query = database.loc[database[db_specification[0]].isin(elements)].drop_duplicates(subset=db_specification[1])

db_specification is just a list containing two columns that I query.
Some of the values are NaN, and I don’t want them to be treated as duplicates of one another. How can I achieve that?


Answer

You can start by setting aside the rows where that column is NaN, drop duplicates on the rest of the dataframe, and then concatenate the two parts back together:

    mask = data[db_specification[1]].isna()
    data = pd.concat([data[mask], data[~mask].drop_duplicates(subset=db_specification[1])])
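A minimal, self-contained sketch of the idea, using a hypothetical dataframe with a `name` column standing in for the column you deduplicate on:

```python
import numpy as np
import pandas as pd

# Hypothetical data: duplicates in "name" should be dropped,
# but every NaN row must survive.
data = pd.DataFrame({
    "name": ["a", "a", np.nan, np.nan, "b"],
    "value": [1, 2, 3, 4, 5],
})

# Set aside NaN rows, deduplicate the rest, then recombine.
mask = data["name"].isna()
deduped = pd.concat([data[mask], data[~mask].drop_duplicates(subset="name")])

print(deduped.sort_index())
# Keeps one "a", one "b", and both NaN rows.
```

This split is needed because `drop_duplicates` treats NaN values as equal to each other, so without the mask all but one NaN row would be dropped.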
User contributions licensed under: CC BY-SA