I’m trying to filter a PySpark dataframe that has None as a row value:
```python
df.select('dt_mvmt').distinct().collect()
[Row(dt_mvmt=u'2016-03-27'),
 Row(dt_mvmt=u'2016-03-28'),
 Row(dt_mvmt=u'2016-03-29'),
 Row(dt_mvmt=None),
 Row(dt_mvmt=u'2016-03-30'),
 Row(dt_mvmt=u'2016-03-31')]
```
and I can filter correctly with a string value:
```python
df[df.dt_mvmt == '2016-03-31']  # some results here
```
but this fails:
```python
df[df.dt_mvmt == None].count()
0
df[df.dt_mvmt != None].count()
0
```
But there are definitely values in each category. What’s going on?
Answer
You can use Column.isNull / Column.isNotNull:
```python
from pyspark.sql.functions import col

df.where(col("dt_mvmt").isNull())
df.where(col("dt_mvmt").isNotNull())
```
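To see the difference end to end, here is a small self-contained sketch (hypothetical data shaped like the question’s dt_mvmt column; a SparkSession is used in place of the older sqlContext):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Hypothetical data mirroring the question: three dates and one None
df = spark.createDataFrame(
    [("2016-03-27",), ("2016-03-28",), (None,), ("2016-03-31",)],
    ["dt_mvmt"],
)

df.where(col("dt_mvmt") == None).count()      # 0 -- the comparison is NULL, so nothing matches
df.where(col("dt_mvmt").isNull()).count()     # 1 -- the single None row
df.where(col("dt_mvmt").isNotNull()).count()  # 3 -- everything else
```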
If you want to simply drop NULL values you can use na.drop with the subset argument:
```python
df.na.drop(subset=["dt_mvmt"])
```
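If it fits your style better, the same call is available directly on the DataFrame: DataFrame.dropna is documented as an alias of na.drop:

```python
# Equivalent to df.na.drop(subset=["dt_mvmt"])
df.dropna(subset=["dt_mvmt"])
```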
Equality-based comparisons with NULL won’t work, because in SQL NULL is undefined, so any attempt to compare it with another value returns NULL:
```python
sqlContext.sql("SELECT NULL = NULL").show()
## +-------------+
## |(NULL = NULL)|
## +-------------+
## |         null|
## +-------------+

sqlContext.sql("SELECT NULL != NULL").show()
## +-------------------+
## |(NOT (NULL = NULL))|
## +-------------------+
## |               null|
## +-------------------+
```
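The same thing happens on the DataFrame side: the predicate built from == None evaluates to NULL for every row, and where/filter only keeps rows where the predicate is true, which is why both counts in the question came back as 0. A quick way to see this (the eq_null alias is just for illustration):

```python
# Every row evaluates to null here, so filtering on this predicate keeps nothing
df.select((col("dt_mvmt") == None).alias("eq_null")).show()
```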
The only valid way to compare a value with NULL is IS / IS NOT, which are equivalent to the isNull / isNotNull method calls.
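One caveat worth adding, beyond the answer above: if you really do want an equality test in which two NULLs compare as equal, Spark SQL also has the null-safe operator <=>, exposed in the Python API as Column.eqNullSafe in newer releases (Spark 2.3+). A minimal sketch under those assumptions:

```python
# NULL <=> NULL evaluates to true rather than NULL
sqlContext.sql("SELECT NULL <=> NULL").show()

# Column API equivalent (Spark 2.3+); comparing against None behaves like isNull()
df.where(col("dt_mvmt").eqNullSafe(None)).count()
```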