PySpark udf returns null when function works in Pandas dataframe

I’m trying to create a user-defined function that takes a cumulative sum of an array and compares the value to another column. Here is a reproducible example:


In Pandas, this produces the expected output: a len value computed for every row.

In Spark using temp_sdf.withColumn('len', test_function_udf('x_ary', 'y')), all of len ends up being null.

Would anyone know why this is the case?

Also, replacing the cumulative sum with cumsum_array = np.cumsum(np.flip(x_ary)) fails in pySpark with the error AttributeError: module 'numpy' has no attribute 'flip', but I know the function exists because it runs fine on a Pandas dataframe.
Can this issue be resolved, or is there a better way to flip arrays with pySpark?
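(On the np.flip point: np.flip was only added in NumPy 1.12, so an older NumPy on the Spark workers would raise exactly that AttributeError. A version-independent alternative, sketched here with hypothetical values, is to reverse the array with slice indexing instead:)

```python
import numpy as np

x_ary = np.array([1, 2, 3])  # hypothetical values

# Reversed-slice indexing works on every NumPy version,
# so it avoids depending on np.flip being available.
cumsum_array = np.cumsum(x_ary[::-1])
```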

Thanks in advance for your help.


Answer

Since test_function returns an integer, not a list/array, you get null values because the declared return type is wrong. Either remove the ArrayType return type from the udf, or replace it with LongType(); then it will work, as shown below:

Note: You can optionally set the return type of your UDF; otherwise the default return type is StringType.

Option 1:


Option 2:

User contributions licensed under: CC BY-SA