Pandas UDF throws error not of required length

I have a Delta table containing Thrift data from Kafka, and I am using a UDF to deserialize it. I have no issues with a regular UDF, but I get an error when I try to use a Pandas UDF.

This runs fine, i.e. the regular UDF:

import json

import thriftpy2
from thriftpy2.protocol import TCyBinaryProtocolFactory
from thriftpy2.protocol import json as proto  # assumed source of struct_to_json
from thriftpy2.utils import deserialize
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType


def decoder(thrift_data):
  # Load the Thrift schema and deserialize one binary payload to a JSON string
  schema_file = thriftpy2.load("/dbfs/FileStore/schema_file.thrift")
  schema = schema_file.SchemaClass()
  decoded_payload = deserialize(schema, thrift_data, TCyBinaryProtocolFactory())
  json_data = proto.struct_to_json(decoded_payload)
  return json.dumps(json_data)


decoder_udf = udf(decoder, StringType())
data = spark.sql("""SELECT value FROM data_table""")
data = data.withColumn('decoded_json', decoder_udf(data.value))

But when I use a Pandas UDF:

import pandas as pd
from pyspark.sql.functions import pandas_udf


def decoder(thrift_data: pd.Series) -> pd.Series:

  schema_file = thriftpy2.load("/dbfs/FileStore/schema_file.thrift")
  schema = schema_file.SchemaClass()
  decoded_payload = deserialize(schema, thrift_data, TCyBinaryProtocolFactory())
  json_data = proto.struct_to_json(decoded_payload)
  return json.dumps(json_data)


decoder_udf = pandas_udf(decoder, returnType=StringType())
data = spark.sql("""SELECT value FROM data_table""")
data = data.withColumn('decoded_json', decoder_udf(data.value))

I get an error: PythonException: 'RuntimeError: Result vector from pandas_udf was not the required length: expected 5000, got 651'.


Answer

Figured out the solution: the function passed to pandas_udf receives a pandas Series (a whole batch of rows) and must return a pandas Series of the same length, so we have to decode each element and return the results as a Series rather than returning a single string.
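
As a rough sketch (assuming the same thriftpy2 helpers and schema path as in the question), the per-row decoding can be applied element-wise with Series.apply so the UDF returns one JSON string per input row:

import json

import pandas as pd
import thriftpy2
from thriftpy2.protocol import TCyBinaryProtocolFactory
from thriftpy2.protocol import json as proto  # assumed source of struct_to_json
from thriftpy2.utils import deserialize
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType


def decoder(thrift_batch: pd.Series) -> pd.Series:
  # Load the Thrift schema once per batch instead of once per row
  schema_file = thriftpy2.load("/dbfs/FileStore/schema_file.thrift")

  def decode_one(thrift_data):
    schema = schema_file.SchemaClass()
    decoded_payload = deserialize(schema, thrift_data, TCyBinaryProtocolFactory())
    json_data = proto.struct_to_json(decoded_payload)
    return json.dumps(json_data)

  # Return a Series of the same length as the input batch
  return thrift_batch.apply(decode_one)


decoder_udf = pandas_udf(decoder, returnType=StringType())
data = spark.sql("""SELECT value FROM data_table""")
data = data.withColumn('decoded_json', decoder_udf(data.value))

With this change the UDF produces one output value per input row, so the result length matches the batch size Spark expects (e.g. 5000 in, 5000 out).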
