
Tag: apache-spark-sql

Get the most consecutive day from Date column with PySpark

Original dataframe:

member_id  AccessDate
111111     2020-02-03
111111     2022-03-05
222222     2015-03-04
333333     2021-11-23
333333     2021-11-24
333333     2021-11-25
333333     2022-10-11
333333     2022-10-12
333333     2022-10-13
333333     2022-07-07
444444     2019-01-21
444444     2019-04-21
444444     2019-04-22
444444     2019-04-23
444444     2019-04-24
444444     2019-05-05
444444     2019-05-06
444444     2019-05-07

Result dataframe:

member_id  Most_Consecutive_AccessDate                      total
111111     2022-03-05                                       1
222222     2015-03-04                                       1
333333     2022-10-11, 2022-10-12, 2022-10-13               3
444444     2019-04-21, 2019-04-22, 2019-04-23, 2019-04-24   4
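The answer excerpt is cut off here. As a minimal sketch, assuming the schema shown above, the standard gaps-and-islands technique works: subtract a per-member row number from the date, so that every run of consecutive days collapses to the same group key. This is one common way to get the expected result, not necessarily the answer given on the original page.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("111111", "2020-02-03"), ("111111", "2022-03-05"),
     ("222222", "2015-03-04"),
     ("333333", "2021-11-23"), ("333333", "2021-11-24"),
     ("333333", "2021-11-25"), ("333333", "2022-10-11"),
     ("333333", "2022-10-12"), ("333333", "2022-10-13"),
     ("333333", "2022-07-07")],
    ["member_id", "AccessDate"],
).withColumn("AccessDate", F.to_date("AccessDate"))

# date minus row_number is constant within each run of consecutive days
w = Window.partitionBy("member_id").orderBy("AccessDate")
runs = (
    df.withColumn("rn", F.row_number().over(w))
      .withColumn("grp", F.expr("date_sub(AccessDate, rn)"))
)

best = (
    runs.groupBy("member_id", "grp")
        .agg(F.sort_array(F.collect_list("AccessDate")).alias("dates"),
             F.count("*").alias("total"))
        # keep the longest run per member; break ties with the later run
        .withColumn("rk", F.row_number().over(
            Window.partitionBy("member_id")
                  .orderBy(F.col("total").desc(), F.col("dates")[0].desc())))
        .filter("rk = 1")
        .select("member_id",
                F.concat_ws(", ", F.col("dates").cast("array<string>"))
                 .alias("Most_Consecutive_AccessDate"),
                "total")
)
best.show(truncate=False)
```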

How to write this pandas logic for pyspark.sql.dataframe.DataFrame without using pandas on spark API?

I’m totally new to PySpark; since PySpark doesn’t have the loc feature, how can we write this logic? I tried specifying conditions but couldn’t get the desired result; any help would be greatly appreciated! Answer: For data like the following … you’re actually updating the total column in each statement, not in an if-then-else way. Your code can be replicated (as …
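Both the question and answer are truncated here. As a hedged illustration only: conditional assignment via df.loc in pandas is usually expressed in PySpark with a chained F.when / otherwise expression. The column names flag and total below are hypothetical, not from the original question.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("A", 0), ("B", 0), ("C", 0)], ["flag", "total"])

# Each pandas statement df.loc[cond, "total"] = value overwrites every
# matching row in turn; a chained F.when expresses this as a single
# if-then-else, with otherwise() keeping the old value for unmatched rows.
df = df.withColumn(
    "total",
    F.when(F.col("flag") == "A", 1)
     .when(F.col("flag") == "B", 2)
     .otherwise(F.col("total")),
)
df.show()
```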

Not able to perform operations on resulting dataframe after “join” operation in PySpark

Here I have created three dataframes: df, rule_df and query_df. I’ve performed an inner join on rule_df and query_df, and stored the resulting dataframe in join_df. However, when I try to simply print the columns of join_df, I get the following error: … The resulting dataframe is not behaving as one; I’m not able to perform any dataframe operations on it.
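The error itself is omitted from the excerpt, so any diagnosis is a guess. A minimal sketch of the setup as described (the column layout below is assumed, since the question doesn’t show it): joining on a shared column name, rather than on a column expression, avoids a duplicate join column, which is a frequent cause of downstream operations failing on the joined dataframe.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# hypothetical schemas; the original question's columns are not shown
rule_df = spark.createDataFrame([(1, "r1"), (2, "r2")], ["id", "rule"])
query_df = spark.createDataFrame([(1, "q1"), (3, "q3")], ["id", "query"])

# joining on the column name keeps a single "id" column in the result
join_df = rule_df.join(query_df, on="id", how="inner")
print(join_df.columns)  # a plain DataFrame; normal operations apply
join_df.show()
```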
