Tag: apache-spark

Caching a PySpark Dataframe

Suppose we have a PySpark dataframe df with ~10M rows and columns [col_a, col_b]. Which of the two approaches would be faster, and would caching df_test make sense here?

Answer

It won't make much difference: it is just one loop, so you can skip cache like below. Here Spark loads the data into memory once. If you …