How to write a universal function to join two PySpark dataframes? I want to write a function that performs an inner join on two dataframes and also eliminates the repeated common columns after joining. As far as I'm aware there is no way to do that, since we always need to list the common columns manually while joining. Or is there a better way?