When Spark performs a join, its optimizer first estimates the size of each dataset involved. If one side is small enough (below the `spark.sql.autoBroadcastJoinThreshold` setting, 10 MB by default), Spark uses a broadcast join: it sends a full copy of the small dataset to every executor so each node can join its partitions locally. This avoids shuffling the large dataset across the network, making the join faster and cheaper. If both datasets are large, Spark falls back to a shuffle-based strategy, typically a sort-merge join, which redistributes both datasets across nodes by the join key; this is slower but scales to data that cannot fit on a single node. The choice of join strategy therefore directly affects performance by trading computation against data movement. This process is shown step by step in the execution table and variable tracker, illustrating how Spark decides on and executes the join efficiently.
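The decision described above can be sketched as a small function. This is an illustrative model, not Spark's actual planner code; the threshold constant mirrors Spark's real `spark.sql.autoBroadcastJoinThreshold` config key (default 10 MB), but the function name and signature are hypothetical:

```python
# Illustrative sketch of Spark's join-strategy decision (not the real
# planner implementation): compare estimated sizes against the broadcast
# threshold and pick a strategy.

DEFAULT_BROADCAST_THRESHOLD = 10 * 1024 * 1024  # 10 MB, Spark's default
                                                # autoBroadcastJoinThreshold

def choose_join_strategy(left_bytes: int, right_bytes: int,
                         threshold: int = DEFAULT_BROADCAST_THRESHOLD) -> str:
    """Pick a join strategy from estimated dataset sizes in bytes."""
    if min(left_bytes, right_bytes) <= threshold:
        # The smaller side fits under the threshold: ship it to every
        # executor and join locally, avoiding a shuffle of the large side.
        return "broadcast"
    # Both sides are large: repartition both by the join key, sort each
    # partition, and merge.
    return "sort-merge"

print(choose_join_strategy(5 * 1024 * 1024, 2 * 1024**3))  # → broadcast
print(choose_join_strategy(1 * 1024**3, 2 * 1024**3))      # → sort-merge
```

In real PySpark code you can also request a broadcast explicitly with the `broadcast` hint, e.g. `large_df.join(broadcast(small_df), "key")` using `pyspark.sql.functions.broadcast`, which is useful when the optimizer's size estimate is off.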