The Map phase in Hadoop begins with an input split: a chunk of the input data assigned to a single mapper. Each record in the split is passed to the map function. In the classic word-count example, a text record (one line) is split into words, and for each word the map function emits a key-value pair: the word as the key and the count 1 as the value. The emitted pairs are collected, then shuffled and sorted so that all pairs with the same key end up together, ready for the reduce phase. The execution table shows, step by step, how records are split and pairs are emitted; variables such as 'record', 'words', and 'emitted_pairs' change as the map function runs. Beginners often ask why one record produces multiple pairs: each word is processed separately, so a record containing five words yields five pairs. After the Map phase completes, the data is ready for the next stage of Hadoop's processing pipeline.
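The behavior described above can be sketched in plain Python. This is a minimal simulation of one mapper call plus the shuffle-and-sort grouping, not Hadoop's actual Java API; the function name `map_word_count` and the helper `shuffle_and_sort` are hypothetical names chosen for illustration, while `record`, `words`, and `emitted_pairs` mirror the variables named in the text.

```python
from itertools import groupby


def map_word_count(record):
    """Simulate one call of the word-count map function.

    'record' is one line of input text; 'words' holds the split tokens;
    'emitted_pairs' collects the (word, 1) pairs the mapper emits --
    one pair per word, which is why a single record yields many pairs.
    (Illustrative sketch; not Hadoop's real mapper interface.)
    """
    words = record.split()
    emitted_pairs = [(word, 1) for word in words]
    return emitted_pairs


def shuffle_and_sort(pairs):
    """Simulate the shuffle/sort step: sort pairs by key and group
    all values for the same key together, ready for a reducer."""
    ordered = sorted(pairs, key=lambda kv: kv[0])
    return {key: [v for _, v in group]
            for key, group in groupby(ordered, key=lambda kv: kv[0])}


# One record produces one pair per word:
pairs = map_word_count("the quick brown the")
# → [('the', 1), ('quick', 1), ('brown', 1), ('the', 1)]

# After shuffle/sort, values for the same key are grouped:
grouped = shuffle_and_sort(pairs)
# → {'brown': [1], 'quick': [1], 'the': [1, 1]}
```

A reducer would then sum each key's list of 1s to get per-word counts, which is exactly the hand-off the shuffle/sort step prepares.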