Practice - 5 Tasks
Answer the questions below
1. Fill in the blank
Easy: Complete the code to print the main purpose of Hadoop.
print('Hadoop is used for [1] large data sets across clusters of computers.')
💡 Hint: A common mistake is choosing unrelated actions like painting or cooking.
Explanation: Hadoop is a framework for processing large data sets across many computers.
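A minimal sketch of the completed line, assuming the intended answer for blank [1] is "processing" (the explanation above describes Hadoop as a framework for processing large data sets):

```python
# Blank [1] filled with 'processing', per the explanation above.
purpose = 'Hadoop is used for processing large data sets across clusters of computers.'
print(purpose)
```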
2. Fill in the blank
Medium: Complete the code to show the two main components of Hadoop.
components = ['HDFS', '[1]']
print(components)
💡 Hint: A common mistake is confusing Hadoop components with unrelated software names.
Explanation: Hadoop mainly consists of HDFS for storage and MapReduce for processing data.
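A possible completed version, assuming blank [1] is "MapReduce" as the explanation above states:

```python
# Blank [1] filled with 'MapReduce', per the explanation above.
components = ['HDFS', 'MapReduce']
print(components)
```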
3. Fill in the blank
Hard: Fill in the blank to complete the Hadoop cluster description.
cluster_description = 'Hadoop runs on a [1] of computers to handle big data.'
💡 Hint: A common mistake is using 'single' or 'desktop', which do not describe multiple computers.
Explanation: Hadoop works on a cluster, which is a group of computers working together.
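A possible completed version, assuming blank [1] is "cluster" as the explanation above indicates:

```python
# Blank [1] filled with 'cluster', per the explanation above.
cluster_description = 'Hadoop runs on a cluster of computers to handle big data.'
print(cluster_description)
```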
4. Fill in the blank
Hard: Fill both blanks to describe Hadoop's data storage and processing.
Hadoop stores data in [1] and processes it using [2].
💡 Hint: A common mistake is mixing up the storage and processing components.
Explanation: HDFS is Hadoop's storage system, and MapReduce is its processing model.
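A possible completed version, assuming the blanks follow the explanation above ([1] = HDFS, [2] = MapReduce):

```python
# Blanks filled per the explanation: [1] = 'HDFS', [2] = 'MapReduce'.
statement = 'Hadoop stores data in HDFS and processes it using MapReduce.'
print(statement)
```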
5. Fill in the blank
Hard: Fill all three blanks to complete the Hadoop description.
Hadoop is an [1] framework that uses [2] for storage and [3] for processing.
💡 Hint: A common mistake is choosing 'closed-source' or mixing up the storage and processing terms.
Explanation: Hadoop is open-source software that uses HDFS for storage and MapReduce for processing.
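A possible completed version, assuming the blanks follow the explanation above ([1] = open-source, [2] = HDFS, [3] = MapReduce):

```python
# Blanks filled per the explanation: [1] = 'open-source', [2] = 'HDFS', [3] = 'MapReduce'.
summary = 'Hadoop is an open-source framework that uses HDFS for storage and MapReduce for processing.'
print(summary)
```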