
Memory and container sizing in Hadoop - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · Intermediate
Understanding Container Memory Allocation in Hadoop

In Hadoop YARN, if you set yarn.scheduler.maximum-allocation-mb to 8192 and yarn.nodemanager.resource.memory-mb to 16384, what is the maximum memory a single container can request?

A. 8192 MB
B. 16384 MB
C. 24576 MB
D. 4096 MB
💡 Hint

Think about the maximum allocation limit per container set by the scheduler.
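If it helps to see the rule in code, here is a simplified sketch of how the scheduler maximum caps any single container request. The values are illustrative, not the ones in this problem:

```python
# Simplified sketch of the per-container cap (illustrative values,
# not the ones used in this problem).
scheduler_max_mb = 4096   # yarn.scheduler.maximum-allocation-mb (example)
node_total_mb = 32768     # yarn.nodemanager.resource.memory-mb (example)

requested_mb = 6144       # an application asks for more than the scheduler cap

# No single container can be granted more than the scheduler maximum,
# even when the node as a whole has far more memory available.
granted_mb = min(requested_mb, scheduler_max_mb)
print(granted_mb)  # 4096
```

Note that a real ResourceManager may reject an over-limit request outright rather than trim it; the `min` here only illustrates where the ceiling sits.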

Calculation · Intermediate
Calculating Number of Containers per Node

Given a node with 64 GB RAM and yarn.nodemanager.resource.memory-mb set to 61440 MB, if each container requests 4096 MB, how many containers can run simultaneously on this node?

A. 16
B. 12
C. 15
D. 14
💡 Hint

Divide the total available memory by the container memory request.
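As a worked example of the hint, with illustrative numbers rather than the problem's:

```python
# Containers per node = floor(NodeManager memory / per-container request).
# The numbers below are illustrative, not the ones in this problem.
node_memory_mb = 49152        # yarn.nodemanager.resource.memory-mb (example)
container_request_mb = 6144   # memory requested by each container (example)

containers = node_memory_mb // container_request_mb  # integer division
print(containers)  # 8
```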

Predict Output · Advanced
YARN Container Memory Calculation Output

What is the output of the following Python code simulating container memory allocation?

total_node_memory = 32768  # in MB
container_memory_request = 4096  # in MB
max_containers = total_node_memory // container_memory_request
print(max_containers)
A. 6
B. 7
C. 9
D. 8
💡 Hint

Use integer division to find how many containers fit.
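The snippet hinges on Python's `//` operator; a quick reminder of its semantics, on numbers unrelated to the question:

```python
# Floor division (//) discards the remainder; true division (/) keeps it.
print(10 / 3)    # 3.333...
print(10 // 3)   # 3
print(10 % 3)    # 1 (the remainder that // throws away)
```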

🔧 Debug · Advanced
Identify the Error in Container Memory Configuration

Consider this YARN configuration snippet:

yarn.nodemanager.resource.memory-mb=8192
yarn.scheduler.maximum-allocation-mb=16384

What issue will this cause when running containers?

A. Containers may request more memory than the node provides, causing allocation failures.
B. No issue; containers will run normally.
C. YARN will ignore the maximum allocation setting.
D. The NodeManager will allocate double the memory requested.
💡 Hint

Check if maximum allocation exceeds node memory.
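One way to internalize the hint is a small config sanity check. The helper below is hypothetical (not part of any Hadoop API), and the values passed to it are illustrative rather than the ones in this question:

```python
def check_memory_config(node_mb, scheduler_max_mb):
    """Hypothetical helper: flag a scheduler maximum that exceeds node memory."""
    if scheduler_max_mb > node_mb:
        return (f"misconfigured: a container may request {scheduler_max_mb} MB "
                f"on a node offering only {node_mb} MB")
    return "ok"

# Illustrative values, not the ones in the question.
print(check_memory_config(node_mb=4096, scheduler_max_mb=8192))
print(check_memory_config(node_mb=8192, scheduler_max_mb=4096))  # ok
```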

🚀 Application · Expert
Optimizing Container Memory for Mixed Workloads

You have a Hadoop cluster whose nodes each have 128 GB of RAM. You want to run two types of workloads:

  • Heavy jobs needing 16 GB per container
  • Light jobs needing 4 GB per container

Given yarn.nodemanager.resource.memory-mb is set to 122880 MB, which container memory configuration allows running the maximum number of containers simultaneously without exceeding node memory?

A. Set container memory to 16 GB for all jobs, run 7 containers per node
B. Set container memory to 4 GB for all jobs, run 30 containers per node
C. Set container memory to 8 GB for all jobs, run 15 containers per node
D. Mix containers with 16 GB and 4 GB memory, run 10 heavy and 10 light containers per node
💡 Hint

Calculate total memory used by mixed containers and compare with node memory.
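To evaluate any candidate mix, sum the memory of all planned containers and compare it with the NodeManager limit. The sketch below uses the problem's per-container sizes but illustrative container counts; it is not an answer key:

```python
NODE_MEMORY_MB = 122880   # yarn.nodemanager.resource.memory-mb (from the problem)
HEAVY_MB = 16384          # one 16 GB heavy container
LIGHT_MB = 4096           # one 4 GB light container

def plan_fits(heavy, light):
    """Return True if a mix of heavy/light containers fits within node memory."""
    return heavy * HEAVY_MB + light * LIGHT_MB <= NODE_MEMORY_MB

# Illustrative plans, not the quiz options:
print(plan_fits(heavy=5, light=5))   # 102400 MB used -> True
print(plan_fits(heavy=7, light=4))   # 131072 MB used -> False
```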