Intro to Computing: Fundamentals (~20 mins)

Search engines and how they find information in Intro to Computing - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · Intermediate
How does a search engine find web pages?

Imagine a search engine as a librarian who needs to find books in a huge library. Which step below best describes how the search engine finds web pages?

A. It only searches pages stored on the user's computer.
B. It waits for users to send web pages directly to it.
C. It guesses web pages based on popular topics without visiting them.
D. It uses a program called a crawler to visit web pages and collect information about them.
💡 Hint

Think about how a librarian collects books to organize them.
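To make the librarian analogy concrete, here is a minimal crawler sketch over a toy, made-up link graph (no real network access; the `web` dictionary and its page names are invented for illustration). It visits pages breadth-first, the way a librarian might work through shelves one at a time.

```python
from collections import deque

# Toy "web": each page maps to the links found on it (hypothetical data).
web = {
    "home": ["about", "news"],
    "about": ["home"],
    "news": ["home", "sports"],
    "sports": [],
}

def crawl(start):
    """Visit every page reachable from `start`, breadth-first."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)               # "read" this page
        for link in web.get(page, []):   # follow its links to new pages
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("home"))  # → ['home', 'about', 'news', 'sports']
```

The key idea matching option D: the crawler actively visits pages and follows the links it finds, rather than waiting for pages to arrive.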

Trace · Intermediate
What does the crawler find on a web page?

Look at the simplified flowchart below showing a crawler visiting a web page:

Start -> Visit URL -> Read page content -> Extract links -> Store data -> End

What information does the crawler collect to help the search engine?

A. Only the images on the page.
B. The text on the page and the links to other pages.
C. The user's personal data on the page.
D. The color scheme of the page.
💡 Hint

Think about what helps the search engine find and connect pages.
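The "Read page content" and "Extract links" steps in the flowchart can be sketched with Python's standard-library `html.parser`. This is a simplified illustration, not a production crawler, and the sample HTML is made up.

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Collects the two things a crawler cares about: text and links."""

    def __init__(self):
        super().__init__()
        self.text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Links live in the href attribute of <a> tags.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        # Visible text between tags.
        if data.strip():
            self.text.append(data.strip())

html = '<p>Welcome to my page.</p><a href="/about">About</a>'
scanner = PageScanner()
scanner.feed(html)
print(scanner.text)   # → ['Welcome to my page.', 'About']
print(scanner.links)  # → ['/about']
```

The text feeds the search index, while the links tell the crawler which pages to visit next.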

Comparison · Advanced
Comparing indexing and crawling

Which statement correctly compares the roles of crawling and indexing in search engines?

A. Crawling finds and reads pages; indexing organizes the information for fast searching.
B. Indexing finds pages; crawling organizes the information.
C. Both crawling and indexing do the same job of finding pages.
D. Neither crawling nor indexing is used by search engines.
💡 Hint

Think of crawling as collecting books and indexing as making a catalog.
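The catalog analogy can be shown with a tiny inverted index: crawling delivers page text, and indexing maps each word back to the pages containing it. The page names and text here are hypothetical.

```python
# Text collected by crawling (made-up sample pages).
pages = {
    "page1": "search engines crawl the web",
    "page2": "the web has many pages",
}

# Indexing: build a catalog mapping each word to the pages it appears on.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

print(sorted(index["web"]))    # → ['page1', 'page2']
print(sorted(index["crawl"]))  # → ['page1']
```

A lookup in the index is fast because the search engine never re-reads the pages at query time; it just consults the catalog, like a librarian checking the card index instead of every shelf.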

Identification · Advanced
Identify the error in this crawler behavior

A crawler visits pages but never follows links to new pages. What problem will this cause?

A. The crawler will collect personal user data accidentally.
B. The crawler will find too many pages and slow down the search engine.
C. The search engine will only know about the first pages visited and miss many others.
D. The crawler will index pages twice causing duplicates.
💡 Hint

Think about how the crawler discovers new pages.
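One way to see the problem is to compare a crawler that follows links with one that does not, on a small hypothetical graph where `seed` is the only page the crawler knows at the start.

```python
# Toy link graph (hypothetical): seed links to a, which links to b.
links = {"seed": ["a"], "a": ["b"], "b": []}

def crawl(follow_links):
    seen = {"seed"}
    frontier = ["seed"]
    while frontier:
        page = frontier.pop()
        if follow_links:
            for nxt in links.get(page, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

print(sorted(crawl(follow_links=False)))  # → ['seed']
print(sorted(crawl(follow_links=True)))   # → ['a', 'b', 'seed']
```

Without link-following the crawler never discovers `a` or `b`, so the search engine only knows about the pages it started with.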

🚀 Application · Expert
Predict the output of a simplified crawler simulation

Given this simplified crawler code simulation:

pages = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
crawled = set()
def crawl(page):
    if page not in crawled:
        crawled.add(page)
        for link in pages.get(page, []):
            crawl(link)
crawl("A")
print(sorted(crawled))

What will be printed?

A. ["A", "B", "C"]
B. ["A", "B"]
C. ["B", "C"]
D. []
💡 Hint

Trace the calls and see which pages get added to the set.
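After tracing the recursive version, it can help to see the same crawl written without recursion. This sketch replaces the call stack with an explicit stack; it uses the same `pages` dictionary as the problem above and produces the same `crawled` set.

```python
pages = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}

def crawl_iterative(start):
    """Same crawl as the recursive version, with an explicit stack."""
    crawled = set()
    stack = [start]
    while stack:
        page = stack.pop()
        if page not in crawled:
            crawled.add(page)
            # Pushing links plays the role of the recursive calls.
            stack.extend(pages.get(page, []))
    return crawled
```

The `if page not in crawled` guard is what stops the crawl from looping forever on the cycle A → C → A.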