Selenium Python testing (~15 mins)

Network log capture in Selenium Python - Build an Automation Script

Capture network logs during page load
Preconditions (3)
Step 1: Open Chrome browser with logging enabled for performance
Step 2: Navigate to 'https://example.com'
Step 3: Wait until the page is fully loaded
Step 4: Capture the browser's network logs from performance logs
Step 5: Verify that at least one network request was made to 'https://example.com'
✅ Expected Result: Network logs contain at least one request to 'https://example.com' indicating page resources were loaded
Automation Requirements - Selenium with Python
Assertions Needed:
Assert that network logs are not empty
Assert that at least one network request URL contains 'https://example.com'
Best Practices:
Use ChromeOptions to enable performance logging
Use explicit waits to ensure page load completion
Parse performance logs carefully to extract network events
Avoid hardcoded sleeps; use WebDriverWait
Handle exceptions gracefully
Automated Solution
Selenium Python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import json

# Set up Chrome options to enable performance logging
chrome_options = Options()
chrome_options.set_capability('goog:loggingPrefs', {'performance': 'ALL'})

# Initialize WebDriver
service = Service()
driver = webdriver.Chrome(service=service, options=chrome_options)

try:
    driver.get('https://example.com')

    # Wait until the page's main element is loaded (example: <h1> tag)
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, 'h1'))
    )

    # Get performance logs
    logs = driver.get_log('performance')

    # Parse logs to find network requests
    network_requests = []
    for entry in logs:
        message = json.loads(entry['message'])['message']
        if message['method'] == 'Network.requestWillBeSent':
            url = message['params']['request']['url']
            network_requests.append(url)

    # Assertions
    assert len(network_requests) > 0, 'No network requests found in logs'
    assert any('https://example.com' in url for url in network_requests), 'No request to https://example.com found'

finally:
    driver.quit()

This script starts by setting Chrome options to enable performance logging, which is necessary to capture network logs.

In Selenium 4, performance logs are exposed through the 'goog:loggingPrefs' capability. The old workaround of disabling W3C mode ('w3c': False) is no longer needed and is rejected by recent ChromeDriver versions, so it is not used here.

After launching the browser, it navigates to 'https://example.com' and waits explicitly for an <h1> element to appear, indicating the page loaded.

Then it retrieves the performance logs and parses each log entry to find network requests by checking for the 'Network.requestWillBeSent' event.
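Each performance log entry wraps a Chrome DevTools Protocol event as a JSON string. A trimmed, illustrative example of the shape the parsing loop relies on (the values are made up; real entries carry many more fields):

```python
import json

# Illustrative entry shaped like what driver.get_log('performance') returns:
# a dict whose 'message' field is a JSON string wrapping a CDP event.
sample_entry = {
    'level': 'INFO',
    'timestamp': 1700000000000,
    'message': json.dumps({
        'message': {
            'method': 'Network.requestWillBeSent',
            'params': {'request': {'url': 'https://example.com/'}},
        }
    }),
}

# Same two-step unwrap as the script: parse the JSON string, take 'message'.
message = json.loads(sample_entry['message'])['message']
if message['method'] == 'Network.requestWillBeSent':
    print(message['params']['request']['url'])  # https://example.com/
```

Other event types ('Network.responseReceived', 'Page.loadEventFired', and so on) arrive in the same stream, which is why the filter on 'method' matters.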

It collects all request URLs and asserts that there is at least one network request and that one of them contains 'https://example.com'.

Finally, it closes the browser to clean up.

Common Mistakes - 4 Pitfalls
1. Not enabling performance logging in Chrome options: without the 'goog:loggingPrefs' capability set to capture 'performance' logs, no network data is recorded and the test fails or returns nothing. Always set this capability before starting the driver.
2. Using hardcoded sleep instead of explicit waits: time.sleep() either wastes time on fast loads or fires too early on slow ones. Use WebDriverWait with an expected condition instead.
3. Parsing logs without checking event types: performance logs contain many event types, and processing all of them leads to incorrect data and false assertions. Filter for 'Network.requestWillBeSent' to keep only network request events.
4. Not quitting the driver in a finally block: an exception raised before driver.quit() leaves browser processes running. Wrap the test body in try/finally so cleanup always happens.
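The second pitfall comes down to polling: WebDriverWait repeatedly evaluates a condition until it returns something truthy or a timeout expires, rather than pausing for a fixed interval. A minimal pure-Python sketch of that idea (poll_until and fake_condition are hypothetical names for illustration, not Selenium APIs):

```python
import time

def poll_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Mirrors the idea behind WebDriverWait(driver, timeout).until(...),
    without the fixed delay of a hardcoded time.sleep(timeout).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError('condition not met within %.1fs' % timeout)

# Stand-in condition that becomes true on the third poll,
# the way a slow page eventually finishes loading.
calls = {'n': 0}
def fake_condition():
    calls['n'] += 1
    return calls['n'] >= 3

poll_until(fake_condition, timeout=5.0, interval=0.01)
```

A fixed sleep of 5 seconds would have waited the full 5 seconds every run; polling returns as soon as the condition holds.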
Bonus Challenge

Now add data-driven testing with 3 different URLs: 'https://example.com', 'https://www.wikipedia.org', and 'https://www.python.org'. For each URL, capture network logs and verify requests.
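One possible skeleton for the challenge: loop over the URLs and reuse a small verification helper. Only verify_requests below runs as-is; the commented loop assumes the same driver setup and log-parsing code as the main script.

```python
TARGET_URLS = [
    'https://example.com',
    'https://www.wikipedia.org',
    'https://www.python.org',
]

def verify_requests(target_url, request_urls):
    # Pure helper: the same two assertions as the main script,
    # parameterized by the URL under test.
    assert request_urls, f'No network requests found in logs for {target_url}'
    assert any(target_url in url for url in request_urls), \
        f'No request to {target_url} found'

# Sketch of the data-driven loop (requires the driver from the main script):
# for url in TARGET_URLS:
#     driver.get(url)
#     ... wait for load, collect network_requests from driver.get_log('performance') ...
#     verify_requests(url, network_requests)

# Demo with hand-made data standing in for captured logs:
verify_requests('https://example.com',
                ['https://example.com/', 'https://example.com/favicon.ico'])
```

Note that get_log('performance') drains the buffer each time it is called, so collect and verify the logs for each URL before navigating to the next one.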
