Postman · testing · ~15 mins

Monitor scheduling in Postman - Build an Automation Script

Verify Postman Monitor Scheduling and Execution
Preconditions (3)
Step 1: Navigate to the Postman Monitors tab
Step 2: Click 'Create a monitor' button
Step 3: Select the existing collection to monitor
Step 4: Set the monitor schedule to run every day at 9:00 AM
Step 5: Save the monitor
Step 6: Verify the monitor appears in the list with correct schedule
Step 7: Wait for the scheduled time or trigger the monitor manually
Step 8: Check the monitor run results for success or failure
✅ Expected Result: The monitor is created with the specified schedule, runs at the scheduled time, and the run results show the collection executed successfully.
Automation Requirements - Postman Collection Runner with Newman and Postman API
Assertions Needed:
Monitor creation response status is 201
Monitor schedule matches the requested schedule
Monitor run status is 'completed' and successful
Run results contain expected request responses
Best Practices:
Use Postman API to create and manage monitors programmatically
Use Newman CLI to run collections and validate results
Use explicit assertions on API responses and monitor run data
Handle asynchronous monitor run status checks with retries
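The retry advice above can be factored into a small, reusable polling helper. This is a sketch: `poll_until` and its parameters are illustrative names, not part of any Postman SDK.

```python
import time

def poll_until(fetch_status, done_states, attempts=10, delay=5.0):
    """Call fetch_status() until it returns a value in done_states.

    Returns the terminal status, or raises TimeoutError if no terminal
    state is reached within the allowed number of attempts.
    """
    for attempt in range(attempts):
        status = fetch_status()
        if status in done_states:
            return status
        time.sleep(delay)  # back off before the next check
    raise TimeoutError(
        f"status never reached {done_states} after {attempts} attempts"
    )
```

Separating the polling loop from the API call keeps the retry policy (attempt count, delay) in one place and makes it easy to unit-test without hitting the network.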
Automated Solution
Python
import requests
import time
import json

# Constants
API_KEY = 'YOUR_POSTMAN_API_KEY'
WORKSPACE_ID = 'YOUR_WORKSPACE_ID'
COLLECTION_ID = 'YOUR_COLLECTION_ID'

headers = {
    'X-Api-Key': API_KEY,
    'Content-Type': 'application/json'
}

# Step 1: Create a monitor with a daily 9:00 AM schedule.
# The Postman Monitors API expresses schedules as cron strings:
# "0 9 * * *" fires every day at 09:00 in the given timezone.
monitor_data = {
    "name": "Daily Monitor Test",
    "collection": COLLECTION_ID,
    "environment": None,
    "schedule": {
        "cron": "0 9 * * *",
        "timezone": "UTC"
    },
    "workspace": WORKSPACE_ID
}

create_monitor_response = requests.post(
    'https://api.getpostman.com/monitors',
    headers=headers,
    json={"monitor": monitor_data}  # requests serializes the body to JSON
)

assert create_monitor_response.status_code == 201, f"Monitor creation failed: {create_monitor_response.text}"

monitor_id = create_monitor_response.json()['monitor']['id']

# Step 2: Fetch the monitor back and verify its schedule matches the request
get_monitor_response = requests.get(
    f'https://api.getpostman.com/monitors/{monitor_id}',
    headers=headers
)
assert get_monitor_response.status_code == 200, f"Monitor fetch failed: {get_monitor_response.text}"
created_schedule = get_monitor_response.json()['monitor']['schedule']
assert created_schedule['cron'] == '0 9 * * *', f"Unexpected schedule: {created_schedule}"

# Step 3: Trigger a manual run
run_response = requests.post(
    f'https://api.getpostman.com/monitors/{monitor_id}/run',
    headers=headers
)
assert run_response.ok, f"Monitor run trigger failed: {run_response.text}"

run_id = run_response.json()['run']['id']

# Step 4: Poll for run completion
for _ in range(10):
    time.sleep(5)  # wait 5 seconds before checking
    status_response = requests.get(
        f'https://api.getpostman.com/monitors/{monitor_id}/runs/{run_id}',
        headers=headers
    )
    assert status_response.status_code == 200
    run_status = status_response.json()['run']['status']
    # Break on any terminal state so a failed run doesn't spin until timeout;
    # Step 5 then asserts which terminal state was reached.
    if run_status in ('completed', 'failed', 'error'):
        break
else:
    assert False, 'Monitor run did not reach a terminal state in the expected time'

# Step 5: Verify run results
run_data = status_response.json()['run']
assert run_data['status'] == 'completed'
assert run_data['summary']['failed'] == 0, f"Some requests failed: {run_data['summary']}"

print('Monitor scheduling and run test passed successfully.')

This script uses the Postman API to automate monitor scheduling and execution verification.

First, it creates a monitor with a daily schedule at 9:00 AM and asserts the creation response is successful (status 201).

Then, it verifies the schedule details in the response to ensure correctness.

Next, it triggers a manual run of the monitor and checks the run creation response.

It polls the run status every 5 seconds up to 10 times to wait for completion.

Finally, it asserts the run completed successfully with no failed requests.

This approach uses explicit assertions and retries to handle asynchronous monitor runs, following best practices for API test automation.
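One piece of test hygiene worth adding: delete the test monitor afterwards, so its daily schedule doesn't keep firing against your workspace. A minimal sketch, reusing the `API_KEY` and `monitor_id` names from the script above (`delete_monitor` is an illustrative helper, not a Postman SDK function):

```python
import requests

def delete_monitor(monitor_id, api_key):
    """Remove a monitor so its schedule stops firing after the test."""
    response = requests.delete(
        f'https://api.getpostman.com/monitors/{monitor_id}',
        headers={'X-Api-Key': api_key},
    )
    response.raise_for_status()  # surface auth or not-found errors loudly
    return response.json()
```

Running cleanup in a `finally` block (or a pytest fixture teardown) ensures it happens even when an earlier assertion fails.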

Common Mistakes - 4 Pitfalls
Hardcoding API keys directly in the script
Not checking the monitor creation response status before proceeding
Using fixed sleep times without polling for run completion
Ignoring failed requests in monitor run results
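The first pitfall above is easy to avoid by reading the key from the environment instead of hardcoding it. A minimal sketch (`POSTMAN_API_KEY` is an assumed variable name, choose whatever your CI uses):

```python
import os

def load_api_key(env_var='POSTMAN_API_KEY'):
    """Read the API key from the environment instead of the source file."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f'Set {env_var} before running the test script')
    return key
```

Failing fast with a clear message beats a cryptic 401 from the API halfway through the test.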
Bonus Challenge

Now add data-driven testing to create monitors with 3 different schedules (daily at 9:00, hourly at 15 minutes past, weekly on Monday at 10:00).
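As a starting point, the three requested schedules can be expressed as cron strings (the format Postman's Monitors API uses), and monitor creation can be parametrized over them. This is a sketch: `SCHEDULES` and `build_monitor_payload` are illustrative names, and UTC is an assumed timezone.

```python
# Cron expressions for the three requested schedules.
SCHEDULES = [
    ("daily-9am",      "0 9 * * *"),   # every day at 09:00
    ("hourly-past-15", "15 * * * *"),  # every hour at 15 minutes past
    ("weekly-mon-10",  "0 10 * * 1"),  # Mondays at 10:00
]

def build_monitor_payload(name, cron, collection_id, timezone="UTC"):
    """Assemble the request body for one data-driven monitor."""
    return {
        "monitor": {
            "name": name,
            "collection": collection_id,
            "schedule": {"cron": cron, "timezone": timezone},
        }
    }
```

Looping over `SCHEDULES` and asserting on each creation response turns the single-monitor script into a data-driven test.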
