
Reporter options (CLI, HTML, JUnit) in Postman - Build an Automation Script

Verify Postman Newman CLI, HTML, and JUnit reporters generate correct output
Step 1: Run Newman with CLI reporter on 'sample_collection.json'
Step 2: Verify the CLI output shows test run summary with pass/fail counts
Step 3: Run Newman with HTML reporter and specify output file 'report.html'
Step 4: Verify 'report.html' file is created and contains HTML report content
Step 5: Run Newman with JUnit reporter and specify output file 'junit-report.xml'
Step 6: Verify 'junit-report.xml' file is created and contains valid XML with test results
✅ Expected Result: Newman runs the collection and produces correct CLI output, HTML report file, and JUnit XML report file with accurate test results
Automation Requirements - Node.js with child_process to run Newman CLI
Assertions Needed:
CLI output contains 'iterations', 'requests', 'tests', 'assertions', 'failed' counts
HTML report file exists and contains '<html>' tag
JUnit XML report file exists and contains '<testsuite>' tag
Best Practices:
Use child_process.exec or spawn to run Newman commands
Check file existence and read content for validation
Use assertions to verify output correctness
Clean up generated report files after test
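Since the best practices above mention `child_process.spawn` as an alternative to `exec`, here is a hedged sketch of the spawn-based variant, which streams stdout instead of buffering it in one callback. The demo uses `node -e` as a stand-in command so the pattern is runnable without Newman installed; for the real test you would call `runCommand('newman', ['run', collectionFile])`.

```javascript
import { spawn } from 'child_process';

// Run a command, collect its stdout, and resolve on a zero exit code.
function runCommand(cmd, args) {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args);
    let stdout = '';
    child.stdout.on('data', chunk => { stdout += chunk; });
    child.on('error', reject);
    child.on('close', code => {
      if (code !== 0) return reject(new Error(`${cmd} exited with code ${code}`));
      resolve(stdout);
    });
  });
}

// Demo with `node -e` as a stand-in; swap in 'newman' for the real run.
runCommand('node', ['-e', "console.log('stand-in run')"])
  .then(out => console.log(out.trim()));
```

`spawn` avoids `exec`'s default output buffer limit, which matters for large collections with verbose CLI summaries.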
Automated Solution
import { exec } from 'child_process';
import { promises as fs } from 'fs';
import assert from 'assert';

const collectionFile = 'sample_collection.json';
const htmlReport = 'report.html';
const junitReport = 'junit-report.xml';

async function runNewmanCLI() {
  return new Promise((resolve, reject) => {
    exec(`newman run ${collectionFile}`, (error, stdout) => {
      // Newman exits non-zero when collection assertions fail, which makes
      // exec report an error; the run summary is still on stdout, so only
      // reject when there is no output to inspect.
      if (error && !stdout) return reject(error);
      resolve(stdout);
    });
  });
}

async function runNewmanHTML() {
  return new Promise((resolve, reject) => {
    exec(`newman run ${collectionFile} -r html --reporter-html-export ${htmlReport}`, (error) => {
      if (error) return reject(error);
      resolve();
    });
  });
}

async function runNewmanJUnit() {
  return new Promise((resolve, reject) => {
    exec(`newman run ${collectionFile} -r junit --reporter-junit-export ${junitReport}`, (error) => {
      if (error) return reject(error);
      resolve();
    });
  });
}

async function testReporters() {
  // Run CLI reporter and check the summary output
  const cliOutput = await runNewmanCLI();
  assert(cliOutput.includes('iterations'), 'CLI output missing iterations count');
  assert(cliOutput.includes('requests'), 'CLI output missing requests count');
  assert(cliOutput.includes('tests'), 'CLI output missing tests count');
  assert(cliOutput.includes('assertions'), 'CLI output missing assertions count');

  try {
    // Run HTML reporter and verify the exported file
    await runNewmanHTML();
    const htmlContent = await fs.readFile(htmlReport, 'utf-8');
    assert(htmlContent.includes('<html'), 'HTML report missing <html> tag');

    // Run JUnit reporter and verify the exported file
    await runNewmanJUnit();
    const junitContent = await fs.readFile(junitReport, 'utf-8');
    assert(junitContent.includes('<testsuite'), 'JUnit report missing <testsuite> tag');
  } finally {
    // Clean up generated report files even if an assertion above fails
    await fs.unlink(htmlReport).catch(() => {});
    await fs.unlink(junitReport).catch(() => {});
  }

  console.log('All reporter tests passed');
}

// Execute test

testReporters().catch(err => {
  console.error('Test failed:', err);
  process.exit(1);
});

This script uses Node.js to automate running Newman with different reporters.

First, it runs Newman with the default CLI reporter and checks the output text for key words like 'iterations' and 'tests' to confirm the summary is shown.
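Beyond keyword checks, the counts themselves can be pulled out of the CLI summary table. The sketch below assumes the box-drawing table layout Newman prints by default; the sample string is a hand-written approximation of that layout, not captured real output.

```javascript
// Extract executed/failed counts from a Newman-style CLI summary table.
function parseSummary(cliOutput) {
  const counts = {};
  const rowPattern = /(\w+)\s*│\s*(\d+)\s*│\s*(\d+)/g;
  for (const m of cliOutput.matchAll(rowPattern)) {
    counts[m[1]] = { executed: Number(m[2]), failed: Number(m[3]) };
  }
  return counts;
}

// Illustrative sample resembling Newman's summary rows.
const sample = [
  '│              iterations │                 1 │                0 │',
  '│                requests │                 2 │                0 │',
  '│              assertions │                 4 │                1 │',
].join('\n');

console.log(parseSummary(sample).assertions.failed); // → 1
```

Parsed counts let the test assert exact numbers (e.g. zero failures) rather than just the presence of labels.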

Next, it runs Newman with the HTML reporter and exports the report to 'report.html'. It reads the file and asserts it contains the <html> tag, confirming a valid HTML report.

Then, it runs Newman with the JUnit reporter and exports to 'junit-report.xml'. It reads this file and asserts it contains the <testsuite> tag, confirming a valid XML report.
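The JUnit check can also go a step beyond the `<testsuite` substring by reading the `tests` and `failures` attributes, which the common JUnit XML schema puts on the `<testsuite>` element. A hedged sketch, using a hand-written sample rather than real Newman output:

```javascript
// Pull tests/failures attributes from the first <testsuite> element.
function junitStats(xml) {
  const m = xml.match(/<testsuite\b[^>]*\btests="(\d+)"[^>]*\bfailures="(\d+)"/);
  if (!m) throw new Error('No <testsuite> element with tests/failures found');
  return { tests: Number(m[1]), failures: Number(m[2]) };
}

const sampleXml =
  '<testsuites><testsuite name="Sample" tests="4" failures="1">' +
  '</testsuite></testsuites>';

console.log(junitStats(sampleXml)); // → { tests: 4, failures: 1 }
```

For production use, a real XML parser (e.g. a library like `fast-xml-parser`) is more robust than a regex, but the regex keeps this sketch dependency-free.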

Finally, it cleans up the generated report files to keep the environment clean.

Assertions ensure the test fails if any expected output is missing, giving clear feedback.

Common Mistakes - 3 Pitfalls
Not waiting for Newman process to finish before checking output files
Hardcoding absolute file paths without considering environment
Checking only file existence without validating content
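The second pitfall (hardcoded paths) can be addressed by resolving report paths against a configurable base directory. A minimal sketch, where the `REPORT_DIR` environment variable name is illustrative:

```javascript
import path from 'path';
import os from 'os';

// Resolve report paths relative to a configurable base directory instead of
// hardcoding absolute paths; fall back to the OS temp directory.
const reportDir = process.env.REPORT_DIR || os.tmpdir();
const htmlReportPath = path.join(reportDir, 'report.html');
const junitReportPath = path.join(reportDir, 'junit-report.xml');

console.log(htmlReportPath);
```

Writing reports to a temp directory also makes cleanup safer, since nothing in the project tree is touched.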
Bonus Challenge

Now add data-driven testing to run the Newman reporters on 3 different Postman collections
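One possible starting shape for the data-driven version: map each collection to its three Newman commands, then run them in a loop. The collection file names below are placeholders for your own three Postman collections.

```javascript
// Placeholder collection names for the data-driven run.
const collections = [
  'users_collection.json',
  'orders_collection.json',
  'auth_collection.json',
];

// Build the CLI, HTML, and JUnit reporter commands for one collection,
// deriving per-collection report file names from the collection name.
function newmanCommands(collection) {
  const base = collection.replace(/\.json$/, '');
  return [
    `newman run ${collection}`,
    `newman run ${collection} -r html --reporter-html-export ${base}-report.html`,
    `newman run ${collection} -r junit --reporter-junit-export ${base}-report.xml`,
  ];
}

for (const collection of collections) {
  console.log(newmanCommands(collection).join('\n'));
}
```

Each command set could then be passed to the same exec-based runner used above, asserting on each collection's reports in turn.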
