CppProgramBeginner · 2 min read

C++ Program to Remove Duplicates from Array

You can remove duplicates from an array in C++ by first sorting it, so that equal values become adjacent, and then looping through the sorted array and copying each element to a new array only if it differs from the previous one.
📋

Examples

Input: [1, 2, 2, 3, 4, 4, 5]
Output: [1, 2, 3, 4, 5]

Input: [5, 5, 5, 5]
Output: [5]

Input: []
Output: []
🧠

How to Think About It

To remove duplicates from an array, first sort the array so that duplicate values come together. Then, go through the array and keep only the first occurrence of each value by comparing each element with the one before it. This way, you build a new array with unique elements.
📐

Algorithm

1. Sort the input array.
2. Create an empty array or list to store unique elements.
3. Add the first element of the sorted array to the unique list.
4. Loop through the sorted array starting from the second element.
5. For each element, compare it with the previous element; if different, add it to the unique list.
6. Return or print the unique list.
💻

Code

cpp
#include <iostream>
#include <algorithm>
#include <vector>

int main() {
    int arr[] = {1, 2, 2, 3, 4, 4, 5};
    int n = sizeof(arr) / sizeof(arr[0]);

    if (n == 0) {
        std::cout << "Array after removing duplicates: " << std::endl;
        return 0;
    }

    std::sort(arr, arr + n);

    std::vector<int> unique;
    unique.push_back(arr[0]);

    for (int i = 1; i < n; i++) {
        if (arr[i] != arr[i - 1]) {
            unique.push_back(arr[i]);
        }
    }

    std::cout << "Array after removing duplicates: ";
    for (int val : unique) {
        std::cout << val << " ";
    }
    std::cout << std::endl;

    return 0;
}
Output
Array after removing duplicates: 1 2 3 4 5
🔍

Dry Run

Let's trace the array [1, 2, 2, 3, 4, 4, 5] through the code.

1. Sort the array

Array after sorting: [1, 2, 2, 3, 4, 4, 5]

2. Initialize unique list

unique = [1]

3. Loop through array from the second element

Compare each element with the previous one.

4. Check elements and add unique ones

i=1: 2 != 1, add 2; unique = [1, 2]
i=2: 2 == 2, skip
i=3: 3 != 2, add 3; unique = [1, 2, 3]
i=4: 4 != 3, add 4; unique = [1, 2, 3, 4]
i=5: 4 == 4, skip
i=6: 5 != 4, add 5; unique = [1, 2, 3, 4, 5]

| Index | Current Element | Previous Element | Action | Unique List |
|-------|-----------------|------------------|--------|-------------------|
| 1     | 2               | 1                | Add 2  | [1, 2]            |
| 2     | 2               | 2                | Skip   | [1, 2]            |
| 3     | 3               | 2                | Add 3  | [1, 2, 3]         |
| 4     | 4               | 3                | Add 4  | [1, 2, 3, 4]      |
| 5     | 4               | 4                | Skip   | [1, 2, 3, 4]      |
| 6     | 5               | 4                | Add 5  | [1, 2, 3, 4, 5]   |
💡

Why This Works

Step 1: Sorting the array

Sorting groups duplicate elements together, making it easy to identify and remove duplicates by comparing neighbors.

Step 2: Using a vector for unique elements

A vector stores the unique elements dynamically as we find them, avoiding fixed size limitations.

Step 3: Comparing adjacent elements

By comparing each element with the previous one, we only add elements that are different, effectively removing duplicates.

🔄

Alternative Approaches

Using std::set
cpp
#include <iostream>
#include <set>

int main() {
    int arr[] = {1, 2, 2, 3, 4, 4, 5};
    int n = sizeof(arr) / sizeof(arr[0]);

    std::set<int> unique(arr, arr + n);

    std::cout << "Array after removing duplicates: ";
    for (int val : unique) {
        std::cout << val << " ";
    }
    std::cout << std::endl;

    return 0;
}
Using std::set automatically removes duplicates and sorts elements, but it may be slower for large data due to tree balancing.
Using std::unordered_set to preserve insertion order
cpp
#include <iostream>
#include <unordered_set>
#include <vector>

int main() {
    int arr[] = {1, 2, 2, 3, 4, 4, 5};
    int n = sizeof(arr) / sizeof(arr[0]);

    std::unordered_set<int> seen;
    std::vector<int> unique;

    for (int i = 0; i < n; i++) {
        if (seen.find(arr[i]) == seen.end()) {
            unique.push_back(arr[i]);
            seen.insert(arr[i]);
        }
    }

    std::cout << "Array after removing duplicates: ";
    for (int val : unique) {
        std::cout << val << " ";
    }
    std::cout << std::endl;

    return 0;
}
This method keeps the original order but uses extra memory for the hash set.

Complexity: O(n log n) time, O(n) space

Time Complexity

Sorting the array takes O(n log n) time, and the single pass to remove duplicates takes O(n), so overall O(n log n).

Space Complexity

We use extra space for the vector to store unique elements, which can be up to O(n) in the worst case.

Which Approach is Fastest?

Using std::set also takes O(n log n) but may be slower due to tree overhead; unordered_set offers average O(n) but uses more memory and does not sort.

| Approach           | Time         | Space | Best For                                 |
|--------------------|--------------|-------|------------------------------------------|
| Sorting + vector   | O(n log n)   | O(n)  | Sorted unique output, simple code        |
| std::set           | O(n log n)   | O(n)  | Automatic sorting and uniqueness         |
| std::unordered_set | O(n) average | O(n)  | Fast uniqueness, original order preserved |
💡
Sort the array first to easily identify duplicates by comparing neighbors.
⚠️
If you skip the sort, the neighbor-comparison approach will miss duplicates that are not adjacent in the array.