PowerShell Script to Remove Duplicates from Array
Use $uniqueArray = $array | Select-Object -Unique to remove duplicates from an array in PowerShell.
Examples
Input: [1, 2, 2, 3]
Output: [1, 2, 3]
Input: ['apple', 'banana', 'apple', 'orange']
Output: ['apple', 'banana', 'orange']
Input: []
Output: []
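As a quick sketch, the second example above can be reproduced in a single pipeline (variable name $fruits is illustrative):

```powershell
# Strings work the same way as numbers; only the first
# occurrence of each value is kept, in original order
$fruits = @('apple', 'banana', 'apple', 'orange')
$fruits | Select-Object -Unique   # apple, banana, orange
```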
How to Think About It
To remove duplicates, think of filtering the array so that only the first occurrence of each item remains. PowerShell's
Select-Object -Unique cmdlet does this by scanning the array and keeping unique values in order.
Algorithm
1. Get the input array.
2. Scan each element in order.
3. Keep only the first occurrence of each element.
4. Return the filtered array without duplicates.
Code
```powershell
$array = @(1, 2, 2, 3, 4, 4, 5)
$uniqueArray = $array | Select-Object -Unique
Write-Output $uniqueArray
```
Output
```
1
2
3
4
5
```
Dry Run
Let's trace the array @(1, 2, 2, 3) through the code.
1. Input array: 1, 2, 2, 3
2. Apply Select-Object -Unique: scans the array, keeps the first 1, keeps the first 2, skips the second 2, keeps 3
3. Output unique array: 1, 2, 3
| Element | Action | Resulting Array |
|---|---|---|
| 1 | Keep | [1] |
| 2 | Keep | [1, 2] |
| 2 | Skip (duplicate) | [1, 2] |
| 3 | Keep | [1, 2, 3] |
Why This Works
Step 1: Pipeline usage
The array is sent through the pipeline to Select-Object -Unique, which processes each item.
Step 2: Unique filtering
Select-Object -Unique keeps only the first occurrence of each item, removing duplicates.
Step 3: Output
The filtered array with unique elements is output and can be stored or displayed.
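One caveat the steps above do not cover: Select-Object -Unique compares strings case-sensitively by default, so differently cased duplicates are all kept. A sketch:

```powershell
# Case-sensitive by default: 'Apple' and 'apple' are treated as different values
@('Apple', 'apple', 'Banana') | Select-Object -Unique   # Apple, apple, Banana

# Recent PowerShell versions (7.4+) add a -CaseInsensitive switch;
# on older versions, Sort-Object -Unique folds case but also reorders the array
@('Apple', 'apple', 'Banana') | Sort-Object -Unique     # one 'apple' variant, then Banana
```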
Alternative Approaches
Using .NET HashSet
```powershell
$array = @(1, 2, 2, 3, 4, 4, 5)
$hashSet = New-Object System.Collections.Generic.HashSet[int]
foreach ($item in $array) {
    $hashSet.Add($item) | Out-Null  # Add returns a bool; discard it
}
$uniqueArray = @($hashSet)  # enumerate the set into an array
Write-Output $uniqueArray
```
This method uses a HashSet for uniqueness and is typically faster for large arrays, but the set's enumeration order is not guaranteed. Note that HashSet does not expose a ToArray() method callable from PowerShell, so wrapping the set in @() is used to produce an array.
Using Group-Object
```powershell
$array = @(1, 2, 2, 3, 4, 4, 5)
$uniqueArray = $array | Group-Object | ForEach-Object { $_.Group[0] }
Write-Output $uniqueArray
```
Groups equal elements and takes the first member of each group; this preserves order and the original element type. (Using $_.Name instead would return the group keys as strings, silently converting numbers to text.) It is slightly more complex than Select-Object -Unique.
Complexity: O(n) time, O(n) space
Time Complexity
Each element is scanned once, so time grows roughly linearly with array size. (Select-Object -Unique's internal comparisons can degrade on very large inputs, which is why the HashSet approach, with its constant-time lookups, tends to win there.)
Space Complexity
Extra space is needed to store unique elements, proportional to the number of unique items.
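To compare the approaches on your own data, a rough timing harness like the following can help. This is illustrative only; absolute numbers depend on your machine and PowerShell version.

```powershell
# Build a large array with many duplicates
$data = 1..100000 | ForEach-Object { $_ % 1000 }

# Time Select-Object -Unique
$t1 = Measure-Command { $data | Select-Object -Unique | Out-Null }

# Time the HashSet approach
$t2 = Measure-Command {
    $set = [System.Collections.Generic.HashSet[int]]::new()
    foreach ($item in $data) { $null = $set.Add($item) }
}

"Select-Object -Unique: $($t1.TotalMilliseconds) ms"
"HashSet:               $($t2.TotalMilliseconds) ms"
```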
Which Approach is Fastest?
Select-Object -Unique is simple and fast enough for most scripts; HashSet is faster for very large arrays but does not guarantee order.
| Approach | Time | Space | Best For |
|---|---|---|---|
| Select-Object -Unique | O(n) | O(n) | Simple scripts, preserves order |
| HashSet | O(n) | O(n) | Large arrays, performance critical, order not important |
| Group-Object | O(n) | O(n) | When grouping is also needed, preserves order |
Use Select-Object -Unique for a simple and readable way to remove duplicates in PowerShell. Beginners often reinvent this by manually looping and comparing items; prefer the built-in cmdlet unless you need the raw performance of a HashSet.