Removing Duplicates from JavaScript Arrays Guide
Learn every method to remove duplicates from JavaScript arrays. Covers Set, filter with indexOf, reduce, Map for objects, performance comparisons, and real-world deduplication patterns with practical code examples.
Duplicate values in arrays cause incorrect counts, redundant API calls, and broken UI displays. Removing them is a task you will face repeatedly when working with user input, merged datasets, API responses, and search results. JavaScript provides multiple approaches, from the one-line Set method to custom logic for complex objects.
This guide covers every deduplication technique, explains when to use each one, compares performance, and shows real-world patterns for handling arrays of primitive values and arrays of objects.
Method 1: Set (Recommended for Primitives)
A Set stores only unique values. Converting an array to a Set and back to an array removes all duplicates in one line:
const numbers = [1, 2, 2, 3, 3, 3, 4, 4, 5];
const unique = [...new Set(numbers)];
console.log(unique); // [1, 2, 3, 4, 5]
// Works with strings too
const tags = ["react", "javascript", "react", "node", "javascript"];
const uniqueTags = [...new Set(tags)];
console.log(uniqueTags); // ["react", "javascript", "node"]

You can also use Array.from() instead of the spread operator:
const unique = Array.from(new Set(numbers));

| Aspect | Set Approach |
|---|---|
| Syntax | [...new Set(arr)] |
| Preserves order | Yes (insertion order) |
| Works with primitives | Yes (numbers, strings, booleans) |
| Works with objects | No (compares by reference, not value) |
| Performance | Excellent, O(n) |
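A related trick: Array.from accepts an optional mapping function as its second argument, so you can deduplicate and transform in a single pass instead of chaining a separate map call:

```javascript
const numbers = [1, 2, 2, 3, 3, 3, 4, 4, 5];

// Array.from(iterable, mapFn) deduplicates via the Set,
// then applies the map function to each unique value
const doubled = Array.from(new Set(numbers), (n) => n * 2);

console.log(doubled); // [2, 4, 6, 8, 10]
```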
Method 2: filter() with indexOf()
The filter method keeps only the first occurrence of each value by comparing each element's index with its first index:
const colors = ["red", "blue", "green", "red", "blue", "yellow"];
const unique = colors.filter((color, index) => {
return colors.indexOf(color) === index;
});
console.log(unique); // ["red", "blue", "green", "yellow"]

How It Works
For each element, indexOf() returns the index of its first occurrence. If the current index matches, it is the first occurrence and passes the filter. If the current index is higher, it is a duplicate and gets filtered out.
// "red" at index 0: indexOf("red") = 0 === 0 -> keep
// "blue" at index 1: indexOf("blue") = 1 === 1 -> keep
// "green" at index 2: indexOf("green") = 2 === 2 -> keep
// "red" at index 3: indexOf("red") = 0 !== 3 -> remove (duplicate)
// "blue" at index 4: indexOf("blue") = 1 !== 4 -> remove (duplicate)
// "yellow" at index 5: indexOf("yellow") = 5 === 5 -> keep

Method 3: reduce() with Accumulator
Use reduce to build a new array, adding each element only if it is not already present:
const words = ["hello", "world", "hello", "javascript", "world"];
const unique = words.reduce((acc, word) => {
if (!acc.includes(word)) {
acc.push(word);
}
return acc;
}, []);
console.log(unique); // ["hello", "world", "javascript"]

Reduce with Case-Insensitive Deduplication
const emails = [
"alice@example.com",
"Alice@Example.com",
"BOB@example.com",
"bob@example.com"
];
const uniqueEmails = emails.reduce((acc, email) => {
const normalized = email.toLowerCase();
if (!acc.some((e) => e.toLowerCase() === normalized)) {
acc.push(email); // Keep the first occurrence
}
return acc;
}, []);
console.log(uniqueEmails);
// ["alice@example.com", "BOB@example.com"]

Method 4: Map for Objects (Deduplicate by Key)
For arrays of objects, Set will not work because it compares references, not property values. Use a Map keyed by the unique property:
const users = [
{ id: 1, name: "Alice", role: "admin" },
{ id: 2, name: "Bob", role: "editor" },
{ id: 1, name: "Alice", role: "admin" },
{ id: 3, name: "Charlie", role: "viewer" },
{ id: 2, name: "Bob", role: "editor" }
];
const uniqueUsers = [
...new Map(users.map((user) => [user.id, user])).values()
];
console.log(uniqueUsers);
// [
// { id: 1, name: "Alice", role: "admin" },
// { id: 2, name: "Bob", role: "editor" },
// { id: 3, name: "Charlie", role: "viewer" }
// ]

How It Works
- users.map(user => [user.id, user]) creates key-value pairs: [[1, {id:1,...}], [2, {id:2,...}], ...]
- new Map(...) builds a Map; duplicate keys overwrite, keeping the last occurrence
- .values() extracts the unique objects
- [...] converts the Map iterator back to an array
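This Map pattern generalizes to any key. A small helper sketch (the name dedupeBy is illustrative, not a standard API) accepts a key-selector function:

```javascript
// Illustrative helper: deduplicate by whatever key keyFn returns
function dedupeBy(array, keyFn) {
  return [...new Map(array.map((item) => [keyFn(item), item])).values()];
}

const users = [
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob" },
  { id: 1, name: "Alice" }
];

console.log(dedupeBy(users, (u) => u.id).length); // 2
```

As with the inline version, duplicate keys overwrite earlier entries, so the last occurrence wins.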
Keeping the First Occurrence Instead
// Copy the array before reversing so the original is not mutated
// (calling users.reverse() directly would reverse users in place)
const uniqueFirst = [
  ...new Map([...users].reverse().map((user) => [user.id, user])).values()
].reverse();

Using reduce for Object Deduplication
const products = [
{ sku: "A001", name: "Widget", price: 9.99 },
{ sku: "B002", name: "Gadget", price: 24.99 },
{ sku: "A001", name: "Widget", price: 11.99 },
{ sku: "C003", name: "Gizmo", price: 14.99 }
];
const uniqueProducts = products.reduce((acc, product) => {
const exists = acc.find((p) => p.sku === product.sku);
if (!exists) {
acc.push(product);
}
return acc;
}, []);
console.log(uniqueProducts.length); // 3
console.log(uniqueProducts[0].price); // 9.99 (first occurrence kept)

Method 5: for Loop (Manual Approach)
A manual for loop with explicit tracking works well when you need maximum control:
const input = [5, 3, 8, 3, 1, 5, 8, 2, 1];
const seen = new Set();
const unique = [];
for (let i = 0; i < input.length; i++) {
if (!seen.has(input[i])) {
seen.add(input[i]);
unique.push(input[i]);
}
}
console.log(unique); // [5, 3, 8, 1, 2]

This approach has O(n) performance (same as the Set-spread method) but gives you a place to add custom logic during the deduplication process.
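For example, the same loop can count how many times each value appeared while deduplicating (a sketch; the counting is just one possible piece of custom logic):

```javascript
const input = [5, 3, 8, 3, 1, 5, 8, 2, 1];
const counts = new Map();
const unique = [];

for (const value of input) {
  if (!counts.has(value)) {
    counts.set(value, 1);       // first occurrence: keep it
    unique.push(value);
  } else {
    counts.set(value, counts.get(value) + 1); // duplicate: just count it
  }
}

console.log(unique);        // [5, 3, 8, 1, 2]
console.log(counts.get(3)); // 2
```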
Performance Comparison
| Method | Time Complexity | Space Complexity | Best For |
|---|---|---|---|
| [...new Set(arr)] | O(n) | O(n) | Primitives, simplicity |
| filter + indexOf | O(n^2) | O(n) | Small arrays, readability |
| reduce + includes | O(n^2) | O(n) | Custom logic, transformations |
| Map (by key) | O(n) | O(n) | Arrays of objects |
| for + Set | O(n) | O(n) | Maximum control |
// Benchmark: 100,000 elements with ~50% duplicates
const large = Array.from({ length: 100000 }, () =>
Math.floor(Math.random() * 50000)
);
console.time("Set");
const r1 = [...new Set(large)];
console.timeEnd("Set"); // ~3ms
console.time("filter+indexOf");
const r2 = large.filter((v, i) => large.indexOf(v) === i);
console.timeEnd("filter+indexOf"); // ~2500ms
console.time("for+Set");
const seen = new Set();
const r3 = [];
for (const v of large) {
if (!seen.has(v)) { seen.add(v); r3.push(v); }
}
console.timeEnd("for+Set"); // ~4ms

The O(n^2) methods (filter+indexOf and reduce+includes) become unusable on arrays over 10,000 elements.
Real-World Patterns
Deduplicating Merged API Responses
async function fetchAllUsers() {
const [page1, page2, page3] = await Promise.all([
fetch("/api/users?page=1").then((r) => r.json()),
fetch("/api/users?page=2").then((r) => r.json()),
fetch("/api/users?page=3").then((r) => r.json())
]);
// Merge and deduplicate by ID
const allUsers = [...page1, ...page2, ...page3];
const uniqueUsers = [
...new Map(allUsers.map((u) => [u.id, u])).values()
];
return uniqueUsers;
}

Deduplicating User Tags
const userTags = ["React", "react", "JavaScript", "REACT", "javascript", "Node"];
const normalized = [...new Set(userTags.map((tag) => tag.toLowerCase()))];
console.log(normalized); // ["react", "javascript", "node"]
// If you need to preserve original casing (first occurrence)
const uniqueOriginal = userTags.reduce((acc, tag) => {
if (!acc.some((t) => t.toLowerCase() === tag.toLowerCase())) {
acc.push(tag);
}
return acc;
}, []);
console.log(uniqueOriginal); // ["React", "JavaScript", "Node"]

Deduplicating by Multiple Properties
const orders = [
{ userId: 1, product: "Laptop", date: "2026-01-15" },
{ userId: 1, product: "Laptop", date: "2026-01-15" },
{ userId: 2, product: "Phone", date: "2026-01-16" },
{ userId: 1, product: "Phone", date: "2026-01-17" }
];
// Create a composite key from multiple properties
const uniqueOrders = [
...new Map(
orders.map((o) => [`${o.userId}-${o.product}-${o.date}`, o])
).values()
];
console.log(uniqueOrders.length); // 3

Common Mistakes
Using Set on Object Arrays
const items = [{ id: 1 }, { id: 1 }, { id: 2 }];
// Bug: Set compares references, not values
const unique = [...new Set(items)];
console.log(unique.length); // 3 -- all kept because different references
// Fix: use Map with a key property
const uniqueById = [...new Map(items.map((i) => [i.id, i])).values()];
console.log(uniqueById.length); // 2

Forgetting Case Sensitivity
const tags = ["React", "react", "REACT"];
// Bug: Set treats different cases as different values
const unique = [...new Set(tags)];
console.log(unique); // ["React", "react", "REACT"]
// Fix: normalize before deduplication
const uniqueNormalized = [...new Set(tags.map((t) => t.toLowerCase()))];
console.log(uniqueNormalized); // ["react"]

Best Practices
- Default to Set for primitives. [...new Set(arr)] is the fastest and most readable one-liner.
- Use Map for object deduplication. Key by the unique identifier (ID, SKU, email) for O(n) performance.
- Normalize data before deduplication. Lowercase strings, trim whitespace, and standardize formats before comparing.
- Avoid O(n^2) methods on large arrays. filter+indexOf and reduce+includes should only be used on arrays under 1,000 elements.
- Decide which occurrence to keep. Map keeps the last occurrence by default; reverse a copy of the array first if you need the first.
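The normalization advice above can be sketched as a small helper (the function name is illustrative):

```javascript
// Illustrative normalize-then-dedupe pipeline:
// trim whitespace and lowercase before comparing
function dedupeNormalized(strings) {
  return [...new Set(strings.map((s) => s.trim().toLowerCase()))];
}

const tags = ["  React ", "react", "JavaScript", "javascript  "];
console.log(dedupeNormalized(tags)); // ["react", "javascript"]
```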
Key Insights
- Set is the default choice for primitives: [...new Set(arr)] removes duplicates in O(n) time with one line of code.
- Map handles object arrays efficiently: key by the unique property (id, sku, email) for O(n) deduplication.
- Normalize before comparing: lowercase strings, trim whitespace, and create composite keys for multi-property deduplication.
- Avoid O(n^2) methods at scale: filter+indexOf and reduce+includes are fine for small arrays but collapse on 10,000+ elements.
- Set preserves insertion order: the first occurrence of each value is kept, and subsequent duplicates are silently ignored.
Frequently Asked Questions
What is the fastest way to remove duplicates from a JavaScript array?
For primitive values, [...new Set(arr)] is both the fastest (O(n)) and the shortest option.
Does Set preserve the original order?
Yes. Set stores values in insertion order, so the first occurrence of each value keeps its position and later duplicates are dropped.
How do I remove duplicates from an array of objects?
Use a Map keyed by a unique property: [...new Map(arr.map((o) => [o.id, o])).values()]. Set will not work because it compares objects by reference.
Can I remove duplicates without creating a new array?
Yes, by mutating the array in place, though the Set-based one-liner is usually simpler and fast enough.
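A minimal in-place sketch (the helper name dedupeInPlace is illustrative): overwrite the array's own slots with unique values, then truncate the tail.

```javascript
// In-place dedupe: writes unique values back into the same array.
// Avoids allocating a second full-length array, at the cost of mutation.
function dedupeInPlace(arr) {
  const seen = new Set();
  let write = 0;
  for (let read = 0; read < arr.length; read++) {
    if (!seen.has(arr[read])) {
      seen.add(arr[read]);
      arr[write++] = arr[read];
    }
  }
  arr.length = write; // truncate the leftover tail
  return arr;
}

const data = [1, 2, 2, 3, 1];
dedupeInPlace(data);
console.log(data); // [1, 2, 3]
```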
How do I handle case-insensitive deduplication?
Normalize first: map each string with toLowerCase() (and trim() if needed) before deduplicating, or compare normalized values while keeping the original casing.
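An O(n) way to keep the first original casing (faster than the reduce-plus-some approach shown earlier, which rescans the accumulator for every element) keys a Map by the lowercased value:

```javascript
const tags = ["React", "react", "JavaScript", "REACT", "javascript", "Node"];

// Key by lowercased value; only unseen keys are stored,
// so the first casing of each tag wins
const byLower = new Map();
for (const tag of tags) {
  const key = tag.toLowerCase();
  if (!byLower.has(key)) byLower.set(key, tag);
}
const unique = [...byLower.values()];

console.log(unique); // ["React", "JavaScript", "Node"]
```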
Conclusion
Array deduplication in JavaScript ranges from one-line Set operations for primitive values to Map-based approaches for complex objects. The Set method handles the majority of cases with excellent performance and readability. For object arrays, Map keyed by a unique identifier provides O(n) deduplication. Always normalize your data (lowercase, trim, standardize) before comparing, and avoid O(n^2) methods like filter+indexOf on arrays larger than a few hundred elements. Choose the approach that best matches your data type and performance requirements.