
Remove Duplicates from Python Lists: 7 Easy Methods You Should Know

Have you ever found yourself staring at a Python list full of duplicate values? Duplicates can be a hassle, whether you’re working with large datasets or just tidying up some data. Fortunately, Python provides several straightforward methods to eliminate duplicates from your lists.

There are various ways to keep your lists tidy and free from duplicates. From basic built-in features to more tailored options, here are seven effective methods to remove duplicate items from a list in Python. Each approach has its advantages, allowing you to choose the one that best fits your use case.

Top 7 Ways to Remove Duplicates from a List in Python

1. Using set()

One of the simplest and most Pythonic ways to remove duplicates from a list is to convert it into a set. Sets in Python automatically eliminate repeated elements, making this process quick and efficient.

Here’s an example:

my_list = [1, 2, 3, 4, 4, 5, 5]
my_list = list(set(my_list))
print(my_list)

Here, set(my_list) removes the duplicates, and list() converts the result back into a list. However, note that sets do not preserve the original order of elements (and every element must be hashable). If preserving the order is important, this method might not be suitable. Nevertheless, the speed and ease of using a set make it hard to beat for fast deduplication.
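Here's a quick illustration of that caveat (the exact output depends on the Python implementation, since set iteration order is arbitrary):

my_list = ["banana", "apple", "cherry", "apple"]
print(list(set(my_list)))  # e.g. ['cherry', 'banana', 'apple'] -- order not guaranteed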

2. With a For Loop and a Temporary List


If you want to maintain the order when removing duplicates, using a for loop is a good approach. This method involves iterating through the list and adding elements to another list if they haven’t been added before.

my_list = [1, 2, 3, 4, 4, 5, 5]
unique_list = []

for item in my_list:
    if item not in unique_list:
        unique_list.append(item)

print(unique_list)

This approach preserves the order of the original list, making it a great option when sequence matters. However, the item not in unique_list check scans the growing list on every iteration, so the method is roughly O(n²) overall and becomes slow for large lists.
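If you want to see the difference yourself, here's a rough benchmark sketch using the standard timeit module (the data size and repeat count are illustrative; on typical machines the list-based check falls far behind as the input grows):

import timeit

setup = "data = list(range(2000)) * 2"

loop_version = """
unique = []
for item in data:
    if item not in unique:  # scans the growing list every time
        unique.append(item)
"""

set_version = """
seen = set()
unique = []
for item in data:
    if item not in seen:  # near-constant-time membership test
        seen.add(item)
        unique.append(item)
"""

print("list check:", timeit.timeit(loop_version, setup=setup, number=5))
print("set check: ", timeit.timeit(set_version, setup=setup, number=5))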

3. Using dict.fromkeys()

A lesser-known but clever way to remove duplicates while preserving the order of elements is using dict.fromkeys(). Since dictionaries in Python 3.7+ preserve insertion order, this method can be a neat trick for deduplication.

my_list = [1, 2, 3, 4, 4, 5, 5]
my_list = list(dict.fromkeys(my_list))
print(my_list)

Here, dict.fromkeys(my_list) creates a dictionary where the list elements are keys (which are unique by nature), and then you convert it back to a list. This method maintains order and is more efficient than a for loop with the in check.
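Unlike the set() approach, this keeps each element in the position of its first occurrence, which you can verify with a small example:

my_list = ["banana", "apple", "cherry", "apple"]
print(list(dict.fromkeys(my_list)))  # ['banana', 'apple', 'cherry'] -- order preserved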

4. Using List Comprehension

List comprehension combined with an auxiliary set offers a more concise, Pythonic approach: a single expression that maintains both performance and order.

my_list = [1, 2, 3, 4, 4, 5, 5]
seen = set()
unique_list = [item for item in my_list if item not in seen and not seen.add(item)]
print(unique_list)

In this example, the seen set keeps track of the elements already encountered. The not seen.add(item) part is a trick that adds an item to the set and returns None, ensuring that only the first occurrence of each element is added to unique_list. This method is efficient because checking membership in a set is much faster than in a list.
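If the not seen.add(item) part feels too cryptic, an equivalent spelling of the same trick (relying on the fact that set.add() always returns None) uses or instead:

my_list = [1, 2, 3, 4, 4, 5, 5]
seen = set()
unique_list = [item for item in my_list if not (item in seen or seen.add(item))]
print(unique_list)  # [1, 2, 3, 4, 5]

Both versions behave identically; pick whichever reads more clearly to you.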

5. Using collections.OrderedDict()

For Python versions older than 3.7, where the regular dict doesn’t preserve order, collections.OrderedDict() was the go-to solution to remove duplicates while maintaining order. Even in modern Python, this method is still valid and ensures elements are ordered.

from collections import OrderedDict

my_list = [1, 2, 3, 4, 4, 5, 5]
my_list = list(OrderedDict.fromkeys(my_list))
print(my_list)

The OrderedDict.fromkeys() method removes duplicates while preserving insertion order. It carries a little more overhead than plain dict.fromkeys(), but it's reliable and works on older versions of Python.

6. Using itertools.groupby()


If the list is already sorted or you can afford to sort it, the groupby() function from the itertools module can be a neat option. groupby() groups consecutive identical elements together, allowing for efficient removal of duplicates in a sorted list.

from itertools import groupby

my_list = [1, 2, 3, 4, 4, 5, 5, 6, 7, 8]
my_list.sort()  # groupby needs a sorted list
my_list = [key for key, _ in groupby(my_list)]
print(my_list)

This method works well if you don't mind sorting your list. It avoids per-element membership checks, but the sort adds an O(n log n) step and changes the original order of the elements.
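One detail worth knowing: without the sort, groupby() only collapses runs of adjacent duplicates. That behavior is occasionally exactly what you want, for example when cleaning up repeated consecutive readings:

from itertools import groupby

readings = [1, 1, 2, 2, 1, 3, 3]
print([key for key, _ in groupby(readings)])  # [1, 2, 1, 3] -- only adjacent repeats removed

Note that 1 appears twice in the result because its two runs were not adjacent.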

7. Using a Custom Function with a Set

Finally, for more control over the deduplication process, you can write a custom function that uses a set to track seen elements while iterating through the list. This method combines clarity, flexibility, and efficiency.

def remove_duplicates(my_list):
    seen = set()
    result = []
    for item in my_list:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

my_list = [1, 2, 3, 4, 4, 5, 5, 6, 7]
my_list = remove_duplicates(my_list)
print(my_list)

This highly readable custom function can be modified for more complex deduplication requirements. It also preserves the order of the elements in the original list, making it a versatile choice for various situations.
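As one example of such a modification, here's a sketch with a hypothetical key parameter (not part of the original function) that deduplicates by a derived value, such as comparing strings case-insensitively:

def remove_duplicates_by(my_list, key=lambda x: x):
    seen = set()
    result = []
    for item in my_list:
        marker = key(item)  # deduplicate on the derived value, keep the original item
        if marker not in seen:
            seen.add(marker)
            result.append(item)
    return result

names = ["Alice", "alice", "Bob", "BOB", "Carol"]
print(remove_duplicates_by(names, key=str.lower))  # ['Alice', 'Bob', 'Carol']

The first occurrence of each key wins, so "Alice" and "Bob" keep their original capitalization.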

Conclusion

Removing duplicates from a list in Python doesn’t have to be a complex task. With these seven methods, you can choose the best option for your situation. Whether you need simplicity, performance, or to maintain the order of your list, Python offers multiple efficient and easy-to-understand solutions. From converting to sets to using custom functions, there’s a method for every type of data handling. Depending on your needs, you can easily scale these methods to larger lists or more complex data structures. Whichever method you choose, you’ll be able to remove duplicates efficiently and get back to writing clean, readable code!
