Detect a Cycle in a Linked List with Floyd's Algorithm (DSA in Python) - Time & Space Complexity
We want to understand how the time needed to detect a cycle in a linked list grows as the list gets longer.
Specifically, how many steps does Floyd's Algorithm take to find a cycle or confirm there is none?
Analyze the time complexity of the following code snippet.
```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    slow = head
    fast = head
    while fast and fast.next:
        slow = slow.next           # moves one node per iteration
        fast = fast.next.next      # moves two nodes per iteration
        if slow == fast:           # the pointers can only meet inside a cycle
            return True
    return False                   # fast reached the end, so there is no cycle
```
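To sanity-check the function, here is a small usage sketch (the `Node` class and `has_cycle` are repeated so the snippet runs standalone):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
    return False

# Build 1 -> 2 -> 3 -> 4 (no cycle yet).
a, b, c, d = Node(1), Node(2), Node(3), Node(4)
a.next, b.next, c.next = b, c, d
print(has_cycle(a))  # False: the list terminates at node 4

# Point 4 back to 2, creating the cycle 2 -> 3 -> 4 -> 2.
d.next = b
print(has_cycle(a))  # True: the pointers eventually meet inside the cycle
```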
This code uses two pointers moving at different speeds to detect if a linked list has a cycle.
Identify the repeated work: loops, recursion, or traversals that run once per element.
- Primary operation: The while loop that moves the slow pointer one step and the fast pointer two steps each time.
- How many times: The loop runs on the order of n iterations. Without a cycle, the fast pointer reaches the end in about n/2 steps; with a cycle, the pointers meet within at most n steps.
As the linked list grows, the number of steps grows roughly in a straight line with the number of nodes.
| Input Size (n) | Approx. Operations |
|---|---|
| 10 | About 10 steps |
| 100 | About 100 steps |
| 1000 | About 1000 steps |
Pattern observation: The steps increase directly with the number of nodes, so doubling nodes roughly doubles steps.
Time Complexity: O(n)
Space Complexity: O(1)
The time to detect a cycle grows linearly with the size of the linked list, while the memory used stays constant: only the two pointers are stored, no matter how long the list is.
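The linear growth can be checked empirically by counting loop iterations. This is a small sketch using a hypothetical `count_steps` helper (not part of the original snippet) that mirrors `has_cycle` but returns the iteration count:

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def count_steps(head):
    """Like has_cycle, but returns how many loop iterations ran."""
    slow = fast = head
    steps = 0
    while fast and fast.next:
        steps += 1
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            break
    return steps

def build_list(n):
    """Build an acyclic list of n nodes: 0 -> 1 -> ... -> n-1."""
    head = Node(0)
    cur = head
    for i in range(1, n):
        cur.next = Node(i)
        cur = cur.next
    return head

# On an acyclic list the fast pointer covers two nodes per iteration,
# so the loop runs about n/2 times: still linear in n.
for n in (10, 100, 1000):
    print(n, count_steps(build_list(n)))  # 5, 50, 500 iterations
```

Doubling n doubles the iteration count, which is exactly the straight-line growth described above.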
[X] Wrong: "Because the fast pointer moves twice as fast, the algorithm runs in constant time O(1)."
[OK] Correct: Even though the fast pointer moves faster, it still needs to traverse nodes, so the total steps depend on the list size.
Understanding this linear time complexity helps you explain why Floyd's Algorithm is efficient and preferred for cycle detection in interviews.
"What if we used a hash set to store visited nodes instead? How would the time complexity change?"