[2026] Time Complexity Optimization Checklist for Coding Interviews | Escape TLE
Key Takeaways
Reduce time complexity in coding interviews: patterns to convert O(N²) to O(N log N), ways to eliminate duplicate calculations, and a checklist for choosing data structures.
Introduction
Reducing time complexity in coding interviews starts with the habit of matching the constraints (the range of N) against your current complexity, rather than memorizing faster algorithms. Most TLE (Time Limit Exceeded) verdicts come from unnecessary nested loops or from recomputing the same ranges over and over. This guide is organized as a checklist you can apply on the spot. The numbers are approximate upper bounds and vary with constant factors, language, and judge environment. The checklist works best as a 30-second scan habit after reading the problem, not as something to memorize.
After Reading This
- Intuitively grasp allowed complexity ranges by N scale
- Recall typical patterns to reduce O(N²) to O(N log N)
- Quickly choose between hash map, sorting, prefix sum, or binary search
Table of Contents
- Concepts
- Practical Implementation
- Advanced Usage
- Performance Comparison
- Real-World Cases
- Troubleshooting
- Conclusion
Concepts
Rough Guide (1 second limit, simple operations assumed)
| N (approx) | Safe | Risky |
|---|---|---|
| ≤ 20 | O(N!) barely | — |
| ≤ 200 | O(N³) | O(N⁴) |
| ≤ 2×10³ | O(N²) | O(N² log N) usually OK |
| ≤ 10⁵ | O(N log N) | O(N²) almost impossible |
| ≤ 10⁶ | O(N) | O(N log N) often possible |
| ≥ 10⁷ | O(N) linear/simple | O(N log N) can be tight |
> First Step: Underline N, Q (query count), and string lengths in the problem, and write your current solution’s complexity in one line. This is the first step in reducing coding interview time complexity.
Practical Implementation
✅ Step 1: Are Nested Loops Checking “All Pairs”?
- All (i, j) pairs in array → baseline O(N²).
- Check whether you can collapse one axis using sorting plus a one-directional scan, two pointers, or binary search. The Python examples below show the progression.
```python
# Bad: check all pairs, O(n²)
def two_sum_naive(a, t):
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if a[i] + a[j] == t:
                return i, j
```
```python
# Good: sorting + two pointers, O(n log n)
def two_sum_sorted(a, t):
    a = sorted(enumerate(a), key=lambda x: x[1])  # keep original indices
    i, j = 0, len(a) - 1
    while i < j:
        s = a[i][1] + a[j][1]
        if s == t:
            return a[i][0], a[j][0]
        if s < t:
            i += 1
        else:
            j -= 1
```
```python
# Better: hash map, O(n)
def two_sum_hash(a, t):
    seen = {}
    for i, num in enumerate(a):
        complement = t - num
        if complement in seen:
            return seen[complement], i
        seen[num] = i
```
✅ Step 2: Recalculating Same Range Sum/Min Every Time?
- Prefix sum for range sum in O(1).
- Sliding window for fixed-length window aggregates (sum/min/max) in O(N). The Python example below contrasts the two approaches.
```python
# Bad: recalculate each window sum from scratch, O(n*k)
def max_sum_naive(arr, k):
    max_sum = float('-inf')  # initializing to 0 would be wrong for all-negative arrays
    for i in range(len(arr) - k + 1):
        current_sum = sum(arr[i:i+k])  # recalculated every time!
        max_sum = max(max_sum, current_sum)
    return max_sum
```
```python
# Good: sliding window, O(n)
def max_sum_window(arr, k):
    window_sum = sum(arr[:k])
    max_sum = window_sum
    for i in range(k, len(arr)):
        window_sum = window_sum - arr[i - k] + arr[i]  # slide: drop left, add right
        max_sum = max(max_sum, window_sum)
    return max_sum
```
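The bullet above also mentions prefix sums, which the sliding-window example does not cover. A minimal sketch for answering arbitrary range-sum queries in O(1) after O(N) preprocessing (function names are illustrative):

```python
# Prefix sum: preprocess O(n), then any range sum in O(1)
def build_prefix(arr):
    prefix = [0]
    for x in arr:
        prefix.append(prefix[-1] + x)  # prefix[i] = sum of arr[:i]
    return prefix

def range_sum(prefix, l, r):
    # Sum of arr[l..r] inclusive
    return prefix[r + 1] - prefix[l]
```

Unlike a sliding window, this answers ranges of any length, which is why it is the default tool when queries are not fixed-width.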
✅ Step 3: Using Linear Search for “Existence/Frequency”?
- Hash map (dict / unordered_map) for average O(1) lookup.
- A balanced tree map gives O(log N) lookups when keys must stay sorted; use it only when sorted order is actually required.
```python
# Bad: linear scan, O(n) per lookup
def count_frequency_naive(arr, target):
    count = 0
    for num in arr:
        if num == target:
            count += 1
    return count
```
```python
# Good: hash map, O(n) to build, then O(1) per lookup
from collections import Counter

def count_frequency_hash(arr):
    freq = Counter(arr)
    return freq  # freq[x] is an O(1) average-case lookup
```
✅ Step 4: Scanning Entire Array Per Query?
- If queries can be answered offline, consider Mo’s algorithm.
- If values come from a small fixed range, use a cumulative frequency array.
- For range minimum queries, a segment tree reduces the cost per query to O(log N).
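As a sketch of the cumulative-frequency idea above (assuming values fall in a small fixed range 0..max_val; helper names are illustrative):

```python
# Cumulative frequency per value: build O(n * max_val), query O(1)
def build_counts(arr, max_val):
    # counts[v][i] = occurrences of v in arr[:i]
    counts = [[0] * (len(arr) + 1) for _ in range(max_val + 1)]
    for i, x in enumerate(arr):
        for v in range(max_val + 1):
            counts[v][i + 1] = counts[v][i] + (1 if x == v else 0)
    return counts

def count_in_range(counts, v, l, r):
    # Occurrences of value v in arr[l..r] inclusive
    return counts[v][r + 1] - counts[v][l]
```

The same prefix-difference trick as range sums, applied per value, so each query avoids a full scan.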
✅ Step 5: Is BFS/DFS Enough for Graph?
- If edges have weights, plain BFS no longer finds shortest paths; switching to Dijkstra (or similar) changes the complexity class. See also BFS vs DFS.
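A minimal heap-based Dijkstra sketch for the weighted case (adjacency-list representation assumed; names are illustrative):

```python
import heapq

def dijkstra(adj, src):
    # adj: {node: [(neighbor, weight), ...]}, runs in O((V + E) log V)
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already found a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With unit weights this degenerates to BFS behavior; the heap is what pays for arbitrary non-negative weights.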
Advanced Usage
- For repeated range sum/min/max updates, consider segment tree or Fenwick tree.
- For bit-manipulation and subset problems, a bitset can reduce both space and time. In contests and practice alike, remembering that changing the data structure changes the complexity class makes it much easier to pick a direction.
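As a sketch of the Fenwick (binary indexed) tree mentioned above, supporting point updates and prefix sums, both in O(log N):

```python
class Fenwick:
    # Point update + prefix-sum query, both O(log n); indices are 1-based
    def __init__(self, n):
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & (-i)  # jump to the next node covering index i

    def query(self, i):
        # Sum of elements 1..i
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # drop the lowest set bit
        return s
```

A segment tree is more general (range min/max, lazy updates), but for plain prefix sums with updates, a Fenwick tree is shorter to write under interview pressure.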
Performance Comparison
| Need | Choice | Typical Cost |
|---|---|---|
| Key existence/count | Hash map | Average O(1) |
| Sorted order/next element | Tree / sorted+binary | O(log N) |
| Range sum | Prefix sum | Preprocess O(N), query O(1) |
| Range min/update | Segment tree | Query/update O(log N) |
| Priority | Heap | push/pop O(log N) |
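The “sorted + binary” row in the table can be sketched with Python’s standard bisect module (the wrapper name is illustrative):

```python
import bisect

def next_greater_or_equal(sorted_arr, x):
    # Binary search in a sorted list, O(log n) per lookup
    i = bisect.bisect_left(sorted_arr, x)
    return sorted_arr[i] if i < len(sorted_arr) else None
```

This is the typical replacement for a linear scan when you need “the next element ≥ x” rather than exact membership.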
Real-World Cases
- Log aggregation: Don’t scan entire log for daily counts, use date index array or map + accumulation.
- API rate limiting: Sliding window + queue for O(request count) processing.
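A minimal sketch of the sliding-window rate limiter described above (names and parameters are illustrative; timestamps are passed in explicitly rather than read from a clock to keep the example self-contained):

```python
from collections import deque

def make_limiter(max_requests, window_seconds):
    # Sliding-window rate limiter: amortized O(1) per request
    timestamps = deque()

    def allow(now):
        # Evict requests that fell out of the window
        while timestamps and timestamps[0] <= now - window_seconds:
            timestamps.popleft()
        if len(timestamps) < max_requests:
            timestamps.append(now)
            return True
        return False

    return allow
```

Each timestamp is appended once and popped at most once, which is where the amortized O(1) per request comes from.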
Troubleshooting
“Correct Algorithm But Still TLE”
- I/O: in Python, replace input() with sys.stdin.readline, or read everything at once via sys.stdin.buffer for very large input.
- Recursion depth: raise it with sys.setrecursionlimit, or convert the recursion to an iterative BFS/DFS.
```python
# Python I/O optimization
import sys
input = sys.stdin.readline

# Or, for very large input, read everything at once
data = sys.stdin.buffer.read().decode()
lines = data.split('\n')
```
“O(N log N) But Still Worried”
- Constants: prefer dict lookups over repeated list.index calls, and remove unnecessary sorting.
- Language choice: for the same algorithm, C++ typically runs 10-100x faster than Python.
```python
# Bad: sort inside the query loop, O(q * n log n)
for query in queries:
    arr.sort()  # O(n log n) per query!
    # process...

# Good: sort once, then answer all queries
arr.sort()
for query in queries:
    # process sorted arr...
    pass
```
Conclusion
Reducing coding interview time complexity is half done once you write the N range and your current solution’s order of growth in one line. The rest is a check along just two axes: eliminate duplicate calculations and pick the correct data structure. For pattern practice, continue with Two Pointers & Sliding Window.
Quick Reference
```text
N ≤ 20:      O(N!) or O(2^N) acceptable
N ≤ 200:     O(N³) acceptable
N ≤ 2000:    O(N²) acceptable
N ≤ 100000:  O(N log N) required
N ≤ 1000000: O(N) or O(N log N)
N ≥ 10^7:    O(N) linear required

Common Optimizations:
O(N²) → O(N log N): sorting + two pointers / binary search
O(N²) → O(N):       hash map, sliding window
O(N)  → O(1):       prefix sum, math formula
```
Keywords
Time Complexity, Optimization, Coding Interview, TLE, Algorithm, Big O, Hash Map, Two Pointers, Sliding Window, Binary Search, Prefix Sum