Sliding Window, Two Pointers, and Prefix Sums: The Pattern Trio That Cracks Array Interviews


There's a moment every developer hits — usually mid-interview — where they realize a brute-force nested loop is the wrong instinct. The array is in front of you, the constraint is clear, but something feels off about scanning it twice. That instinct is correct. The fix is pattern recognition.

Three patterns solve the overwhelming majority of array and substring problems you'll face: sliding window, two pointers, and prefix sums. They're not tricks — they're principled techniques built on one shared insight: if your data is sequential, you can often process it in a single linear pass by maintaining state as you go.

This guide covers all three patterns rigorously — fixed and dynamic window variants, pointer convergence, and prefix arithmetic — with full derivations of six classic problems and three LeetCode hard problems analyzed at the end.


Pattern 1: Sliding Window

The sliding window pattern applies when you need to find or optimize something over a contiguous subarray or substring. Instead of recomputing from scratch for every possible window, you slide a boundary and update incrementally.

There are two variants. Fixed-size windows are simpler: the window length is given. Dynamic windows require you to expand and shrink based on a constraint — and they're where most interview problems live.

Mental model

Think of a physical window sliding along an array. As you move right, a new element enters from the right. Depending on the variant, either the left end advances in lockstep (fixed) or it advances only when the window violates a constraint (dynamic).

arr = [2, 1, 5, 1, 3, 2],  k = 3

Fixed window (size 3):
  [2, 1, 5] 1  3  2   →  sum = 8
   2 [1, 5, 1] 3  2   →  sum = 7   (subtract 2, add 1)
   2  1 [5, 1, 3] 2   →  sum = 9   (subtract 1, add 3)
   2  1  5 [1, 3, 2]  →  sum = 6   (subtract 5, add 2)

The critical operation: remove the outgoing element from the left, add the incoming element from the right. O(1) per step instead of O(k) recomputation.


Problem 1: Maximum Sum Subarray of Size K (Fixed Window)

Problem: Given an array of integers and a number k, find the maximum sum of any contiguous subarray of length exactly k.

Brute force: For every starting index i, sum arr[i..i+k-1]. That's O(n·k).

Sliding window insight: The window [i, i+k-1] and [i+1, i+k] share k-1 elements. Don't recompute the shared part. Just subtract arr[i] and add arr[i+k].

def max_sum_subarray(arr: list[int], k: int) -> int:
    n = len(arr)
    if n < k:
        raise ValueError("k must not exceed the array length")

    # Build the first window
    window_sum = sum(arr[:k])
    max_sum = window_sum

    # Slide: subtract left element, add right element
    for i in range(k, n):
        window_sum += arr[i] - arr[i - k]
        max_sum = max(max_sum, window_sum)

    return max_sum

Complexity: O(n) time, O(1) space. Each element is added once and subtracted once.

Trace for arr = [2, 1, 5, 1, 3, 2], k = 3:

Initial window:  [2, 1, 5]      sum = 8   max = 8
i=3: +1, -2  →  [1, 5, 1]      sum = 7   max = 8
i=4: +3, -1  →  [5, 1, 3]      sum = 9   max = 9
i=5: +2, -5  →  [1, 3, 2]      sum = 6   max = 9

Answer: 9

Problem 2: Longest Substring Without Repeating Characters (Dynamic Window)

Problem: Given a string s, find the length of the longest substring that contains no duplicate characters.

This is a dynamic window problem because the window size isn't fixed — it grows until a constraint is violated (a duplicate appears), then shrinks from the left until the constraint is satisfied again.

Brute force: Check all O(n²) substrings, verifying each for duplicates in O(n), for O(n³) total. Tracking seen characters in a set while extending each substring brings this down to O(n²).

Sliding window insight: Maintain a window [left, right] with no duplicates. When right encounters a character already in the window, advance left past its previous occurrence. Use a hash map to track the last seen index of each character.

def length_of_longest_substring(s: str) -> int:
    char_index = {}   # character → last seen index
    max_len = 0
    left = 0

    for right in range(len(s)):
        char = s[right]

        # If char is in current window, shrink from the left
        if char in char_index and char_index[char] >= left:
            left = char_index[char] + 1

        char_index[char] = right
        max_len = max(max_len, right - left + 1)

    return max_len

Complexity: O(n) time, O(min(n, σ)) space — where σ is the alphabet size (26 for lowercase letters, 128 for ASCII). Each character is visited by right once; left only moves forward, so total steps ≤ 2n.

The key line: if char_index[char] >= left — this guards against jumping left backward. If 'a' was seen before the current window started, it's not a duplicate inside the current window.

Trace for s = "abcabcbb":

right=0: 'a' → window="a",    left=0, len=1
right=1: 'b' → window="ab",   left=0, len=2
right=2: 'c' → window="abc",  left=0, len=3
right=3: 'a' → 'a' seen at 0 ≥ left(0), left→1, window="bca", len=3
right=4: 'b' → 'b' seen at 1 ≥ left(1), left→2, window="cab", len=3
right=5: 'c' → 'c' seen at 2 ≥ left(2), left→3, window="abc", len=3
right=6: 'b' → 'b' seen at 4 ≥ left(3), left→5, window="cb",  len=2
right=7: 'b' → 'b' seen at 6 ≥ left(5), left→7, window="b",   len=1

Answer: 3

Dynamic Window Template

Most dynamic window problems follow this exact structure. Memorize it, then adapt the constraint logic:

def dynamic_window(arr):
    # initial_state, add_to_state, remove_from_state, and constraint_violated
    # are placeholders: adapt them per problem.
    left = 0
    state = initial_state()
    best = 0

    for right in range(len(arr)):
        add_to_state(state, arr[right])           # expand window

        while constraint_violated(state):
            remove_from_state(state, arr[left])   # shrink from left
            left += 1

        best = max(best, right - left + 1)        # update answer

    return best

The while loop inside the for loop looks like O(n²) — it's not. left never moves backward, so across the entire run it advances at most n times total. The inner loop's iterations are bounded by n globally, not per outer iteration.
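To make the amortized bound visible, here is a sketch of a standard dynamic-window problem not covered above, the longest substring with at most k distinct characters, instrumented with a counter for shrink steps (the function name and instrumentation are illustrative):

```python
from collections import defaultdict

def longest_k_distinct(s: str, k: int) -> tuple[int, int]:
    # Longest substring with at most k distinct characters, instrumented
    # with inner_steps: the total number of shrink-loop iterations.
    counts = defaultdict(int)
    left = best = inner_steps = 0

    for right, ch in enumerate(s):
        counts[ch] += 1                     # expand window

        while len(counts) > k:              # constraint violated
            counts[s[left]] -= 1            # shrink from left
            if counts[s[left]] == 0:
                del counts[s[left]]
            left += 1
            inner_steps += 1

        best = max(best, right - left + 1)

    return best, inner_steps

# longest_k_distinct("eceba", 2) → (3, 3): best window "ece", and only
# 3 shrink steps across the entire run, never more than len(s).
```

However adversarial the input, inner_steps can never exceed len(s), because each step advances left and left never retreats.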


Pattern 2: Two Pointers

Two pointers is a family of techniques where you maintain two index variables — often starting at opposite ends of a sorted array or at the beginning of two different sequences — and move them toward each other (or in the same direction) based on a condition.

The power comes from the sorted order invariant: when you eliminate a candidate (by moving a pointer), you're making a provably correct decision — not just an optimistic guess.

When sorted order enables O(n)

Consider finding a pair that sums to a target in a sorted array. With a hash map: O(n) time, O(n) space. With two pointers: O(n) time, O(1) space. That's the typical trade.

Key insight: if arr[left] + arr[right] < target, moving left one step rightward increases the sum (since the array is sorted). If the sum is too large, move right one step leftward. Every move provably eliminates at least one candidate. After at most n moves, you've either found the pair or proven it doesn't exist.


Problem 3: Container With Most Water

Problem: Given n vertical lines at positions 0..n-1 with heights height[i], find two lines that, together with the x-axis, form the container that holds the most water. You cannot slant the container.

Water held by lines i and j (where i < j): min(height[i], height[j]) × (j - i).

Brute force: Check all O(n²) pairs. O(n²) time.

Two-pointer insight: Start with the widest possible container (left=0, right=n-1). To increase water volume, you need either a greater width or greater height. Width only decreases as pointers move inward. So your only hope is finding a taller line. Move the pointer pointing to the shorter line inward — keeping the taller line guarantees you don't miss a potentially larger container.

def max_area(height: list[int]) -> int:
    left, right = 0, len(height) - 1
    max_water = 0

    while left < right:
        h = min(height[left], height[right])
        width = right - left
        max_water = max(max_water, h * width)

        # Move the shorter line inward
        if height[left] <= height[right]:
            left += 1
        else:
            right -= 1

    return max_water

Complexity: O(n) time, O(1) space.

Why moving the shorter pointer is correct: Suppose height[left] < height[right]. Any container formed by left with a right pointer closer than right has a smaller width AND is bounded by height[left] (since height[left] is still the minimum). It cannot exceed the current container. So we can safely discard left and move on. This argument is the proof that the algorithm is exhaustive.


Problem 4: 3Sum

Problem: Find all unique triplets [a, b, c] in the array such that a + b + c = 0.

Brute force: Three nested loops: O(n³).

Pattern: Fix one element (a = arr[i]), then use two pointers to find pairs in the remaining sorted subarray that sum to -a. This is the standard reduction: convert a 3-variable problem into a 1-variable + 2-variable problem.

def three_sum(nums: list[int]) -> list[list[int]]:
    nums.sort()                   # O(n log n) — enables two pointers
    result = []

    for i in range(len(nums) - 2):
        # Skip duplicates for the fixed element
        if i > 0 and nums[i] == nums[i - 1]:
            continue

        target = -nums[i]
        left, right = i + 1, len(nums) - 1

        while left < right:
            s = nums[left] + nums[right]

            if s == target:
                result.append([nums[i], nums[left], nums[right]])
                # Skip duplicates for left and right
                while left < right and nums[left] == nums[left + 1]:
                    left += 1
                while left < right and nums[right] == nums[right - 1]:
                    right -= 1
                left += 1
                right -= 1

            elif s < target:
                left += 1
            else:
                right -= 1

    return result

Complexity: O(n²) time (outer loop O(n) × inner two-pointer O(n)), O(n log n) or O(1) auxiliary space depending on sort implementation. The three duplicate-skip blocks are what prevent inserting duplicate triplets into result.

Why sort first? Sorting is what gives two pointers their power. Without a monotonic ordering, you can't reason about which direction to move a pointer. The sort cost (O(n log n)) is dominated by the O(n²) two-pointer phase.


Problem 5: Trapping Rain Water

Problem: Given an array of non-negative integers representing an elevation map, compute how much water can be trapped after raining.

The water at any position i is determined by: min(max_left[i], max_right[i]) - height[i]. That is, water is bounded by the shorter of the tallest walls on each side.

Brute force: For each position, scan left and right for the max. O(n²). With prefix max arrays: O(n) time but O(n) space.

Two-pointer approach: O(n) time, O(1) space. Maintain the maximum seen so far from the left (left_max) and from the right (right_max). At any step, if left_max < right_max, the water at left is determined by left_max (since even if there's something taller to the right, left_max is the binding constraint). Process left and move it inward. Mirror reasoning for the right side.

def trap(height: list[int]) -> int:
    left, right = 0, len(height) - 1
    left_max = right_max = 0
    water = 0

    while left < right:
        if height[left] <= height[right]:
            if height[left] >= left_max:
                left_max = height[left]    # new left wall found
            else:
                water += left_max - height[left]   # trapped between walls
            left += 1
        else:
            if height[right] >= right_max:
                right_max = height[right]
            else:
                water += right_max - height[right]
            right -= 1

    return water

Complexity: O(n) time, O(1) space. Each element is processed exactly once.

Why this works: When height[left] <= height[right], we know right_max >= height[right] >= height[left]. So the right side already provides a wall at least as tall as left_max (since right_max >= height[right] and height[right] >= height[left]). The water at left is therefore exactly left_max - height[left]. No right-side uncertainty can improve this — the right side is already "good enough."

Trace for height = [0,1,0,2,1,0,1,3,2,1,2,1]:

The famous input. Answer: 6.

The water fills in the valleys:
  - Position 2 holds 1 unit (walls: height[1]=1, height[3]=2)
  - Positions 4, 5, 6 hold 1+2+1=4 units (bounded by height[3]=2, height[7]=3)
  - Position 9 holds 1 unit (bounded by height[8]=2, height[10]=2)

  Total = 6

Pattern 3: Prefix Sums

Prefix sums transform range sum queries from O(n) to O(1) by precomputing cumulative sums. They're the array equivalent of a lookup table.

The construction

arr    = [3,  1,  4,  1,  5,  9,  2]
prefix = [0,  3,  4,  8,  9, 14, 23, 25]
          ↑
          sentinel zero makes the formula uniform

prefix[i] stores the sum of arr[0..i-1]. With a leading zero sentinel, the sum of any subarray arr[l..r] (inclusive, 0-indexed) is:

range_sum(l, r) = prefix[r+1] - prefix[l]

This works because prefix[r+1] includes everything from index 0 to r, and prefix[l] includes everything from index 0 to l-1. Their difference is exactly the subarray from l to r.


Problem 6: Range Sum Query — Immutable

Problem: Given an integer array, handle multiple queries of the form "return the sum of elements between indices left and right inclusive."

This is the canonical prefix sum application. Precompute once, answer each query in O(1).

class NumArray:
    def __init__(self, nums: list[int]):
        n = len(nums)
        self.prefix = [0] * (n + 1)
        for i in range(n):
            self.prefix[i + 1] = self.prefix[i] + nums[i]

    def sum_range(self, left: int, right: int) -> int:
        return self.prefix[right + 1] - self.prefix[left]

Complexity: O(n) preprocessing, O(1) per query. If you have q queries, total cost is O(n + q) versus O(n·q) for brute force. For large q, this is transformative.


Problem 7: Subarray Sum Equals K

Problem: Given an integer array nums and an integer k, return the number of contiguous subarrays whose sum equals k.

This is where prefix sums combine with a hash map to produce an elegant O(n) solution.

Key identity: sum(arr[l..r]) = prefix[r+1] - prefix[l]. We want this to equal k, so prefix[l] = prefix[r+1] - k.

For each new prefix sum we compute, we ask: "how many previous prefix sums equal current_prefix - k?" Each such previous index l gives a valid subarray ending at r.

def subarray_sum(nums: list[int], k: int) -> int:
    count = 0
    prefix_sum = 0
    freq = {0: 1}   # prefix sum 0 seen once (the empty prefix before index 0)

    for num in nums:
        prefix_sum += num

        # How many previous prefixes give us a subarray summing to k?
        if prefix_sum - k in freq:
            count += freq[prefix_sum - k]

        freq[prefix_sum] = freq.get(prefix_sum, 0) + 1

    return count

Complexity: O(n) time, O(n) space for the hash map.

Why freq = {0: 1}? Consider a subarray starting at index 0. Its sum is prefix[r+1] - prefix[0] = prefix[r+1] - 0. For this to equal k, we need prefix[r+1] - k = 0. Without the sentinel {0: 1}, we'd miss subarrays that start from the beginning.

Trace for nums = [1, 1, 1], k = 2:

freq = {0: 1},  prefix_sum = 0

i=0: num=1 → prefix=1,  look for 1-2=-1 in freq → 0. freq={0:1, 1:1}
i=1: num=1 → prefix=2,  look for 2-2=0  in freq → 1. count=1. freq={0:1, 1:1, 2:1}
i=2: num=1 → prefix=3,  look for 3-2=1  in freq → 1. count=2. freq={0:1, 1:1, 2:1, 3:1}

Answer: 2  (subarrays [1,1] at indices 0-1 and 1-2)

Handling negative numbers: Unlike sliding window, prefix sums + hash map work even when the array contains negatives (where a dynamic window would need to know which direction to shrink). This is a key differentiator when choosing between patterns.


2D Prefix Sums (Extension)

For matrix problems, prefix sums extend naturally. The sum of any rectangular submatrix (r1,c1) to (r2,c2) is:

prefix[r2+1][c2+1] - prefix[r1][c2+1] - prefix[r2+1][c1] + prefix[r1][c1]

This is inclusion-exclusion on the four corners. It powers problems like "max sum rectangle" and "count submatrices with target sum" — both of which reduce to 1D prefix sum problems over column ranges.
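A sketch of the 2D construction and query, mirroring the 1D sentinel convention (helper names are illustrative):

```python
def build_prefix_2d(matrix: list[list[int]]) -> list[list[int]]:
    # prefix[r][c] = sum of matrix[0..r-1][0..c-1]; the extra zero row and
    # column play the same role as the 1D sentinel.
    rows, cols = len(matrix), len(matrix[0])
    prefix = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            prefix[r + 1][c + 1] = (matrix[r][c]
                                    + prefix[r][c + 1]   # sum above
                                    + prefix[r + 1][c]   # sum to the left
                                    - prefix[r][c])      # overlap counted twice
    return prefix

def rect_sum(prefix: list[list[int]], r1: int, c1: int, r2: int, c2: int) -> int:
    # Sum of the submatrix with inclusive corners (r1, c1) and (r2, c2).
    return (prefix[r2 + 1][c2 + 1] - prefix[r1][c2 + 1]
            - prefix[r2 + 1][c1] + prefix[r1][c1])

# rect_sum(build_prefix_2d([[1, 2], [3, 4]]), 0, 0, 1, 1) → 10
```

Construction is O(rows·cols) once; each rectangle query is then O(1).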


Pattern Selection Guide

Before choosing a pattern, run through these questions; a "yes" points at the pattern on the right:

QUESTION                                            | PATTERN
----------------------------------------------------+--------------------------------
Fixed window size given?                            | Fixed sliding window
Find longest/shortest subarray with a constraint?   | Dynamic sliding window
Array is sorted, find pairs/triplets?               | Two pointers (from ends)
Need range sums or "count subarrays with sum = k"?  | Prefix sum + hash map
Negatives in array + need subarray sums?            | Prefix sum (not sliding window)
Same-direction scan, variable distance?             | Fast/slow pointers

3 LeetCode Hard Problems: Full Analysis

Hard Problem 1: Sliding Window Maximum (LeetCode 239)

Problem: Given an array and a sliding window of size k, return the maximum value in each window position as it slides from left to right.

Why it's hard: A fixed window slides across the array — easy. But finding the max in each window naively is O(k) per step → O(n·k) total. Can you do O(n)?

Pattern: Fixed sliding window + monotonic deque.

The insight: maintain a deque of indices in decreasing order of their values. The front of the deque always holds the maximum of the current window. When a new element enters from the right, pop all smaller elements from the back (they can never be the max while the new element is in the window). When the front index falls outside the window, pop it.

from collections import deque

def max_sliding_window(nums: list[int], k: int) -> list[int]:
    dq = deque()    # stores indices, values are decreasing
    result = []

    for i in range(len(nums)):
        # Remove indices outside the window
        while dq and dq[0] < i - k + 1:
            dq.popleft()

        # Maintain decreasing order: remove smaller elements from the back
        while dq and nums[dq[-1]] < nums[i]:
            dq.pop()

        dq.append(i)

        # Window is fully formed starting from index k-1
        if i >= k - 1:
            result.append(nums[dq[0]])

    return result

Complexity: O(n) time — each element is added and removed from the deque at most once. O(k) space for the deque.

Why monotonic deque? When we add a new element and pop smaller elements from the back, we're not losing information. Those smaller elements are dominated by the new element AND they entered the window earlier (so they'll leave earlier). They can never be the maximum in any future window. The deque compresses the window into only the candidates that could possibly be the max.

The deeper pattern: Monotonic stack/deque is the key to O(n) "range max/min" queries in a sliding window. Next Greater Element, Maximum Rectangle in Histogram, and this problem all follow the same structure.
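As a companion sketch of that shared structure, here is Next Greater Element with the same monotonic invariant, using a plain stack instead of a deque (names are illustrative):

```python
def next_greater(nums: list[int]) -> list[int]:
    # For each element, the first strictly greater value to its right, or -1.
    # The stack holds indices whose answer is still pending; their values
    # are strictly decreasing, the same invariant as the deque above.
    result = [-1] * len(nums)
    stack = []
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x   # x is their next greater element
        stack.append(i)
    return result

# next_greater([2, 1, 5, 1, 3]) → [5, 5, -1, 3, -1]
```

The same amortized argument applies: each index is pushed once and popped at most once, so the nested while loop is O(n) overall.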


Hard Problem 2: Minimum Window Substring (LeetCode 76)

Problem: Given strings s and t, find the minimum length substring of s that contains all characters of t (including duplicates).

Why it's hard: You need to track a multi-character frequency constraint across a dynamically sized window. The "window is valid" condition involves all character counts simultaneously.

Pattern: Dynamic sliding window with frequency maps.

Key invariant: track formed — the number of unique characters in t whose required frequency is currently satisfied in the window. When formed == required (all characters satisfied), the window is valid. Try to shrink from the left while maintaining validity.

from collections import Counter

def min_window(s: str, t: str) -> str:
    if not t or not s:
        return ""

    need = Counter(t)           # required frequencies
    required = len(need)        # unique chars needed

    left = 0
    formed = 0                  # unique chars with satisfied frequency
    window_counts = {}

    best = float("inf"), 0, 0   # (length, left, right)

    for right in range(len(s)):
        char = s[right]
        window_counts[char] = window_counts.get(char, 0) + 1

        # Check if this char's frequency is now satisfied
        if char in need and window_counts[char] == need[char]:
            formed += 1

        # Shrink window from left while it remains valid
        while left <= right and formed == required:
            # Update best if this window is smaller
            if right - left + 1 < best[0]:
                best = (right - left + 1, left, right)

            # Remove left character from window
            left_char = s[left]
            window_counts[left_char] -= 1
            if left_char in need and window_counts[left_char] < need[left_char]:
                formed -= 1
            left += 1

    return "" if best[0] == float("inf") else s[best[1]: best[2] + 1]

Complexity: O(|s| + |t|) time — each character in s is added and removed at most once. O(|s| + |t|) space for the frequency maps.

The formed counter trick: Instead of checking all characters in need on every window contraction (which would be O(|t|) per step), we maintain a single integer formed that summarizes validity. It increments when a character's count hits its target, decrements when it falls below. This compresses the full validity check into O(1).

Common mistakes: (1) Incrementing formed on every occurrence of a needed character instead of only when its count first reaches the target. The == check fires exactly once per character, which keeps formed accurate even after a count overshoots. (2) Forgetting to check left_char in need before decrementing formed: characters not in t shouldn't affect the validity count.


Hard Problem 3: Count of Subarrays with Bounded Maximum (LeetCode 795)

Problem: Count the number of contiguous non-empty subarrays such that the maximum element is at least left and at most right.

Why it's hard: You can't directly use prefix sums because the constraint is on the maximum, not the sum. The counting requires careful boundary arithmetic and an indirect decomposition.

Pattern: Prefix sum + counting decomposition.

Key insight: Instead of directly counting subarrays with left ≤ max ≤ right, decompose it:

count(left ≤ max ≤ right) = count(max ≤ right) - count(max ≤ left - 1)

Define count_at_most(bound) = number of subarrays where all elements ≤ bound. This is straightforward with a single linear pass: maintain the length of the current "valid" segment (no element exceeds bound). Each time you add a valid element, all subarrays ending at this element within the current segment are valid — that's (current_length) new subarrays.

def num_subarray_bounded_max(nums: list[int], left: int, right: int) -> int:

    def count_at_most(bound: int) -> int:
        count = 0
        current = 0   # length of current valid segment
        for num in nums:
            if num <= bound:
                current += 1    # extend current segment
            else:
                current = 0     # reset — this element breaks any valid subarray
            count += current    # subarrays ending here within valid segment
        return count

    return count_at_most(right) - count_at_most(left - 1)

Complexity: O(n) time (two linear passes), O(1) space.

Why count_at_most works: When processing index i with a current valid segment of length curr, the subarrays ending at i that satisfy the constraint are arr[i..i], arr[i-1..i], ..., arr[i-curr+1..i] — exactly curr subarrays. Summing across all positions gives the total count. When an element exceeds bound, it can't appear in any valid subarray, so we reset curr = 0.

The broader principle — inclusion-exclusion on constraints: This decomposition (f(L,R) = f(∞,R) - f(∞,L-1)) is a recurring technique. You'll see it in "number of subarrays with GCD in range", "count subarrays with max XOR ≤ k", and similar problems. Whenever the target constraint is a range [L, R], check if you can express it as atMost(R) - atMost(L-1). If atMost is computable in O(n), you just solved an O(n²) problem in O(n).
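As one more instance of the decomposition, here is a sketch of the classic "subarrays with exactly k distinct integers" problem, which reduces to atMost(k) - atMost(k - 1) with atMost computed by a dynamic window (the function names are illustrative):

```python
from collections import defaultdict

def subarrays_with_exactly_k_distinct(nums: list[int], k: int) -> int:
    # exactly(k) = atMost(k) - atMost(k - 1): the same inclusion-exclusion
    # on constraints, with atMost computed by a dynamic window.

    def at_most(limit: int) -> int:
        counts = defaultdict(int)
        left = total = 0
        for right, x in enumerate(nums):
            counts[x] += 1
            while len(counts) > limit:          # too many distinct values
                counts[nums[left]] -= 1
                if counts[nums[left]] == 0:
                    del counts[nums[left]]
                left += 1
            total += right - left + 1           # valid subarrays ending at right
        return total

    return at_most(k) - at_most(k - 1)

# subarrays_with_exactly_k_distinct([1, 2, 1, 2, 3], 2) → 7
```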


Pattern Comparison

PATTERN          | TIME   | SPACE | REQUIRES SORT | HANDLES NEGATIVES
-----------------+--------+-------+---------------+------------------
Fixed window     | O(n)   | O(1)  | No            | Yes
Dynamic window   | O(n)   | O(k)  | No            | No (sums only)
Two pointers     | O(n)   | O(1)  | Yes (pairs)   | Depends
Prefix sum       | O(n)   | O(n)  | No            | Yes
Prefix + hashmap | O(n)   | O(n)  | No            | Yes

The "handles negatives" distinction matters more than people expect. A dynamic window with sum = k will break on negative values because shrinking from the left doesn't guarantee the sum decreases. Prefix sum + hash map has no such restriction.
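To make the distinction concrete, a quick self-contained check, restating Problem 7's function alongside a brute-force reference on an input with negatives:

```python
def subarray_sum(nums: list[int], k: int) -> int:
    # Prefix sum + hash map, same as Problem 7.
    count, prefix_sum = 0, 0
    freq = {0: 1}
    for num in nums:
        prefix_sum += num
        count += freq.get(prefix_sum - k, 0)
        freq[prefix_sum] = freq.get(prefix_sum, 0) + 1
    return count

def subarray_sum_brute(nums: list[int], k: int) -> int:
    # O(n²) reference: sum every (l, r) slice directly.
    n = len(nums)
    return sum(sum(nums[l:r + 1]) == k
               for l in range(n) for r in range(l, n))

nums = [3, -1, 2, -2, 2]   # negatives present
assert subarray_sum(nums, 2) == subarray_sum_brute(nums, 2) == 5
```

The five matches include subarrays like [3, -1, 2, -2], which a shrink-from-the-left window could never discover, since its running sum is not monotone in the window size.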


Problem Pattern Recognition: Quick Reference

"maximum/minimum sum subarray of size k"     → fixed sliding window
"longest substring with at most k distinct"  → dynamic sliding window
"count subarrays with sum = k"               → prefix sum + hash map
"find pair/triplet summing to target"        → two pointers (sort first)
"trapping water / container problem"         → two pointers from ends
"range sum query (multiple)"                 → prefix sum array
"sliding window maximum/minimum"             → monotonic deque
"minimum window containing all of t"         → dynamic window + freq map
"count subarrays with bounded max"           → atMost decomposition

Key Takeaways

The three patterns share a common philosophy: instead of recomputing from scratch, maintain state incrementally. Sliding window does this spatially (a contiguous region). Two pointers do this directionally (sorted order as a guide). Prefix sums do this cumulatively (partial sums as a lookup structure).

Master these before reaching for more complex data structures. In a significant fraction of array interviews — including hard LeetCode problems — one of these three patterns, or a combination of two, is the intended solution. The question is whether you recognize which one applies.