Contiguous Memory & Random Access
An array stores elements in a contiguous block of memory. Given the base address and element size, any element can be located in O(1) time using simple arithmetic: addr = base + index * size. This makes arrays the fastest structure for random access.
However, inserting or deleting in the middle requires shifting elements, costing O(n). Arrays are also cache-friendly because sequential elements sit next to each other in memory, exploiting spatial locality for fast iteration.
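The address formula can be sketched as a toy model (the base address and element size below are made-up values for illustration, not real pointer arithmetic):

```python
def element_address(base: int, index: int, size: int) -> int:
    # O(1): pure arithmetic, no traversal required
    return base + index * size

# 4-byte elements starting at a hypothetical base address 0x1000;
# element 3 sits 12 bytes past the base
addr = element_address(0x1000, 3, 4)  # 0x100C
```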
Dynamic Arrays
A dynamic array (Java ArrayList, C++ vector, Python list) wraps a fixed-size array and resizes when capacity is exceeded. The typical strategy is to double the capacity, yielding amortized O(1) appends. Individual resize operations copy all elements (O(n)), but they happen infrequently enough that the average cost per append stays constant.
| Operation | Static Array | Dynamic Array |
|---|---|---|
| Access by index | O(1) | O(1) |
| Append (end) | N/A (fixed) | O(1) amortized |
| Insert (middle) | O(n) | O(n) |
| Delete (middle) | O(n) | O(n) |
| Search (unsorted) | O(n) | O(n) |
| Space overhead | None | Up to 2x |
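A minimal sketch of the doubling strategy (illustrative only; production implementations such as CPython's list use more tuned growth factors and memcpy-style copies):

```python
class DynamicArray:
    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:
            self._grow()                      # rare O(n) step
        self._data[self._size] = value        # common O(1) step
        self._size += 1

    def _grow(self):
        # Doubling means each element is copied O(1) times on average,
        # so appends stay amortized O(1).
        self._capacity *= 2
        new_data = [None] * self._capacity
        for i in range(self._size):
            new_data[i] = self._data[i]
        self._data = new_data

    def __getitem__(self, i):
        if not 0 <= i < self._size:
            raise IndexError(i)
        return self._data[i]

    def __len__(self):
        return self._size
```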
Pointer-Based Sequences
A linked list stores each element in a separate node, connected by pointers. Unlike arrays, nodes can be scattered across memory. This makes insertion and deletion at a known position O(1) (just re-wire pointers), but searching for a node is O(n) since you must traverse from the head.
Variants
Singly linked list: each node has a next pointer. Traversal is one-way. Simplest form, least memory overhead per node.
Doubly linked list: each node has next and prev pointers. Enables O(1) deletion of a given node and backward traversal.
Circular linked list: the tail's next points back to the head. Useful for round-robin scheduling and ring-based algorithms.
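A minimal singly linked list sketch contrasting the O(1) pointer re-wiring with the O(n) search (Node, insert_after, and find are illustrative names, not a standard API):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    # O(1): splice in a new node by re-wiring pointers; nothing shifts
    node.next = Node(value, node.next)

def find(head, value):
    # O(n): no address arithmetic possible, must walk from the head
    while head is not None and head.value != value:
        head = head.next
    return head
```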
Restricted-Access Structures
Stacks and queues limit how you interact with the data, enforcing a specific order of access. This restriction is their strength: it makes them simple, efficient, and exactly right for a large class of problems.
Stack (LIFO)
Last In, First Out. Three operations, all O(1): push (add to top), pop (remove from top), and peek (view top without removing). Typically implemented with a dynamic array or a singly linked list.
Use cases: function call stack, expression evaluation (infix to postfix), DFS traversal, undo/redo functionality, matching parentheses, backtracking algorithms.
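One of the use cases above, matching parentheses, makes a compact stack sketch:

```python
def balanced(s: str) -> bool:
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)                           # push: O(1)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:  # pop: O(1)
                return False
    # balanced only if every opener was matched and popped
    return not stack
```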
Queue (FIFO)
First In, First Out. enqueue adds to the back, dequeue removes from the front, both O(1). Best implemented with a linked list or a circular buffer to avoid O(n) shifts that a naive array approach would require.
Use cases: BFS traversal, task scheduling, print queues, message buffers, rate limiting (sliding window).
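BFS, the first use case above, relies on exactly this FIFO discipline; a sketch using collections.deque (the adjacency-dict graph below is a made-up example):

```python
from collections import deque

def bfs_order(graph, start):
    # graph: dict mapping node -> list of neighbors
    seen = {start}
    q = deque([start])
    order = []
    while q:
        node = q.popleft()        # dequeue from the front: O(1)
        order.append(node)
        for nb in graph.get(node, []):
            if nb not in seen:
                seen.add(nb)
                q.append(nb)      # enqueue at the back: O(1)
    return order
```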
Specialized Variants
Deque (Double-Ended Queue)
Supports insertion and removal at both ends in O(1). Can serve as both a stack and a queue. Java's ArrayDeque uses a circular buffer internally; CPython's collections.deque uses a doubly linked list of fixed-size blocks. Useful for sliding window problems and work-stealing schedulers.
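A classic sliding-window use: the maximum of each window of size k, kept in O(n) overall with a monotonic deque, since every index is pushed and popped at most once:

```python
from collections import deque

def sliding_max(values, k):
    dq = deque()   # indices of candidates, values in decreasing order
    out = []
    for i, v in enumerate(values):
        while dq and values[dq[-1]] <= v:
            dq.pop()            # drop dominated candidates from the back
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()        # drop indices that slid out of the window
        if i >= k - 1:
            out.append(values[dq[0]])  # front always holds the window max
    return out
```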
Priority Queue
Elements are dequeued by priority, not insertion order. The standard implementation is a binary heap, giving O(log n) insert and O(log n) extract-min/max. Used in Dijkstra's algorithm, Huffman coding, event-driven simulations, and OS task schedulers.
| Operation | Queue | Deque | Priority Queue (Heap) |
|---|---|---|---|
| Insert | O(1) back | O(1) both ends | O(log n) |
| Remove | O(1) front | O(1) both ends | O(log n) min/max |
| Peek | O(1) front | O(1) both ends | O(1) min/max |
| Search | O(n) | O(n) | O(n) |
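A quick priority-queue sketch with Python's heapq, a binary min-heap over (priority, item) tuples (the task names below are invented):

```python
import heapq

tasks = []
heapq.heappush(tasks, (2, "write report"))  # insert: O(log n)
heapq.heappush(tasks, (1, "fix outage"))
heapq.heappush(tasks, (3, "refactor"))

# extract-min: the smallest priority number comes out first, O(log n)
first = heapq.heappop(tasks)
```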
Ring Buffer (Circular Buffer)
A fixed-size array where the write position wraps around using modular arithmetic: pos = (pos + 1) % capacity. When the buffer is full, the oldest data is overwritten. No memory allocation after initialization.
Use cases: I/O buffers, audio/video streaming, logging systems, producer-consumer patterns. Ring buffers are favored in real-time systems because they have zero allocation overhead and predictable memory usage.
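A minimal overwrite-on-full ring buffer sketch, showing the modular wrap-around (class and method names are illustrative):

```python
class RingBuffer:
    def __init__(self, capacity):
        self._buf = [None] * capacity  # only allocation, done up front
        self._pos = 0                  # next write position
        self._count = 0                # how many slots hold valid data

    def write(self, item):
        # wrap around with modular arithmetic; overwrites oldest when full
        self._buf[self._pos] = item
        self._pos = (self._pos + 1) % len(self._buf)
        self._count = min(self._count + 1, len(self._buf))

    def read_all(self):
        # snapshot of contents, oldest to newest
        cap = len(self._buf)
        start = (self._pos - self._count) % cap
        return [self._buf[(start + i) % cap] for i in range(self._count)]
```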
Bloom Filters, Skip Lists & More
Bloom filter: a probabilistic structure that tests set membership. Uses multiple hash functions mapping to a bit array. Can report false positives but never false negatives. Used in databases, caches, and spell checkers to avoid expensive lookups.
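A toy sketch of the idea; the salted-SHA-256 hashing and the size/hash-count defaults are arbitrary choices for illustration, not a production scheme:

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self._bits = bytearray(size_bits // 8)
        self._size = size_bits
        self._k = num_hashes

    def _positions(self, item):
        # derive k independent bit positions by salting one hash function
        for i in range(self._k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self._size

    def add(self, item):
        for p in self._positions(item):
            self._bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False means definitely absent; True may be a false positive
        return all(self._bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```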
Skip list: a layered linked list with express lanes. Each level skips over nodes, giving O(log n) search, insert, and delete on average. A randomized alternative to balanced BSTs, used in Redis sorted sets and LevelDB.
LRU cache: combines a doubly linked list with a hash map. The list maintains access order; the map gives O(1) lookups. On access, move the node to the front. On eviction, remove from the tail. O(1) for all operations.
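A compact sketch using Python's OrderedDict, which is itself backed by a hash map plus a doubly linked list, so it matches the design described (the class is illustrative, not a standard API):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self._cap = capacity
        self._map = OrderedDict()  # hash map + linked order in one

    def get(self, key):
        if key not in self._map:
            return None
        self._map.move_to_end(key)  # mark as most recently used: O(1)
        return self._map[key]

    def put(self, key, value):
        if key in self._map:
            self._map.move_to_end(key)
        self._map[key] = value
        if len(self._map) > self._cap:
            self._map.popitem(last=False)  # evict least recently used: O(1)
```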
Union-Find (disjoint set): tracks elements partitioned into disjoint sets. Supports find (which set?) and union (merge two sets). With path compression and union by rank, both operations run in near-O(1) amortized time. Key to Kruskal's MST algorithm.
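A minimal sketch with both optimizations named above, path compression and union by rank:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts in its own set
        self.rank = [0] * n           # upper bound on tree height

    def find(self, x):
        # path halving: point nodes closer to the root as we walk up
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False              # already in the same set
        # union by rank: attach the shorter tree under the taller
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```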
Trade-offs & Decision Guide
There is no universally best data structure. The right choice depends on your access pattern, frequency of operations, memory constraints, and whether you need ordering.
| Need | Best Choice | Why |
|---|---|---|
| Fast random access | Array / Dynamic array | O(1) index, cache-friendly |
| Frequent insert/delete at ends | Deque / Linked list | O(1) at both ends |
| Frequent insert/delete in middle | Linked list | O(1) splice at a known node (finding it is O(n)) |
| LIFO ordering | Stack | O(1) push/pop |
| FIFO ordering | Queue / Ring buffer | O(1) enqueue/dequeue |
| Priority-based ordering | Priority queue (heap) | O(log n) insert/extract |
| Fixed-size streaming buffer | Ring buffer | Zero alloc, wrap-around |
| Set membership (probabilistic) | Bloom filter | Space-efficient, no false negatives |
| Recently-used cache | LRU cache | O(1) access + eviction |
| Dynamic connectivity | Union-Find | Near O(1) union/find |