What are the lesser known but useful data structures?



There are some data structures around that are really useful but are unknown to most programmers. Which ones are they?

Everybody knows about linked lists, binary trees, and hashes, but what about Skip lists and Bloom filters, for example? I would like to know more data structures that are not so common, but are worth knowing because they rely on great ideas and enrich a programmer's tool box.

PS: I am also interested in techniques like Dancing links which make clever use of properties of a common data structure.

EDIT: Please try to include links to pages describing the data structures in more detail. Also, try to add a couple of words on why a data structure is cool (as Jonas Kölker already pointed out). Also, try to provide one data-structure per answer. This will allow the better data structures to float to the top based on their votes alone.


Posted 2009-02-01T11:12:25.403

Reputation: 23 603



Tries, also known as prefix trees (crit-bit trees are a compressed variant), have existed for over 40 years but are still relatively unknown. A very cool use of tries is described in "TRASH - A dynamic LC-trie and hash data structure", which combines a trie with a hash function.
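As a rough illustration (my own sketch, not from the answer), a minimal dict-of-dicts trie in Python, with a sentinel key marking word ends:

```python
class Trie:
    """Minimal prefix tree: each node is a dict from char to child node."""
    _END = "$"  # sentinel key marking the end of a stored word

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node[self._END] = True

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return self._END in node

    def starts_with(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True
```

All words sharing a prefix share the same path from the root, which is what makes tries natural for auto-complete and spell-checking.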

David Phillips


Reputation: 7 409

Are tries really relatively unknown? – Gravity – 2011-12-07T02:09:51.280

The regex engine in Perl 5.10 automatically creates tries. – Brad Gilbert – 2009-12-14T23:56:16.080

In my experience tries are painfully expensive, given that a pointer is generally longer than a char, which is a shame. They're only suitable for certain data-sets. – Joe – 2010-01-29T12:06:24.377

We make use of tries in the project I work on. We use them for partitioning 2D space and then quickly determining which partition contains a given point. – Scottie T – 2010-02-17T19:05:16.523

@Joe: Those are problems with naive trie implementations. In practice, you usually have a much higher branching factor than a single char and you usually compress lines in the tree in order to store common sequences of chars (like syllables) in a single node. – Jon Harrop – 2010-05-12T01:29:06.797

Tries are a good one for sure; I only remember what they are because they were on my Data Structures and Algorithms exam. – Mark Davidson – 2009-02-01T11:27:59.147

very commonly used by spell-checkers – Steven A. Lowe – 2009-02-01T17:32:59.520

Burst tries are also an interesting variant, where you use only a prefix of the strings as nodes and otherwise store lists of strings in the nodes. – Torsten Marek – 2009-02-01T19:16:31.433

Tries are used in the T9 cell-phone auto-complete, yes? – Paul Nathan – 2010-07-22T18:38:18.870


Since no SO question, regardless of topic, is complete without someone mentioning jQuery.... John Resig, creator of jQuery, has an interesting data structure series of posts where he looks at various trie implementations among others: http://ejohn.org/blog/revised-javascript-dictionary-search/

– Oskar Austegard – 2011-03-24T20:18:41.343

"very commonly used by spell-checkers" - it's at least funny to see my spell checker claim not to know tries. – dascandy – 2011-07-06T17:38:55.513


Bloom filter: Bit array of m bits, initially all set to 0.

To add an item you run it through k hash functions that will give you k indices in the array which you then set to 1.

To check if an item is in the set, compute the k indices and check if they are all set to 1.

Of course, this gives some probability of false-positives (according to wikipedia it's about 0.61^(m/n) where n is the number of inserted items). False-negatives are not possible.

Removing an item is impossible, but you can implement a counting Bloom filter instead, represented by an array of ints that you increment and decrement.
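The description above can be turned into a small Python sketch; deriving the k hash functions by salting a single SHA-256 digest is one common trick (m and k below are arbitrary illustrative choices):

```python
import hashlib

class BloomFilter:
    """Sketch of a Bloom filter: m bits, k salted hash functions."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _indices(self, item):
        # Derive k indices by salting one cryptographic digest.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[idx] for idx in self._indices(item))
```

Note the asymmetry: a False answer is definitive, a True answer only probable, exactly as the comments below discuss.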



Reputation: 1 368

good link explaining bloom filters http://www.igvita.com/2008/12/27/scalable-datasets-bloom-filters-in-ruby/

– Pete Brumm – 2011-12-19T14:39:13.327

Bloom filters are so cool that I sometimes want to explain them to non-programmers. You're a graffiti artist in a city where everyone uses black spraypaint on white walls. You always paint your tag the same way. So does every other tagger. Everyone paints over existing tags on a wall. You need to decide whether you've already tagged this wall. You can rule out a wall that has white in areas your tag would have covered. But you can't guarantee you've tagged a wall even if every part of your tag is black. If you understand why this must be true, you understand Bloom filters. – sowbug – 2011-12-19T17:21:49.007

Google cites the use of Bloom filters in their implementation of BigTable. – Brian Gianforcaro – 2010-05-22T23:33:01.737

So this is useful because it allows us to cheaply test for the existence of an element in a set? (I'm new to bloom filters.) – Petrus Theron – 2010-05-24T16:10:25.157

@FreshCode It actually lets you cheaply test for the absence of an element in the set since you can get false positives but never false negatives – Tom Savage – 2010-05-24T17:19:29.660

@FreshCode As @Tom Savage said, it's more useful when checking for negatives. For example, you can use it as a fast and small (in terms of memory usage) spell checker. Add all of the words to it and then try to look up words the user enters. If you get a negative it means it's misspelled. Then you can run some more expensive check to find closest matches and offer corrections. – lacop – 2010-05-25T20:06:28.720

Google Chrome implements the Safe Browsing filter using a bloom filter. Isn't this case more appropriate for checking for positives? – Abhinav Sarkar – 2010-06-08T19:41:23.153

@abhin4v: Bloom filters are often used when most requests are likely to return an answer of "no" (such as the case here), meaning that the small number of "yes" answers can be checked with a slower exact test. This still results in a big reduction in the average query response time. Don't know if Chrome's Safe Browsing does that, but that would be my guess. – j_random_hacker – 2010-06-09T14:32:57.977

You forgot to mention their use with dictionaries :) You can squeeze a full dictionary into a Bloom filter of about 512k, like a hashtable without the values – Chris S – 2009-03-24T21:36:45.740

Set union/intersection are pretty straightforward as well. (bitwise OR and AND respectively) – fulmicoton – 2011-03-24T13:35:56.323

I learnt about Bloom filters through this Prag Prog Code Kata.

– Skilldrick – 2011-03-24T16:30:21.073

@ Skilldrick: thanks for sharing link – Bhanu Krishnan – 2011-04-07T09:54:09.467


Rope: It's a string that allows for cheap prepends, substrings, middle insertions and appends. I've really only had use for it once, but no other structure would have sufficed. Prepends on regular strings and arrays were just far too expensive for what we needed to do, and reversing everything was out of the question.
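A toy illustration of the idea (my own sketch, far simpler than a production rope): a binary tree whose leaves hold string fragments, giving O(1) concatenation and per-character indexing by descending on left-subtree weights:

```python
class Rope:
    """Toy rope: binary tree whose leaves hold string fragments."""
    def __init__(self, text="", left=None, right=None):
        self.left, self.right = left, right
        if left is not None:          # internal node
            self.weight = len(left)   # total length of the left subtree
            self.text = None
        else:                         # leaf node
            self.weight = len(text)
            self.text = text

    def __len__(self):
        n = self.weight
        if self.right is not None:
            n += len(self.right)
        return n

    def __add__(self, other):
        # O(1) concatenation: just allocate a new root node.
        return Rope(left=self, right=other)

    def __getitem__(self, i):
        # Descend by weights to locate character i.
        node = self
        while node.text is None:
            if i < node.weight:
                node = node.left
            else:
                i -= node.weight
                node = node.right
        return node.text[i]
```

Real ropes also rebalance and split nodes; this sketch only shows why concatenation and indexing stay cheap.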



Reputation: 57 199


Without knowing what it was called, I recently wrote something very similar to this for Java - performance has been excellent:


– mikera – 2010-05-23T22:01:26.133

There was a nice article about ropes on Good Math, Bad Math: http://scienceblogs.com/goodmath/2009/01/ropes_twining_together_strings.php

– Kuroki Kaze – 2010-07-23T08:48:43.193

I've had thoughts of something like this for my own uses. Nice to know it's already been implemented somewhere else. – Kibbee – 2009-02-18T20:32:42.620


There's an implementation in the SGI STL (1998): http://www.sgi.com/tech/stl/Rope.html

– quark – 2009-02-18T21:17:27.800


Mikera's link is stale, here's the current.

– aptwebapps – 2011-03-24T15:04:09.633


Skip lists are pretty neat.

A skip list is a probabilistic data structure, based on multiple parallel, sorted linked lists, with efficiency comparable to a binary search tree (order log n average time for most operations).

They can be used as an alternative to balanced trees (using probabilistic balancing rather than strict enforcement of balancing). They are easy to implement and faster than, say, a red-black tree. I think they should be in every good programmer's toolchest.

If you want to get an in-depth introduction to skip-lists here is a link to a video of MIT's Introduction to Algorithms lecture on them.

Also, here is a Java applet demonstrating Skip Lists visually.
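For a feel of how simple they are, here is a minimal Python sketch (dict-based nodes, promotion probability 1/2, capped at 16 levels — all illustrative choices):

```python
import random

class SkipList:
    """Sketch of a skip list: stacked sorted linked lists as 'express lanes'."""
    MAX_LEVEL = 16

    def __init__(self):
        # Head node with one forward pointer per level.
        self.head = {"key": None, "forward": [None] * self.MAX_LEVEL}
        self.level = 1  # number of levels currently in use

    def _random_level(self):
        lvl = 1
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL  # rightmost node per level
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node["forward"][i] and node["forward"][i]["key"] < key:
                node = node["forward"][i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = {"key": key, "forward": [None] * lvl}
        for i in range(lvl):  # splice the new node into each of its levels
            new["forward"][i] = update[i]["forward"][i]
            update[i]["forward"][i] = new

    def contains(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node["forward"][i] and node["forward"][i]["key"] < key:
                node = node["forward"][i]
        node = node["forward"][0]
        return node is not None and node["key"] == key
```

Search drops down a level each time it overshoots, which is where the expected O(log n) comes from.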



Reputation: 49 010

Interesting side-note: If you add enough levels to your skip lists, you essentially end up with a B-tree. – Riyad Kalla – 2011-12-19T14:35:24.293

+1 Qt uses skip lists rather than RB-trees for its sorted maps & sets. Yep, they're nifty (in imperative languages, anyway). – Michael Ekstrand – 2010-12-17T13:28:37.467

Redis uses skip lists to implement "Sorted Sets". – antirez – 2011-03-24T12:11:33.757

Skip lists are probably my favorite data structure to use when I need a good data structure and I have no guarantees as to the order of the data, and I want a simpler implementation than other "balanced" data structures. Such a good thing. – earino – 2011-03-24T15:48:48.423


Spatial Indices, in particular R-trees and KD-trees, store spatial data efficiently. They are good for geographical map coordinate data and VLSI place and route algorithms, and sometimes for nearest-neighbor search.

Bit Arrays store individual bits compactly and allow fast bit operations.

Yuval F


Reputation: 19 007

Ah if only I had found this sooner that would have been very useful. Excellent data structures and complementary algorithms in here. – Robert Massaioli – 2012-01-11T11:37:52.293

Spatial indices are also useful for N-body simulations involving long-range forces like gravity. – Justin Peel – 2010-05-25T16:13:05.613


Zippers - derivatives of data structures that modify the structure to have a natural notion of 'cursor' -- the current location. These are really useful as they guarantee indices cannot be out of bounds -- used, e.g., in the xmonad window manager to track which window has focus.

Amazingly, you can derive them by applying techniques from calculus to the type of the original data structure!
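The cursor idea can be illustrated even imperatively; a minimal Python list zipper (my own sketch — the real payoff is in purely functional settings, where moving the cursor shares structure instead of mutating):

```python
class ListZipper:
    """A list with a cursor: O(1) moves and edits at the focus."""
    def __init__(self, items):
        self.left = []            # items before the cursor, nearest last
        self.right = list(items)  # item at the cursor, then the rest

    def focus(self):
        return self.right[0]

    def move_right(self):
        self.left.append(self.right.pop(0))

    def move_left(self):
        self.right.insert(0, self.left.pop())

    def replace(self, item):
        self.right[0] = item      # edit at the focus, no index arithmetic

    def to_list(self):
        return self.left + self.right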

Don Stewart


Reputation: 127 384

this is only useful in functional programming (in imperative languages you just keep a pointer or an index). Also tbh I still don't get how Zippers really work. – Stefan Monov – 2010-05-24T14:23:12.500

@Stefan the point is that you don't need to keep a separate index or pointer now. – Don Stewart – 2010-05-24T16:03:47.263


Here are a few:

  • Suffix tries. Useful for almost all kinds of string searching (http://en.wikipedia.org/wiki/Suffix_trie#Functionality). See also suffix arrays; they're not quite as fast as suffix trees, but a whole lot smaller.

  • Splay trees (as mentioned above). The reason they are cool is threefold:

    • They are small: you only need the left and right pointers like you do in any binary tree (no node-color or size information needs to be stored)
    • They are (comparatively) very easy to implement
    • They offer optimal amortized complexity for a whole host of "measurement criteria" (log n lookup time being the one everybody knows). See http://en.wikipedia.org/wiki/Splay_tree#Performance_theorems
  • Heap-ordered search trees: you store a bunch of (key, prio) pairs in a tree, such that it's a search tree with respect to the keys, and heap-ordered with respect to the priorities. One can show that such a tree has a unique shape (and it's not always fully packed up-and-to-the-left). With random priorities, it gives you expected O(log n) search time, IIRC.

  • A niche one is adjacency lists for undirected planar graphs with O(1) neighbour queries. This is not so much a data structure as a particular way to organize an existing data structure. Here's how you do it: every planar graph has a node with degree at most 6. Pick such a node, put its neighbors in its neighbor list, remove it from the graph, and recurse until the graph is empty. When given a pair (u, v), look for u in v's neighbor list and for v in u's neighbor list. Both have size at most 6, so this is O(1).

By the above algorithm, if u and v are neighbors, you won't have both u in v's list and v in u's list. If you need this, just add each node's missing neighbors to that node's neighbor list, but store how much of the neighbor list you need to look through for fast lookup.
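The suffix arrays mentioned in the first bullet can be sketched naively (my own sketch; this construction is O(n² log n), fine for short strings — linear-time algorithms exist, as discussed in the comments below):

```python
def suffix_array(s):
    """Naive construction: sort the starting indices of all suffixes."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def contains_substring(s, sa, pattern):
    """Binary-search the sorted suffixes for one starting with pattern."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:].startswith(pattern)
```

Every substring of s is a prefix of some suffix, which is why sorted suffixes support substring search by binary search.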

Jonas Kölker


Reputation: 5 860

A suffix trie is almost but not quite the same as the much cooler suffix tree, which has strings and not individual letters on its edges and can be built in linear time(!). Also despite being asymptotically slower, in practice suffix arrays are often much faster than suffix trees for many tasks because of their smaller size and fewer pointer indirections. Love the O(1) planar graph lookup BTW! – j_random_hacker – 2010-06-09T14:40:59.770

@j_random_hacker: suffix arrays are not asymptotically slower. Here is ~50 lines of code for linear suffix array construction: http://www.cs.helsinki.fi/u/tpkarkka/publications/icalp03.pdf

– Edward KMETT – 2010-07-23T11:42:57.367

@Edward Kmett: I have in fact read that paper, it was quite a breakthrough in suffix array construction. (Although it was already known that linear time construction was possible by going "via" a suffix tree, this was the 1st undeniably practical "direct" algorithm.) But some operations outside of construction are still asymptotically slower on a suffix array unless a LCA table is also built. That can also be done in O(n), but you lose the size and locality benefits of the pure suffix array by doing so. – j_random_hacker – 2010-07-26T00:39:12.567

@j_random_hacker: Fair enough. Out of context it wasn't clear to me which asymptotics you were referring to. – Edward KMETT – 2010-07-26T13:34:51.237

You couldn't just list one per answer, could ya? Makes it easier for the single, good data structures to float to the top. – KingNestor – 2009-02-18T20:11:44.557

I could, but I'm still not /quite/ getting the hang of this weird wiki/forum hybrid. I'm not gonna edit right now (eat-sleep-rinse-repeat), and I'm probably gonna forget to do it later ;) – Jonas Kölker – 2009-02-18T23:10:11.243

The Heap ordered search tree is called a treap. One trick you can do with these is change the priority of a node to push it to the bottom of the tree where its easier to delete. – paperhorse – 2009-02-19T05:43:27.583

"The Heap ordered search tree is called a treap." -- In the definition I've heard, IIRC, a treap is a heap-ordered search tree with random priorities. You could choose other priorities, depending on the application... – Jonas Kölker – 2009-02-19T12:32:34.560


I think lock-free alternatives to standard data structures, i.e. lock-free queues, stacks and lists, are much overlooked.
They are increasingly relevant as concurrency becomes a higher priority, and are a much more admirable goal than using mutexes or locks to handle concurrent reads/writes.

Here are some links:
http://www.research.ibm.com/people/m/michael/podc-1996.pdf [Links to PDF]

Mike Acton's (often provocative) blog has some excellent articles on lock-free design and approaches



Reputation: 4 986

@deadalnix Yeah, just been reading about disruptors. Interesting stuff! – zebrabox – 2012-01-05T12:09:49.977

Lock-free alternatives are so important in todays multi-core, very parallel, scalability addicted world :-) – earino – 2011-03-24T15:49:48.047

Well, a disruptor does actually a better job in most cases. – deadalnix – 2011-10-13T16:46:32.517


I think Disjoint Set is pretty nifty for cases when you need to divide a bunch of items into distinct sets and query membership. Good implementations of the Union and Find operations result in amortized costs that are effectively constant (the inverse of Ackermann's function, if I recall my data structures class correctly).
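A minimal sketch of the Union and Find operations, with union by rank and path compression (the path-halving variant):

```python
class DisjointSet:
    """Union-Find with union by rank and path halving."""
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            # Path halving: point x at its grandparent as we climb.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:  # attach shorter tree under taller
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```

This combination is what yields the near-constant amortized bound mentioned above; it is also the core of Kruskal's minimum-spanning-tree algorithm.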



Reputation: 19 205

This is also called the "union-find data structure." I was in awe when I first learned about this clever data structure in algorithms class... – BlueRaja - Danny Pflughoeft – 2010-01-28T19:54:05.030

I wouldn't really call it 'lesser known', but +1 for mentioning it. – MAK – 2010-01-28T20:12:50.063

union-find-delete extensions allow a constant-time delete as well. – Peaker – 2011-03-24T15:48:12.907

I used a Disjoint Set for my Dungeon generator, to ensure all rooms are reachable by passages :) – goldenratio – 2011-03-24T21:56:32.023


Fibonacci heaps

They're used in some of the fastest known algorithms (asymptotically) for a lot of graph-related problems, such as the Shortest Path problem. Dijkstra's algorithm runs in O(E log V) time with standard binary heaps; using Fibonacci heaps improves that to O(E + V log V), which is a huge speedup for dense graphs. Unfortunately, though, they have a high constant factor, which often makes them impractical.

Adam Rosenfield


Reputation: 302 206

From my experience with Fibonacci heaps, I found out that costly operation of memory allocations makes it less efficient than a simple binary heap backended by an array. – jutky – 2012-01-15T22:02:33.320

High constant factor as you said, and hard to implement well according to a friend who had to. Fianally not that cool, but still, maybe worth knowing. – p4bl0 – 2010-05-23T07:30:22.877

These guys here made them run competitively in comparison to other heap kinds: http://www.cphstl.dk/Presentation/SEA2010/SEA-10.pdf

There is a related data structure called Pairing Heaps that's easier to implement and that offers pretty good practical performance. However, the theoretical analysis is partially open.

– Manuel – 2010-07-22T22:06:30.707


Anyone with experience in 3D rendering should be familiar with BSP trees. Generally, it's a method of structuring a 3D scene so that it is manageable for rendering, given the camera coordinates and bearing.

Binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree.

In other words, it is a method of breaking up intricately shaped polygons into convex sets, or smaller polygons consisting entirely of non-reflex angles (angles smaller than 180°). For a more general description of space partitioning, see space partitioning.

Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.



Reputation: 18 037

John Carmack FTW – BlueRaja - Danny Pflughoeft – 2010-01-28T19:57:01.137

... and the related octrees and kd-trees. – Lloeki – 2011-08-04T07:49:13.130


Huffman trees - used for compression.
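A compact sketch of the greedy construction using Python's heapq: repeatedly merge the two least frequent subtrees; the tie-break counter keeps tuple comparison well-defined (my own sketch, not from the answer):

```python
import heapq

def huffman_codes(freqs):
    """Build a prefix code from a {symbol: frequency} dict."""
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Prefix '0' onto one subtree's codes and '1' onto the other's.
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

Frequent symbols end up near the root and get short codes, which is the entire compression win.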

Lurker Indeed


Reputation: 1 156

Although it is interesting, isn't this sort of an 'Intro to Algorithms', here-is-an-example-of-a-greedy-algo type topic? – rshepherd – 2011-03-24T18:53:56.730


Have a look at Finger Trees, especially if you're a fan of the previously mentioned purely functional data structures. They're a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece.

As per the original article:

Our functional 2-3 finger trees are an instance of a general design technique introduced by Okasaki (1998), called implicit recursive slowdown. We have already noted that these trees are an extension of his implicit deque structure, replacing pairs with 2-3 nodes to provide the flexibility required for efficient concatenation and splitting.

A Finger Tree can be parameterized with a monoid, and using different monoids will result in different behaviors for the tree. This lets Finger Trees simulate other data structures.



Reputation: 10 469

I recommend this excellent video explaining finger trees in Clojure

– FerD – 2011-03-30T14:31:09.177

Have a look at this duplicate answer, it's well worth reading !

– huitseeker – 2011-06-30T07:28:13.817


Circular or ring buffer - used for streaming, among other things.
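A minimal fixed-capacity sketch in Python (my own illustration; a real streaming buffer would add blocking or overwrite semantics):

```python
class RingBuffer:
    """Fixed-capacity FIFO over a preallocated list; indices wrap around."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0   # index of the next slot to read
        self.size = 0

    def push(self, item):
        if self.size == len(self.buf):
            raise OverflowError("buffer full")
        self.buf[(self.head + self.size) % len(self.buf)] = item
        self.size += 1

    def pop(self):
        if self.size == 0:
            raise IndexError("buffer empty")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.size -= 1
        return item
```

No element is ever shifted; producer and consumer just chase each other around the array, which is why it suits streaming.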



Reputation: 24 816

Based on reading the link, I don't think the data structure itself is patented, but some invention based on it. I agree that this is definitely a very under-used data structure. – Gravity – 2011-12-07T02:17:23.807

This is the most common data structure for buffering data. I think it was the first one introduced in our class. – Luka Rahne – 2010-09-18T15:02:43.090


Also, disgustingly, somehow managed to be patented (at least when used for video). http://ip.com/patent/USRE36801

– David Eison – 2011-03-24T18:32:27.603


I'm surprised no one has mentioned Merkle trees (i.e. hash trees).

Used in many cases (P2P programs, digital signatures) where you want to verify the hash of a whole file when you only have part of the file available to you.
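A minimal sketch of computing a Merkle root in Python (duplicating the last hash on odd levels is one common convention; real protocols differ in such details):

```python
import hashlib

def _h(data):
    return hashlib.sha256(data).hexdigest()

def merkle_root(chunks):
    """Hash each chunk, then pairwise-hash levels up to a single root."""
    level = [_h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:              # odd level: duplicate the last hash
            level.append(level[-1])
        level = [_h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]
```

To verify one chunk against the root you only need its sibling hashes along the path (O(log n) hashes), not the whole file — which is exactly what P2P verification exploits.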

BlueRaja - Danny Pflughoeft


Reputation: 57 213

I thought of 'hash trees' when I first learned about P2P. I didn't know they had such an obvious name. :) – Mateen Ulhaq – 2011-10-29T23:37:23.617


<zvrba> Van Emde-Boas trees

I think it'd be useful to know why they're cool. In general, the question "why" is the most important to ask ;)

My answer is that they give you O(log log n) dictionaries with {1..n} keys, independent of how many of the keys are in use. Just like repeated halving gives you O(log n), repeated sqrting gives you O(log log n), which is what happens in the vEB tree.

Jonas Kölker


Reputation: 5 860

I fully agree that "why" is important, but that was not included in the question ;) – zvrba – 2009-02-02T15:30:35.647

They are nice from a theoretical point of view. In practice, however, it's quite tough to get competitive performance out of them. The paper I know got them to work well up to 32 bit keys (http://citeseerx.ist.psu.edu/viewdoc/summary?doi= but the approach will not scale to more than maybe 34-35 bits or so and there is no implementation of that.

– Manuel – 2010-07-22T22:02:21.990

Another reason why they are cool is that they are a key building block for a number of cache-oblivious algorithms. – Edward KMETT – 2010-07-26T20:44:48.283


How about splay trees?

Also, Chris Okasaki's purely functional data structures come to mind.



Reputation: 44 551


An interesting variant of the hash table is called Cuckoo Hashing. It uses multiple hash functions instead of just 1 in order to deal with hash collisions. Collisions are resolved by removing the old object from the location specified by the primary hash, and moving it to a location specified by an alternate hash function. Cuckoo Hashing allows for more efficient use of memory space because you can increase your load factor up to 91% with only 3 hash functions and still have good access time.
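A toy two-table sketch in Python. Deriving both hash functions from Python's built-in hash is illustrative only; real implementations use independent hash functions and rehash everything when a cycle is detected:

```python
class CuckooHash:
    """Two-table cuckoo hashing sketch: evicted keys migrate to their other slot."""
    def __init__(self, size=11):
        self.size = size
        self.t1 = [None] * size
        self.t2 = [None] * size

    def _h1(self, key):
        return hash(key) % self.size

    def _h2(self, key):
        return (hash(key) // self.size) % self.size

    def insert(self, key):
        if self.contains(key):
            return
        for _ in range(self.size + 1):   # bound the eviction chain
            i = self._h1(key)
            key, self.t1[i] = self.t1[i], key  # place key, evict occupant
            if key is None:
                return
            j = self._h2(key)
            key, self.t2[j] = self.t2[j], key  # evicted key tries table 2
            if key is None:
                return
        raise RuntimeError("cycle detected; a real table would rehash")

    def contains(self, key):
        # Lookup probes exactly two slots: worst-case O(1).
        return self.t1[self._h1(key)] == key or self.t2[self._h2(key)] == key
```

The worst-case two-probe lookup is the structure's main attraction; all the complexity is pushed into insertion.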

A. Levy


Reputation: 10 972

Check hopscotch hashing, claimed to be faster. – chmike – 2010-05-23T06:43:37.023


A min-max heap is a variation of a heap that implements a double-ended priority queue. It achieves this by a simple change to the heap property: a tree is said to be min-max ordered if every element on even (odd) levels is less (greater) than all of its children and grandchildren. The levels are numbered starting from 1.




Reputation: 85 062

Tricky to implement. Even the best programmers can get it wrong.

– finnw – 2011-05-09T23:21:20.943

min-max heap is beautiful! Used in AI as well :) – Ricko M – 2011-05-12T13:59:58.560


I like cache-oblivious data structures. The basic idea is to lay out a tree in recursively smaller blocks so that caches of many different sizes will take advantage of blocks that conveniently fit in them. This leads to efficient use of caching at every level, from the L1 cache to big chunks of data read off of the disk, without needing to know the specifics of the sizes of any of those caching layers.



Reputation: 26 063

Interesting transcription from that link: "The key is the van Emde Boas layout, named after the van Emde Boas tree data structure conceived in 1977 by Peter van Emde Boas" – sergiol – 2012-02-23T00:14:00.927

+1 Of course!!! – Jon Harrop – 2011-03-24T22:55:11.590

+1 – imho, currently underappreciated. – klickverbot – 2011-07-21T21:04:38.200


Left Leaning Red-Black Trees. A significantly simplified implementation of red-black trees by Robert Sedgewick published in 2008 (~half the lines of code to implement). If you've ever had trouble wrapping your head around the implementation of a Red-Black tree, read about this variant.

Very similar (if not identical) to Andersson Trees.



Reputation: 6 992


Work Stealing Queue

Lock-free data structure for dividing work equally among multiple threads. See: Implementation of a work stealing queue in C/C++?

Marko Tintor


Reputation: 28


Bootstrapped skew-binomial heaps by Gerth Stølting Brodal and Chris Okasaki:

Despite their long name, they provide asymptotically optimal heap operations, even in a functional setting.

  • O(1) size, union, insert, minimum
  • O(log n) deleteMin

Note that union takes O(1) rather than O(log n) time unlike the more well-known heaps that are commonly covered in data structure textbooks, such as leftist heaps. And unlike Fibonacci heaps, those asymptotics are worst-case, rather than amortized, even if used persistently!

There are multiple implementations in Haskell.

They were jointly derived by Brodal and Okasaki, after Brodal came up with an imperative heap with the same asymptotics.

Edward KMETT


Reputation: 26 819


Ball Trees. Just because they make people giggle.

A ball tree is a data structure that indexes points in a metric space. Here's an article on building them. They are often used for finding nearest neighbors to a point or accelerating k-means.




Think you could give a link too? – j_random_hacker – 2010-07-23T00:36:24.983

There you go. They are a little obscure, I grant you. – None – 2010-07-23T02:29:42.560

Thanks! (pad, pad, pad) – j_random_hacker – 2010-07-26T00:42:02.730

These are also commonly known as "vantage point" trees or vp-trees. http://en.wikipedia.org/wiki/Vp-tree

– Edward KMETT – 2010-07-26T20:37:26.670


  • Kd-trees, a spatial data structure used (amongst others) in real-time raytracing; has the downside that triangles crossing the splitting planes need to be clipped. Generally BVHs are faster because they are more lightweight.
  • MX-CIF Quadtrees, store bounding boxes instead of arbitrary point sets by combining a regular quadtree with a binary tree on the edges of the quads.
  • HAMT, a hierarchical hash map with access times that generally exceed those of O(1) hash maps due to the constants involved.
  • Inverted Index, quite well known in search-engine circles, because it's used for fast retrieval of documents associated with different search terms.

Most, if not all, of these are documented on the NIST Dictionary of Algorithms and Data Structures

Jasper Bekkers


Reputation: 6 161

What happened to "one data structure per answer"? – Marius Gedminas – 2011-12-19T16:14:43.157

Added links to the datastructures for you. – mmcdole – 2009-02-18T20:17:34.123


Not really a data structure; more of a way to optimize dynamically allocated arrays, but the gap buffers used in Emacs are kind of cool.
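A minimal sketch of the idea in Python (my own illustration), keeping the text as two stacks on either side of the cursor so inserts at the cursor never shift the whole buffer:

```python
class GapBuffer:
    """Text buffer with a movable gap at the cursor."""
    def __init__(self, text=""):
        self.before = list(text)  # characters left of the cursor
        self.after = []           # characters right of the cursor, reversed

    def move_to(self, pos):
        # Shuttle characters across the gap until the cursor is at pos.
        while len(self.before) > pos:
            self.after.append(self.before.pop())
        while len(self.before) < pos and self.after:
            self.before.append(self.after.pop())

    def insert(self, text):
        self.before.extend(text)  # O(1) amortized per character

    def delete(self):
        # Delete the character just after the cursor.
        if self.after:
            self.after.pop()

    def text(self):
        return "".join(self.before) + "".join(reversed(self.after))
```

Moving the cursor costs in proportion to the distance moved, but edits cluster around the cursor in practice — which is exactly the editor workload Emacs optimizes for.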



Reputation: 618

For anyone interested, this is exactly how the Document (e.g. PlainDocument) models backing the Swing text components are implemented as well; before 1.2 I believe the document models were straight Arrays, which led to horrible insertion performance for large documents; as soon as they moved to Gap Buffers, all was right with the world again. – Riyad Kalla – 2011-12-19T14:16:14.647

I would definitely consider that to be a data structure. – Christopher Barber – 2011-03-28T18:53:34.037


Fenwick Tree. It's a data structure for keeping track of the sum of all elements in a vector between two given subindices i and j. The trivial solution of precalculating prefix sums from the beginning doesn't allow you to update an item (you have to do O(n) work to keep up).

Fenwick Trees allow you to update and query in O(log n), and how it works is really cool and simple. It's really well explained in Fenwick's original paper, freely available here:


Its father, the RMQ (range minimum query) tree, is also very cool: it allows you to keep info about the minimum element between two indices of the vector, and it also works in O(log n) update and query. I like to teach the RMQ tree first and then the Fenwick Tree.
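A minimal sketch of the structure described above (1-based indices; i & (-i) isolates the lowest set bit, which determines each node's range of responsibility):

```python
class FenwickTree:
    """Binary indexed tree: point update and prefix-sum query in O(log n)."""
    def __init__(self, n):
        self.tree = [0] * (n + 1)     # 1-based; index 0 unused

    def update(self, i, delta):
        # Add delta at index i; climb to each node responsible for i.
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):
        # Sum of elements 1..i, accumulated by stripping low bits.
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

    def range_sum(self, i, j):
        # Sum of elements i..j as a difference of two prefix sums.
        return self.prefix_sum(j) - self.prefix_sum(i - 1)
```

The whole structure is a flat array plus two short loops, which is why it is so popular in programming contests.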



Reputation: 187

@eordano: dead link ... – sergiol – 2012-02-23T00:45:48.007

I'm afraid this is a duplicate. Perhaps you'd want to add to the previous answer ?

– huitseeker – 2011-01-15T12:00:34.423

Also related are Segment Trees, which are useful for doing all sorts of range queries. – dhruvbird – 2011-03-25T04:11:42.647


Van Emde-Boas trees. I even have a C++ implementation of it, for up to 2^20 integers.



Reputation: 20 689

duplicate answer – huitseeker – 2011-01-14T01:39:34.480


Nested sets are nice for representing trees in relational databases and running queries on them. For instance, ActiveRecord (Ruby on Rails' default ORM) comes with a very simple nested set plugin, which makes working with trees trivial.



Reputation: 2 460

but are they lesser known? – naugtur – 2011-12-19T22:52:49.533

+1 Yep, nested sets rule ;-) – alexander.biskop – 2010-07-21T07:17:05.097


It's pretty domain-specific, but half-edge data structure is pretty neat. It provides a way to iterate over polygon meshes (faces and edges) which is very useful in computer graphics and computational geometry.



Reputation: 120 158


Scapegoat trees. A classic problem with plain binary trees is that they become unbalanced (e.g. when keys are inserted in ascending order.)

Balanced binary trees (AKA AVL trees) waste a lot of time balancing after each insertion.

Red-Black trees stay balanced, but require an extra bit of storage for each node.

Scapegoat trees stay balanced like red-black trees, but don't require ANY additional storage. They do this by analyzing the tree after each insertion, and making minor adjustments. See http://en.wikipedia.org/wiki/Scapegoat_tree.



Reputation: 2 394


An unrolled linked list is a variation on the linked list which stores multiple elements in each node. It can drastically increase cache performance, while decreasing the memory overhead associated with storing list metadata such as references. It is related to the B-tree.

record node {
    node next       // reference to next node in list
    int numElements // number of elements in this node, up to maxElements
    array elements  // an array of numElements elements, with space allocated for maxElements elements
}



Reputation: 85 062


2-3 Finger Trees by Hinze and Paterson are a great functional data structure swiss-army knife with great asymptotics for a wide range of operations. While complex, they are much simpler than the imperative structures in "Persistent lists with catenation via recursive slow-down" by Kaplan and Tarjan that preceded them.

They work as a catenable deque with O(1) access to either end, O(log min(n,m)) append, and provide O(log min(n,length - n)) indexing with direct access to a monoidal prefix sum over any portion of the sequence.

Implementations exist in Haskell, Coq, F#, Scala, Java, C, Clojure, C# and other languages.

You can use them to implement priority search queues, interval maps, ropes with fast head access, maps, sets, catenable sequences or pretty much any structure where you can phrase it as collecting a monoidal result over a quickly catenable/indexable sequence.

I also have some slides describing their derivation and use.

Edward KMETT

Posted 2009-02-01T11:12:25.403

Reputation: 26 819

while you add valuable info, your answer would perhaps have been better formulated by editing the one it is duplicating

– huitseeker – 2011-01-14T01:42:33.540

I didn't spot yours until after I'd written mine, and the prospect of merging 20-odd references together and dealing with whatever feelings I'd bruise by dropping the reference to markcc's incorrect original summary article in someone else's post were more than I'd cared to fiddle with. Feel free to fold it in and I'll delete this one though. – Edward KMETT – 2011-01-14T23:29:52.847


One lesser known, but pretty nifty data structure is the Fenwick Tree (also sometimes called a Binary Indexed Tree or BIT). It stores cumulative sums and supports O(log(n)) operations. Although cumulative sums might not sound very exciting, it can be adapted to solve many problems requiring a sorted/log(n) data structure.

IMO, its main selling point is the ease with which it can be implemented. Very useful for solving algorithmic problems that would otherwise involve coding a red-black/AVL tree.
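Since ease of implementation is the selling point, here is a minimal sketch (the standard 1-based BIT with point update and prefix sum; class and method names are my own):

```python
# Minimal Fenwick (Binary Indexed) tree: point update and prefix sum,
# both O(log n). Indexing is 1-based, as is conventional for BITs.

class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        """Add delta at position i (1-based)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)          # climb to the next responsible node

    def prefix_sum(self, i):
        """Sum of positions 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)          # strip the lowest set bit
        return s

f = Fenwick(8)
for i, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
    f.update(i, v)
print(f.prefix_sum(4))             # 3 + 1 + 4 + 1 = 9
```

A range sum `[a, b]` is then just `prefix_sum(b) - prefix_sum(a - 1)`.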


Posted 2009-02-01T11:12:25.403

Reputation: 18 897

+1, can't believe so few people know of this structure. It's extremely easy to implement; it's pretty much a magical unicorn that can solve any problem. – Martín Fixman – 2010-06-15T02:38:33.337


Pairing heaps are a type of heap data structure with relatively simple implementation and excellent practical amortized performance.

Marko Tintor

Posted 2009-02-01T11:12:25.403

Reputation: 329


The source code of the book "Data Structures and Algorithm Analysis in Java/C++" seems to include implementations of Pairing-Heaps



– f3lix – 2009-02-01T15:00:25.507

I have written an implementation of the pairing heap based on Weiss's book, but I don't use the extra array in extract-min as Weiss did, so it looks neater and is easier to understand. It is on my blog (the text is in Chinese, but the code is pure English) if anyone is interested in it: http://blog.csdn.net/ljsspace/article/details/6751900

– jscoot – 2011-09-06T10:17:59.673


I really, really love interval trees. They allow you to take a bunch of intervals (i.e. start/end times, or whatever) and query which intervals contain a given time, or which intervals were "active" during a given period. Querying can be done in O(log n) and pre-processing is O(n log n).


Posted 2009-02-01T11:12:25.403

Reputation: 5 706

Cool. This ought to be standard in all RDBMS. Know of any that supports it? – Esben Skov Pedersen – 2011-07-06T13:14:00.503

I'm afraid I don't! I had to implement my own in Python. I understand they're widely used in bioinformatics for gene finding also. – Jonathan – 2011-07-08T11:06:22.767


An XOR Linked List stores the XOR of the previous and next pointers in a single field per node, halving the pointer storage of a doubly-linked list. Kind of obscure but neat!


Posted 2009-02-01T11:12:25.403

Reputation: 588


Splash Tables are great. They're like a normal hash table, except they guarantee constant-time lookup and can handle 90% utilization without losing performance. They're a generalization of the Cuckoo Hash (also a great data structure). They do appear to be patented, but as with most pure software patents I wouldn't worry too much.

David Seiler

Posted 2009-02-01T11:12:25.403

Reputation: 8 630


The Region Quadtree

(quoted from Wikipedia)

The region quadtree represents a partition of space in two dimensions by decomposing the region into four equal quadrants, subquadrants, and so on with each leaf node containing data corresponding to a specific subregion. Each node in the tree either has exactly four children, or has no children (a leaf node).

Quadtrees like this are good for storing spatial data, e.g. latitude and longitude or other types of coordinates.

This was by far my favorite data structure in college. Coding this guy and seeing it work was pretty cool. I highly recommend it if you're looking for a project that will take some thought and is a little off the beaten path. Anyway, it's a lot more fun than the standard BST derivatives that you're usually assigned in your data structures class!

In fact, as a bonus, I've found the notes from the lecture leading up to the class project (from Virginia Tech) here (pdf warning).

Andrew Whitaker

Posted 2009-02-01T11:12:25.403

Reputation: 106 217


This was already (implicitly) mentioned here. Personally, I find quadtrees to be the obvious solution - R-trees are usually more efficient, and in my opinion much cooler.

– BlueRaja - Danny Pflughoeft – 2010-11-24T19:26:52.200


Enhanced hashing algorithms are quite interesting. Linear hashing is neat, because it allows splitting one "bucket" in your hash table at a time, rather than rehashing the entire table. This is especially useful for distributed caches. However, with most simple splitting policies, you end up splitting all buckets in quick succession, and the load factor of the table oscillates pretty badly.

I think that spiral hashing is really neat too. Like linear hashing, one bucket at a time is split, and a little less than half of the records in the bucket are put into the same new bucket. It's very clean and fast. However, it can be inefficient if each "bucket" is hosted by a machine with similar specs. To utilize the hardware fully, you want a mix of less- and more-powerful machines.


Posted 2009-02-01T11:12:25.403

Reputation: 218 845

I had to use linear hashing in a database class! Cool stuff. – Thomas Eding – 2010-07-23T03:49:13.497


Binary decision diagram is one of my favorite data structures, or in fact Reduced Ordered Binary Decision Diagram (ROBDD).

These kind of structures can for instance be used for:

  • Representing sets of items and performing very fast logical operations on those sets.
  • Representing any boolean expression, with the aim of finding all of its solutions

Note that many problems can be represented as boolean expressions. For instance, the solution to a sudoku can be expressed as a boolean expression, and building a BDD for that expression will immediately yield the solution(s).


Posted 2009-02-01T11:12:25.403

Reputation: 1 084

I thought sudoku was NP-hard? – Autodidact – 2010-07-14T06:50:32.443

Knuth even talks about these. (Under Computer Musings) http://scpd.stanford.edu/knuth/index.jsp Also, there are Zero Suppressed BDDs, which also have useful properties.

– Theo Belaire – 2011-03-24T21:06:41.370


I like treaps - for the simple, yet effective idea of superimposing a heap structure with random priority over a binary search tree in order to balance it.

Rafał Dowgird

Posted 2009-02-01T11:12:25.403

Reputation: 30 996


I sometimes use Inversion Lists to store ranges, and they are often used to store character classes in regular expressions. See for example http://www.ibm.com/developerworks/linux/library/l-cpinv.html

Another nice use case is weighted random decisions. Suppose you have a list of symbols and associated probabilities, and you want to pick one at random according to these probabilities:

   a => 0.1
   b => 0.5
   c => 0.4

Then you do a running sum of all the probabilities:

  (0.1, 0.6, 1.0)

This is your inversion list. You generate a random number between 0 and 1, and find the index of the next higher entry in the list. You can do that with a binary search, because it's sorted. Once you've got the index, you can look up the symbol in the original list.

If you have n symbols, you have O(n) preparation time, and then O(log(n)) access time for each randomly chosen symbol, independently of the distribution of weights.
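The recipe above, sketched with the standard `bisect` module doing the binary search (symbols and weights taken from the example):

```python
# Weighted random choice via a running-sum ("inversion") list plus
# binary search, as described above.

import bisect
import random

symbols = ["a", "b", "c"]
weights = [0.1, 0.5, 0.4]

# build the running sum: [0.1, 0.6, 1.0]
sums, total = [], 0.0
for w in weights:
    total += w
    sums.append(total)

def weighted_choice():
    r = random.random()                 # uniform in [0, 1)
    i = bisect.bisect_right(sums, r)    # index of the next higher entry
    return symbols[i]

counts = {s: 0 for s in symbols}
for _ in range(10000):
    counts[weighted_choice()] += 1
# counts["b"] should be roughly half the draws
```

Preparation is O(n) and each draw is one O(log n) binary search, matching the bounds quoted above.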

A variation of inversion lists uses negative numbers to indicate the endpoint of ranges, which makes it easy to count how many ranges overlap at a certain point. See http://www.perlmonks.org/index.pl?node_id=841368 for an example.


Posted 2009-02-01T11:12:25.403

Reputation: 10 197


Arne Andersson trees are a simpler alternative to red-black trees, in which only right links can be red. This greatly simplifies maintenance, while keeping performance on par with red-black trees. The original paper gives a nice and short implementation for insertion and deletion.


Posted 2009-02-01T11:12:25.403

Reputation: 10 469

IIRC, unbox the red nodes and you have a 2-3 finger tree. – Jon Harrop – 2011-03-24T22:53:50.827


DAWGs are a special kind of Trie where similar subtrees are compressed into single parents. I extended and modified DAWGs and came up with a nifty data structure called ASSDAWG (Anagram Search Sorted DAWG). The way this works is that whenever a string is inserted into the DAWG, it is bucket-sorted first and then inserted, and the leaf nodes hold an additional number indicating which permutations are valid if we reach that leaf node from the root. This has 2 nifty advantages:

  1. Since I sort the strings before insertion and since DAWGs naturally collapse similar sub trees, I get high level of compression (e.g. "eat", "ate", "tea" all become 1 path a-e-t with a list of numbers at the leaf node indicating which permutations of a-e-t are valid).
  2. Searching for anagrams of a given string is super fast and trivial now as a path from root to leaf holds all the valid anagrams of that path at the leaf node using permutation-numbers.
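The full ASSDAWG is more involved, but its core trick, bucket-sorting each word so all anagrams share one path/key, can be sketched with a plain dictionary (a simplification for illustration, not the author's actual structure):

```python
# Anagram lookup via the "sort the letters first" trick: all anagrams
# of a word collapse onto the same sorted key.

from collections import defaultdict

class AnagramIndex:
    def __init__(self):
        self.index = defaultdict(set)

    def insert(self, word):
        self.index["".join(sorted(word))].add(word)

    def anagrams(self, word):
        """All inserted words that are anagrams of `word`."""
        return self.index["".join(sorted(word))]

idx = AnagramIndex()
for w in ["eat", "ate", "tea", "tan", "nat", "bat"]:
    idx.insert(w)
print(sorted(idx.anagrams("eta")))   # ['ate', 'eat', 'tea']
```

The DAWG version adds prefix compression on top of this, so shared sorted prefixes (like "ae" in "aet" and "aen") are stored only once.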


Posted 2009-02-01T11:12:25.403

Reputation: 10 964

nice idea! also, don't miss the gaddag [http://en.wikipedia.org/wiki/GADDAG]

– Martin DeMello – 2011-03-24T19:45:38.177

(you do, however, lose the ability to do pattern matches. and i'm not entirely sure how you'd do anagrams with . and * in them) – Martin DeMello – 2011-03-24T19:46:21.237

I used this to develop a multi-language Scrabble program. To handle blank tiles - when you are descending down from root to leaf, at each node, check how many blanks you have and you can choose to use up a blank to go to a different letter. – pathikrit – 2011-03-24T20:36:04.260


Counted, unsorted, balanced B-trees.

Perfect for text editor buffers.



Posted 2009-02-01T11:12:25.403



Fast Compact tries:


Posted 2009-02-01T11:12:25.403

Reputation: 1 157

http://nothings.org/computer/judy/ – Jon Harrop – 2010-05-12T01:43:05.053


BK-Trees, or Burkhard-Keller Trees are a tree-based data structure which can be used to quickly find near-matches to a string.


Posted 2009-02-01T11:12:25.403

Reputation: 350


Fenwick trees (or binary indexed trees) are a worthy addition to one's toolkit. If you have an array of counters that you need to update constantly while querying cumulative counts (as in PPM compression), Fenwick trees will do all operations in O(log n) time and require no extra space. See also this topcoder tutorial for a good introduction.

Sriram Srinivasan

Posted 2009-02-01T11:12:25.403

Reputation: 760


Zobrist Hashing is a hash function generally used for representing a game board position (like in Chess) but it surely has other uses. One nice thing about it is that it can be incrementally updated as the board is updated.


Posted 2009-02-01T11:12:25.403

Reputation: 2 840


I like suffix trees and suffix arrays for string processing, skip lists as balanced lists, and splay trees for self-balancing trees.


Posted 2009-02-01T11:12:25.403



Take a look at the sideways heap, presented by Donald Knuth.


Yngve Sneen Lindal

Posted 2009-02-01T11:12:25.403

Reputation: 628


You can use a min-heap to find the minimum element in constant time, or a max-heap to find the maximum element. But what if you want to do both operations? You can use a Min-Max Heap to do both in constant time. It works by using min-max ordering: alternating between min-heap and max-heap comparisons on consecutive tree levels.

Firas Assaad

Posted 2009-02-01T11:12:25.403

Reputation: 18 370

This is one of my favourite data structures. There is even a variant called a min-max-median heap which allows O(1) retrieval of any of the three. – Martin – 2010-02-19T00:25:07.620

How are these related to finger trees? – Jon Harrop – 2010-05-12T01:44:09.840


Vaibhav Bajpai

Posted 2009-02-01T11:12:25.403

Reputation: 9 996


B* tree

It's a variety of B-tree that is efficient for searching at the cost of a more expensive insertion.


Posted 2009-02-01T11:12:25.403

Reputation: 73 537


Splay Trees are cool. They reorder themselves in a way that moves the most often queried elements closer to the root.


Posted 2009-02-01T11:12:25.403

Reputation: 3 786

this is a duplicate answer Perhaps you'd want to add complementary elements to the previous answer ?

– huitseeker – 2011-01-15T11:26:57.963


Per the Bloom Filter mentions, Deletable Bloom Filters (DlBF) are in some ways better than basic counting variants. See http://arxiv.org/abs/1005.0352


Posted 2009-02-01T11:12:25.403



Getting away from all these graph structures, I just love the simple Ring-Buffer.

When properly implemented, it can seriously reduce your memory footprint while maintaining performance, and sometimes even improving it.
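For reference, a minimal fixed-capacity ring buffer might look like this (one of several possible policies; this one overwrites the oldest element when full):

```python
# Fixed-capacity ring buffer: a preallocated list plus a head index and
# a count, so pushes and pops are O(1) with no allocation after init.

class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0        # index of the oldest element
        self.count = 0

    def push(self, item):
        """Append, overwriting the oldest element when full."""
        tail = (self.head + self.count) % self.capacity
        self.buf[tail] = item
        if self.count == self.capacity:
            self.head = (self.head + 1) % self.capacity  # drop oldest
        else:
            self.count += 1

    def pop(self):
        """Remove and return the oldest element."""
        if self.count == 0:
            raise IndexError("pop from empty ring buffer")
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item

rb = RingBuffer(3)
for i in range(5):       # 0 and 1 get overwritten
    rb.push(i)
print([rb.pop() for _ in range(3)])   # [2, 3, 4]
```

The memory win is that the buffer is allocated once up front; nothing is ever resized or freed during steady-state use.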


Posted 2009-02-01T11:12:25.403


Duplicates an existing answer above. – Steve Guidi – 2009-12-14T23:27:02.317

Explaining the properties of a Ring-Buffer or adding a link to more information would be helpful to people who don't know what it is...which is kind of the point of this question! – A. Levy – 2009-06-17T23:20:51.087


A priority deque is cheaper than maintaining 2 separate priority queues with different orderings. http://www.alexandria.ucsb.edu/middleware/javadoc/edu/ucsb/adl/middleware/PriorityDeque.html http://cphstl.dk/Report/Priority-deque/cphstl-report-2001-14.pdf


Posted 2009-02-01T11:12:25.403

Reputation: 3 638


I think the FM-index by Paolo Ferragina and Giovanni Manzini is really cool, especially in bioinformatics. It's essentially a compressed full-text index that utilizes a combination of a suffix array and a Burrows-Wheeler transform of the reference text. The index can be searched without decompressing the whole index.


Posted 2009-02-01T11:12:25.403

Reputation: 32 123


Ternary Search Tree

  • Quick prefix search (for incremental autocomplete, etc.)
  • Partial matching (when you want to find all words within Hamming distance X of a string)
  • Wildcard searches

Quite easy to implement.


Posted 2009-02-01T11:12:25.403

Reputation: 29 130

Endorsed by the venerable Jon Bentley! – luser droog – 2011-08-20T05:06:06.230


A queue implemented using 2 stacks is pretty space efficient (as opposed to using a linked list, which has at least one extra pointer/reference of overhead per entry).

How to implement a queue using two stacks?

This has worked well for me when the queues are huge. If I save 8 bytes on a pointer, it means that queues with a million entries save about 8MB of RAM.
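The construction from the linked question, sketched briefly (Python lists standing in for the two stacks):

```python
# Queue from two stacks: push onto `inbox`, pop from `outbox`, and
# refill outbox (reversing inbox) only when it runs dry. Each element
# is moved at most once, so operations are amortized O(1).

class TwoStackQueue:
    def __init__(self):
        self.inbox = []      # Python lists make cheap stacks
        self.outbox = []

    def enqueue(self, item):
        self.inbox.append(item)

    def dequeue(self):
        if not self.outbox:
            while self.inbox:                # reverse inbox into outbox
                self.outbox.append(self.inbox.pop())
        if not self.outbox:
            raise IndexError("dequeue from empty queue")
        return self.outbox.pop()

q = TwoStackQueue()
for i in range(3):
    q.enqueue(i)
print([q.dequeue() for _ in range(3)])   # [0, 1, 2]
```

In a language where the stacks are plain arrays, this avoids the per-node pointer that a linked-list queue would pay, which is the space saving described above.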


Posted 2009-02-01T11:12:25.403

Reputation: 3 310


Skip lists are actually pretty awesome: http://en.wikipedia.org/wiki/Skip_list


Posted 2009-02-01T11:12:25.403



A proper string data structure. Almost every programmer settles for whatever native support a language has for strings, and that's usually inefficient (especially for building strings, where you need a separate builder class or something similar).

The worst is treating a string as a character array in C and relying on the NUL terminator for safety.

Rudolf Olah

Posted 2009-02-01T11:12:25.403

Reputation: 8 289

That's one of the reasons I praise C#: it's very rare that you need anything beyond the native string or StringBuilder types for text. – sergiol – 2012-02-24T23:21:26.550

The other extreme is C++, where every self-respecting library comes with its own string data type, which is of course incompatible with all the other string types except const char*. Personally, I prefer the environments where I don't have to spend so much time converting strings from one type to another. – Niki – 2010-09-23T06:48:41.713



Posted 2009-02-01T11:12:25.403

Reputation: 464


I personally find sparse matrix data structures to be very interesting. http://www.netlib.org/linalg/html_templates/node90.html

The famous BLAS libraries use these. And when you deal with linear systems that contain 100,000's of rows and columns, it becomes critical to use these. Some of these also resemble the compact grid (basically like a bucket-sorted grid) which is common in computer graphics. http://www.cs.kuleuven.be/~ares/publications/LD08CFRGRT/LD08CFRGRT.pdf

Also as far as computer graphics is concerned, MAC grids are somewhat interesting, but only because they're clever. http://www.seas.upenn.edu/~cis665/projects/Liquation_665_Report.pdf


Posted 2009-02-01T11:12:25.403

Reputation: 195


Delta list/delta queue are used in programs like cron or event simulators to work out when the next event should fire. http://everything2.com/title/delta+list http://www.cs.iastate.edu/~cs554/lec_notes/delta_clock.pdf


Posted 2009-02-01T11:12:25.403

Reputation: 3 638

How is it better than a priority queue? – Bruno Martinez – 2011-03-24T23:58:58.153


Bucket Brigade

They are used extensively in Apache. Basically they are a linked list that loops around on itself in a ring. I am not sure if they are used outside of Apache and Apache modules but they fit the bill as a cool yet lesser known data structure. A bucket is a container for some arbitrary data and a bucket brigade is a collection of buckets. The idea is that you want to be able to modify and insert data at any point in the structure.

Let's say that you have a bucket brigade that contains an HTML document with one character per bucket. You want to convert all the < and > symbols into &lt; and &gt; entities. The bucket brigade allows you to insert some extra buckets in the brigade when you come across a < or > symbol, in order to fit the extra characters required for the entity. Because the bucket brigade is a ring, you can insert backwards or forwards. This is much easier to do (in C) than using a simple buffer.

Some reference on bucket brigades below:

Apache Bucket Brigade Reference

Introduction to Buckets and Brigades

John Scipione

Posted 2009-02-01T11:12:25.403

Reputation: 1 583

Sounds like a marketing name for a circular linked list – BlueRaja - Danny Pflughoeft – 2010-07-22T18:24:33.797

Yeah, it sounds like a circular linked list with a variant record type along with some O(1) insertion properties. – Paul Nathan – 2010-07-22T18:34:23.167

This is also a popular way to implement queues and deques. – Jon Harrop – 2011-03-24T22:51:51.683


A corner-stitched data structure. From the summary:

Corner stitching is a technique for representing rectangular two-dimensional objects. It appears to be especially well-suited for interactive editing systems for VLSI layouts. The data structure has two important features: first, empty space is represented explicitly; and second, rectangular areas are stitched together at their corners like a patchwork quilt. This organization results in fast algorithms (linear time or better) for searching, creation, deletion, stretching, and compaction. The algorithms are presented under a simplified model of VLSI circuits, and the storage requirements of the structure are discussed. Measurements indicate that corner stitching requires approximately three times as much memory space as the simplest possible representation.

Trey Jackson

Posted 2009-02-01T11:12:25.403

Reputation: 66 090


Burrows–Wheeler transform (block-sorting compression)

It's an essential algorithm for compression. Say you want to compress the lines of a text file. You might object that sorting the lines loses information. But BWT works like this: it greatly reduces entropy by sorting its input, while keeping enough index information to recover the original order.
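A naive (quadratic) sketch of the transform and its inverse, using a "$" sentinel assumed to sort before every other character and not occur in the input; real implementations use suffix arrays rather than materializing all rotations:

```python
# Naive Burrows-Wheeler transform: sort all rotations of the input and
# take the last column. The "$" sentinel makes the inverse unambiguous.

def bwt(s):
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last_column):
    # repeatedly prepend the transform and re-sort to rebuild the
    # sorted rotation table, then pick the row ending in the sentinel
    table = [""] * len(last_column)
    for _ in range(len(last_column)):
        table = sorted(c + row for c, row in zip(last_column, table))
    original = next(row for row in table if row.endswith("$"))
    return original[:-1]

encoded = bwt("banana")
print(encoded)                   # annb$aa
print(inverse_bwt(encoded))      # banana
```

The compression win is that the output clusters identical characters together (note the runs in "annb$aa"), which downstream coders like move-to-front plus run-length encoding exploit.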

Lukáš Nalezenec

Posted 2009-02-01T11:12:25.403

Reputation: 18

BWT is purely an algorithm and not a data structure though. – Jon Harrop – 2011-03-24T22:54:48.517

@Jon, you're technically right, but why make the distinction? In designing data structures, I often find that a data structure and the algorithm around it go hand-in-hand. That one implies the other. Put another way, isn't the whole point of a data structure the operations you can perform on it and their runtime and memory use? I could say that the data structure for the Burrows-Wheeler transform plus run-length encoding is a data structure for representing arbitrary strings whose memory use (unlike a regular character array) is less than O(n) for many strings. And that's interesting. – Jonathan Tran – 2011-07-02T20:33:42.957


PATRICIA - Practical Algorithm to Retrieve Information Coded in Alphanumeric, D. R. Morrison (1968).

A PATRICIA tree is related to a Trie. The problem with Tries is that when the set of keys is sparse, i.e. when the actual keys form a small subset of the set of potential keys, as is very often the case, many (most) of the internal nodes in the Trie have only one descendant. This gives the Trie a high space complexity.



Posted 2009-02-01T11:12:25.403

Reputation: 1 411


I am not sure if this data structure has a name, but the proposed tokenmap data structure for inclusion into Boost is kind of interesting. It is a dynamically resizable map where look-ups are not only O(1), they are simple array accesses. I wrote most of the background material on this data structure which describes the fundamental principle behind how it works.

Something like a tokenmap is used by operating systems to map file or resource handles to data structures representing the file or resource.

Daniel Trebbien

Posted 2009-02-01T11:12:25.403

Reputation: 27 596


Disjoint Set Forests allow fast membership queries and union operations and are most famously used in Kruskal's Algorithm for minimum spanning trees.

The really cool thing is that both operations have amortized running time proportional to the inverse Ackermann function, making this the "fastest" non-constant-time data structure.
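A compact sketch with the two classic optimizations (union by rank and path compression) that together yield those near-constant amortized bounds:

```python
# Disjoint set forest (union-find) with union by rank and path
# compression, as used in Kruskal's minimum spanning tree algorithm.

class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:               # union by rank:
            ra, rb = rb, ra                             # attach shorter
        self.parent[rb] = ra                            # tree under taller
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

ds = DisjointSets(6)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))   # True
print(ds.find(0) == ds.find(3))   # False
```

A membership query is just "do the two elements have the same root?", which is exactly the cycle test Kruskal's algorithm needs per edge.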


Posted 2009-02-01T11:12:25.403

Reputation: 49 378


Binomial heaps have a lot of interesting properties, the most useful of which is cheap merging.


Posted 2009-02-01T11:12:25.403

Reputation: 9 291


Environment tracking recursive structures.

Compilers use a structure that is recursive but not like a tree. Inner scopes have a pointer to an enclosing scope so the nesting is inside-out. Verifying whether a variable is in scope is a recursive call from the inside scope to the enclosing scope.

import java.util.HashMap;

public class Env {
    HashMap<String, Object> map;
    Env                     outer;

    Env() {
        outer = null;
        map = new HashMap<String, Object>();
    }

    Env(Env o) {
        outer = o;
        map = new HashMap<String, Object>();
    }

    void put(String key, Object value) {
        map.put(key, value);
    }

    Object get(String key) {
        if (map.containsKey(key))
            return map.get(key);
        if (outer != null)
            return outer.get(key);   // not found here: ask enclosing scope
        return null;
    }

    Env push() {   // enter a new nested scope
        return new Env(this);
    }

    Env pop() {    // leave the current scope
        return outer;
    }
}

I'm not sure if this structure even has a name. I call it an inside-out list.

Kelly S. French

Posted 2009-02-01T11:12:25.403

Reputation: 10 419

Isn't this essentially a Linked List of HashMap<String, Object>? – Wesley Wiser – 2011-01-13T14:25:50.023

It ends up being more of a tree, i.e. a HashMap of HashMaps. You can't even declare it statically because you don't know the depth ahead of time. If you tried, it would look like this: HashMap<String, HashMap<String, HashMap<String, HashMap<String, ...>>> > – Kelly S. French – 2011-01-13T15:59:45.007

One usually uses a Stack&lt;HashMap&lt;String,SymbolInformation&gt;&gt; for this, and one usually makes one pass through the AST, pushing and popping the stack as various scopes are encountered, and updating the hashes as new variables are added. If one really does want to keep the data structure around for lots of different branches of the tree at once, one uses a list in the Lisp sense, where the list nodes are immutable, and the later nodes in the list are shared by multiple lists. – Ken Bloom – 2011-03-24T23:40:12.960


There is a clever data structure out there that stores the elements' data in arrays, but links those arrays together in a linked list: a linked list of arrays.

This has the advantage that iterating over the elements is very fast (faster than a pure linked-list approach), while the cost of moving the element arrays around in memory and of (de-)allocation is kept to a minimum. (Because of this, the data structure is useful for simulation work.)

I know about it from here:


"...and that an additional array is allocated and linked in to the cell list of arrays of particles. This is similar in some respects to how TBB implemented its concurrent container." (it is about the performance of linked lists vs. arrays)


Posted 2009-02-01T11:12:25.403

Reputation: 2 479

In C++'s standard library, this is known as a deque: http://www.cplusplus.com/reference/stl/deque/

– Josh Townzen – 2010-07-14T04:12:24.780

This brings http://en.wikipedia.org/wiki/VList to my mind ...

– f3lix – 2010-07-14T16:25:30.550


Someone else already proposed Burkhard-Keller-Trees, but I thought I might mention them again in order to plug my own implementation. :)


There are faster implementations around (see ActiveState's Python recipes or implementations in other languages), but I think/hope my code helps to understand these data structures.

By the way, both BK and VP trees can be used for much more than searching for similar strings. You can do similarity searches for arbitrary objects as long as you have a distance function that satisfies a few conditions (positivity, symmetry, triangle inequality).


Posted 2009-02-01T11:12:25.403

Reputation: 510

Perhaps you may want to edit the previous answer ?

– huitseeker – 2011-01-15T11:30:44.360


I have had good luck with WPL trees before. A tree variant that minimizes the weighted path length of the branches: weight is determined by node accesses, so that frequently-accessed nodes migrate closer to the root. Not sure how they compare to splay trees, as I've never used those.


Posted 2009-02-01T11:12:25.403

Reputation: 2 893


Half edge data structure and winged edge for polygonal meshes.

Useful for computational geometry algorithms.


Posted 2009-02-01T11:12:25.403

Reputation: 1


I think Cycle Sort is a pretty neat sorting algorithm.

It's a sorting algorithm that minimizes the total number of writes. This is particularly useful when you're dealing with flash memory, where the lifespan of the memory is limited by the number of writes. Here is the Wikipedia article, but I recommend going to the first link. (nice visuals!)
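A short sketch of the classic formulation (here `writes` counts array writes; each element is written at most once, directly into its final position, by rotating the permutation cycles of the array):

```python
# Cycle sort: for each cycle start, find where the held item belongs by
# counting smaller elements, place it there, and keep rotating the
# cycle until it closes. Minimizes writes at the cost of O(n^2) reads.

def cycle_sort(a):
    writes = 0
    for start in range(len(a) - 1):
        item = a[start]
        # the item's final slot: start + (number of smaller elements after)
        pos = start + sum(1 for x in a[start + 1:] if x < item)
        if pos == start:
            continue                      # already in place: zero writes
        while item == a[pos]:
            pos += 1                      # skip over duplicates
        a[pos], item = item, a[pos]
        writes += 1
        while pos != start:               # rotate the rest of the cycle
            pos = start + sum(1 for x in a[start + 1:] if x < item)
            while item == a[pos]:
                pos += 1
            a[pos], item = item, a[pos]
            writes += 1
    return writes

data = [3, 1, 4, 1, 5]
writes = cycle_sort(data)
print(data)     # [1, 1, 3, 4, 5]
```

Note that a fully sorted input costs zero writes, which is exactly the property you want on write-limited media.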


Posted 2009-02-01T11:12:25.403

Reputation: 10


  • Binary decision diagram (my very favorite data structure, good for representing boolean equations, and solving them. Effective for a great lot of things)
  • Heaps (a tree where the parent of a node always maintains some relation to its children; for instance, the parent of a node is always greater than each of its children (max-heap))
  • Priority queues (really just min-heaps and max-heaps; good for maintaining order over many elements where, e.g., the item with the highest value should be removed first)
  • Hash tables, (with all kinds of lookup strategies, and bucket overflow handling)
  • Balanced binary search trees (Each of these have their own advantages)
    • RB-trees (overall good, when inserting, lookup, removing and iterating in an ordered fashion)
    • AVL trees (faster for lookup than RB, but otherwise very similar to RB)
    • Splay-trees (faster for lookup when recently used nodes are likely to be reused)
    • Fusion-tree (Exploiting fast multiplication for getting even better lookup times)
    • B+Trees (Used for indexing in databases and file systems, very efficient when latency to read/write from/to the index is significant).
  • Spatial indexes (excellent for querying whether points/circles/rectangles/lines/cubes are in close proximity to or contained within each other)
    • BSP tree
    • Quadtree
    • Octree
    • Range-tree
    • Lots of similar but slightly different trees, and different dimensions
  • Interval trees (good for finding overlapping intervals, linear)
  • Graphs
    • adjacency list (basically a list of edges)
    • adjacency matrix (a table representing directed edges of a graph with a single bit per edge. Very fast for graph traversal)

These are the ones I can think of offhand. There are even more on Wikipedia's list of data structures.


Posted 2009-02-01T11:12:25.403

Reputation: 1 084

What do you mean by saying Interval trees are "linear"? – Jonathan Graehl – 2010-07-23T01:55:53.857

Thanks for giving constructive critique before downvoting this answer </sarcasm> – Zuu – 2009-02-18T20:05:49.610

Not the downvoter, but I'd guess it's because Heaps, PQs, Hash Tables and Binary Trees aren't what you'd call lesser known. – Dana – 2009-02-18T20:12:29.703

@Zuu, OK, I'll give some constructive criticism. You provided many data structures, of which only a small fraction could be considered "lesser known". There are no links in your post and it generally misses the entire point of the question. – mmcdole – 2009-02-18T20:20:56.110

It's hard to tell what people would understand by 'lesser known'. Some people barely know what a balanced tree is. And while people might know the term 'heap' they don't know it's a general data structure that can actually be used with sense in a given application. – Zuu – 2009-02-19T00:54:54.313

As for the links, sure, I could look them all up, but I was nice enough to categorize them as well as link to an index of data structures on Wikipedia. Also note that the last part of the question was added after I posted this answer :-) – Zuu – 2009-02-19T01:00:38.607

BDDs. Fear the most mindwarping datastructure :) – Tetha – 2009-03-18T21:18:35.077

@Zuu: Why don't you create new entries for elsewhere unmentioned Collections? – user unknown – 2011-03-24T17:12:33.927

@user unknown, I don't intend to 'maintain' my answers. I find it rude that questions can be changed in the first place, effectively leaving all existing answers irrelevant. This answer reflects the original question. If anyone wants to split it up, they're free to do so :-) – Zuu – 2011-04-06T20:08:44.993


Right-angle triangle networks (RTINs)

Beautifully simple way to adaptively subdivide a mesh. Split and merge operations are just a few lines of code each.

Jon Harrop

Posted 2009-02-01T11:12:25.403

Reputation: 39 096


I stumbled on another data structure, the Cartesian Tree, when I read about some algorithms related to RMQ and LCA. In a Cartesian tree, the lowest common ancestor of two nodes is the minimum node between them. It is useful for converting an RMQ problem to LCA.
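A Cartesian tree (min at the root here) can be built in O(n) with a stack holding the rightmost spine; a sketch:

```python
# Build a min-rooted Cartesian tree from an array in O(n): maintain the
# rightmost spine on a stack; nodes larger than the incoming value are
# popped and become its left subtree. An in-order walk recovers the
# original sequence, and every subtree root is that range's minimum.

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def cartesian_tree(values):
    stack = []                           # rightmost spine, root at bottom
    for v in values:
        node, last = Node(v), None
        while stack and stack[-1].value > v:
            last = stack.pop()           # larger nodes go under v's left
        node.left = last
        if stack:
            stack[-1].right = node       # hang v off the spine
        stack.append(node)
    return stack[0]

def inorder(node):
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

root = cartesian_tree([9, 3, 7, 1, 8, 12, 10])
print(root.value)       # 1 -- the overall minimum sits at the root
print(inorder(root))    # [9, 3, 7, 1, 8, 12, 10]
```

Each node is pushed and popped at most once, which is where the linear build time comes from.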


Posted 2009-02-01T11:12:25.403

Reputation: 1 114