Algorithms To Live By – Part 1, Sorting

Many self-improvement books are built upon a catchy, colorful metaphor. This can be an effective and inspirational technique, since the human brain is wired for drawing analogies. We may get quite fired up and motivated while reading about “turning the flywheel” or “leaning in,” but once we return to reality, it can be difficult to work out the practicality of these ideas and see how they might actually be implemented. And if you have spent much time in large corporations, you may have developed a certain skepticism toward the entire genre, scarred by its more ridiculous specimens, such as Who Moved My Cheese?.

I prefer to wander off the beaten path of the top-ten list published weekly in the Wall Street Journal. A good example of one such hidden gem is Algorithms to Live By. If you are willing to dig a bit deeper, there are a number of concepts presented in the book that can improve our work processes.

The first topic worth exploring is sorting. A bit of clarification about the title first: the word “algorithm” is meant literally. The book discusses classic computer science problems and solutions, and, as such, it needs to cover some terminology for the lay person. One such term is big O notation, which classifies the difficulty of a computational problem and the performance of an algorithm. Very simply, big O notation describes how computation time scales with the size of the problem, i.e. the number of elements n. The best of all worlds is constant time, or O(1). In this case, no matter how big the problem, the time to solve it does not change. A construction project might be thought of this way: once it has been decided what to build, the construction time is the same regardless of how many people will eventually visit the building.
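
A minimal Python sketch of constant time, using nothing more than a hash-table (dictionary) lookup; the names and figures are illustrative only, not from the book:

    # Constant time, O(1): a dictionary lookup takes roughly the same time
    # whether the dictionary holds ten entries or ten million.
    def lookup_price(prices, ticker):
        return prices.get(ticker)  # hash lookup; no scan of the whole dictionary

    prices = {"AAPL": 189.3, "MSFT": 411.2, "BRK.B": 408.9}
    print(lookup_price(prices, "MSFT"))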

More commonly, we encounter linear time, O(n), which is exactly what it sounds like: the larger the problem, the more time it takes, in a one-to-one ratio. These are situations whose elements can be dealt with one at a time, with no backtracking or interdependencies. But things deteriorate rapidly from here as soon as the elements of the problem start interacting.
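
A correspondingly simple sketch of linear time: totaling a list by touching each element exactly once, so doubling the list roughly doubles the work (again, the names are made up for illustration):

    # Linear time, O(n): each element is visited exactly once.
    def total_value(position_values):
        total = 0.0
        for value in position_values:
            total += value
        return total

    print(total_value([1_000.0, 2_500.0, 750.0]))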

The next level of difficulty is quadratic time, or O(n²). Now every element of the problem must be compared with every other element once, so the time required climbs much more sharply with size.
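
One way to picture quadratic growth is an all-pairs comparison, as in this illustrative sketch: with n items there are n(n-1)/2 pairs to check, so the work grows with the square of n.

    # Quadratic time, O(n^2): compare every element against every other one.
    def count_duplicate_pairs(items):
        count = 0
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    count += 1
        return count

    print(count_duplicate_pairs(["GE", "IBM", "GE", "F"]))  # prints 1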

But the real killer is factorial time, O(n!), which is what you face if you try to sort by brute force, testing every possible ordering of the elements. Even the simplest practical approach is painful. Consider doing a puzzle by taking one piece and trying the other pieces against it, one by one, until a fit is found, then repeating until the puzzle is complete. That pairwise approach is the highly inefficient quadratic-time method known as the “bubble sort.” We can do better via “mini” sorts, e.g. by color and by edge pieces, and then collating or merging the results. Computationally this is better as well: merge-sorting algorithms bring the running time down to a realm called “linearithmic” time, or O(n log n). But that function still slopes upward pretty sharply with size.
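
For the curious, here is a compact sketch of the merge-sort idea described above: split the pile in half, sort each half, then merge the two sorted halves. It is a textbook version, not code from the book:

    # Merge sort, O(n log n): recursively sort halves, then merge them.
    def merge_sort(items):
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]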

Unfortunately, this linearithmic scaling is about the best that even the most optimized general-purpose sorting algorithms can reach. The key message: sorting is expensive!

Note, however, that sorting is ultimately an up-front investment of time that pays off in faster searching. The two can and should therefore be balanced according to the needs of the problem.

For example, have you ever shared electronic storage space with someone who was “subfolder” happy? Yes, it may save the occasional bout of hunting around for the correct file. But it is certain that every time you open or save a file you must spend time navigating that hierarchy. So every time substructure is added, the time saved in searching should be weighed against the time spent on sorting. Consider also that in the long run, the balance depends not only on the number of elements involved in the sort/search but on the total number of occasions of access. For objects that are rarely retrieved, perhaps it is best to just leave them in a partial or complete jumble!
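
The trade-off can be sketched in a few lines: scanning an unsorted pile costs nothing up front but is linear on every lookup, while sorting once makes each subsequent lookup logarithmic. The file names below are hypothetical.

    import bisect

    # Leave the pile unsorted and scan it every time...
    def find_unsorted(files, name):
        for f in files:  # O(n) per lookup, zero up-front cost
            if f == name:
                return True
        return False

    # ...or pay O(n log n) once to sort, then answer each lookup in O(log n).
    def find_sorted(sorted_files, name):
        i = bisect.bisect_left(sorted_files, name)
        return i < len(sorted_files) and sorted_files[i] == name

    files = ["memo_q3.pdf", "broyhill_letter.pdf", "model_v2.xlsx"]
    sorted_files = sorted(files)
    print(find_unsorted(files, "model_v2.xlsx"), find_sorted(sorted_files, "model_v2.xlsx"))

Whether the one-time sort is worth it depends on how many lookups follow, which is exactly the point about rarely accessed files above.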

Earlier in the year at Broyhill, we did just that.  We took just under 10K files, nearly a decade of internal and external research nicely sorted into countless folders a dozen levels deep in places, and flattened the file structure into just two folders.  Before doing so, we made sure to procure a storage system that could index the document content (even PDFs!) for easy searching.  So far so good; everyone’s happy and we’ve had no trouble finding the things we need.  The only thing we need to remember to do is to hide everything in the closet before company comes over to visit!

Article by Andrea Sefler, Broyhill Asset Management
