For the rest of this essay, I'm going to prefix each paragraph with one of three labels: "CS," "Everybody," or "Non-CS." If you're into computer science, read the "CS" paragraphs. If you're not, read the "Non-CS" paragraphs. Everybody should obviously read the "Everybody" paragraphs.

Everybody: So the sleep sort algorithm is a devious little sorting method that surfaced fairly recently. It tells the computer to pause ("sleep") and then print the first number; while that request is sleeping, our program tells it to sleep and print the second number, and so on. The amount of time to sleep is the same as the number to print. So, given the list "1, 2, 3, 6, 5, 4," the computer would sleep 1 second and print 1, then a second later it would print 2, and so on until the 6th second, when it would print 6 and be done printing the sorted list. Magic!
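A minimal sketch of the idea in Python, using one thread per number (the `scale` factor, which shrinks "one second per unit" down to a tenth of a second, is my addition just to keep the demo quick):

```python
import threading
import time

def sleep_sort(values, scale=0.1):
    """Sleep sort: each value waits proportionally to itself, then appends.

    `scale` is seconds of sleep per unit of value; the classic description
    uses a full second, but any positive factor shows the same behavior.
    """
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * scale)
        with lock:  # appends can race, so serialize them
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([1, 2, 3, 6, 5, 4]))
```

Each worker finishes in value order, so the appends arrive sorted; the whole run takes about as long as the largest value times `scale`.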

CS: So, the question comes up: what is the run-time complexity of sleep sort (recall that we express this in "big-O notation," such as O(n) or O(n log n))? It turns out that we can't know the run-time complexity of sleep sort in the general case!

Non-CS: When we want to know how efficient a sorting method (algorithm) is, we ask, "how much slower does it get when I add new values to the list?" This is critical because computers often have to sort *very* large lists. For example, you might have thousands of emails in your spam folder, but Google (or whoever your email provider is) doesn't want to spend years of computer time sorting everybody's mail if they can help it, so they want to know what algorithm is the absolute fastest for large numbers of emails. For sleep sort, we can't actually answer the question of how efficient it is, which is a rather surprising answer!

Everybody: But it seems as if we can know that. It takes about 1 second plus the largest number in the list. So, if the list were just four numbers, "1, 2, 3" and "10,000,000," then it would take ten million and one seconds to print the whole list, right? Well, not quite. It's actually 10,000,000 seconds plus some small amount of prep time. Of course, for four numbers the prep time is trivial, and the time it takes to do the work is effectively all spent sleeping, waiting to print that last number.

Everybody: What if, however, we want to sort the numbers 1 through 100 trillion? Well, if we have a computer with enough memory to hold all of those numbers, then we're going to spend 100 trillion and 1 seconds, right? Well... not really. Actually, we run into a big problem right off the bat. What if the list of numbers is reversed, except for the number 2, which comes first? We tell the computer to "sleep 2 seconds and print the number 2," then "sleep 100 trillion seconds and print 100,000,000,000,000," and so on, all the way down to "sleep 1 second and print 1." But the amount of time it takes to tell the computer how long to sleep for every number is going to be longer than 2 seconds, so the number 2 is going to be printed before the number 1!
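A back-of-the-envelope check makes the problem concrete. Assume, purely for illustration, that issuing each sleep request costs about a microsecond (real per-request overhead varies by operating system and language):

```python
# Hypothetical cost to schedule one sleep request: 1 microsecond.
# This is an assumed figure for illustration only.
per_request_seconds = 1e-6
requests = 100 * 10**12  # 100 trillion numbers to schedule

prep_seconds = requests * per_request_seconds
print(prep_seconds)                            # total scheduling time, in seconds
print(prep_seconds / (60 * 60 * 24 * 365))     # the same figure, in years

# The number 2's head start is only 2 seconds, so the prep alone
# blows past it long before the list is fully scheduled.
print(prep_seconds > 2)
```

Even at a microsecond per request, scheduling 100 trillion sleeps takes about a hundred million seconds, on the order of years, so the 2-second head start is hopeless.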

Everybody: So, we've proven that sleep sort doesn't work, right? Well, there are ways to ensure that it doesn't fail. The easiest of these is to predict how long it will take to prepare all of the numbers and add that to every number's sleep time. So, if it will take 20 minutes to prepare the list of 100 trillion numbers, we sleep 20 minutes and 2 seconds and then print the number 2. This resolves the bug and sleep sort works again. And 20 minutes is still a trivial amount of time compared to the 100 trillion and 1 seconds it will take to print out the remaining numbers.
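One way to sketch that fix in Python: instead of sleeping a relative duration, each request sleeps until an absolute deadline that includes an estimated prep budget. The half-second `prep_budget` here is an assumed stand-in for the essay's 20 minutes, chosen only to keep the demo fast:

```python
import threading
import time

def sleep_sort_with_offset(values, scale=0.1, prep_budget=0.5):
    """Sleep sort where every request targets an absolute deadline.

    `prep_budget` is a guessed upper bound on setup time; pushing every
    deadline out by it means slow thread startup can't reorder results.
    """
    start = time.monotonic()
    result = []
    lock = threading.Lock()

    def worker(v):
        deadline = start + prep_budget + v * scale
        time.sleep(max(0.0, deadline - time.monotonic()))
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result
```

Because every deadline is measured from the same `start` and padded by the same budget, a value scheduled late still fires at its correct slot, as long as the budget guess was big enough.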

CS: I suspect that you already see where this is going. The time spent sleeping is a constant factor times the largest value. Because the largest value can be scaled down to the length of the list, it's really a constant factor times n, and we ignore constant factors for runtime complexity analysis, so it's really n. So is sleep sort O(n)? If so, that's pretty amazing! But we had to allow for that extra prep time...

Non-CS: When we try to compare the speed of sorting algorithms, we typically ignore any time spent if it doesn't change the "shape" of the curve that we would draw to describe the increased time. So, if your sorting algorithm takes 1 second per number and my sorting algorithm takes 2 seconds per number, then we say that they are the same "complexity." It turns out, though, that if your sorting algorithm took 1 second per number, it would be the best sorting algorithm ever devised! In practice, a sort takes more or less as long as the number of elements you have to compare times the logarithm of the number of elements. I won't get into logarithms or why this is true, but what this means is that the amount of time it takes to sort a list of numbers grows *much* faster than the number of items in the list as we add new items to the list.
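A quick illustration of that gap, comparing the list size n to the n log n work a typical comparison sort does (the list sizes are arbitrary examples):

```python
import math

# Comparison sorts do roughly n * log2(n) units of work; watch how that
# outgrows n itself as the list gets bigger.
for n in [1_000, 1_000_000, 1_000_000_000]:
    work = n * math.log2(n)
    print(f"{n:>13,} items -> about {work:,.0f} units of work")
```

Going from a thousand items to a billion multiplies the list by a million but the work by roughly three million: the curve bends upward, which is exactly the "shape" change that complexity analysis cares about.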

Everybody: What's confusing about sleep sort is that the sleep time is so large (a second is a very long time to a computer) that it seems as if it will always dominate the total time taken. But that prep time is growing at a very high rate, and though it might take a list of many trillions of numbers for it to happen, eventually the prep will take more time than all of the time spent sleeping.

Everybody: Okay, so what the heck is that mysterious "prep time"? Ready for the reveal...? Okay, so here's the spoiler: that prep time is spent *sorting the list!* That's right, the computer has to take all of those requests to sleep some number of seconds and it has to order them, so it needs to sort the numbers you gave it! The reason we can't know how efficient sleep sort is is that we'd have to know how efficient your particular operating system's algorithm is for sorting all of those sleep requests first! Once there are enough items in the list, that sorting is the dominant factor, not the number of seconds that we sleep.

CS: Note that the OS doesn't actually sort the list per se. It inserts each new sleep request into a data structure that is always sorted. Maintaining a sorted data structure is a related class of algorithms with similar run time complexities, but without some of the optimizations that are possible when you have all of the items from the start.
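A tiny sketch of that idea using Python's `heapq` as a stand-in for the kernel's timer structure (real operating systems use their own structures, such as timer wheels or red-black trees; this only shows the shape of the operation):

```python
import heapq

# Each sleep request becomes a (deadline, value) pair pushed into a heap.
# Every push maintains heap order, so pops come out by soonest deadline.
timers = []
for value in [3, 1, 6, 2]:
    heapq.heappush(timers, (value, value))  # deadline == value, per sleep sort

firing_order = [heapq.heappop(timers)[1] for _ in range(len(timers))]
print(firing_order)  # the "prep" has already produced sorted order
```

By the time the last request is pushed, the timer structure has done all the ordering work; the sleeping that follows is just a slow-motion replay of a sort that already happened.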

Everybody: Sleep sort is a great example of how deceptive working with algorithms can be. It's so easy to hide the actual work that is going on behind layers of seemingly more time-consuming work. This is why it's often the case that performance problems in software only pop up in actual use, not in testing. If an inefficient algorithm is being used, it might not become apparent when working with small numbers of items, but as the number of items grows, the real scaling of your algorithm becomes clear!