Sleep Sort

We never run out of sorting algorithms. Here is a strange one called Sleep Sort.

procedure printNumber(n)
    sleep n seconds
    print n
end

for arg in args
    run printNumber(arg) in background
end
wait for all processes to finish
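The pseudocode above can be sketched in Python using one thread per element; `threading` plays the role of "run in background" and `join` plays the role of "wait for all processes to finish". The 0.1-second scale factor is an assumption to keep the demo fast; it only works when the scaled values are far enough apart to absorb thread-startup overhead, as the solution below discusses.

```python
import threading
import time

def sleep_sort(values):
    """Sleep sort: each value sleeps proportionally to itself, then appends
    itself to the result. Smaller values wake up earlier, so the result
    comes out sorted (assuming non-negative inputs spaced far enough apart
    that scheduling overhead cannot reorder the wake-ups)."""
    result = []
    lock = threading.Lock()  # protect result from concurrent appends

    def worker(n):
        time.sleep(n * 0.1)  # scale down so the demo runs quickly
        with lock:
            result.append(n)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()             # "run printNumber(arg) in background"
    for t in threads:
        t.join()              # "wait for all processes to finish"
    return result

print(sleep_sort([3, 1, 4, 1, 5, 9, 2, 6]))
```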

Which of the following best describes the time complexity for sleep sort?

  • Assume that the length of the array is $n$.

  • $O(1)$
  • $O(n)$
  • $O(\max \text{input})$
  • $O(\max \text{input} + n)$


1 solution

Ivan Koswara
Sep 18, 2016

First, note that the algorithm is hardware-dependent, because forking a new process is not an instantaneous operation, and switching between running threads also takes time. In fact, if there are enough elements in the array that forking a new process takes more than $\frac{1}{n}$ seconds, the algorithm fails; consider the sequence $1, 0, 0, \ldots, 0$, where the process printing 1 will finish before the last process printing 0 starts. So now assume that forking a new process takes a very short time, say $\epsilon < \frac{1}{n}$ seconds, and that all threads can run simultaneously without any impact on time (say they are running on parallel processors). We'll also assume that the sequence consists of non-negative integers only, because we don't want to deal with time travel involving sleeping for $-1$ seconds. (We can generalize this so that the sequence consists of multiples of a certain number $\tau$; we then need $\epsilon < \frac{\tau}{n}$. We cannot have unrestricted rational or real numbers because they might be separated by less than $\epsilon$.)

Consider the sequence $a_1, a_2, \ldots, a_n$. The first element takes $\epsilon$ seconds to fork, plus $a_1$ seconds to wait. The second element takes $2\epsilon$ seconds to fork (it needs to wait for the first element to finish forking first), plus $a_2$ seconds to wait. In general, the $i$-th element takes $i\epsilon$ seconds to fork, plus $a_i$ seconds to wait. Since all elements are running simultaneously, the time taken will be the maximum among these: $\max (i\epsilon + a_i)$. Because $\epsilon < \frac{1}{n}$, we have $i\epsilon < 1$, thus a larger $a_i$ will definitely take more time than a smaller $a_i$. Among those with the largest $a_i$, those with larger $i$ will take more time (because $i\epsilon$ is larger when $i$ is larger). Thus suppose the maximum $a_i$ is achieved by $a_k$ (and if tied, take the largest $k$). The running time is $k\epsilon + a_k$ seconds.
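This timing model is easy to check numerically. A minimal sketch, assuming a given $\epsilon$ and array (the values here are made up for illustration):

```python
def sleep_sort_time(a, eps):
    """Model running time of sleep sort: the i-th element (1-indexed)
    finishes at i*eps + a[i-1] seconds, and all elements run in
    parallel, so the total time is the maximum of these."""
    return max(i * eps + x for i, x in enumerate(a, start=1))

# With eps < 1/n, the maximum is attained by the largest element,
# ties broken toward the largest index. For example:
a = [2, 5, 0, 5, 1]   # n = 5; max element 5 occurs at i = 2 and i = 4
eps = 0.1             # eps < 1/5
print(sleep_sort_time(a, eps))  # k = 4, so k*eps + a_k = 0.4 + 5 = 5.4
```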

Thus $k\epsilon + a_k = O(n) + O(\max a_i) = O(\max a_i + n)$.

The answer is not $O(\max a_i)$ because it's possible to have $a_i = 0$ for all $i$. In this case $\max a_i = 0$, so by the definition of big O notation there should be some $c$ satisfying $k\epsilon + a_k \le c \cdot \max a_i$, i.e. $k\epsilon \le c \cdot 0 = 0$, a contradiction. If the input is restricted to positive integers only, then the answer $O(\max a_i)$ is more accurate (we can pick $c = 2$).
