Hi everyone, here is my second post on computer science. Contrary to what I said before (sorry!), this will be about big-O notation, a very important concept in computer science.
Big-O notation is a system used to analyze how fast a program runs or how much memory it takes up. We'll focus on time for now, but it can similarly be applied to other things.
The "big-O" of the run time of a program is usually written as \(O(f(n))\), where \(f(n)\) is some function of \(n\). For example, if a program runs in big-O of \(n\) squared, we would write that as \(O(n^2)\).
Usually, \(n\) denotes the size of the input to the program. Then, \(f(n)\) is roughly the number of operations the computer executes to run the program on an input of size \(n\). The run time is analyzed as \(n\) approaches infinity, so if the number of operations is a sum of multiple terms, only the term that grows the fastest is kept. Any constant coefficients are ignored as well.
For example, here are some possible big-O categories from fastest to slowest, along with some programs that might fall under each category:
\(O(1)\): This is the best big-O category. It means that no matter how much input there is, the program always takes the same amount of time. Very few programs fit this definition. One example is a program that takes in a list and returns the first element of the list.
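As a minimal sketch of that example (the function name is my own, not from any library):

```python
def first_element(items):
    # One indexing operation regardless of how long the list is: O(1).
    return items[0]
```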
\(O(\log n)\): This is a pretty nice big-O category. It grows much slower than \(O(n)\). For example, binary search on a sorted list takes \(O(\log n)\) time. Note that the base of the logarithm is not written, since logarithms of different bases differ only by a constant factor.
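A sketch of binary search, to show where the \(\log n\) comes from (each iteration halves the remaining search range):

```python
def binary_search(sorted_list, target):
    # Returns the index of target, or -1 if absent. The range [lo, hi]
    # halves every iteration, so about log2(n) iterations run: O(log n).
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```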
\(O(n)\): This big-O category usually results when a program iterates over the data one or more times. For example, a program to find the maximum value in a list usually falls under this category, as it loops over the list once, keeping track of the maximum so far.
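The maximum-value example might look like this (a sketch; the name is illustrative):

```python
def find_max(items):
    # One pass over the list, tracking the largest value seen so far: O(n).
    best = items[0]
    for value in items:
        if value > best:
            best = value
    return best
```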
\(O(n \log n)\): Included because commonly used sorting algorithms, such as mergesort and (on average) quicksort, fall under this category.
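To see where the \(n \log n\) comes from, here is a sketch of mergesort: the list is halved about \(\log n\) times, and each level of halving does \(O(n)\) work merging.

```python
def merge_sort(items):
    # Recursively split in half (about log n levels of splitting),
    # then merge each level back together in O(n): O(n log n) total.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    # Merge the two sorted halves by repeatedly taking the smaller front.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```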
\(O(n^2)\): This category typically happens when a program has a loop within a loop, each running up to \(n\) times.
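For instance, comparing every pair of elements requires a loop within a loop (the function name here is just for illustration):

```python
def count_equal_pairs(items):
    # For each of n elements, scan the rest of the list: roughly
    # n * n comparisons in the worst case, i.e. O(n^2).
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                count += 1
    return count
```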
\(O(2^n)\): This is a very slow big-O category. Usually, a program in this category has \(n\) values that can each be any of \(2\) values, and it considers every combination.
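Generating every subset of a list is a classic example: each element is either in or out of a subset, giving \(2^n\) subsets. A minimal sketch (the name is my own):

```python
def all_subsets(items):
    # Each element doubles the number of subsets (in or out),
    # so 2**n subsets are produced: O(2^n) time and space.
    subsets = [[]]
    for x in items:
        subsets += [s + [x] for s in subsets]
    return subsets
```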
\(O(n!)\): This is one of the worst big-O categories. Usually, a program in this category works with all permutations of a list of \(n\) elements.
There are many more possible categories, including products of the above, but the above are very common.
When solving computer science problems, big-O notation can be a valuable tool to figure out whether an approach is feasible or not. Here is an example of using it:
Write a program that prints all permutations of the first \(n\) positive integers that are in strictly increasing order.
Okay, perhaps the program is really easy to write, since the only such permutation is \(1, 2, \ldots, n\). However, let us consider another approach: going through every permutation and examining whether its elements are in increasing order. This would run in \(O(n!)\) time, since there are \(n!\) permutations, and if we stop checking a permutation at its first decrease, each check takes a constant number of operations on average; the constant is dropped. This is infeasible, as \(n!\) grows much too fast. Even fairly small values of \(n\) would take a nontrivial amount of time.
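The two approaches can be sketched side by side (function names are my own; the brute-force version uses `itertools.permutations` from the standard library):

```python
from itertools import permutations

def increasing_perms_brute_force(n):
    # Examines all n! permutations of 1..n: O(n!), infeasible for large n.
    result = []
    for perm in permutations(range(1, n + 1)):
        if all(perm[i] < perm[i + 1] for i in range(len(perm) - 1)):
            result.append(list(perm))
    return result

def increasing_perms_direct(n):
    # The only strictly increasing permutation is 1, 2, ..., n: O(n).
    return [list(range(1, n + 1))]
```

Both return the same answer, but try timing them for \(n = 11\) or so and the difference becomes very clear.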
Here is another problem:
Pseudocode:
    function sum(list):
        sum = 0
        for each element in list:
            current = element
            add current to sum
        sum = sum * 1
        return sum
Find the big-O category for this program.
We can quickly see that this program finds the sum of the elements in a list. It may not be the most efficient, but it works.
First, it sets sum to 0, which takes one operation. Then, it iterates over the list, performing two operations each time (the assignment to current and the addition to sum). This takes \(2n\) operations. Then, it performs two more operations.
Overall, the total operation count is \(1 + 2n + 2\). The fastest-growing term is \(2n\), and we ignore the coefficient of \(2\), so the big-O category is \(O(n)\).
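The pseudocode above translates directly into runnable Python; I've kept the redundant steps so the operation count matches the analysis (I renamed `sum` to `total` only to avoid shadowing Python's built-in `sum`):

```python
def sum_list(items):
    total = 0                 # 1 operation
    for element in items:
        current = element     # 1 operation per iteration
        total += current      # 1 operation per iteration
    total = total * 1         # 1 operation (does nothing useful)
    return total              # 1 operation
```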
One last note: while the big-O category can help in analyzing algorithms, it isn't the only thing to consider. Programs within the same category can still be significantly faster or slower than one another. Also, don't forget to consider memory usage as well as time.
Problems:
Think about big-O notation and try to categorize some programs you write.
I'll put up some problems asking for the big-O time category of some programs soon.
Thanks for reading! I hope you use big-O notation to analyze algorithms in the future.
Comments
Great post Daniel! I have a challenge for you: In my feed I posted a geometry problem that I need explained to me. Would you mind trying your hand at it? The discussion is called "Can anybody help me with this?". Thanks!
So, I'm struggling with big-O, and in my class the lecturer essentially said "you have O(n), O(n log n), and so on," but never explained what they actually mean. So I was wondering if you could help me.
I was wondering about when you say: "This takes n time. Then, it performs two more operations." The two more operations I assume are "sum = sum * 1" and "return sum". Why isn't this 3 operations, that is, the assignment, the multiplication, and the return?
Also, when you say "Add current to sum" That would evaluate to "sum = sum + current", which presents the same question...
And also, when you say "Then, it iterates over the list, performing two operations each time. This takes n time" — is it n * 2, i.e. 2n, because there are two operations happening per iteration of the loop? If there were 4 operations, would it be 4n, where n represents the size of the list, i.e. the number of iterations it has to do?
Great post. I need a little clarity: so this means the last two lines make the "+2" in the equation 1 + 2n + 2?