Running Time and Big-O


This topic isn't fully covered elsewhere on Learneroo, so this page will go through the basics that you should know for interviews.

A good algorithm doesn't only find an answer, it finds it quickly and efficiently. But how do we evaluate how fast an algorithm is? Let's try profiling an algorithm manually before introducing a better approach.

Running Time of Algorithms

An operation is a single instruction that a computer performs, such as adding two numbers together, or looking up a single value in an array. Examine the following code:

public class Main
{
    public static void main (String[] args) 
    {
        int[] ar = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.println( sumEvenNums(ar) );
    }

    static int sumEvenNums(int[] ar){
        int sum = 0;                 // initialize the running total
        for(int num: ar){            // one pass through the array: n iterations
            if(num%2 == 0)           // check whether num is even
                sum += num;          // if so, add it to the total
        }
        return sum;
    }
}

It's pretty tedious to calculate how many operations even a simple piece of code will perform, and the final result doesn't tell us much about the nature of the algorithm. We need a better method to evaluate and compare the running time of algorithms for any size of input.

(See Insertion Sort and Lists for more examples of counting the number of operations.)

Time Complexity

Instead of looking at the exact number of operations an algorithm will perform, we examine its time complexity: a measure of how the running time (in number of operations) grows as the size of the input increases. This looks only at the proportional time of the largest components of the algorithm:

Proportional Time

Instead of looking at the running time for a specific input, we evaluate the running time in proportion to an input of size n. In the above example, the loop iterates through the entire array once, so it runs in time directly proportional to n; the algorithm has a linear running time.

Biggest Components

Instead of checking every line of code, we only pay attention to the most significant part: the component that dominates the running time as the input gets larger. In the above example, the significant part is the loop that iterates through the array; we ignore the operations outside the loop. If there were a nested loop that went through the input, we would ignore any single loops, since their running time is insignificant compared to the nested loop.

In fact, we even ignore how many operations the code performs inside each iteration of the loop! A loop with 50 operations inside it has the same time complexity as a loop with 1 operation inside, even though it likely takes about 50 times as long to run. The time complexity ignores constants (like 50 vs. 1) and only cares about numbers that change in proportion to the input size.
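
For instance, consider this hypothetical method (a sketch, not from the original page): it prints every element with a single loop and then uses nested loops to check for a duplicate. The single loop and the print statements are ignored; the nested loops dominate, so the whole method is classified as O(n²).

static boolean containsDuplicate(int[] ar) {
    for (int num : ar) {                      // single loop: O(n), insignificant next to the nested loops
        System.out.print(num + " ");
    }
    System.out.println();                     // a single operation, also ignored
    for (int i = 0; i < ar.length; i++) {     // nested loops: the dominant part, O(n^2)
        for (int j = i + 1; j < ar.length; j++) {
            if (ar[i] == ar[j])               // constant work per iteration
                return true;
        }
    }
    return false;
}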

Worst-case vs. Best

Running time can vary depending on how "lucky" the algorithm is. For example, let's say you have an algorithm that looks for a number in a list by searching through the whole list linearly. These are the possible running times:

  • Worst-case running time - the algorithm finds the number at the end of the list or determines that the number isn't in the list. It went through the entire list so it took linear time. The worst-case running time is usually what is examined.
  • Best-case running time - the algorithm gets lucky and finds the number on the first check. It took constant time regardless of the input size. Since we are not usually that lucky, we don't usually care about the best-case running time.
  • Expected-case running time - the algorithm finds the number halfway through the list (assuming the number is in the input). We sometimes care about the expected-case, though it can be harder to calculate than the worst-case.
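
Here's a minimal sketch of that linear search (the method name is just illustrative):

// Worst case: the target is last or missing, so all n elements are checked -> O(n).
// Best case: the target is first, so only one check is needed -> O(1).
// Expected case (target present at a random position): about n/2 checks, still O(n).
static int linearSearch(int[] ar, int target) {
    for (int i = 0; i < ar.length; i++) {
        if (ar[i] == target)
            return i;        // found it
    }
    return -1;               // went through the whole list without finding it
}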

Big-O

Big-O is the shorthand used to classify the time complexity of algorithms. It has a formal mathematical definition, but you just need to know how to classify algorithms into different "Big-O categories", which can be reviewed below:

O(1) - Constant Time

The algorithm performs a constant number of operations regardless of the size of the input.

Examples:

  • Access a single number in an array by index
  • Add 2 numbers together.

O(log n) - Logarithmic Time

As the size of the input n increases, the algorithm's running time grows in proportion to log(n). This rate of growth is relatively slow, so O(log n) algorithms are usually very fast. As you can see in the table below, when n is 1 billion, log(n) is only about 30.

Example:
Binary search on a sorted list. The range that needs to be searched is split in half each time, so the remaining list goes from size n to n/2 to n/4... until either the element is found or there's just one element left. If there are a billion elements in the list, it will take a maximum of about 30 checks to find the element or determine that it is not in the list.
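
Here's a minimal binary search sketch (assuming the array is sorted in ascending order; the method name is just illustrative):

// Each iteration halves the remaining range, so at most about log2(n) iterations are needed -> O(log n).
static int binarySearch(int[] ar, int target) {
    int lo = 0, hi = ar.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // middle of the remaining range
        if (ar[mid] == target)
            return mid;                 // found it
        else if (ar[mid] < target)
            lo = mid + 1;               // discard the lower half
        else
            hi = mid - 1;               // discard the upper half
    }
    return -1;                          // not in the array
}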

Whenever an algorithm only goes through part of its input, see if it splits the input by at least half on average each time, which would give it a logarithmic running time.

Table of n vs. log(n)

n      log(n)
2      1
1000   10
1M     20
1B     30

O(n) - Linear Time

The running time of an algorithm grows in proportion to the size of input.

Examples:

  • Search through an unsorted array for a number
  • Sum all the numbers in an array
  • Access a single number in a LinkedList by index

O(n log n) - Linearithmic Time

n log(n) is much closer to n than to n², so this running time is closer to linear than to quadratic (and no one actually says "Linearithmic").

Examples:

Sort an array with QuickSort or Merge Sort. When recursion is involved, calculating the running time can be complicated. You can often work out a sample case to estimate what the running time will be. See Quick Sort for more info.
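
As a rough illustration (a sketch, not the canonical implementation), here is a Merge Sort: the array is halved about log(n) times, and each level of merging touches all n elements, which is where the O(n log n) comes from.

// Sort ar[lo..hi) by recursively sorting each half and merging the results.
static void mergeSort(int[] ar, int lo, int hi) {
    if (hi - lo <= 1) return;           // 0 or 1 elements: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(ar, lo, mid);             // sort the left half
    mergeSort(ar, mid, hi);             // sort the right half
    merge(ar, lo, mid, hi);             // merge the two sorted halves: O(n) work per level
}

static void merge(int[] ar, int lo, int mid, int hi) {
    int[] tmp = new int[hi - lo];
    int i = lo, j = mid, k = 0;
    while (i < mid && j < hi)           // take the smaller front element each time
        tmp[k++] = (ar[i] <= ar[j]) ? ar[i++] : ar[j++];
    while (i < mid) tmp[k++] = ar[i++]; // copy any leftovers from the left half
    while (j < hi)  tmp[k++] = ar[j++]; // copy any leftovers from the right half
    System.arraycopy(tmp, 0, ar, lo, tmp.length);
}

Calling mergeSort(ar, 0, ar.length) sorts the whole array.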

O(n²) - Quadratic Time

The algorithm's running time grows in proportion to the square of the input size. This is common when using nested loops.

Examples:

  • Printing the multiplication table for a list of numbers
  • Insertion Sort (see the sketch below)
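
Here's a minimal Insertion Sort sketch (one common way to write it, shown only to illustrate the nested-loop structure):

// The outer loop runs n-1 times; in the worst case (a reverse-sorted array)
// the inner loop shifts up to i elements each time, about n*(n-1)/2 operations in total -> O(n^2).
static void insertionSort(int[] ar) {
    for (int i = 1; i < ar.length; i++) {
        int key = ar[i];
        int j = i - 1;
        while (j >= 0 && ar[j] > key) {  // shift larger elements one spot to the right
            ar[j + 1] = ar[j];
            j--;
        }
        ar[j + 1] = key;                 // drop key into its sorted position
    }
}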

Table of n vs. n²

n      n²
1      1
10     100
100    10,000
1000   1M

Nested Loops

Sometimes you'll encounter O(n³) algorithms or even higher exponents. These usually have considerably worse running times than O(n²) algorithms, even if they don't have a different name!

A quadratic running time is common when you have nested loops. To calculate the running time, find the maximum number of nested loops that go through a significant portion of the input.

  • 1 loop (not nested) = O(n)
  • 2 loops = O(n²)
  • 3 loops = O(n³)

Some algorithms use nested loops where the outer loop goes through an input n while the inner loop goes through a different input m. The time complexity in such cases is O(nm). For example, while the above multiplication table compared a list of data to itself, in other cases you may want to compare all of one list n with all of another list m.
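
For example, here's a hypothetical sketch that compares every element of one array against every element of another; with sizes n and m, it performs about n × m comparisons:

// Count how many values from 'a' also appear in 'b' by brute force -> O(nm).
static int countCommon(int[] a, int[] b) {
    int count = 0;
    for (int x : a) {            // n iterations
        for (int y : b) {        // m iterations for each element of 'a'
            if (x == y) {
                count++;
                break;           // count each element of 'a' at most once
            }
        }
    }
    return count;
}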

O(2ⁿ) - Exponential Time

The algorithm's running time doubles every time the input grows by one element, so it quickly becomes impractical as n increases.

Table of n vs. 2ⁿ (the reverse of the log(n) table)

n      2ⁿ
1      2
10     ~1000
20     ~1M
30     ~1B

Example:

  • Trying out every possible binary string of length n, e.g. to crack a password (see the sketch below).
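
Here's a minimal sketch of that idea (the method name is just illustrative):

// Print every binary string of length n. There are 2^n of them,
// so the running time (and the output size) grows exponentially with n.
static void printBinaryStrings(String prefix, int n) {
    if (n == 0) {
        System.out.println(prefix);          // one complete string of the original length
        return;
    }
    printBinaryStrings(prefix + "0", n - 1); // next character is 0
    printBinaryStrings(prefix + "1", n - 1); // next character is 1
}

For example, printBinaryStrings("", 3) prints all 8 strings of length 3.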

O(n!) - Factorial Time

This algorithm's running time grows in proportion to n!, which becomes enormous very quickly. O(n!) is too large for modern computers once n reaches the upper teens. This issue shows up when you try to solve a problem by trying out every possibility, as in the traveling salesman problem.

Examples:

  • Go through all Permutations of a string (see the sketch after this list)
  • Traveling Salesman Problem (brute-force solution)
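
Here's a minimal sketch of generating all permutations of a string (assuming the characters are distinct, so there are exactly n! of them):

// Print every permutation of the characters in s by fixing one character at a time.
static void permutations(String prefix, String s) {
    if (s.isEmpty()) {
        System.out.println(prefix);      // one complete permutation
        return;
    }
    for (int i = 0; i < s.length(); i++) {
        // choose s.charAt(i) next, then permute the remaining characters
        permutations(prefix + s.charAt(i),
                     s.substring(0, i) + s.substring(i + 1));
    }
}

For example, permutations("", "abc") prints all 6 orderings of "abc".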

Table of n vs. n!

n      n!
1      1
10     ~3.6M
20     ~2.43 × 10¹⁸
30     ~2.65 × 10³²

Challenges

Can you determine the time complexity of the algorithms below? See if you can solve them on your first try!

Challenge

Approximately how many operations will the method sumEvenNums perform above?


Challenge

Here's some code which prints the multiplication table for a given array of numbers. Which is the dominant part of the running time?

static void printMultiTable(int[] ar){
    for(int i=0; i<ar.length; i++){ // A
        System.out.print("- ");   // A
    }
    System.out.println();  //  B
    for(int i=0; i<ar.length; i++){
        for(int j=0; j<ar.length; j++){             // C
            System.out.print(ar[i] * ar[j] + " ");   // C
        }
        System.out.println();  // D
    }
}


Challenge

What is the big-O time complexity of an optimal algorithm that finds the largest 10 numbers in an unsorted array and sorts them?


Challenge

What is the expected big-O time complexity of an optimal algorithm that finds the largest n/2 items in an unsorted array of size n (but does not sort them)?


Challenge

Here's a simple algorithm (though non-optimal) to find a match of a substring word within a longer string text. It returns the index of the match, or -1 if no match is found.

Let n be the length of the text and m be the length of the word. What is the worst-case running-time of this algorithm?

static int indexOf(String text, String word) {
    for (int i = 0; i <= text.length() - word.length(); i++) {  // loop through text
        int j = 0;
        // loop through word
        while (text.charAt(i + j) == word.charAt(j)) {  //character match
            j++;
            if (j == word.length())
                return i;
        }
    }
    return -1;      
}

