Things you need to know about the Big O Notation

Big-O notation is a way of formalizing fuzzy counting: it describes the efficiency/performance of your code. As a programmer, you need to know how fast your code runs as the inputs get bigger, because in real life you can't know the exact number of inputs in advance. Furthermore, you can't compare two different algorithmic approaches without an asymptotic notation, so if you want to choose the better one, you compare them with Big O and see which one fits your situation. Both may be inefficient, but you will know which one is BETTER.

What does BETTER mean?

  1. Faster (speed) - Time Complexity.
  2. Less memory-intensive (memory usage) - Space Complexity.
  3. More readable.

Time Complexity: the amount of time an algorithm requires to run, as a function of the input size, measured in a way that ignores constant terms and multiplication by constants.
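To make this concrete, here is a minimal JavaScript sketch (the function names are my own, for illustration) that solves the same problem two ways: one whose running time grows with n, and one that takes the same time no matter how big n gets.

```javascript
// O(n) time: one addition per element, so the work grows linearly with n.
function sumLinear(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// O(1) time: a single formula, so the work stays the same however big n is.
function sumConstant(n) {
  return (n * (n + 1)) / 2;
}
```

As n grows from 10 to 10 million, `sumLinear` does proportionally more additions, while `sumConstant` still performs one multiplication, one addition, and one division.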

Rules Of Thumb In Time Complexity.

a. Variable assignment ('=') is constant in Time Complexity.

b. Addition ('+') is constant in Time Complexity.

c. Accessing an array element (by index) or an object property (by key) is constant in Time Complexity.
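As a rough sketch (the variable names are illustrative), each of the rules above corresponds to a single O(1) operation in JavaScript:

```javascript
const nums = [10, 20, 30];
const ages = { alice: 30, bob: 25 };

const x = 5;              // (a) assignment: O(1)
const y = x + 1;          // (b) addition: O(1)
const first = nums[0];    // (c) array access by index: O(1)
const bobsAge = ages.bob; // (c) object access by key: O(1)
```

No matter how long `nums` is, `nums[0]` takes the same time: the engine jumps straight to the element instead of scanning the array.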

Auxiliary Space Complexity: is the space required by the algorithm, not including the space taken by the input.

Rules Of Thumb In Space Complexity.

a. Most primitive data types (booleans, numbers, undefined, null) take constant space.

b. Strings require O(n) space, i.e. linear space (where n is the string length).

c. Reference types are generally O(n), where n is the length (for arrays) or the number of keys (for objects).
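A minimal sketch of the difference (the function names are my own): the first function uses O(1) auxiliary space, the second O(n).

```javascript
// O(1) auxiliary space: only a fixed number of variables,
// regardless of how long the input array is.
function sum(arr) {
  let total = 0; // one number
  for (let i = 0; i < arr.length; i++) { // one counter
    total += arr[i];
  }
  return total;
}

// O(n) auxiliary space: builds a new array whose length
// grows in proportion to the input.
function double(arr) {
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    result.push(2 * arr[i]);
  }
  return result;
}
```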

Here are some real-life examples of Space and Time Complexity:

  • Take for instance you have to count from 1 to 10. Using the old Roman system you would write I for 1, II for 2, and so on, and the figures keep getting longer. Applying the concept of Big O, to manage the space and speed of counting, we resort to decimal numerals instead of the old Roman way of counting.

  • For instance, while going from your home to your office, school, or college, there can be "n" number of paths. But you choose only one path to get to your destination, i.e. the shortest one.

  • Take for instance you want garri and you have rice. Using the old means of exchange, you would have to go around looking for someone to make an exchange with (trade by barter), which ends up taking a lot of time. Applying the concept of Big O, to manage the speed (time) of exchange, we resort to using money instead of the ancient trade-by-barter method.

N/B: If an algorithm has:

  1. O(1) means the least complexity. It is often called "constant time"; if you can create an algorithm that solves the problem in O(1), you are probably at your best. In some cases, complexity may even go below O(1); we can then analyze it by finding its O(1/g(n)) counterpart. For example, O(1/n) is more complex than O(1/n²).

  2. O(log n) is more complex than O(1) but less complex than polynomials. As this complexity is often related to divide-and-conquer algorithms, O(log n) is generally a good complexity to aim for in sorting algorithms. O(log n) is less complex than O(√n), because the square root function can be considered a polynomial whose exponent is 0.5.

  3. The complexity of polynomials increases as the exponent increases. For example, O(n⁵) is more complex than O(n⁴).

  4. Exponentials have greater complexity than polynomials: O(2ⁿ) is more complex than O(n⁹⁹), but O(2ⁿ) is less complex than O(n!). We generally take 2 as the base for exponentials and logarithms because things tend to be binary in Computer Science. Note that changing the base of a logarithm only changes the result by a constant factor, so all logarithm bases fall in the same complexity class, whereas changing the base of an exponential does change its class: O(3ⁿ) is more complex than O(2ⁿ). If not specified, the base for logarithms is assumed to be 2.

  5. Factorials have greater complexity than exponentials, because factorials and exponentials involve the same number of multiplications, but the numbers that get multiplied grow for factorials while remaining constant for exponentials.

  6. Multiplying terms

When multiplying terms, the product is more complex than either factor alone, but not more complex than the product with a more complex factor. For example, O(n log n) is more complex than O(n) but less complex than O(n²), because O(n²) = O(n · n) and n is more complex than log n.
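Here is a sketch of how this shows up in practice (the duplicate-checking functions below are illustrative, not from the text): checking an array for duplicates by comparing every pair is O(n²), while sorting first and then scanning neighbours is O(n log n) + O(n), which simplifies to O(n log n), the less complex of the two.

```javascript
// O(n²): compare every pair of elements.
function hasDuplicateQuadratic(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

// O(n log n): sort a copy (n log n), then scan adjacent pairs (n).
function hasDuplicateSorted(arr) {
  const sorted = [...arr].sort((a, b) => a - b);
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i] === sorted[i - 1]) return true;
  }
  return false;
}
```

Note that `[...arr]` copies the array before sorting, so the caller's input is not mutated.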


Objects are unordered data structures.

When To Use Objects

  • When you do not need order.
  • When you need fast access/Insertion and removal.

Big O of objects.

  • Insertion- O(1)
  • Access- O(1)
  • Removal- O(1)
  • Searching- O(n).
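A short sketch of these four operations (the object and helper function are illustrative):

```javascript
const user = { name: "Ada", age: 36 };

user.email = "ada@example.com"; // insertion: O(1)
const name = user.name;         // access: O(1)
delete user.age;                // removal: O(1)

// Searching for a value (rather than a key) has no shortcut:
// in the worst case every key must be checked, so it is O(n).
function hasValue(obj, target) {
  for (const key in obj) {
    if (obj[key] === target) return true;
  }
  return false;
}
```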

The Big O of objects methods

  • Object.keys- O(n)
  • Object.values- O(n)
  • Object.entries- O(n)
  • hasOwnProperty- O(1).
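Sketched in code (the `scores` object is illustrative), the first three methods have to visit every key, while `hasOwnProperty` checks a single key directly:

```javascript
const scores = { math: 90, art: 75, music: 88 };

// O(n): each of these builds a new array by visiting every key.
const keys = Object.keys(scores);
const values = Object.values(scores);
const entries = Object.entries(scores);

// O(1): looks up one key directly, no matter how many keys exist.
const hasMath = scores.hasOwnProperty("math");
```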


Arrays are ordered lists/data structures.

When To Use Arrays

  • When you need order.

  • When you need fast access/insertion and removal(sort of....)

Big O of Arrays

*Insertion- depends on where you are inserting it.

*Removal- depends on where you are removing from.

*Searching- O(n).

*Access- O(1).
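A sketch of why position matters (the array is illustrative): operations at the end leave every other index alone, while operations at the start force every element to be re-indexed.

```javascript
const letters = ["a", "b", "c", "d"];

letters.push("e");    // insert at the end: O(1), no re-indexing
letters.pop();        // remove from the end: O(1)

letters.unshift("z"); // insert at the start: O(n), every index shifts up
letters.shift();      // remove from the start: O(n), every index shifts down

const third = letters[2]; // access by index: O(1)
```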

Big O of Array Methods (operations)

*Push- O(1)

*Shift- O(n)

*Pop- O(1)

*Unshift- O(n)

*Concat- O(n)

*Slice- O(n)

*Splice- O(n)

*Sort- O(n log n)

*forEach, map, filter, reduce, find, some, every, etc. - O(n).
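Each of these iteration methods visits every element exactly once, so a single call is O(n). A short sketch (the array and callbacks are illustrative):

```javascript
const nums = [1, 2, 3, 4, 5];

const doubled = nums.map((n) => n * 2);            // O(n) -> [2, 4, 6, 8, 10]
const evens = nums.filter((n) => n % 2 === 0);     // O(n) -> [2, 4]
const total = nums.reduce((sum, n) => sum + n, 0); // O(n) -> 15
const firstBig = nums.find((n) => n > 3);          // O(n) -> 4
```

Chaining them, e.g. `nums.map(...).filter(...)`, is still O(n) overall, since O(2n) = O(n): the same constant-multiplier rule from the start of this article.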