Know Thy Complexities!

Hi there!  This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science.  When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn’t be stumped when asked about them.  Over the last few years, I’ve interviewed at several Silicon Valley startups, and also some bigger companies, like Google, Facebook, Yahoo, LinkedIn, and eBay, and each time that I prepared for an interview, I thought to myself “Why hasn’t someone created a nice Big-O cheat sheet?”.  So, to save all of you fine folks a ton of time, I went ahead and created one.  Enjoy!

Common Data Structure Operations

Data Structure     | Avg Access | Avg Search | Avg Insertion | Avg Deletion | Worst Access | Worst Search | Worst Insertion | Worst Deletion | Worst Space
-------------------|------------|------------|---------------|--------------|--------------|--------------|-----------------|----------------|------------
Array              | Θ(1)       | Θ(n)       | Θ(n)          | Θ(n)         | O(1)         | O(n)         | O(n)            | O(n)           | O(n)
Stack              | Θ(n)       | Θ(n)       | Θ(1)          | Θ(1)         | O(n)         | O(n)         | O(1)            | O(1)           | O(n)
Queue              | Θ(n)       | Θ(n)       | Θ(1)          | Θ(1)         | O(n)         | O(n)         | O(1)            | O(1)           | O(n)
Singly-Linked List | Θ(n)       | Θ(n)       | Θ(1)          | Θ(1)         | O(n)         | O(n)         | O(1)            | O(1)           | O(n)
Doubly-Linked List | Θ(n)       | Θ(n)       | Θ(1)          | Θ(1)         | O(n)         | O(n)         | O(1)            | O(1)           | O(n)
Skip List          | Θ(log(n))  | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | O(n)         | O(n)         | O(n)            | O(n)           | O(n log(n))
Hash Table         | N/A        | Θ(1)       | Θ(1)          | Θ(1)         | N/A          | O(n)         | O(n)            | O(n)           | O(n)
Binary Search Tree | Θ(log(n))  | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | O(n)         | O(n)         | O(n)            | O(n)           | O(n)
Cartesian Tree     | N/A        | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | N/A          | O(n)         | O(n)            | O(n)           | O(n)
B-Tree             | Θ(log(n))  | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | O(log(n))    | O(log(n))    | O(log(n))       | O(log(n))      | O(n)
Red-Black Tree     | Θ(log(n))  | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | O(log(n))    | O(log(n))    | O(log(n))       | O(log(n))      | O(n)
Splay Tree         | N/A        | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | N/A          | O(log(n))    | O(log(n))       | O(log(n))      | O(n)
AVL Tree           | Θ(log(n))  | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | O(log(n))    | O(log(n))    | O(log(n))       | O(log(n))      | O(n)
KD Tree            | Θ(log(n))  | Θ(log(n))  | Θ(log(n))     | Θ(log(n))    | O(n)         | O(n)         | O(n)            | O(n)           | O(n)

Array Sorting Algorithms

Algorithm      | Best Time   | Average Time   | Worst Time     | Worst Space
---------------|-------------|----------------|----------------|------------
Quicksort      | Ω(n log(n)) | Θ(n log(n))    | O(n^2)         | O(log(n))
Mergesort      | Ω(n log(n)) | Θ(n log(n))    | O(n log(n))    | O(n)
Timsort        | Ω(n)        | Θ(n log(n))    | O(n log(n))    | O(n)
Heapsort       | Ω(n log(n)) | Θ(n log(n))    | O(n log(n))    | O(1)
Bubble Sort    | Ω(n)        | Θ(n^2)         | O(n^2)         | O(1)
Insertion Sort | Ω(n)        | Θ(n^2)         | O(n^2)         | O(1)
Selection Sort | Ω(n^2)      | Θ(n^2)         | O(n^2)         | O(1)
Tree Sort      | Ω(n log(n)) | Θ(n log(n))    | O(n^2)         | O(n)
Shell Sort     | Ω(n log(n)) | Θ(n(log(n))^2) | O(n(log(n))^2) | O(1)
Bucket Sort    | Ω(n+k)      | Θ(n+k)         | O(n^2)         | O(n)
Radix Sort     | Ω(nk)       | Θ(nk)          | O(nk)          | O(n+k)
Counting Sort  | Ω(n+k)      | Θ(n+k)         | O(n+k)         | O(k)
Cubesort       | Ω(n)        | Θ(n log(n))    | O(n log(n))    | O(n)

Binary Numbers

Task
Given a base-10 integer, n, convert it to binary (base-2). Then find and print the base-10 integer denoting the maximum number of consecutive 1’s in n’s binary representation.

Sample Input 1

5
Sample Output 1

1
Sample Input 2

13
Sample Output 2

2

Explanation

Sample Case 1:
The binary representation of 5 is 101, so the maximum number of consecutive 1’s is 1.

Sample Case 2:
The binary representation of 13 is 1101, so the maximum number of consecutive 1’s is 2.
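
Below is a minimal Python sketch of one way to solve this task (the function name is my own, not part of the problem statement): convert n to binary, then scan the bits while tracking the longest run of consecutive 1’s.

```python
def max_consecutive_ones(n):
    """Return the length of the longest run of 1's in n's binary representation."""
    longest = current = 0
    for bit in bin(n)[2:]:  # bin(5) == '0b101'; strip the '0b' prefix
        if bit == '1':
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest


if __name__ == "__main__":
    n = int(input())  # e.g. 5 -> binary 101 -> answer 1; 13 -> 1101 -> answer 2
    print(max_consecutive_ones(n))
```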


Checking if any anagram of a given string is palindrome or not

Given a string, how do we check if any anagram of it can be a palindrome?

For example, consider the string “AAC”. One of its anagrams is “ACA”, which is a palindrome. We have to write a method that takes a string and returns true if a palindrome can be formed from some anagram of the given string, and false otherwise.

We can immediately think of a brute-force solution: generate all anagrams of the given string and check whether any of them is a palindrome. But this approach is very expensive, since generating all anagrams takes factorial time, O(n!).

The key to solving this problem efficiently is to understand how a palindrome is formed. A palindrome reads the same from left to right as from right to left.

A palindrome can be broken into two halves, where the first half is the reverse of the second half. For strings of odd length, we ignore the middle character.

So for every character in the first half, there is a matching character in the second half. This means every character in the string must occur an even number of times, except for at most one character, which appears in the middle.

Hence, if at most one character in the string has an odd number of occurrences, we can form a palindrome from some anagram of it. This check only requires counting character frequencies, so it runs in linear time.

Input
aaabbbb

Output
It’s an anagram palindrome
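
A minimal Python sketch of this frequency-count check (the function name and the exact output strings are placeholders of my own, assuming the input/output format shown above):

```python
from collections import Counter


def can_form_palindrome(s):
    """Return True if some anagram of s is a palindrome, i.e. if at most
    one character occurs an odd number of times."""
    odd_counts = sum(1 for count in Counter(s).values() if count % 2 == 1)
    return odd_counts <= 1


if __name__ == "__main__":
    s = input().strip()  # e.g. "aaabbbb": 'a' occurs 3 times (odd), 'b' 4 times (even)
    if can_form_palindrome(s):
        print("It's an anagram palindrome")
    else:
        print("It's not an anagram palindrome")
```

Counter(s) tallies each character in a single pass, so the whole check is linear in the length of the string, in contrast to the factorial cost of enumerating every anagram.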