What is efficiency?
- How quickly does an algorithm execute as the problem input grows?
- Some problems can be solved quickly regardless of the size of the input; other problems take longer and longer to solve as the size of the input grows. The growth can be linear, quadratic, or exponential.
- Algorithms belong to different classes based on their efficiency.
- Given a problem, map it to an existing algorithm of known complexity to predict its computing efficiency.
- Efficiency is not measured in terms of the time and space required to solve the problem on a particular machine or in a particular programming language.
- Efficiency is measured in terms of the space and time required to solve a problem using the basic primitive steps of a computing machine.
Space
Space is the amount of memory used by the algorithm to solve the problem, for example the memory used by variables and data structures, excluding the memory occupied by the input itself. The time required to access any portion of memory (RAM) is considered constant.
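To illustrate the distinction (an informal sketch; the space accounting here is approximate, not an exact count of bytes), here are two functions that process the same input but differ in the extra memory they use:

```python
def total_in_place(nums):
    # O(1) extra space: only one accumulator variable,
    # regardless of how long the input list is
    total = 0
    for n in nums:
        total += n
    return total

def running_totals(nums):
    # O(n) extra space: builds a new list as large as the input
    out = []
    acc = 0
    for n in nums:
        acc += n
        out.append(acc)
    return out
```

In both cases the input list itself is excluded from the accounting; only the additional memory the algorithm allocates counts.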
Time
Time required to solve a problem is measured in terms of basic primitive steps needed to solve the problem. The basic primitive steps are arithmetic, logical, comparisons, memory access, and other built-in operations of the machine.
For the solution to each problem, there are three scenarios:
- Best Case
- Worst Case
- Expected Case (average)
Complexity
We try to map a given problem to an algorithm of known complexity to predict performance. We use classes of complexity to distinguish different efficiencies.
Given a problem:
- Identify basic steps to solve the problem
- Establish the relationship between the size of the input and the number of basic steps required to process the input
- Focus on the worst case scenario, since we are interested in asymptotic growth, that is, the performance of algorithms on very large input datasets
- Linear - the time required grows in direct proportion to the size of input
- Logarithmic - the time required grows with the logarithm of the input size; each step cuts the remaining work by a constant factor
- Quadratic - the time required grows with the square of the input size, typically arising from nested loops over the input (an outer loop and an inner loop)
- Exponential - the time required grows at a very high rate compared to the increase in the size of the input
Linear O(n)
Let's say a problem requires the number of basic steps expressed by the following equation:

2 + 3b
- The problem requires 2 steps regardless of input size, plus 3 steps per unit of input
- 2 is the additive constant
- 3 is the multiplicative constant
- Since we are interested in asymptotic growth, the additive and multiplicative constants are immaterial; the number of steps grows in proportion to b
The asymptotic notation frequently used to describe complexity is called Big O notation. The Big O notation provides an upper bound on the growth rate of the function. In Big O notation, linear algorithms are denoted as O(n).
Let's say we have an algorithm represented by f(x). If the algorithm is determined to be of linear complexity, the following holds true:
f(x) ∈ O(n)
f(x) is an element of the class of algorithms of linear complexity.
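As a concrete sketch (the step counts are illustrative, not exact machine steps), a simple loop whose work grows in direct proportion to the input size:

```python
def linear_sum(values):
    """Sum a sequence; the number of basic steps grows linearly
    with the number of elements: a constant amount of setup plus
    a few operations per element, i.e. roughly a + b*n steps."""
    total = 0          # constant setup
    for v in values:   # runs once per element
        total += v     # a few basic steps per iteration
    return total

print(linear_sum(range(1, 11)))  # → 55
```

Doubling the length of the input roughly doubles the number of steps, which is exactly the behavior O(n) describes.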
Logarithmic O(log n)
How do we calculate the exponentiation of a number? Say we want to calculate 2^500. We can multiply 2 by itself 500 times (2*2*...*2). The complexity of this approach would be linear. However, there is a shortcut to calculating exponentiation:

a^b = (a*a)^(b/2) when b is even
a^b = a * a^(b-1) when b is odd
By using this algorithm, each application of the first rule halves the exponent, so the number of steps required is about log2 b. Thus, the complexity of this algorithm is O(log b).
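The two rules above translate directly into code (a sketch assuming b is a non-negative integer):

```python
def fast_pow(a, b):
    """Compute a**b by repeated squaring; the exponent is halved
    on every even step, so only about log2(b) multiplications
    are needed instead of b of them."""
    if b == 0:
        return 1
    if b % 2 == 0:
        return fast_pow(a * a, b // 2)   # a^b = (a*a)^(b/2) when b is even
    return a * fast_pow(a, b - 1)        # a^b = a * a^(b-1) when b is odd

print(fast_pow(2, 10))  # → 1024
```

Computing 2^500 this way takes on the order of a dozen multiplications rather than 500.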
Binary search of a sorted list is another example of an algorithm of logarithmic (sub-linear) complexity.
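A minimal binary-search sketch over a sorted list; each comparison halves the remaining search range, which is why only O(log n) comparisons are needed:

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.
    The search range [lo, hi] is halved on every iteration."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

print(binary_search([2, 5, 8, 13, 21], 13))  # → 3
```

For a list of 1000 elements, at most about 10 comparisons are ever needed.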
Quadratic O(n^2)
The general structure of these kinds of algorithms involves an inner loop and an outer loop:

For m=1 to 500
    For n=1 to 200
        <do something>
The number of steps required to solve such a problem is approximately m * n. In the worst case, m = n, hence the complexity of the algorithm approaches O(n^2).
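The nested-loop pattern above can be sketched in Python, with a step counter standing in for the <do something> body (assumed here to take constant time):

```python
def count_inner_steps(m, n):
    """Count how many times the inner body runs: m * n times."""
    steps = 0
    for i in range(m):        # outer loop: m iterations
        for j in range(n):    # inner loop: n iterations per outer pass
            steps += 1        # stands in for <do something>
    return steps

print(count_inner_steps(500, 200))  # → 100000, i.e. m * n
```

When both loops run over the same input (m = n), the count becomes n * n, the quadratic case.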
Exponential O(2^n)
Problems such as Knapsack, Traveling Salesman, and Towers of Hanoi fall into a class of problems that require time exponential in the size of the input.

Aside: O(1) algorithms execute in the same time (or space) regardless of the size of the input.
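Returning to the exponential class: a hedged sketch of why brute-force approaches to problems like Knapsack blow up is subset enumeration, since n items have 2^n subsets and naive search examines them all:

```python
from itertools import combinations

def all_subsets(items):
    """Enumerate every subset of items. There are 2**len(items)
    of them, so the work doubles each time one item is added."""
    subsets = []
    for k in range(len(items) + 1):
        subsets.extend(combinations(items, k))
    return subsets

print(len(all_subsets([1, 2, 3, 4])))  # → 16, i.e. 2**4
```

Adding just one more item doubles the running time, which is why even modest inputs become intractable.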
Comparison
- Let's say we have an input size, n, of 1000
- Let's assume we need to perform one basic operation for each input
- Let's further assume each operation requires one nanosecond
- Logarithmic - log2 1000 is about 9.97, so the algorithm will require about 10 nanoseconds
- Linear - will require 1 microsecond (1000 * 1/1,000,000,000)
- Quadratic - will require 1 millisecond (1000 * 1000 * 1/1,000,000,000)
- Exponential - will require 10^285 years!
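These figures can be checked with a quick back-of-the-envelope calculation (assuming exactly one basic operation per input element at one nanosecond each):

```python
import math

n = 1000
ns = 1e-9                          # seconds per operation

log_time = math.log2(n) * ns       # about 10 nanoseconds
linear_time = n * ns               # 1 microsecond
quadratic_time = n * n * ns        # 1 millisecond
exp_seconds = (2 ** n) * ns        # 2^1000 operations
exp_years = exp_seconds / (3600 * 24 * 365)

print(round(math.log2(n), 2))      # → 9.97
```

The exponential figure works out to on the order of 10^284 years, astronomically beyond the age of the universe either way.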
Aside: Linear algorithms are not always faster than quadratic algorithms; for small inputs, constant factors can dominate.