Fibonacci Number Calculation Algorithms: Performance Comparison

Fibonacci Runtime Comparisons


Raw Performance Data

(Table: measured runtime for each value of n, with one column per algorithm: Iterative (DP), Iterative (Optimized), Memoization, and Matrix Exponentiation.)

Findings

Based on the performance data, we can observe several important trends:

  • The Matrix Exponentiation algorithm consistently outperforms all other methods, especially as the input size grows. This confirms its theoretical O(log n) time complexity.
  • Both Iterative (DP) and Memoization approaches show linear growth in runtime, matching their O(n) theoretical time complexity.
  • The Optimized Iterative approach performs better than the standard iterative approach due to its reduced memory operations, despite having the same O(n) time complexity.
  • The gap between the Matrix approach and the others widens significantly as n increases past 10,000, demonstrating how the logarithmic vs. linear complexity difference becomes pronounced with large inputs.
  • For small values of n (n < 100), the difference between all algorithms is negligible in practical terms (fractions of a millisecond).

These results confirm that the theoretical time complexity analysis accurately predicts real-world performance.

The Fibonacci Sequence

The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, usually starting with 0 and 1. Mathematically, it can be defined as:

\[ F(0) = 0, \qquad F(1) = 1, \qquad F(n) = F(n-1) + F(n-2) \text{ for } n > 1 \]

The resulting sequence begins: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...

The Fibonacci sequence appears in many settings in nature, from the arrangement of leaves on a stem to the spiral pattern of shells and the family tree of honeybees.

Fibonacci Spiral

The Fibonacci spiral is created by drawing arcs that connect opposite corners of squares with sides equal to Fibonacci numbers. This spiral approximates the golden spiral, which is based on the golden ratio (approximately 1.618).

While the Fibonacci spiral and the golden spiral are often confused, they are not identical. However, as n grows, the ratio of consecutive Fibonacci numbers approaches the golden ratio, so the Fibonacci spiral becomes an increasingly close approximation of the golden spiral.
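In symbols, with φ denoting the golden ratio:

\[ \lim_{n \to \infty} \frac{F(n+1)}{F(n)} = \varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618 \]

Already at modest sizes the ratio is close: 55/34 ≈ 1.6176 and 89/55 ≈ 1.6182.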

Complexity Concepts

Time Complexity

Time complexity measures how the runtime of an algorithm grows as the input size increases. It answers the question: "How does the algorithm's execution time scale?"

We describe it using Big O notation, which gives an upper bound on how the runtime grows (usually quoted for the worst case); small illustrative examples follow the list:

  • O(1): Constant time - Runtime doesn't increase with input size
  • O(log n): Logarithmic time - Runtime grows logarithmically with input size
  • O(n): Linear time - Runtime grows linearly with input size
  • O(n²): Quadratic time - Runtime grows with the square of the input size
  • O(2^n): Exponential time - Runtime doubles with each additional input element
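To make these categories concrete, here is a minimal Python sketch (the functions are invented for this example, not part of the benchmark code) in which the number of basic steps each function performs matches one of the classes above:

def constant_step(items):
    # O(1): one step, regardless of how long the list is
    return items[0]

def logarithmic_steps(n):
    # O(log n): n is halved on every iteration
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linear_steps(items):
    # O(n): a single pass over the input
    total = 0
    for x in items:
        total += x
    return total

def quadratic_steps(items):
    # O(n^2): a full pass over the input for every element
    pairs = 0
    for _ in items:
        for _ in items:
            pairs += 1
    return pairs

def exponential_subsets(items):
    # O(2^n): enumerating every subset of an n-element list
    subsets = [[]]
    for x in items:
        subsets += [s + [x] for s in subsets]
    return subsets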

Space Complexity

Space complexity measures how much memory an algorithm needs based on input size. It answers the question: "How does the algorithm's memory usage scale?"

Like time complexity, it is expressed in Big O notation; a short sketch follows the list:

  • O(1): Constant space - Memory usage doesn't increase with input size
  • O(n): Linear space - Memory usage grows linearly with input size
  • O(n²): Quadratic space - Memory usage grows with the square of the input size
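To make the contrast concrete, here is a minimal sketch (function names invented for illustration): the first function needs only a single accumulator, while the second builds an output that grows with the input.

def running_total(values):
    # O(1) extra space: one accumulator, regardless of input length
    total = 0
    for v in values:
        total += v
    return total

def prefix_sums(values):
    # O(n) extra space: the result list grows with the input
    sums = []
    total = 0
    for v in values:
        total += v
        sums.append(total)
    return sums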

Space-Time Trade-off

In algorithm design, there's often a balance between time and space efficiency:

  • We can sometimes make algorithms faster by using more memory (e.g., caching results)
  • Alternatively, we might reduce memory usage at the cost of increased computation time
  • The best approach depends on specific constraints (is memory or speed more critical?)

The Fibonacci algorithms demonstrate this trade-off well. For example, the memoization approach uses extra memory to store previous results, which improves time efficiency compared to the naive recursive approach.
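As a concrete illustration of trading memory for speed, here is a sketch using Python's built-in functools.lru_cache (not one of the implementations benchmarked here). Caching every recursive result turns the exponential naive recursion into a linear-time computation, at the cost of O(n) cache entries:

from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: more memory, far less recomputation
def fib_cached(n):
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(100))  # returns instantly; the uncached recursion would be astronomically slow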

Fibonacci Calculation Algorithms

1. Recursive (Naive) Approach

This approach directly implements the mathematical definition of Fibonacci numbers using recursion:


def fib_recursive(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        # Each call spawns two further calls, so the same subproblems
        # are recomputed many times over
        return fib_recursive(n - 1) + fib_recursive(n - 2)

Time Complexity: O(2^n) - Each call branches into two more calls, so the total number of calls grows exponentially with n.

Space Complexity: O(n) - Maximum recursion depth is n, which determines the maximum call stack size.
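The blow-up is easy to observe directly. The following sketch (an ad-hoc counting wrapper written for this illustration, not part of the benchmark code) counts how many calls the naive recursion makes:

def fib_recursive_counted(n, counter):
    counter[0] += 1  # count this call
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return (fib_recursive_counted(n - 1, counter)
            + fib_recursive_counted(n - 2, counter))

for n in (10, 20, 30):
    calls = [0]
    fib_recursive_counted(n, calls)
    print(n, calls[0])  # prints 177, 21891, and 2692537: exponential growth in n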

2. Iterative (Dynamic Programming) Approach

This approach calculates Fibonacci numbers iteratively, storing all previous results in an array:


def fib_iterative(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        fib_sequence = [0] * (n + 1)
        fib_sequence[0] = 0
        fib_sequence[1] = 1
        for i in range(2, n + 1):
            fib_sequence[i] = fib_sequence[i - 1] + fib_sequence[i - 2]
        return fib_sequence[n]
                

Time Complexity: O(n) - We perform a single loop from 2 to n.

Space Complexity: O(n) - We store an array of size n+1.

3. Iterative (Optimized) Approach

This approach optimizes memory usage by only storing the two most recent Fibonacci numbers:


def fib_iterative_optimized(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        a = 0
        b = 1
        for _ in range(2, n + 1):
            temp = a + b
            a = b
            b = temp
        return b
                

Time Complexity: O(n) - Still requires a single loop from 2 to n.

Space Complexity: O(1) - Only uses three variables regardless of input size.
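Incidentally, the same rolling update is often written with Python's simultaneous assignment; this variant (a stylistic alternative, not the code that was benchmarked) behaves identically:

def fib_iterative_optimized_alt(n):
    if n <= 0:
        return 0
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b  # simultaneous assignment replaces the temporary variable
    return b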

4. Memoization Approach

This approach uses a dictionary to store previously calculated values, preventing redundant calculations:


def fib_memo(n, memo=None):
    if memo is None:
        memo = {0: 0, 1: 1}  # Initialize with base cases

    if n <= 0:
        return 0  # Match the other implementations for non-positive n

    if n in memo:
        return memo[n]

    # Fill the memo bottom-up so each value is computed exactly once
    for i in range(2, n + 1):
        if i not in memo:
            memo[i] = memo[i - 1] + memo[i - 2]

    return memo[n]

Time Complexity: O(n) - Each Fibonacci number is calculated exactly once.

Space Complexity: O(n) - We store up to n+1 Fibonacci numbers in the memo dictionary.

5. Matrix Exponentiation Approach

This approach uses the mathematical property that Fibonacci numbers can be represented using matrix exponentiation:

\[ \begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]
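To see why this identity holds, note that one multiplication by the base matrix advances a pair of consecutive Fibonacci numbers by one step:

\[ \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix} = \begin{pmatrix} F_n + F_{n-1} \\ F_n \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} \]

Applying the matrix n times to the starting pair \((F_1, F_0) = (1, 0)\) gives the formula above.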

def fib_matrix(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        fib_matrix_base = [[1, 1], [1, 0]]
        result_matrix = power(fib_matrix_base, n - 1)
        return result_matrix[0][0]

def multiply_matrices(a, b):
    # Standard 2x2 matrix product
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def power(matrix, n):
    # Binary exponentiation (exponentiation by squaring)
    result = [[1, 0], [0, 1]]  # Identity matrix
    while n > 0:
        if n % 2 == 1:
            result = multiply_matrices(result, matrix)
        matrix = multiply_matrices(matrix, matrix)
        n //= 2
    return result

Time Complexity: O(log n) - The binary exponentiation algorithm reduces the number of multiplications needed.

Space Complexity: O(1) - Only a constant number of 2x2 matrices are used regardless of n.

Complexity Analysis Explanation:

Time Complexity - O(log n):

  • The matrix approach achieves logarithmic time complexity through binary exponentiation (also called "exponentiation by squaring").
  • Instead of multiplying the matrix by itself n times (which would be O(n)), we use the following principle:
    • If n is even: \(M^n = (M^{n/2})^2\)
    • If n is odd: \(M^n = M \times (M^{(n-1)/2})^2\)
  • The key insight is that we repeatedly square the matrix and halve the exponent (n //= 2); a worked example follows this list.
  • This process requires only log2(n) iterations because we divide n by 2 in each step until it reaches 0.
  • Each matrix multiplication is a constant-time operation since we're only working with 2×2 matrices.
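For example, since \(13 = 1101_2\), the loop computes \(M^2, M^4, M^8\) by repeated squaring and multiplies into the result only the powers corresponding to the set bits:

\[ M^{13} = M^{8} \cdot M^{4} \cdot M^{1} \]

That is four squarings and three multiplications into the result, instead of the twelve multiplications a naive computation of \(M^{13}\) would need.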

Space Complexity - O(1):

  • Throughout the algorithm, we only need to store a fixed number of 2×2 matrices: the base matrix, the running result, and any temporaries created during multiplication.
  • The size of these matrices doesn't change regardless of how large n becomes.
  • Therefore, the algorithm requires constant space O(1).

This demonstrates why the matrix method dramatically outperforms the O(n) algorithms for large inputs: for n = 1,000,000 the linear algorithms run on the order of a million loop iterations, while the matrix method needs only about 20 squaring steps (log2 of 1,000,000 ≈ 20).
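As a final sanity check, the implementations above can be compared directly. The following sketch (timings are illustrative only and depend on the machine and Python version; the naive recursion is omitted because it would not finish at this size) confirms that the algorithms agree and reproduces the shape of the results reported earlier:

import timeit

n = 10_000
implementations = [fib_iterative, fib_iterative_optimized, fib_memo, fib_matrix]

# All implementations should return the same Fibonacci number
reference = fib_iterative(n)
assert all(f(n) == reference for f in implementations)

# Rough per-call timing; the matrix method pulls further ahead as n grows
for f in implementations:
    seconds = timeit.timeit(lambda: f(n), number=10) / 10
    print(f"{f.__name__:>25}: {seconds * 1000:.3f} ms")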

Conclusion

This analysis of Fibonacci algorithms reveals several important lessons:

  1. Algorithm choice matters: For large inputs, the theoretical advantage of an O(log n) algorithm over an O(n) algorithm creates significant real-world performance differences.
  2. Mathematics can improve computing: The matrix exponentiation approach leverages mathematical properties to achieve superior performance.
  3. Space-time trade-offs: As demonstrated by the optimized iterative approach, reducing memory usage can sometimes improve time performance as well by enhancing cache efficiency.
  4. Practical considerations: For small inputs, even the theoretically "slower" algorithms perform adequately, suggesting that complexity analysis is most important when working with large data sets.

The Fibonacci sequence serves as an excellent example for studying algorithmic efficiency because:

  • It has a simple mathematical definition that can be implemented in many ways
  • The naive solution is highly inefficient, creating a clear opportunity for optimization
  • It demonstrates how mathematical insights can lead to dramatically more efficient algorithms

Understanding these patterns helps develop intuition for algorithm optimization in more complex problems.
