1. Recursive (Naive) Approach
This approach directly implements the mathematical definition of Fibonacci numbers using recursion:
def fib_recursive(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib_recursive(n - 1) + fib_recursive(n - 2)
Time Complexity: O(2^n) - Each call branches into two further calls, so the number of calls grows exponentially with n (the call-count sketch below makes this concrete).
Space Complexity: O(n) - Maximum recursion depth is n, which determines the maximum call stack size.
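A minimal sketch of that growth, using a hypothetical call-counting helper (not part of the implementation above), shows how quickly the number of calls explodes:

def fib_recursive_counted(n, counter):
    # Same recursion as fib_recursive, but tallies every call in counter[0].
    counter[0] += 1
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib_recursive_counted(n - 1, counter) + fib_recursive_counted(n - 2, counter)

for n in (10, 20, 30):
    calls = [0]
    fib_recursive_counted(n, calls)
    print(f"n={n}: {calls[0]} calls")
# n=10: 177 calls
# n=20: 21891 calls
# n=30: 2692537 calls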
2. Iterative (Dynamic Programming) Approach
This approach calculates Fibonacci numbers iteratively, storing all previous results in an array:
def fib_iterative(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        fib_sequence = [0] * (n + 1)
        fib_sequence[0] = 0
        fib_sequence[1] = 1
        for i in range(2, n + 1):
            fib_sequence[i] = fib_sequence[i - 1] + fib_sequence[i - 2]
        return fib_sequence[n]
Time Complexity: O(n) - We perform a single loop from 2 to n.
Space Complexity: O(n) - We store an array of size n+1.
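To make the table explicit, this short sketch rebuilds the same array for n = 10 by hand; the last entry is exactly what fib_iterative(10) returns:

n = 10
table = [0] * (n + 1)   # same layout as fib_sequence inside fib_iterative
table[1] = 1
for i in range(2, n + 1):
    table[i] = table[i - 1] + table[i - 2]
print(table)             # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(fib_iterative(n))  # 55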
3. Iterative (Optimized) Approach
This approach optimizes memory usage by only storing the two most recent Fibonacci numbers:
def fib_iterative_optimized(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        a = 0
        b = 1
        for _ in range(2, n + 1):
            temp = a + b
            a = b
            b = temp
        return b
Time Complexity: O(n) - Still requires a single loop from 2 to n.
Space Complexity: O(1) - Only uses three variables regardless of input size.
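An equivalent, slightly more idiomatic way to write the same loop in Python uses tuple unpacking instead of a temporary variable; this is only a stylistic variant of the function above:

def fib_iterative_optimized_v2(n):
    # Identical logic to fib_iterative_optimized, written with simultaneous assignment.
    if n <= 0:
        return 0
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b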
4. Memoization Approach
This approach uses a dictionary to store previously calculated values, preventing redundant calculations:
def fib_memo(n, memo=None):
    if memo is None:
        memo = {0: 0, 1: 1}  # Initialize with base cases
    if n <= 0:
        return 0  # Guard against negative input, matching the other implementations
    if n in memo:
        return memo[n]
    for i in range(2, n + 1):
        if i not in memo:
            memo[i] = memo[i - 1] + memo[i - 2]
    return memo[n]
Time Complexity: O(n) - Each Fibonacci number is calculated exactly once.
Space Complexity: O(n) - We store up to n+1 Fibonacci numbers in the memo dictionary.
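Memoization is more often written top-down, letting the recursion fill the cache as it unwinds. A minimal alternative sketch using Python's standard functools.lru_cache decorator (not the dictionary version above) looks like this:

from functools import lru_cache

@lru_cache(maxsize=None)   # cache every distinct argument
def fib_memo_topdown(n):
    if n <= 0:
        return 0
    if n == 1:
        return 1
    return fib_memo_topdown(n - 1) + fib_memo_topdown(n - 2)

Note that this variant is still bounded by Python's recursion depth limit for large n, unlike the loop-based dictionary version above.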
5. Matrix Exponentiation Approach
This approach uses the mathematical property that Fibonacci numbers can be represented using matrix exponentiation:
\[ \begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]
def fib_matrix(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        fib_matrix_base = [[1, 1], [1, 0]]
        result_matrix = power(fib_matrix_base, n - 1)
        return result_matrix[0][0]

def multiply_matrices(a, b):
    # 2x2 matrix product, required by power() below.
    return [
        [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
        [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]],
    ]

def power(matrix, n):
    result = [[1, 0], [0, 1]]  # Identity matrix
    while n > 0:
        if n % 2 == 1:
            result = multiply_matrices(result, matrix)
        matrix = multiply_matrices(matrix, matrix)
        n //= 2
    return result
Time Complexity: O(log n) - The binary exponentiation algorithm reduces the number of multiplications needed.
Space Complexity: O(1) - Only a constant number of 2x2 matrices are used regardless of n.
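A quick cross-check against the O(n) implementation confirms the matrix version produces the same values (F(100) in the comment is the well-known 21-digit value):

assert all(fib_matrix(i) == fib_iterative_optimized(i) for i in range(200))
print(fib_matrix(100))   # 354224848179261915075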
Complexity Analysis Explanation:
Time Complexity - O(log n):
- The matrix approach achieves logarithmic time complexity through binary exponentiation (also called "exponentiation by squaring").
- Instead of multiplying the matrix by itself n times (which would be O(n)), we use the following principle:
- If n is even: \(M^n = (M^{n/2})^2\)
- If n is odd: \(M^n = M \times (M^{(n-1)/2})^2\)
- The key insight is that we repeatedly square the matrix and halve the exponent (n //= 2).
- This process requires only about log2(n) iterations because n is halved in each step until it reaches 0 (the short trace after this list makes the halving explicit).
- Each matrix multiplication involves only a fixed number of arithmetic operations, since we are always working with 2×2 matrices.
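To make the halving concrete, this hypothetical trace mirrors only the exponent handling inside power() for n = 13; a multiplication into result happens exactly at the steps where the current exponent is odd:

def trace_exponent(n):
    # Follows the same n-halving loop as power(), without the matrix work.
    step = 0
    while n > 0:
        print(f"step {step}: n = {n}, multiply into result: {n % 2 == 1}")
        n //= 2
        step += 1

trace_exponent(13)
# step 0: n = 13, multiply into result: True
# step 1: n = 6, multiply into result: False
# step 2: n = 3, multiply into result: True
# step 3: n = 1, multiply into result: True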
Space Complexity - O(1):
- Throughout the algorithm, we only need to store a fixed number of 2×2 matrices:
- The original matrix, the result matrix, and any temporary matrices needed during multiplication.
- The size of these matrices doesn't change regardless of how large n becomes.
- Therefore, the algorithm requires constant space O(1).
This demonstrates why the matrix method dramatically outperforms the O(n) algorithms for large inputs: for n = 1,000,000 the linear approaches run roughly 1,000,000 loop iterations, while the matrix method needs only about 20 squaring steps (log2 of 1,000,000 ≈ 20), counting each 2×2 matrix multiplication as a single operation.
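A rough way to observe this gap is the timing sketch below, which assumes the fib_iterative_optimized and fib_matrix functions defined earlier. Absolute numbers vary by machine, and Python's big-integer arithmetic means the individual additions and multiplications are not truly constant-time, so treat the output as illustrative only:

import time

def time_call(fn, n):
    # Single-run wall-clock timing in seconds; good enough for an order-of-magnitude comparison.
    start = time.perf_counter()
    fn(n)
    return time.perf_counter() - start

n = 100_000
print(f"fib_iterative_optimized({n}): {time_call(fib_iterative_optimized, n):.4f} s")
print(f"fib_matrix({n}):              {time_call(fib_matrix, n):.4f} s")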