Adapted from problem 306 of "¡Acepta el reto!".
Solution: A useful method when solving for the general term of a linear, homogeneous recurrence relation is to consider the matrix that maps the vector of $n$ consecutive terms $(a_i, \ldots, a_{i+n-1})$ to $(a_{i+1}, \ldots, a_{i+n})$, where $n$ is the order of the recurrence, i.e., the number of initial conditions. Such a matrix has a very particular shape: ones just above the diagonal and the coefficients of the recurrence on the last row. Typically you would try to diagonalize it, or at least find its Jordan canonical form, to make computations trivial; as we will see later, however, that approach is of no use here.
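Concretely, writing the recurrence as $a_{i+n} = c_1 a_i + c_2 a_{i+1} + \cdots + c_n a_{i+n-1}$, the matrix in question acts as
$$ \begin{pmatrix} 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1\\ c_1 & c_2 & \cdots & c_n \end{pmatrix} \begin{pmatrix} a_i\\ a_{i+1}\\ \vdots\\ a_{i+n-1} \end{pmatrix} = \begin{pmatrix} a_{i+1}\\ a_{i+2}\\ \vdots\\ a_{i+n} \end{pmatrix}. $$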
Consider the particular case that concerns us right now: here the aforementioned matrix will be
$$ F := \begin{pmatrix} 0 & 1\\ 1 & 1 \end{pmatrix}, \qquad \text{because} \qquad F \begin{pmatrix} \operatorname{fib}_{k-1}\\ \operatorname{fib}_k \end{pmatrix} = \begin{pmatrix} \operatorname{fib}_k\\ \operatorname{fib}_{k+1} \end{pmatrix}; $$
hence, imposing the initial conditions $\operatorname{fib}_0 = 0$, $\operatorname{fib}_1 = 1$ and extracting the last component, we obtain a matrix formula for $\operatorname{fib}_n$:
$$ \operatorname{fib}_n = \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1\\ 1 & 1 \end{pmatrix}^{n-1} \begin{pmatrix} 0\\ 1 \end{pmatrix}. $$
If you insist on diagonalizing $F$, you will get the eigenvalues $\varphi = (1 + \sqrt{5})/2$ and $\tilde\varphi = (1 - \sqrt{5})/2$, which in turn forces the general term to involve the $n$th power of an irrational number; computing it in floating point will likely be inaccurate due to rounding errors. Instead, since multiplying two-by-two matrices is computationally very cheap, we can stay in the integers and perform all the computations there.
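For reference, the closed form that diagonalization yields is Binet's formula,
$$ \operatorname{fib}_n = \frac{\varphi^n - \tilde\varphi^n}{\sqrt{5}}, $$
which is exact over the reals but, evaluated in double precision, stops producing the exact integer once $\operatorname{fib}_n$ exceeds $2^{53}$.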
The tricky part at this point is achieving logarithmic time complexity, since computing $F, F^2, \ldots, F^{n-1}$ one product at a time is linear, and therefore no better than the naive dynamic-programming approach. Consider an easy test case, when $n-1$ is a power of two: there one could simply keep squaring the matrix until done. Now notice that the benefit of working in binary is precisely that any number decomposes into a sum of powers of two, so we can apply this same squaring procedure to each part separately and multiply the results together.
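For instance, for the exponent $13 = 8 + 4 + 1$ we only need the repeated squares $F^2$, $F^4$, and $F^8$, plus two extra products:
$$ F^{13} = F^{8} F^{4} F^{1}; $$
in general, computing $F^{n-1}$ this way takes $O(\log n)$ matrix multiplications.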
This fundamental algorithm is called square-and-multiply, repeated squaring, or binary exponentiation, and the implementation of the idea is to iterate over the bits of the exponent: at each step you square a residual term, which you multiply into the running total whenever the corresponding bit is $1$. Furthermore, we can leverage GCC's __builtin_clzll extension to count the leading zeros of the exponent, which gives us the position of its most significant set bit and saves a few more cycles. Using C++20 concepts, we can generalize this algorithm to any element of a monoid $A$, as shown below:
And with this machinery already built, the Fibonacci case is nothing but a small corollary. Defining a $2 \times 2$ integer matrix class and setting up the constructors correctly, we get something along these lines:
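A hypothetical version of that class might look as follows; making the default constructor produce the identity matrix is one convenient convention, so that default construction yields the neutral element of the monoid (overflow is left unchecked as unsigned wraparound):

```cpp
#include <cstdint>

// 2x2 unsigned-integer matrix forming a monoid under multiplication.
// Field names e11..e22 are illustrative, stored row-major.
struct Mat2 {
    std::uint64_t e11, e12, e21, e22;

    // Default-constructed matrices are the identity, the monoid's neutral element.
    constexpr Mat2() : e11(1), e12(0), e21(0), e22(1) {}
    constexpr Mat2(std::uint64_t a, std::uint64_t b,
                   std::uint64_t c, std::uint64_t d)
        : e11(a), e12(b), e21(c), e22(d) {}

    // Ordinary 2x2 matrix product.
    friend Mat2 operator*(const Mat2& x, const Mat2& y) {
        return {x.e11 * y.e11 + x.e12 * y.e21,
                x.e11 * y.e12 + x.e12 * y.e22,
                x.e21 * y.e11 + x.e22 * y.e21,
                x.e21 * y.e12 + x.e22 * y.e22};
    }
};
```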
Finally, we can instantiate the base matrix in main, fetch the given $n$, and invoke pow, then extract the $e_{22}$ component of $F^{n-1}$, which equals $\operatorname{fib}_n$; we are done.