2.4: Solving Recurrence Relations

Example \(\PageIndex{1}\)

Find a recurrence relation and initial conditions for \(1, 5, 17, 53, 161, 485,\ldots\text{.}\)

Solution

Finding the recurrence relation would be easier if we had some context for the problem (like the Tower of Hanoi, for example). Alas, we have only the sequence. Remember, the recurrence relation tells you how to get from previous terms to future terms. What is going on here? We could look at the differences between terms: \(4, 12, 36, 108, \ldots\text{.}\) Notice that these are growing by a factor of 3. Is the original sequence as well? \(1\cdot 3 = 3\text{,}\) \(5 \cdot 3 = 15\text{,}\) \(17 \cdot 3 = 51\) and so on. It appears that we always end up with 2 less than the next term. Aha!

So \(a_n = 3a_{n-1} + 2\) is our recurrence relation and the initial condition is \(a_0 = 1\text{.}\)
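A quick way to sanity-check the relation is to regenerate the sequence from it (a small illustrative script, not part of the original derivation):

```python
# Regenerate the sequence from a_n = 3*a_{n-1} + 2 with a_0 = 1.
def generate(n_terms):
    terms = [1]  # a_0 = 1
    while len(terms) < n_terms:
        terms.append(3 * terms[-1] + 2)  # a_n = 3*a_{n-1} + 2
    return terms

print(generate(6))  # -> [1, 5, 17, 53, 161, 485]
```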


Recurrence Relations

A recurrence relation is an equation in which each term of the sequence is defined as a function of the preceding terms.

There are different ways of solving these relations; we're going to examine the following.

Solving recurrence relations by repeated derivation/substitution

The premise for this type of solution is to continually substitute the recurrence relation into its own right-hand side (i.e. substitute a value into the original equation and then derive a previous version of the equation). The various derivations should lead to a generalized form of the equation that can be solved for a time bound on the algorithm. For example, take

$T(n) = T(n/2) + c$

The only substitution available at this point is to substitute n/2 for n in the original equation:

$T(n/2) = T(n/4) + c$

Add c to both sides to derive the original equation:

$T(n) = T(n/4) + 2c$

Now that another equation has been derived there is a new substitution that can be made in the original: n/4 for n, giving

$T(n/4) = T(n/8) + c \qquad\text{and so}\qquad T(n) = T(n/8) + 3c$

In general, after k substitutions the basic equation is:

$T(n) = T(n/2^k) + kc$

Making the assumption that $n = 2^k$ (so $k = \log_2 n$) allows for:

$T(n) = T(1) + c\log_2 n = \Theta(\log n)$
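The generalized form above can be checked numerically. This is a minimal sketch, assuming the concrete recurrence T(n) = T(n/2) + c with T(1) = 1 and c = 5 (both values chosen arbitrarily for illustration):

```python
# T(n) = T(n // 2) + c for n > 1, with T(1) = 1 and an arbitrary constant c = 5.
def T(n, c=5):
    if n <= 1:
        return 1
    return T(n // 2, c) + c

# For n = 2**k the generalized form predicts T(n) = T(1) + c*k.
for k in range(11):
    n = 2 ** k
    assert T(n) == 1 + 5 * k
print(T(1024))  # -> 51
```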

Solving recurrence relations by telescoping

The premise for this type of solution is to substitute successive values for n, add up the resulting equations, and cancel like terms. For instance, for a recurrence such as $T(n) = T(n-1) + n$, writing the relation for $n, n-1, \ldots, 1$ and summing makes every intermediate $T$ term cancel, leaving $T(n) = T(0) + \sum_{i=1}^{n} i$.
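As a concrete illustration (the original worked example did not survive extraction, so the recurrence T(n) = T(n-1) + n with T(0) = 0 is assumed here), telescoping yields the closed form n(n+1)/2:

```python
# Telescoping T(n) = T(n-1) + n with T(0) = 0:
# T(n) - T(n-1) = n, T(n-1) - T(n-2) = n-1, ..., T(1) - T(0) = 1.
# Summing, the left side collapses to T(n) - T(0), so T(n) = 1 + 2 + ... + n.
def T(n):
    total = 0
    for i in range(1, n + 1):  # the surviving right-hand terms
        total += i
    return total

assert T(100) == 100 * 101 // 2  # closed form n(n+1)/2
print(T(100))  # -> 5050
```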


The Master method

The Master method is applicable for solving recurrences of the form
$T(n) = aT\left(\frac{n}{b}\right) + f(n)$
where $a \ge 1$ and $b > 1$ are constants and $f(n)$ is an asymptotically positive function. Depending upon the values of $a$, $b$ and the function $f(n)$, the master method has three cases.

  1. If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.
  2. If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a}\log n)$.
  3. If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $af(n/b) \le cf(n)$ for some constant $c < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$.

The Akra-Bazzi method

The Akra-Bazzi method generalizes the master method to recurrences of the form
$T(n) = \sum_{i=1}^{k} a_i T(b_i n) + f(n)$
where

  1. $a_i > 0$ is a constant for $1 \le i \le k$,
  2. $b_i \in (0, 1)$ is a constant for $1 \le i \le k$,
  3. $k \ge 1$ is a constant, and
  4. $f(n)$ is a non-negative function.

The solution of the recurrence above is
$T(n) = \Theta\left(n^p \left( 1 + \int_{1}^{n} \frac{f(u)}{u^{p+1}}\, du \right) \right)$
provided
$\sum_{i=1}^{k} a_i b_i^p = 1$, where $p$ is a unique real number.

Example 1: Consider a recurrence,
$T(n) = 2T(n/4) + 3T(n/6) + \Theta(n\log n)$

For this recurrence, $a_1 = 2, b_1 = 1/4, a_2 = 3, b_2 = 1/6, f(n) = n\log n$. The value of $p$ can be calculated as,
$a_1b_1^p + a_2b_2^p = 2\times(1/4)^p + 3\times (1/6)^p = 1$
$p = 1$ satisfies the above equation. The solution is
$\begin{aligned} T(n) &= \Theta\left(n^p \left( 1 + \int_{1}^{n} \frac{f(u)}{u^{p+1}}\, du \right) \right)\\ &= \Theta\left(n \left( 1 + \int_{1}^{n} \frac{\log u}{u}\, du \right) \right)\\ &= \Theta\left(n \left( 1 + \frac{\log^2 n}{2} \right) \right)\\ &= \Theta(n\log^2 n) \end{aligned}$
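The exponent p can also be found numerically. This sketch solves 2(1/4)^p + 3(1/6)^p = 1 by bisection (the left-hand side is strictly decreasing in p):

```python
# Solve 2*(1/4)**p + 3*(1/6)**p = 1 for p by bisection.
def g(p):
    return 2 * (1 / 4) ** p + 3 * (1 / 6) ** p - 1

lo, hi = 0.0, 2.0          # g(0) = 4 > 0, g(2) < 0, and g is decreasing
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid           # root lies above mid
    else:
        hi = mid           # root lies at or below mid

p = (lo + hi) / 2
print(round(p, 6))  # -> 1.0, matching the hand calculation
```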


Back substitution method

In the forward substitution method, we put $n = 0, 1, 2, \ldots$ into the recurrence relation until we see a pattern. In backward substitution, we do the opposite, i.e. we put $n = n, n - 1, n - 2, \ldots$ or $n = n, n/2, n/4, \ldots$ until we see the pattern. After we see the pattern, we guess the running time and then verify the guess. Let us use this method in some examples.

Consider an example recurrence relation given below:
$T(n) = \begin{cases} 1 & \text{if } n = 1\\ 2T\left(\frac{n}{2}\right) + n & \text{otherwise} \end{cases}$

Given $T(n)$, we can calculate the value of $T(n/2)$ from the above recurrence relation as
$T(n/2) = 2T\left(\frac{n}{4}\right) + \frac{n}{2}$
Now we back substitute the value of $T(n/2)$ into $T(n)$:
$T(n) = 2^2 T\left(\frac{n}{2^2}\right) + 2n$
We proceed in a similar way:
$\begin{aligned} T(n) &= 2^3 T\left(\frac{n}{2^3}\right) + 3n\\ &= 2^4 T\left(\frac{n}{2^4}\right) + 4n\\ &\;\;\vdots\\ &= 2^k T\left(\frac{n}{2^k}\right) + kn \end{aligned}$
Now we should use the boundary (base) condition, i.e. $T(1) = 1$. In order to use the boundary condition, the quantity inside $T(\cdot)$ must be 1, i.e.
$\frac{n}{2^k} = 1$
Taking $\log_2$ on both sides,
$k = \log_2 n$
The general equation above becomes
$\begin{aligned} T(n) &= 2^{\log_2 n}\, T\left(\frac{n}{2^{\log_2 n}}\right) + (\log_2 n)\cdot n\\ &= nT(1) + n\log_2 n\\ &= n\log_2 n + n \end{aligned}$
The correctness of above running time can be proved using induction. Put $n = 2, 4, 8, 16, …$ and you can easily verify that the guessed running time is actually correct.
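For powers of two the back-substitution result is exact, which is easy to confirm with a direct computation (an illustrative check, not part of the original text):

```python
# T(1) = 1; T(n) = 2*T(n//2) + n, evaluated at powers of two.
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for k in range(15):
    n = 2 ** k
    assert T(n) == n * k + n  # n*log2(n) + n, with log2(n) = k
print(T(1024))  # -> 11264
```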

We rarely use the forward and backward substitution methods in practice; there are much more sophisticated and faster methods. But these methods can be used as a last resort when other methods are powerless to solve some kinds of recurrences.


Many algorithms are recursive in nature. When we analyze them, we get a recurrence relation for time complexity: the running time on an input of size n is expressed as a function of n and the running time on inputs of smaller sizes. For example, in Merge Sort, to sort a given array, we divide it into two halves and recursively repeat the process for the two halves. Finally, we merge the results. The time complexity of Merge Sort can be written as T(n) = 2T(n/2) + cn. There are many other such algorithms, like Binary Search, Tower of Hanoi, etc.

There are mainly three ways for solving recurrences.

1) Substitution Method: We make a guess for the solution and then we use mathematical induction to prove the guess is correct or incorrect.

2) Recurrence Tree Method: In this method, we draw a recurrence tree and calculate the time taken by every level of the tree. Finally, we sum the work done at all levels. To draw the recurrence tree, we start from the given recurrence and keep drawing till we find a pattern among levels. The pattern is typically an arithmetic or geometric series.

3) Master Method:
The Master Method is a direct way to get the solution. It works only for recurrences of the form T(n) = aT(n/b) + f(n) with a >= 1 and b > 1, or for recurrences that can be transformed to that form.

There are the following three cases (for f(n) = n^c):
1. If f(n) = O(n^c) where c < log_b a, then T(n) = Θ(n^(log_b a))

2. If f(n) = Θ(n^c) where c = log_b a, then T(n) = Θ(n^c log n)

3. If f(n) = Ω(n^c) where c > log_b a, then T(n) = Θ(f(n))

How does this work?
The Master Method is mainly derived from the recurrence tree method. If we draw the recurrence tree of T(n) = aT(n/b) + f(n), we can see that the work done at the root is f(n) and the work done at all leaves is Θ(n^(log_b a)). The height of the recurrence tree is log_b n.

In the recurrence tree method, we calculate the total work done. If the work done at the leaves is polynomially more, then the leaves are the dominant part, and our result becomes the work done at the leaves (Case 1). If the work done at the leaves and the root is asymptotically the same, then our result becomes the height multiplied by the work done at any level (Case 2). If the work done at the root is asymptotically more, then our result becomes the work done at the root (Case 3).

Examples of some standard algorithms whose time complexity can be evaluated using the Master Method:
Merge Sort: T(n) = 2T(n/2) + Θ(n). It falls in Case 2, as c is 1 and log_b a is also 1. So the solution is Θ(n log n).

Binary Search: T(n) = T(n/2) + Θ(1). It also falls in Case 2, as c is 0 and log_b a is also 0. So the solution is Θ(log n).
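The case analysis above can be wrapped in a small helper. This is an illustrative sketch (the function name and return convention are my own), classifying T(n) = aT(n/b) + Θ(n^c):

```python
import math

# Classify T(n) = a*T(n/b) + Theta(n^c) by the three master-method cases.
# Returns (case, e) where the bound is:
#   case 1: Theta(n^e) with e = log_b(a); case 2: Theta(n^e log n); case 3: Theta(n^e) with e = c.
def master_case(a, b, c, tol=1e-9):
    log_b_a = math.log(a) / math.log(b)
    if abs(c - log_b_a) < tol:
        return (2, c)          # leaves and root balance
    if c < log_b_a:
        return (1, log_b_a)    # leaves dominate
    return (3, c)              # root dominates

print(master_case(2, 2, 1))  # Merge Sort    -> (2, 1)
print(master_case(1, 2, 0))  # Binary Search -> (2, 0)
```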

Notes:
1) It is not necessary that a recurrence of the form T(n) = aT(n/b) + f(n) can be solved using the Master Theorem. The given three cases have some gaps between them. For example, the recurrence T(n) = 2T(n/2) + n/log n cannot be solved using the master method.

2) Case 2 can be extended for f(n) = Θ(n^c log^k n):
If f(n) = Θ(n^c log^k n) for some constant k >= 0 and c = log_b a, then T(n) = Θ(n^c log^(k+1) n)



The Recursion Tree Method is another way of solving recurrence relations. In this method, a recurrence relation is converted into a recursion tree in which each node represents the cost incurred at that level of recursion. To find the total cost, the costs of all levels are summed up.

  1. Draw a recursive tree for given recurrence relation
  2. Calculate the cost at each level and count the total number of levels in the recursion tree.
  3. Count the total number of nodes in the last level and calculate the cost of the last level
  4. Sum up the cost of all the levels in the recursive tree

Let us see how to solve these recurrence relations with the help of some examples:

Question 1: T(n) = 2T(n/2) + c

  • Step 2: Calculate the work done or cost at each level and count the total number of levels in the recursion tree.

Recursion tree with each level's cost: level 0 costs c, level 1 costs 2c, level 2 costs 4c, and so on.

Count the total number of levels –

Choose the longest path from the root node to a leaf node:

n → n/2 → n/2^2 → … → n/2^k

Size of the problem at the last level = n/2^k

At the last level the size of the problem becomes 1: n/2^k = 1, so k = log2(n)

Total number of levels in the recursion tree = k + 1 = log2(n) + 1

  • Step 3: Count the total number of nodes in the last level and calculate the cost of the last level.

The last level has 2^k = n nodes, each costing Θ(1), so the cost of the last level is Θ(n).

  • Step 4: Sum up the cost of all the levels in the recursion tree.

T(n) = c + 2c + 4c + … + 2^(k-1) c + (last level cost)

= c(2^0 + 2^1 + 2^2 + … + 2^(log2(n) - 1)) + Θ(n)

The bracketed sum is a geometric progression (G.P.) with ratio 2, which sums to 2^(log2(n)) - 1 = n - 1, so

T(n) = c(n - 1) + Θ(n)

Thus, T(n) = Θ(n)
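The Θ(n) conclusion can be checked directly. This sketch assumes the base case T(1) = c, so the closed form for powers of two is T(n) = (2n - 1)c:

```python
# T(n) = 2*T(n//2) + c with T(1) = c (base-case cost assumed to be c as well).
def T(n, c):
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c

# Internal nodes contribute c*(n-1) and the n leaves contribute n*c,
# giving T(n) = (2n - 1)*c, which is linear in n.
for k in range(12):
    n = 2 ** k
    assert T(n, 7) == (2 * n - 1) * 7
print(T(1024, 7))  # -> 14329
```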

Question 2: T(n) = T(n/10) + T(9n/10) + n

  • Step 2: Calculate the work done or cost at each level and count the total number of levels in the recursion tree.

Recursion tree with each level's cost: the root costs n, and each level below it again sums to at most n.

Count the total number of levels –

Choose the longest path from the root node to a leaf node:

(9/10)^0 n → (9/10)^1 n → (9/10)^2 n → … → (9/10)^k n

Size of the problem at the last level = (9/10)^k n

At the last level the size of the problem becomes 1:

(9/10)^k n = 1

(9/10)^k = 1/n

k = log10/9(n)

  • Step 3: Count the total number of nodes in the last level and calculate the cost of the last level. Each level of the tree contributes cost at most n, and there are log10/9(n) + 1 levels, so T(n) = O(n log n).

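A numeric check of Question 2's recurrence T(n) = T(n/10) + T(9n/10) + n. This sketch assumes integer floors and the base case T(n) = n for n < 10 (the exact base case does not affect the asymptotics):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n < 10:                 # assumed base case
        return n
    return T(n // 10) + T(9 * n // 10) + n

# T(n)/n should grow with n (T is superlinear) but stay within O(log n).
assert T(10_000) / 10_000 > T(100) / 100
levels = math.log(10_000) / math.log(10 / 9) + 2   # upper bound on tree depth
assert T(10_000) < 10_000 * levels
print(T(10_000))
```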


3.2. When T(n) = aT(n-1) + f(n)

This is a little trickier, because as the arguments to the f's drop, they are multiplied by more and more a's. After some backward substitution it is not hard to recognize the pattern

T(n) = a^k T(n-k) + Σ_{i=0}^{k-1} a^i f(n-i) = a^n T(0) + Σ_{i=0}^{n-1} a^i f(n-i).

Example: T(n) = 2T(n-1) + n. Then from the formula

T(n) = 2^n T(0) + Σ_{i=0}^{n-1} 2^i (n-i).

This turns out to be a rather painful sum to solve exactly, but we can reasonably guess that it's somewhere between 2^n and n 2^n, and try guess-but-verify to whittle the range down further.
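For this example the sum can in fact be evaluated (Σ_{i=0}^{n-1} 2^i (n-i) = 2^(n+1) - n - 2), giving T(n) = 3·2^n - n - 2 once an initial condition is fixed. The script below assumes T(0) = 1 for concreteness and checks both the closed form and the guessed range:

```python
# T(n) = 2*T(n-1) + n, with T(0) = 1 assumed for concreteness.
def T(n):
    value = 1
    for i in range(1, n + 1):
        value = 2 * value + i
    return value

for n in range(3, 16):
    assert T(n) == 3 * 2 ** n - n - 2        # exact closed form
    assert 2 ** n < T(n) < n * 2 ** n        # the guessed range, for n >= 3
print(T(10))  # -> 3060
```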


Solving Non-Linear Relations

  • The above theorem is great if you have a homogeneous linear recurrence.
    • &hellip; with distinct roots.
    • And the related theorem in the text can handle characteristic equations with repeated roots.
    • Suppose we ignore the non-linear part and just look at the homogeneous part: \[h_n=c_1h_{n-1} + c_2h_{n-2} + \cdots + c_kh_{n-k}\,.\]
    • We can solve that and get a formula for \(h_n\).
    • If we're lucky, we might be able to find a solution for \(a_n\) for some initial conditions, but maybe not the ones we're interested in.
    • As a running example, take \(a_n=3a_{n-1}+2n\) with \(a_1=3\). We can use the previous theorem (or just guess-and-prove) that solutions to \(h_n=3h_{n-1}\) are in the form \(h_n=d\cdot 3^{n}\).
    • If we are willing to accept any initial conditions, we might be able to get a particular solution \(p_n\).
    • We could guess that there might be a linear solution in the form \(p_n=cn+d\) for some constants \(c,d\).
    • We would be right. There is such a solution iff for all \(n> 1\): \[\begin{aligned} p_n &= 3p_{n-1} + 2n\\ cn+d&= 3(c(n-1)+d) + 2n\\ cn+d&= 3cn-3c+3d+2n\\ 0&=(2c+2)n+(2d-3c)\,. \end{aligned}\] This is true for all \(n\) iff \(2c+2=0\) and \(2d-3c=0\). Solving these, we get \(c=-1\) and \(d=-3/2\).
    • We have a solution to the recurrence: \(p_n=-n-3/2\).
    • That isn't actually a useful solution: we're interested in \(a_1=3\) and that's not what \(p_1\) is. We have the wrong initial conditions.

    Theorem: For a recurrence relation in the form \[a_n=c_1a_{n-1} + c_2a_{n-2} + \cdots + c_ka_{n-k} + F(n)\,,\] if we have a solution to the homogeneous linear part \(h_n\) (as described above), and a particular solution \(p_n\), then all solutions to the recurrence are in the form \(a_n=h_n+p_n\).

    Proof: Suppose we have any solution \(a_n\) to the original recurrence. We know that \(p_n\) is also a solution: \[\begin{aligned} a_n&=c_1a_{n-1} + c_2a_{n-2} + \cdots + c_ka_{n-k} + F(n)\\ p_n&=c_1p_{n-1} + c_2p_{n-2} + \cdots + c_kp_{n-k} + F(n) \end{aligned}\]

    Subtracting these, we get \[a_n-p_n=c_1(a_{n-1}-p_{n-1}) + c_2(a_{n-2}-p_{n-2})+ \cdots + c_k(a_{n-k}-p_{n-k})\,.\] Thus, \(a_n-p_n\) is a solution to the homogeneous part, \(h_n\).

    So, any solution \(a_n\) can be written in the form \(a_n=h_n+p_n\) for some solution to the homogeneous recurrence. ∎

    • The above theorem tells us that all solutions to the recurrence look like \(a_n=d\cdot 3^{n}-n-3/2\).
    • We just have to fill in \(d\): \[\begin{aligned} a_1&=3\\ d\cdot 3^{1}-1-3/2 &= 3\\ d&= 11/6\,. \end{aligned}\]
    • We finally have \(a_n=\tfrac{11}{6}\cdot 3^{n}-n-3/2\).
    • Now consider \(a_n=2a_{n-1}+3^n\) with \(a_1=5\). The homogeneous part of this is \(h_n=2h_{n-1}\). We could guess, but let's use the theorem.
    • The characteristic equation for this recurrence is \(r-2=0\), which has solution \(r=2\). Thus, solutions are in the form \(h_n=d\cdot2^n\).
    • For a particular solution, guess that \(p_n=c\cdot 3^n\) for some \(c\). We can confirm the guess by finding a constant \(c\). To do this, we substitute into the recurrence: \[\begin{aligned} p_n &= 2p_{n-1}+3^n\\ c\cdot 3^n &= 2(c\cdot 3^{n-1})+3^n\\ c(3^n-2\cdot 3^{n-1}) &= 3^n\\ c(3-2) &= 3^n/3^{n-1}\\ c&=3\,. \end{aligned}\]
    • We have a particular solution (for no particular initial conditions) of \(p_n=3^{n+1}\).
    • Now we can satisfy our initial condition, since all solutions to the recurrence are in the form \(a_n=d\cdot 2^n+3^{n+1}\): \[\begin{aligned} a_n &= d\cdot 2^n + 3^{n+1}\\ 5=a_1 &= d\cdot 2^1 + 3^{2}\\ 5 &= 2d+9\\ d&=-2\,. \end{aligned}\]
    • So, \(a_n=-2^{n+1}+ 3^{n+1}\).
    • Finally, consider \(a_n=5a_{n-1}+6a_{n-2}+7^n\) with \(a_0=1\) and \(a_1=6\). The homogeneous part has characteristic equation \(r^2-5r-6=0\), so roots \(r_1=6, r_2=-1\).
    • So, solutions are in the form \(h_n=d_1 6^n+d_2 (-1)^n\).
    • For a particular solution, we should find something in the form \(p_n=c\cdot 7^n\): \[\begin{aligned} p_n = c\cdot 7^n &= 5c\cdot 7^{n-1}+6c\cdot 7^{n-2}+7^n\\ c\cdot 7^2 &= 5c\cdot 7^1+6c\cdot 7^0+7^2\\ 8c &= 49\\ c&= 49/8\,. \end{aligned}\]
    • So we have \(p_n=\frac{49}{8}\cdot 7^n\).
    • From the theorem, \[a_n=d_1 6^n+d_2 (-1)^n+ \frac{49}{8}\cdot 7^n\,.\]
    • Substituting \(a_0\) and \(a_1\), we get \[ 1=d_1+d_2+ \frac{49}{8} \text{ and } 6=6d_1-d_2+ \frac{343}{8}\,,\] which solve to \[ d_1=-6 \text{ and } d_2= \frac{7}{8}\,. \]
    • Thus we have a solution: \[a_n= -6\cdot 6^n+ \frac{7}{8}\cdot (-1)^n+ \frac{49}{8}\cdot 7^n = \frac{1}{8}\left(7^{n+2} + 7(-1)^n - 8\cdot 6^{n+1}\right)\,.\]
    • I definitely wouldn't have come up with that without the theorem to help.
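The closed forms from the three worked examples (as reconstructed above) can be verified mechanically against their recurrences; a small check script, using exact arithmetic via Fraction:

```python
from fractions import Fraction

# Example 1: a_n = 3*a_{n-1} + 2n, a_1 = 3.
def ex1(n):
    return Fraction(11, 6) * 3 ** n - n - Fraction(3, 2)

# Example 2: a_n = 2*a_{n-1} + 3^n, a_1 = 5.
def ex2(n):
    return -2 ** (n + 1) + 3 ** (n + 1)

# Example 3: a_n = 5*a_{n-1} + 6*a_{n-2} + 7^n, a_0 = 1, a_1 = 6.
def ex3(n):
    return (7 ** (n + 2) + 7 * (-1) ** n - 8 * 6 ** (n + 1)) // 8

assert ex1(1) == 3 and ex2(1) == 5 and ex3(0) == 1 and ex3(1) == 6
for n in range(2, 12):
    assert ex1(n) == 3 * ex1(n - 1) + 2 * n
    assert ex2(n) == 2 * ex2(n - 1) + 3 ** n
    assert ex3(n) == 5 * ex3(n - 1) + 6 * ex3(n - 2) + 7 ** n
print("all three closed forms check out")
```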

    4. The Master Theorem

    The Master Theorem provides instant asymptotic solutions for many recurrences of the form T(n) = aT(n/b) + f(n), that apply for all values of n (not just powers of b). It is based on applying the analysis of the preceding section to various broad families of functions f, and then extending the results using a monotonicity argument to values of n that are not powers of b. Here we sketch out the proof; see LevitinBook Appendix B for a more detailed argument.

    If f(n) = 0, then the recurrence is simply T(n) = aT(n/b). This has solution T(n) = n^(log_b a) T(1) = Θ(n^(log_b a)). (Example: T(n) = 4T(n/2) has solution Θ(n^(lg 4)) = Θ(n²).) We classify different cases of the Master Theorem based on how f(n) compares to this default solution.

    We assume that T(1) = Θ(1) throughout.

    Suppose that f(x) = x^c. Then a^i f(n/b^i) = a^i n^c / b^(ic) = n^c (a/b^c)^i. The total work is then a geometric series with ratio (a/b^c), and its behavior depends critically on whether (a/b^c) is less than 1, equal to 1, or greater than 1.

    If (a/b^c) is less than 1, then Σ_{i=0}^∞ n^c (a/b^c)^i = n^c /(1-(a/b^c)) = O(n^c). This case arises when log(a/b^c) = log a - c log b is less than zero, which occurs precisely when c > log a / log b = log_b a. So if f(n) = n^c, the f(n) term in the sum dominates both the rest of the sum and the n^(log_b a) term, and we get T(n) = Θ(f(n)). If f(n) is Ω(n^c), but satisfies the additional technical requirement that af(n/b) ≤ (1-δ) f(n) for all n and some fixed δ > 0, then the geometric series argument still works with factor (1-δ), and we still get T(n) = Θ(f(n)). This covers the case where f(n) = Ω(n^(log_b a + ε)).

    If (a/b^c) is equal to 1, then every term in the sum is the same, and the total is f(n) log_b n. In this case c = log_b a, so f(n) = n^(log_b a) and f(n) log_b n dominates (barely) the T(1) term. An extended version of this analysis shows that the solution is T(n) = Θ(f(n) log n) when f(n) = Θ(n^(log_b a)).

    Finally, if (a/b^c) is greater than 1, we have a geometric series whose sum is proportional to its last term, which is Θ(n^(log_b a)), the same order as the T(1) term. This case gives T(n) = Θ(n^(log_b a)) for any f(n) = O(n^(log_b a - ε)).

    These three cases do not cover all possibilities (consider T(n) = 2T(n/2) + n log n), but they will handle most recurrences of this form you are likely to encounter.
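The three regimes of this geometric series can be seen numerically. This sketch sums the per-level work n^c (a/b^c)^i over levels i = 0..log_b n for a few illustrative (a, b, c) triples:

```python
# Per-level work for T(n) = a*T(n/b) + n^c at level i is n^c * (a/b^c)^i.
def level_sum(a, b, c, k):
    n = b ** k
    return sum(n ** c * (a / b ** c) ** i for i in range(k + 1))

k, n = 10, 2 ** 10

# Ratio < 1 (a=1, b=2, c=1): the series converges, total = 2n - 1 = O(f(n)).
assert level_sum(1, 2, 1, k) == 2 * n - 1
# Ratio = 1 (a=2, b=2, c=1): every level contributes n, total = n*(k+1).
assert level_sum(2, 2, 1, k) == n * (k + 1)
# Ratio > 1 (a=4, b=2, c=1): the last level dominates, total = 2n^2 - n.
assert level_sum(4, 2, 1, k) == 2 * n * n - n
print("geometric regimes confirmed")
```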


    Consider the recurrence $A(n) = 4A(\lceil n/7 \rceil) + n^2$ with $A(1) = 1$. The following argument gives an upper bound.

    When $k = \lceil \ln(n)/\ln(7) \rceil$, $n \le 7^k$, so $A(\lceil n/7^k \rceil) = 1$.

    Unrolling the recurrence, $A(n) = 4^k + 4^{k-1}\lceil n/7^{k-1} \rceil^2 + \cdots + 4\lceil n/7 \rceil^2 + n^2$.

    To get an upper bound, since $\lfloor x \rfloor \le x \le \lceil x \rceil < x+1$, $4^k < 4^{1+\ln(n)/\ln(7)} = 4n^{\ln(4)/\ln(7)} < 4n$, so $A(n) < 4n + n^2\left((4/7)^k + \cdots + 4/7 + 1\right) < 4n + n^2/(1-4/7) = 4n + 7n^2/3$.

    In this case, the series multiplying $n^2$ converges.

    Exercise: try to solve this with the 4 and 7 swapped, so that $A(n) = 7A(\lfloor n/4 \rfloor) + n^2$.
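The derived bound can be checked numerically (an illustrative script; the base case A(1) = 1 is assumed as above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(n):
    if n <= 1:
        return 1
    return 4 * A(-(-n // 7)) + n * n   # -(-n // 7) computes ceil(n/7)

# Verify the derived bound A(n) < 4n + 7n^2/3 on a range of inputs.
for n in range(1, 2001):
    assert 3 * A(n) < 12 * n + 7 * n * n   # same bound, cleared of fractions
print(A(2000))
```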


