4.5: Constant Coefficient Homogeneous Systems II - Mathematics


Constant Coefficient Homogeneous Systems II

We saw in Section 4.4 that if an \(n\times n\) constant matrix \(A\) has \(n\) real eigenvalues \(\lambda_1\), \(\lambda_2\), \(\dots\), \(\lambda_n\) (which need not be distinct) with associated linearly independent eigenvectors \({\bf x}_1\), \({\bf x}_2\), \(\dots\), \({\bf x}_n\), then the general solution of \({\bf y}'=A{\bf y}\) is

\begin{eqnarray*}
{\bf y} = c_1{\bf x}_1 e^{\lambda_1 t} + c_2{\bf x}_2 e^{\lambda_2 t} + \cdots + c_n{\bf x}_n e^{\lambda_n t}.
\end{eqnarray*}

In this section we consider the case where \(A\) has \(n\) real eigenvalues, but does not have \(n\) linearly independent eigenvectors. It is shown in linear algebra that this occurs if and only if \(A\) has at least one eigenvalue of multiplicity \(r>1\) such that the associated eigenspace has dimension less than \(r\). In this case \(A\) is said to be \(\textcolor{blue}{\mbox{defective}}\). Since it's beyond the scope of this book to give a complete analysis of systems with defective coefficient matrices, we will restrict our attention to some commonly occurring special cases.

Example \(\PageIndex{1}\)

Show that the system

\begin{equation}\label{eq:4.5.1}
{\bf y}'= \left[ \begin{array}{rr} 11 & -25 \\ 4 & -9 \end{array} \right] {\bf y}
\end{equation}

does not have a fundamental set of solutions of the form \(\{{\bf x}_1e^{\lambda_1t},{\bf x}_2e^{\lambda_2t}\}\), where \(\lambda_1\) and \(\lambda_2\) are eigenvalues of the coefficient matrix \(A\) of \eqref{eq:4.5.1} and \({\bf x}_1\) and \({\bf x}_2\) are associated linearly independent eigenvectors.

Answer

The characteristic polynomial of \(A\) is

\begin{eqnarray*}
\left| \begin{array}{cc} 11-\lambda & -25 \\ 4 & -9-\lambda \end{array} \right|
&=&(\lambda-11)(\lambda+9)+100 \\
&=&\lambda^2-2\lambda+1=(\lambda-1)^2.
\end{eqnarray*}

Hence, \(\lambda=1\) is the only eigenvalue of \(A\). The augmented matrix of the system \((A-I){\bf x}={\bf 0}\) is

\begin{eqnarray*}
\left[ \begin{array}{rrcr} 10 & -25 & \vdots & 0 \\ 4 & -10 & \vdots & 0 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrcr} 1 & -\displaystyle{5\over2} & \vdots & 0 \\ 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Hence, \(x_1=5x_2/2\) where \(x_2\) is arbitrary. Therefore all eigenvectors of \(A\) are scalar multiples of \({\bf x}_1 = \left[ \begin{array}{r} 5 \\ 2 \end{array} \right]\), so \(A\) does not have a set of two linearly independent eigenvectors.
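Readers working alongside a computer can confirm this numerically. The following NumPy check (not part of the text, just a sanity check of the arithmetic above) verifies that the characteristic polynomial is \((\lambda-1)^2\) and that the eigenspace of \(\lambda=1\) is one-dimensional:

```python
import numpy as np

# Numerical cross-check that A is defective: repeated eigenvalue 1,
# one-dimensional eigenspace.
A = np.array([[11.0, -25.0],
              [4.0, -9.0]])

# Characteristic polynomial of a 2x2 matrix: l^2 - tr(A) l + det(A).
tr, det = np.trace(A), np.linalg.det(A)
print(tr, np.round(det, 6))  # 2.0 1.0  ->  l^2 - 2l + 1 = (l - 1)^2

# The eigenspace of lambda = 1 is the null space of A - I;
# rank(A - I) = 1 means that null space has dimension 2 - 1 = 1.
rank = np.linalg.matrix_rank(A - np.eye(2))
print(rank)  # 1

# [5, 2] spans the eigenspace: (A - I) @ [5, 2] = 0.
x = np.array([5.0, 2.0])
print((A - np.eye(2)) @ x)  # [0. 0.]
```
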

From Example \((4.5.1)\), we know that all scalar multiples of \({\bf y}_1 = \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] e^t\) are solutions of \eqref{eq:4.5.1}; however, to find the general solution we must find a second solution \({\bf y}_2\) such that \(\{{\bf y}_1,{\bf y}_2\}\) is linearly independent. Based on your recollection of the procedure for solving a constant coefficient scalar equation

\begin{eqnarray*}
ay'' + by' + cy = 0
\end{eqnarray*}

in the case where the characteristic polynomial has a repeated root, you might expect to obtain a second solution of \eqref{eq:4.5.1} by multiplying the first solution by \(t\). However, this yields \({\bf y}_2 = \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] te^t\), which doesn't work, since

\begin{eqnarray*}
{\bf y}_2' = \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] (t e^t + e^t), \quad \mbox{while} \quad \left[ \begin{array}{rr} 11 & -25 \\ 4 & -9 \end{array} \right] {\bf y}_2 = \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] t e^t.
\end{eqnarray*}

The next theorem shows what to do in this situation.

Theorem \(\PageIndex{1}\)

Suppose the \(n\times n\) matrix \(A\) has an eigenvalue \(\lambda_1\) of multiplicity \(\ge2\) and the associated eigenspace has dimension \(1;\) that is, all \(\lambda_1\)-eigenvectors of \(A\) are scalar multiples of an eigenvector \({\bf x}\). Then there are infinitely many vectors \({\bf u}\) such that

\begin{equation}\label{eq:4.5.2}
(A-\lambda_1I){\bf u}={\bf x}.
\end{equation}

Moreover, if \({\bf u}\) is any such vector then

\begin{equation}\label{eq:4.5.3}
{\bf y}_1={\bf x}e^{\lambda_1t} \quad \mbox{and} \quad {\bf y}_2={\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}
\end{equation}

are linearly independent solutions of \({\bf y}'=A{\bf y}\).

Proof


A complete proof of this theorem is beyond the scope of this book. The difficulty is in proving that there's a vector \({\bf u}\) satisfying \eqref{eq:4.5.2}, since \(\det(A-\lambda_1I)=0\). We'll take this without proof and verify the other assertions of the theorem.

We already know that \({\bf y}_1\) in \eqref{eq:4.5.3} is a solution of \({\bf y}'=A{\bf y}\). To see that \({\bf y}_2\) is also a solution, we compute

\begin{eqnarray*}
{\bf y}_2'-A{\bf y}_2&=&\lambda_1{\bf u}e^{\lambda_1t}+{\bf x}e^{\lambda_1t}
+\lambda_1{\bf x}te^{\lambda_1t}-A{\bf u}e^{\lambda_1t}-A{\bf x}te^{\lambda_1t} \\
&=&(\lambda_1{\bf u}+{\bf x}-A{\bf u})e^{\lambda_1t}+(\lambda_1{\bf x}-A{\bf x})te^{\lambda_1t}.
\end{eqnarray*}

Since \(A{\bf x}=\lambda_1{\bf x}\), this can be written as

\begin{eqnarray*}
{\bf y}_2' - A{\bf y}_2 = - \left( (A -\lambda_1 I) {\bf u} - {\bf x} \right) e^{\lambda_1 t},
\end{eqnarray*}

and now \eqref{eq:4.5.2} implies that \({\bf y}_2'=A{\bf y}_2\).

To see that \({\bf y}_1\) and \({\bf y}_2\) are linearly independent, suppose \(c_1\) and \(c_2\) are constants such that

\begin{equation}\label{eq:4.5.4}
c_1{\bf y}_1+c_2{\bf y}_2=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t} +{\bf x}te^{\lambda_1t})={\bf 0}.
\end{equation}

We must show that \(c_1=c_2=0\). Multiplying \eqref{eq:4.5.4} by \(e^{-\lambda_1t}\) shows that

\begin{equation}\label{eq:4.5.5}
c_1{\bf x}+c_2({\bf u} +{\bf x}t)={\bf 0}.
\end{equation}

By differentiating this with respect to \(t\), we see that \(c_2{\bf x}={\bf 0}\), which implies \(c_2=0\), because \({\bf x}\ne{\bf 0}\). Substituting \(c_2=0\) into \eqref{eq:4.5.5} yields \(c_1{\bf x}={\bf 0}\), which implies that \(c_1=0\), again because \({\bf x}\ne{\bf 0}\).

Example \(\PageIndex{2}\)

Use Theorem \((4.5.1)\) to find the general solution of the system

\begin{equation}\label{eq:4.5.6}
{\bf y}' = \left[ \begin{array}{rr} 11 & -25 \\ 4 & -9 \end{array} \right] {\bf y}
\end{equation}

considered in Example \((4.5.1)\).

Answer

In Example \((4.5.1)\) we saw that \(\lambda_1=1\) is an eigenvalue of multiplicity \(2\) of the coefficient matrix \(A\) in \eqref{eq:4.5.6}, and that all of the eigenvectors of \(A\) are multiples of

\begin{eqnarray*}
{\bf x} = \left[ \begin{array}{r} 5 \\ 2 \end{array} \right].
\end{eqnarray*}

Therefore

\begin{eqnarray*}
{\bf y}_1 = \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] e^t
\end{eqnarray*}

is a solution of \eqref{eq:4.5.6}. From Theorem \((4.5.1)\), a second solution is given by \({\bf y}_2={\bf u}e^t+{\bf x}te^t\), where \((A-I){\bf u}={\bf x}\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrcr} 10 & -25 & \vdots & 5 \\ 4 & -10 & \vdots & 2 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrcr} 1 & -{5\over2} & \vdots & {1\over2} \\ 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Therefore the components of \({\bf u}\) must satisfy

\begin{eqnarray*}
u_1 - {5\over2} u_2 = {1\over2},
\end{eqnarray*}

where \(u_2\) is arbitrary. We choose \(u_2=0\), so that \(u_1=1/2\) and

\begin{eqnarray*}
{\bf u} = \left[ \begin{array}{c} {1\over2} \\ 0 \end{array} \right].
\end{eqnarray*}

Thus,

\begin{eqnarray*}
{\bf y}_2 = \left[ \begin{array}{r} 1 \\ 0 \end{array} \right] {e^t\over2} + \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] t e^t.
\end{eqnarray*}

Since \({\bf y}_1\) and \({\bf y}_2\) are linearly independent by Theorem \((4.5.1)\), they form a fundamental set of solutions of \eqref{eq:4.5.6}. Therefore the general solution of \eqref{eq:4.5.6} is

\begin{eqnarray*}
{\bf y} = c_1 \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] e^t + c_2 \left( \left[ \begin{array}{r} 1 \\ 0 \end{array} \right] {e^t\over2} + \left[ \begin{array}{r} 5 \\ 2 \end{array} \right] t e^t \right).
\end{eqnarray*}

Note that choosing the arbitrary constant \(u_2\) to be nonzero is equivalent to adding a scalar multiple of \({\bf y}_1\) to the second solution \({\bf y}_2\) (Exercise \((4.5E.33)\)).
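As a quick numerical check of this example (an aside in Python, not part of the text), one can verify directly that \({\bf y}_2={\bf u}e^t+{\bf x}te^t\) satisfies \({\bf y}'=A{\bf y}\):

```python
import numpy as np

# Spot-check that y2(t) = u e^t + x t e^t solves y' = A y for this example.
A = np.array([[11.0, -25.0],
              [4.0, -9.0]])
x = np.array([5.0, 2.0])
u = np.array([0.5, 0.0])      # solves (A - I) u = x

assert np.allclose((A - np.eye(2)) @ u, x)

def y2(t):
    return u * np.exp(t) + x * t * np.exp(t)

def y2_prime(t):
    # d/dt [u e^t + x t e^t] = (u + x) e^t + x t e^t
    return (u + x) * np.exp(t) + x * t * np.exp(t)

for t in (-1.0, 0.0, 2.0):
    assert np.allclose(y2_prime(t), A @ y2(t))
print("y2 satisfies y' = A y")
```
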

Example \(\PageIndex{3}\)

Find the general solution of

\begin{equation}\label{eq:4.5.7}
{\bf y}' = \left[ \begin{array}{rrr} 3 & 4 & -10 \\ 2 & 1 & -2 \\ 2 & 2 & -5 \end{array} \right] {\bf y}.
\end{equation}

Answer

The characteristic polynomial of the coefficient matrix \(A\) in \eqref{eq:4.5.7} is

\begin{eqnarray*}
\left| \begin{array}{ccc} 3-\lambda & 4 & -10 \\ 2 & 1-\lambda & -2 \\ 2 & 2 & -5-\lambda \end{array} \right| = -(\lambda-1)(\lambda+1)^2.
\end{eqnarray*}

Hence, the eigenvalues are \(\lambda_1=1\) with multiplicity \(1\) and \(\lambda_2=-1\) with multiplicity \(2\).

Eigenvectors associated with \(\lambda_1=1\) must satisfy \((A-I){\bf x}={\bf 0}\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 2 & 4 & -10 & \vdots & 0 \\ 2 & 0 & -2 & \vdots & 0 \\ 2 & 2 & -6 & \vdots & 0 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 0 & -1 & \vdots & 0 \\ 0 & 1 & -2 & \vdots & 0 \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Hence, \(x_1 =x_3\) and \(x_2 =2 x_3\), where \(x_3\) is arbitrary. Choosing \(x_3=1\) yields the eigenvector

\begin{eqnarray*}
{\bf x}_1 = \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right].
\end{eqnarray*}

Therefore

\begin{eqnarray*}
{\bf y}_1 = \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] e^t
\end{eqnarray*}

is a solution of \eqref{eq:4.5.7}.

Eigenvectors associated with \(\lambda_2 =-1\) satisfy \((A+I){\bf x}={\bf 0}\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 4 & 4 & -10 & \vdots & 0 \\ 2 & 2 & -2 & \vdots & 0 \\ 2 & 2 & -4 & \vdots & 0 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 1 & 0 & \vdots & 0 \\ 0 & 0 & 1 & \vdots & 0 \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Hence, \(x_3=0\) and \(x_1 =-x_2\), where \(x_2\) is arbitrary. Choosing \(x_2=1\) yields the eigenvector

\begin{eqnarray*}
{\bf x}_2 = \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right],
\end{eqnarray*}

so

\begin{eqnarray*}
{\bf y}_2 = \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] e^{-t}
\end{eqnarray*}

is a solution of \eqref{eq:4.5.7}.

Since all the eigenvectors of \(A\) associated with \(\lambda_2=-1\) are multiples of \({\bf x}_2\), we must now use Theorem \((4.5.1)\) to find a third solution of \eqref{eq:4.5.7} in the form

\begin{equation}\label{eq:4.5.8}
{\bf y}_3={\bf u}e^{-t} + \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] te^{-t},
\end{equation}

where \({\bf u}\) is a solution of \((A+I){\bf u}={\bf x}_2\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 4 & 4 & -10 & \vdots & -1 \\ 2 & 2 & -2 & \vdots & 1 \\ 2 & 2 & -4 & \vdots & 0 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 1 & 0 & \vdots & 1 \\ 0 & 0 & 1 & \vdots & {1\over2} \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Hence, \(u_3=1/2\) and \(u_1 =1-u_2\), where \(u_2\) is arbitrary. Choosing \(u_2=0\) yields

\begin{eqnarray*}
{\bf u} = \left[ \begin{array}{c} 1 \\ 0 \\ {1\over2} \end{array} \right],
\end{eqnarray*}

and substituting this into \eqref{eq:4.5.8} yields the solution

\begin{eqnarray*}
{\bf y}_3 = \left[ \begin{array}{r} 2 \\ 0 \\ 1 \end{array} \right] {e^{-t}\over2} + \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] t e^{-t}
\end{eqnarray*}

of \eqref{eq:4.5.7}.

Since the Wronskian of \(\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}\) at \(t=0\) is

\begin{eqnarray*}
\left| \begin{array}{rrc} 1 & -1 & 1 \\ 2 & 1 & 0 \\ 1 & 0 & {1\over2} \end{array} \right| = {1\over2},
\end{eqnarray*}

\(\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}\) is a fundamental set of solutions of \eqref{eq:4.5.7}. Therefore the general solution of \eqref{eq:4.5.7} is

\begin{eqnarray*}
{\bf y} = c_1 \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] e^t + c_2 \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] e^{-t} + c_3 \left( \left[ \begin{array}{r} 2 \\ 0 \\ 1 \end{array} \right] {e^{-t}\over2} + \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] t e^{-t} \right).
\end{eqnarray*}
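The computations in this example can be double-checked numerically (a NumPy aside, not part of the text): the eigenvector relations, the equation \((A+I){\bf u}={\bf x}_2\), and the value \(1/2\) of the Wronskian at \(t=0\):

```python
import numpy as np

# Cross-check of Example 3's chain and Wronskian.
A = np.array([[3.0, 4.0, -10.0],
              [2.0, 1.0, -2.0],
              [2.0, 2.0, -5.0]])
x1 = np.array([1.0, 2.0, 1.0])    # eigenvector for lambda = 1
x2 = np.array([-1.0, 1.0, 0.0])   # eigenvector for lambda = -1
u = np.array([1.0, 0.0, 0.5])     # solves (A + I) u = x2

assert np.allclose(A @ x1, x1)
assert np.allclose(A @ x2, -x2)
assert np.allclose((A + np.eye(3)) @ u, x2)

# Wronskian at t = 0: columns y1(0) = x1, y2(0) = x2, y3(0) = u.
W0 = np.column_stack([x1, x2, u])
print(round(float(np.linalg.det(W0)), 6))  # 0.5
```
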

Theorem \(\PageIndex{2}\)

Suppose the \(n\times n\) matrix \(A\) has an eigenvalue \(\lambda_1\) of multiplicity \(\ge 3\) and the associated eigenspace is one-dimensional; that is, all eigenvectors associated with \(\lambda_1\) are scalar multiples of the eigenvector \({\bf x}.\) Then there are infinitely many vectors \({\bf u}\) such that

\begin{equation}\label{eq:4.5.9}
(A-\lambda_1I){\bf u}={\bf x},
\end{equation}

and, if \({\bf u}\) is any such vector, there are infinitely many vectors \({\bf v}\) such that

\begin{equation}\label{eq:4.5.10}
(A-\lambda_1I){\bf v}={\bf u}.
\end{equation}

If \({\bf u}\) satisfies \eqref{eq:4.5.9} and \({\bf v}\) satisfies \eqref{eq:4.5.10}, then

\begin{eqnarray*}
{\bf y}_1 &=& {\bf x}e^{\lambda_1t}, \\
{\bf y}_2 &=& {\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}, \mbox{ and } \\
{\bf y}_3 &=& {\bf v}e^{\lambda_1t}+{\bf u}te^{\lambda_1t}+{\bf x}{t^2e^{\lambda_1t}\over2}
\end{eqnarray*}

are linearly independent solutions of \({\bf y}'=A{\bf y}\).

Proof


Again, it's beyond the scope of this book to prove that there are vectors \({\bf u}\) and \({\bf v}\) that satisfy \eqref{eq:4.5.9} and \eqref{eq:4.5.10}. Theorem \((4.5.1)\) implies that \({\bf y}_1\) and \({\bf y}_2\) are solutions of \({\bf y}'=A{\bf y}\). We leave the rest of the proof to you (Exercise \((4.5E.34)\)).

Example \(\PageIndex{4}\)

Use Theorem \((4.5.2)\) to find the general solution of

\begin{equation}\label{eq:4.5.11}
{\bf y}' = \left[ \begin{array}{rrr} 1 & 1 & 1 \\ 1 & 3 & -1 \\ 0 & 2 & 2 \end{array} \right] {\bf y}.
\end{equation}

Answer

The characteristic polynomial of the coefficient matrix \(A\) in \eqref{eq:4.5.11} is

\begin{eqnarray*}
\left| \begin{array}{ccc} 1-\lambda & 1 & \phantom{-}1 \\ 1 & 3-\lambda & -1 \\ 0 & 2 & 2-\lambda \end{array} \right| = -(\lambda-2)^3.
\end{eqnarray*}

Hence, \(\lambda_1=2\) is an eigenvalue of multiplicity \(3\). The associated eigenvectors satisfy \((A-2I){\bf x}={\bf 0}\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} -1 & 1 & 1 & \vdots & 0 \\ 1 & 1 & -1 & \vdots & 0 \\ 0 & 2 & 0 & \vdots & 0 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 0 & -1 & \vdots & 0 \\ 0 & 1 & 0 & \vdots & 0 \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Hence, \(x_1 =x_3\) and \(x_2 = 0\), so the eigenvectors are all scalar multiples of

\begin{eqnarray*}
{\bf x}_1 = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right].
\end{eqnarray*}

Therefore

\begin{eqnarray*}
{\bf y}_1 = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] e^{2t}
\end{eqnarray*}

is a solution of \eqref{eq:4.5.11}.

We now find a second solution of \eqref{eq:4.5.11} in the form

\begin{eqnarray*}
{\bf y}_2 = {\bf u} e^{2t} + \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] t e^{2t},
\end{eqnarray*}

where \({\bf u}\) satisfies \((A-2I){\bf u}={\bf x}_1\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} -1 & 1 & 1 & \vdots & 1 \\ 1 & 1 & -1 & \vdots & 0 \\ 0 & 2 & 0 & \vdots & 1 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 0 & -1 & \vdots & -{1\over2} \\ 0 & 1 & 0 & \vdots & {1\over2} \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Letting \(u_3=0\) yields \(u_1=-1/2\) and \(u_2=1/2\); hence,

\begin{eqnarray*}
{\bf u} = {1\over2} \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right]
\end{eqnarray*}

and

\begin{eqnarray*}
{\bf y}_2 = \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] {e^{2t}\over2} + \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] t e^{2t}
\end{eqnarray*}

is a solution of \eqref{eq:4.5.11}.

We now find a third solution of \eqref{eq:4.5.11} in the form

\begin{eqnarray*}
{\bf y}_3 = {\bf v} e^{2t} + \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] {t e^{2t}\over2} + \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] {t^2 e^{2t}\over2},
\end{eqnarray*}

where \({\bf v}\) satisfies \((A-2I){\bf v}={\bf u}\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} -1 & 1 & 1 & \vdots & -{1\over2} \\ 1 & 1 & -1 & \vdots & {1\over2} \\ 0 & 2 & 0 & \vdots & 0 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 0 & -1 & \vdots & {1\over2} \\ 0 & 1 & 0 & \vdots & 0 \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Letting \(v_3=0\) yields \(v_1=1/2\) and \(v_2=0\); hence,

\begin{eqnarray*}
{\bf v} = {1\over2} \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right].
\end{eqnarray*}

Therefore

\begin{eqnarray*}
{\bf y}_3 = \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right] {e^{2t}\over2} + \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] {t e^{2t}\over2} + \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] {t^2 e^{2t}\over2}
\end{eqnarray*}

is a solution of \eqref{eq:4.5.11}. Since \({\bf y}_1\), \({\bf y}_2\), and \({\bf y}_3\) are linearly independent by Theorem \((4.5.2)\), they form a fundamental set of solutions of \eqref{eq:4.5.11}. Therefore the general solution of \eqref{eq:4.5.11} is

\begin{eqnarray*}
{\bf y} &=& c_1 \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] e^{2t} + c_2 \left( \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] {e^{2t}\over2} + \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] te^{2t} \right) \\
&& + c_3 \left( \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right] {e^{2t}\over2} + \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] {te^{2t}\over2} + \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] {t^2e^{2t}\over2} \right).
\end{eqnarray*}
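The length-three chain in this example can be confirmed with a few lines of NumPy (an aside, not part of the text): \((A-2I){\bf x}={\bf 0}\), \((A-2I){\bf u}={\bf x}\), and \((A-2I){\bf v}={\bf u}\):

```python
import numpy as np

# Verify the generalized-eigenvector chain of Example 4.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 3.0, -1.0],
              [0.0, 2.0, 2.0]])
B = A - 2.0 * np.eye(3)

x = np.array([1.0, 0.0, 1.0])     # eigenvector
u = np.array([-0.5, 0.5, 0.0])    # (A - 2I) u = x
v = np.array([0.5, 0.0, 0.0])     # (A - 2I) v = u

assert np.allclose(B @ x, np.zeros(3))
assert np.allclose(B @ u, x)
assert np.allclose(B @ v, u)
print("chain verified")
```
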

Theorem \(\PageIndex{3}\)

Suppose the \(n\times n\) matrix \(A\) has an eigenvalue \(\lambda_1\) of multiplicity \(\ge 3\) and the associated eigenspace is two-dimensional; that is, all eigenvectors of \(A\) associated with \(\lambda_1\) are linear combinations of two linearly independent eigenvectors \({\bf x}_1\) and \({\bf x}_2\). Then there are constants \(\alpha\) and \(\beta\) (not both zero) such that if

\begin{equation}\label{eq:4.5.12}
{\bf x}_3=\alpha{\bf x}_1+\beta{\bf x}_2,
\end{equation}

then there are infinitely many vectors \({\bf u}\) such that

\begin{equation}\label{eq:4.5.13}
(A-\lambda_1I){\bf u}={\bf x}_3.
\end{equation}

If \({\bf u}\) satisfies \eqref{eq:4.5.13}, then

\begin{eqnarray}
{\bf y}_1&=&{\bf x}_1 e^{\lambda_1t}, \nonumber \\
{\bf y}_2&=&{\bf x}_2e^{\lambda_1t}, \mbox{ and } \nonumber \\
{\bf y}_3&=&{\bf u}e^{\lambda_1t}+{\bf x}_3te^{\lambda_1t} \label{eq:4.5.14}
\end{eqnarray}

are linearly independent solutions of \({\bf y}'=A{\bf y}.\)

Proof

We omit the proof of this theorem.

Example \(\PageIndex{5}\)

Use Theorem \((4.5.3)\) to find the general solution of

\begin{equation}\label{eq:4.5.15}
{\bf y}' = \left[ \begin{array}{rrr} 0 & 0 & 1 \\ -1 & 1 & 1 \\ -1 & 0 & 2 \end{array} \right] {\bf y}.
\end{equation}

Answer

The characteristic polynomial of the coefficient matrix \(A\) in \eqref{eq:4.5.15} is

\begin{eqnarray*}
\left| \begin{array}{ccc} -\lambda & 0 & 1 \\ -1 & 1-\lambda & 1 \\ -1 & 0 & 2-\lambda \end{array} \right| = -(\lambda-1)^3.
\end{eqnarray*}

Hence, \(\lambda_1=1\) is an eigenvalue of multiplicity \(3\). The associated eigenvectors satisfy \((A-I){\bf x}={\bf 0}\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} -1 & 0 & 1 & \vdots & 0 \\ -1 & 0 & 1 & \vdots & 0 \\ -1 & 0 & 1 & \vdots & 0 \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 0 & -1 & \vdots & 0 \\ 0 & 0 & 0 & \vdots & 0 \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

Hence, \(x_1 =x_3\) and \(x_2\) is arbitrary, so the eigenvectors are of the form

\begin{eqnarray*}
{\bf x} = \left[ \begin{array}{c} x_3 \\ x_2 \\ x_3 \end{array} \right] = x_3 \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] + x_2 \left[ \begin{array}{r} 0 \\ 1 \\ 0 \end{array} \right].
\end{eqnarray*}

Therefore the vectors

\begin{equation}\label{eq:4.5.16}
{\bf x}_1 = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] \quad \mbox{and} \quad {\bf x}_2 = \left[ \begin{array}{r} 0 \\ 1 \\ 0 \end{array} \right]
\end{equation}

form a basis for the eigenspace, and

\begin{eqnarray*}
{\bf y}_1 = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] e^t \quad \mbox{and} \quad {\bf y}_2 = \left[ \begin{array}{r} 0 \\ 1 \\ 0 \end{array} \right] e^t
\end{eqnarray*}

are linearly independent solutions of \eqref{eq:4.5.15}.

To find a third linearly independent solution of \eqref{eq:4.5.15}, we must find constants \(\alpha\) and \(\beta\) (not both zero) such that the system

\begin{equation}\label{eq:4.5.17}
(A-I){\bf u}=\alpha{\bf x}_1+\beta{\bf x}_2
\end{equation}

has a solution \({\bf u}\). The augmented matrix of this system is

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} -1 & 0 & 1 & \vdots & \alpha \\ -1 & 0 & 1 & \vdots & \beta \\ -1 & 0 & 1 & \vdots & \alpha \end{array} \right],
\end{eqnarray*}

which is row equivalent to

\begin{equation}\label{eq:4.5.18}
\left[ \begin{array}{rrrcr} 1 & 0 & -1 & \vdots & -\alpha \\ 0 & 0 & 0 & \vdots & \beta-\alpha \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{equation}

Therefore \eqref{eq:4.5.17} has a solution if and only if \(\beta=\alpha\), where \(\alpha\) is arbitrary. If \(\alpha=\beta=1\) then \eqref{eq:4.5.12} and \eqref{eq:4.5.16} yield

\begin{eqnarray*}
{\bf x}_3 = {\bf x}_1 + {\bf x}_2 = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] + \left[ \begin{array}{r} 0 \\ 1 \\ 0 \end{array} \right] = \left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right],
\end{eqnarray*}

and the augmented matrix \eqref{eq:4.5.18} becomes

\begin{eqnarray*}
\left[ \begin{array}{rrrcr} 1 & 0 & -1 & \vdots & -1 \\ 0 & 0 & 0 & \vdots & 0 \\ 0 & 0 & 0 & \vdots & 0 \end{array} \right].
\end{eqnarray*}

This implies that \(u_1=-1+u_3\), while \(u_2\) and \(u_3\) are arbitrary. Choosing \(u_2=u_3=0\) yields

\begin{eqnarray*}
{\bf u} = \left[ \begin{array}{r} -1 \\ 0 \\ 0 \end{array} \right].
\end{eqnarray*}

Therefore \eqref{eq:4.5.14} implies that

\begin{eqnarray*}
{\bf y}_3 = {\bf u} e^t + {\bf x}_3 t e^t = \left[ \begin{array}{r} -1 \\ 0 \\ 0 \end{array} \right] e^t + \left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right] t e^t
\end{eqnarray*}

is a solution of \eqref{eq:4.5.15}. Since \({\bf y}_1\), \({\bf y}_2\), and \({\bf y}_3\) are linearly independent by Theorem \((4.5.3)\), they form a fundamental set of solutions for \eqref{eq:4.5.15}. Therefore the general solution of \eqref{eq:4.5.15} is

\begin{eqnarray*}
{\bf y} = c_1 \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] e^t + c_2 \left[ \begin{array}{r} 0 \\ 1 \\ 0 \end{array} \right] e^t + c_3 \left( \left[ \begin{array}{r} -1 \\ 0 \\ 0 \end{array} \right] e^t + \left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right] t e^t \right).
\end{eqnarray*}
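The solvability condition \(\beta=\alpha\) can also be seen computationally (a NumPy aside, not part of the text): \((A-I){\bf u}=\alpha{\bf x}_1+\beta{\bf x}_2\) is consistent exactly when the right side lies in the column space of \(A-I\):

```python
import numpy as np

# Check the consistency condition of Example 5.
A = np.array([[0.0, 0.0, 1.0],
              [-1.0, 1.0, 1.0],
              [-1.0, 0.0, 2.0]])
B = A - np.eye(3)

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 0.0])
x3 = x1 + x2                     # alpha = beta = 1
u = np.array([-1.0, 0.0, 0.0])

assert np.allclose(B @ x1, 0) and np.allclose(B @ x2, 0)
assert np.allclose(B @ u, x3)    # (A - I) u = x3 is solvable

# With alpha = 1, beta = 0 the system is inconsistent: appending the
# right side x1 raises the rank of the augmented matrix.
aug = np.column_stack([B, x1])
assert np.linalg.matrix_rank(aug) > np.linalg.matrix_rank(B)
print("beta = alpha is required")
```
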

Geometric Properties of Solutions when \(n=2\)

We'll now consider the geometric properties of solutions of a \(2\times2\) constant coefficient system

\begin{equation}\label{eq:4.5.19}
\left[ \begin{array}{c} y_1' \\ y_2' \end{array} \right] = \left[ \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right] \left[ \begin{array}{c} y_1 \\ y_2 \end{array} \right]
\end{equation}

under the assumptions of this section; that is, when the matrix

\begin{eqnarray*}
A = \left[ \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right]
\end{eqnarray*}

has a repeated eigenvalue \(\lambda_1\) and the associated eigenspace is one-dimensional. In this case we know from Theorem \((4.5.1)\) that the general solution of \eqref{eq:4.5.19} is

\begin{equation}\label{eq:4.5.20}
{\bf y}=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}),
\end{equation}

where \({\bf x}\) is an eigenvector of \(A\) and \({\bf u}\) is any one of the infinitely many solutions of

\begin{equation}\label{eq:4.5.21}
(A-\lambda_1I){\bf u}={\bf x}.
\end{equation}

We assume that \(\lambda_1\ne0\).

Figure \(4.5.1\): Positive and negative half-planes

Let \(L\) denote the line through the origin parallel to \({\bf x}\). By a \(\textcolor{blue}{\mbox{half-line}}\) of \(L\) we mean either of the rays obtained by removing the origin from \(L\). Equation \eqref{eq:4.5.20} is a parametric equation of the half-line of \(L\) in the direction of \({\bf x}\) if \(c_1>0\), or of the half-line of \(L\) in the direction of \(-{\bf x}\) if \(c_1<0\). The origin is the trajectory of the trivial solution \({\bf y}\equiv{\bf 0}\).

Henceforth, we assume that \(c_2\ne0\). In this case, the trajectory of \eqref{eq:4.5.20} can't intersect \(L\), since every point of \(L\) is on a trajectory obtained by setting \(c_2=0\). Therefore the trajectory of \eqref{eq:4.5.20} must lie entirely in one of the open half-planes bounded by \(L\), but does not contain any point on \(L\). Since the initial point \((y_1(0),y_2(0))\) defined by \({\bf y}(0)=c_1{\bf x}_1+c_2{\bf u}\) is on the trajectory, we can determine which half-plane contains the trajectory from the sign of \(c_2\), as shown in Figure \(4.5.1\). For convenience we'll call the half-plane where \(c_2>0\) the \(\textcolor{blue}{\mbox{positive half-plane}}\). Similarly, the half-plane where \(c_2<0\) is the \(\textcolor{blue}{\mbox{negative half-plane}}\). You should convince yourself (Exercise \((4.5E.35)\)) that even though there are infinitely many vectors \({\bf u}\) that satisfy \eqref{eq:4.5.21}, they all define the same positive and negative half-planes. In the figures simply regard \({\bf u}\) as an arrow pointing to the positive half-plane, since we haven't attempted to give \({\bf u}\) its proper length or direction in comparison with \({\bf x}\). For our purposes here, only the relative orientation of \({\bf x}\) and \({\bf u}\) is important; that is, whether the positive half-plane is to the right of an observer facing the direction of \({\bf x}\) (as in Figures \(4.5.2\) and \(4.5.5\)), or to the left of the observer (as in Figures \(4.5.3\) and \(4.5.4\)).

Multiplying \eqref{eq:4.5.20} by \(e^{-\lambda_1t}\) yields

\begin{eqnarray*}
e^{-\lambda_1 t} {\bf y}(t) = c_1{\bf x} + c_2{\bf u} + c_2 t {\bf x}.
\end{eqnarray*}

Since the last term on the right is dominant when \(|t|\) is large, this provides the following information on the direction of \({\bf y}(t)\):

(a) Along trajectories in the positive half-plane (\(c_2>0\)), the direction of \({\bf y}(t)\) approaches the direction of \({\bf x}\) as \(t\to\infty\) and the direction of \(-{\bf x}\) as \(t\to-\infty\).

(b) Along trajectories in the negative half-plane (\(c_2<0\)), the direction of \({\bf y}(t)\) approaches the direction of \(-{\bf x}\) as \(t\to\infty\) and the direction of \({\bf x}\) as \(t\to-\infty\).

Since

\begin{eqnarray*}
\lim_{t\to\infty} \|{\bf y}(t)\| = \infty \quad \mbox{and} \quad \lim_{t\to-\infty}{\bf y}(t) = {\bf 0} \quad \mbox{if} \quad \lambda_1>0,
\end{eqnarray*}

or

\begin{eqnarray*}
\lim_{t\to-\infty} \|{\bf y}(t)\| = \infty \quad \mbox{and} \quad \lim_{t\to\infty}{\bf y}(t) = {\bf 0} \quad \mbox{if} \quad \lambda_1<0,
\end{eqnarray*}

there are four possible patterns for the trajectories of \eqref{eq:4.5.19}, depending upon the signs of \(c_2\) and \(\lambda_1\). Figures \(4.5.2\) to \(4.5.5\) illustrate these patterns, and reveal the following principle:

If \(\lambda_1\) and \(c_2\) have the same sign, then the direction of the trajectory approaches the direction of \(-{\bf x}\) as \(\|{\bf y}\|\to0\) and the direction of \({\bf x}\) as \(\|{\bf y}\|\to\infty\). If \(\lambda_1\) and \(c_2\) have opposite signs, then the direction of the trajectory approaches the direction of \({\bf x}\) as \(\|{\bf y}\|\to0\) and the direction of \(-{\bf x}\) as \(\|{\bf y}\|\to\infty\).
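This dominance argument is easy to see numerically. The sketch below (an aside, reusing the data of Example 2: \(\lambda_1=1\), \({\bf x}=[5,2]^T\), \({\bf u}=[1/2,0]^T\), with \(c_1=c_2=1\)) shows the unit direction of \({\bf y}(t)\) tending to \({\bf x}/\|{\bf x}\|\) as \(t\to\infty\) and to \(-{\bf x}/\|{\bf x}\|\) as \(t\to-\infty\):

```python
import numpy as np

# Direction of y(t) for large |t|: the c2 t x e^{lambda t} term dominates.
x = np.array([5.0, 2.0])
u = np.array([0.5, 0.0])
lam, c1, c2 = 1.0, 1.0, 1.0      # c2 > 0: positive half-plane

def direction(t):
    y = c1 * x * np.exp(lam * t) + c2 * (u + x * t) * np.exp(lam * t)
    return y / np.linalg.norm(y)

xhat = x / np.linalg.norm(x)
print(direction(50.0))   # close to  x / ||x||
print(direction(-50.0))  # close to -x / ||x||
```
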

Figure \(4.5.2\): Positive eigenvalue; motion away from the origin

Figure \(4.5.3\): Positive eigenvalue; motion away from the origin

Figure \(4.5.4\): Negative eigenvalue; motion toward the origin

Figure \(4.5.5\): Negative eigenvalue; motion toward the origin


4.5: Constant Coefficient Homogeneous Systems II - Mathematics

In this section we will be looking at the last case for the constant coefficient, linear, homogeneous second order differential equations. In this case we want solutions to

\[ay'' + by' + cy = 0\]

where the solutions to the characteristic equation

\[ar^2 + br + c = 0\]

are double roots, \(r_1 = r_2 = r\).

This leads to a problem however. Recall that the solutions are

\[y_1(t) = e^{r_1 t} = e^{rt} \qquad y_2(t) = e^{r_2 t} = e^{rt}\]

These are the same solution and will NOT be “nice enough” to form a general solution. We do promise that we’ll define “nice enough” eventually! So, we can use the first solution, but we’re going to need a second solution.

Before finding this second solution let’s take a little side trip. The reason for the side trip will be clear eventually. From the quadratic formula we know that the roots to the characteristic equation are,

\[r_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\]

In this case, since we have double roots we must have

\[b^2 - 4ac = 0\]

This is the only way that we can get double roots and in this case the roots will be

\[r_{1,2} = -\frac{b}{2a}\]

So, the one solution that we’ve got is

\[y_1(t) = e^{-\frac{b}{2a}t}\]

To find a second solution we will use the fact that a constant times a solution to a linear homogeneous differential equation is also a solution. If this is true then maybe we’ll get lucky and the following will also be a solution

\[y_2(t) = v(t)\,e^{-\frac{b}{2a}t}\]

with a proper choice of \(v(t)\). To determine if this in fact can be done, let’s plug this back into the differential equation and see what we get. We’ll first need a couple of derivatives.

\[y_2' = v'e^{-\frac{b}{2a}t} - \frac{b}{2a}v\,e^{-\frac{b}{2a}t}\]

\[y_2'' = v''e^{-\frac{b}{2a}t} - \frac{b}{a}v'e^{-\frac{b}{2a}t} + \frac{b^2}{4a^2}v\,e^{-\frac{b}{2a}t}\]

We dropped the \(\left( t \right)\) part on the \(v\) to simplify things a little for the writing out of the derivatives. Now, plug these into the differential equation.

\[a\left( v'' - \frac{b}{a}v' + \frac{b^2}{4a^2}v \right)e^{-\frac{b}{2a}t} + b\left( v' - \frac{b}{2a}v \right)e^{-\frac{b}{2a}t} + c\,v\,e^{-\frac{b}{2a}t} = 0\]

We can factor an exponential out of all the terms so let’s do that. We’ll also collect all the coefficients of \(v\) and its derivatives.

\[e^{-\frac{b}{2a}t}\left( av'' - \frac{1}{4a}\left( b^2 - 4ac \right)v \right) = 0\]

Now, because we are working with a double root we know that the second term will be zero. Also exponentials are never zero. Therefore, \(y_2\) will be a solution to the differential equation provided \(v(t)\) is a function that satisfies the following differential equation.

\[av'' = 0 \qquad \mbox{or} \qquad v'' = 0\]

We can drop the \(a\) because we know that it can’t be zero. If it were we wouldn’t have a second order differential equation! So, we can now determine the most general possible form that is allowable for \(v(t)\).

\[v(t) = c + kt\]

This is actually more complicated than we need and in fact we can drop both of the constants from this. To see why this is let’s go ahead and use this to get the second solution. The two solutions are then

\[y_1(t) = e^{-\frac{b}{2a}t} \qquad y_2(t) = \left( c + kt \right)e^{-\frac{b}{2a}t}\]

Eventually you will be able to show that these two solutions are “nice enough” to form a general solution. The general solution would then be the following.

\[y(t) = c_1 e^{-\frac{b}{2a}t} + c_2\left( c + kt \right)e^{-\frac{b}{2a}t} = \left( c_1 + c_2 c \right)e^{-\frac{b}{2a}t} + c_2 k\,t\,e^{-\frac{b}{2a}t}\]

Notice that we rearranged things a little. Now, \(c\), \(k\), \(c_1\), and \(c_2\) are all unknown constants so any combination of them will also be unknown constants. In particular, \(c_1+c_2 c\) and \(c_2 k\) are unknown constants so we’ll just rewrite them as follows.

\[y(t) = c_1 e^{-\frac{b}{2a}t} + c_2 t\,e^{-\frac{b}{2a}t}\]

So, if we go back to the most general form for \(v(t)\) we can take \(c=0\) and \(k=1\) and we will arrive at the same general solution.

Let’s recap. If the roots of the characteristic equation are \(r_1 = r_2 = r\), then the general solution is then

\[y(t) = c_1 e^{rt} + c_2 t\,e^{rt}\]
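The recap formula is easy to sanity-check numerically. The example equation below is chosen for this check (it is not one of the text's examples): \(y''-4y'+4y=0\) has the double root \(r=2\), so \(y=c_1e^{2t}+c_2te^{2t}\) should satisfy it for any constants:

```python
import numpy as np

# Verify that y = c1 e^{rt} + c2 t e^{rt} solves y'' - 4y' + 4y = 0 (r = 2).
c1, c2, r = 3.0, -5.0, 2.0

def y(t):
    return c1 * np.exp(r * t) + c2 * t * np.exp(r * t)

def yp(t):
    # y' = (r c1 + c2) e^{rt} + r c2 t e^{rt}
    return (r * c1 + c2) * np.exp(r * t) + r * c2 * t * np.exp(r * t)

def ypp(t):
    # y'' = (r^2 c1 + 2 r c2) e^{rt} + r^2 c2 t e^{rt}
    return (r * r * c1 + 2 * r * c2) * np.exp(r * t) + r * r * c2 * t * np.exp(r * t)

for t in (-1.0, 0.0, 1.5):
    assert abs(ypp(t) - 4 * yp(t) + 4 * y(t)) < 1e-9
print("double-root solution verified")
```
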

Now, let’s work a couple of examples.

The characteristic equation and its roots are.

The general solution and its derivative are

Don’t forget to product rule the second term! Plugging in the initial conditions gives the following system.

\[\begin{aligned} 12 &= y\left( 0 \right) = c_1 \\ -3 &= y'\left( 0 \right) = 2c_1 + c_2 \end{aligned}\]

This system is easily solved to get \(c_1 = 12\) and \(c_2 = -27\). The actual solution to the IVP is then.

The characteristic equation and its roots are.

The general solution and its derivative are

Don’t forget to product rule the second term! Plugging in the initial conditions gives the following system.

This system is easily solved to get \(c_1 = 3\) and \(c_2 = -6\). The actual solution to the IVP is then.

The characteristic equation and its roots are.

The general solution and its derivative are

Plugging in the initial conditions gives the following system of equations.


4.5: Constant Coefficient Homogeneous Systems II - Mathematics

Course Coordinator: Dr Trent Mattner

Course Timetable

The full timetable of all activities for this course can be accessed from Course Planner.

Course Learning Outcomes
  1. Derive mathematical models of physical systems.
  2. Present mathematical solutions in a concise and informative manner.
  3. Recognise ODEs that can be solved analytically and apply appropriate solution methods.
  4. Solve more difficult ODEs using power series.
  5. Know key properties of some special functions.
  6. Express functions using Fourier series.
  7. Solve certain ODEs and PDEs using Fourier and Laplace transforms.
  8. Solve problems numerically via the fast Fourier transform using Matlab.
  9. Solve standard PDEs (wave and heat equations) using appropriate methods.
  10. Evaluate and represent solutions of differential equations using Matlab.
University Graduate Attributes

This course will provide students with an opportunity to develop the Graduate Attribute(s) specified below:

  • informed and infused by cutting edge research, scaffolded throughout their program of studies
  • acquired from personal interaction with research active educators, from year 1
  • accredited or validated against national or international standards (for relevant programs)
  • steeped in research methods and rigor
  • based on empirical evidence and the scientific approach to knowledge development
  • demonstrated through appropriate and relevant assessment
Required Resources
Recommended Resources
Online Learning

This course uses MyUni extensively and exclusively for providing electronic resources, such as course notes, assignment and tutorial questions, and worked solutions. Students should make appropriate use of these resources. MyUni can be accessed here: https://myuni.adelaide.edu.au/

This course also makes use of online assessment software for mathematics called Mobius, which we use to provide students with instantaneous formative feedback. Further details about using Mobius will be provided on MyUni.

Students are also reminded that they need to check their University email on a daily basis. Sometimes important and time-critical information might be sent by email and students are expected to have read it. Any problems with accessing or managing student email accounts should be directed to Technology Services.

Learning & Teaching Modes
Workload

The information below is provided as a guide to assist students in engaging appropriately with the course requirements.

Activity Quantity Workload hours
Lecture videos 68
Tutorials 11 22
Midsemester test 1 9
Written assignments 5 24
Online assignments 11 33
TOTALS 156
Learning Activities Summary
Schedule
Week 1 Differential equations and applications
Ordinary differential equations (ODEs), directional fields
Separable, linear and exact first-order ODEs, substitution
Week 2 Existence and uniqueness of solutions of first-order ODEs
Homogeneous ODEs, superposition, linear independence, Wronskian
Reduction of order, constant-coefficient homogeneous ODEs
Week 3 Modelling of mass-spring-dashpot systems, free oscillations
Nonhomogeneous ODEs, method of undetermined coefficients
Forced oscillations, electrical circuits
Week 4 Variation of parameters
Systems of first-order ODEs and applications
Constant-coefficient homogeneous linear systems of ODEs
Week 5 Variable-coefficient homogeneous ODEs, Euler-Cauchy equation, power series method
Ordinary and singular points, Legendre's equation
Frobenius method, Bessel's equation
Week 6 Bessel functions
Laplace transform
Inverse Laplace transform, partial fractions, s-shifting
Week 7 Laplace transform of derivatives, application to ODEs
Convolution
Unit step function, t-shifting, Dirac delta function
Week 8 Fourier series
Complex form of Fourier series, energy spectrum, convergence
Fourier sine and cosine series, half-range expansions
Week 9 Partial differential equations (PDEs), wave equation, D'Alembert's solution
Separation of variables in 1D
Separation of variables in 2D
Week 10 Heat equation
Laplace equation
Laplace transform solution of PDEs
Week 11 Fourier transform, Fourier integral, Fourier sine and cosine transforms
Fourier transform solution of PDEs
Discrete Fourier transform
Week 12 Fast Fourier transform

  1. Assessment must encourage and reinforce learning.
  2. Assessment must enable robust and fair judgements about student performance.
  3. Assessment practices must be fair and equitable to students and give them the opportunity to demonstrate what they have learned.
  4. Assessment must maintain academic standards.
Assessment Summary
Assessment
Task Type Weighting Learning Outcomes
Written assignments Formative and Summative 17.5 % All
Mobius (online) assignments Formative and Summative 17.5 % All except 2
Midsemester test Summative 15 % 1,2,3,4,5
Examination Summative 50 % All
Assessment Related Requirements

An aggregate score of at least 50% is required to pass the course.

Assessment Detail

Written assignments are due every fortnight. The first written assignment will be released in Week 2 and due in Week 4.

Mobius (online) assignments are due every week. The first Mobius assignment will be released in Week 2 and due in Week 4.

The midsemester test will be held in Week 8. Further details, including test dates, times and venues, will be provided by email and MyUni.

Submission
Course Grading

Grades for your performance in this course will be awarded in accordance with the following scheme:

M10 (Coursework Mark Scheme)
Grade Mark Description
FNS Fail No Submission
F 1-49 Fail
P 50-64 Pass
C 65-74 Credit
D 75-84 Distinction
HD 85-100 High Distinction
CN Continuing
NFE No Formal Examination
RP Result Pending

Further details of the grades/results can be obtained from Examinations.

Grade Descriptors are available which provide a general guide to the standard of work that is expected at each grade level. More information at Assessment for Coursework Programs.

Final results for this course will be made available through Access Adelaide.

The University places a high priority on approaches to learning and teaching that enhance the student experience. Feedback is sought from students in a variety of ways including on-going engagement with staff, the use of online discussion boards and the use of Student Experience of Learning and Teaching (SELT) surveys as well as GOS surveys and Program reviews.

SELTs are an important source of information to inform individual teaching practice, decisions about teaching duties, and course and program curriculum design. They enable the University to assess how effectively its learning environments and teaching practices facilitate student engagement and learning outcomes. Under the current SELT Policy (http://www.adelaide.edu.au/policies/101/) course SELTs are mandated and must be conducted at the conclusion of each term/semester/trimester for every course offering. Feedback on issues raised through course SELT surveys is made available to enrolled students through various resources (e.g. MyUni). In addition aggregated course SELT data is available.

This section contains links to relevant assessment-related policies and guidelines - all university policies.

Students are reminded that in order to maintain the academic integrity of all programs and courses, the university has a zero-tolerance approach to students offering money or goods or services of significant value to any staff member who is involved in their teaching or assessment. For students to offer lecturers, tutors, or professional staff anything more than a small token of appreciation is totally unacceptable, in any circumstances. Staff members are obliged to report all such incidents to their supervisor/manager, who will refer them for action under the university's student disciplinary procedures.

The University of Adelaide is committed to regular reviews of the courses and programs it offers to students. The University of Adelaide therefore reserves the right to discontinue or vary programs and courses without notice. Please read the important information contained in the disclaimer.


Homogeneous Constant Coefficient Equations: Real Roots

Download the video from iTunes U or the Internet Archive.

PROFESSOR: Hi, everyone. Welcome back. So today, we're going to take a look at homogeneous equations with constant coefficients, and specifically, the case where we have real roots. And we'll start the problem off by looking at the equation x dot dot plus 8x dot plus 7x equals 0.

And we're asked to find the general solution to this differential equation. And then we also have the question, do all the solutions go to 0 as t goes to infinity? And then for part B, we're going to take a look at just the differential equation y dot equals negative k*y. So this is the same equation that we've seen in past recitations.

And we're just going to show that we can use this method to solve this differential equation and obtain the same result. And then lastly, we're asked, or we're told that we have eight roots to an eighth-order differential equation, negative 4, negative 3, negative 2, negative 1, 0, 1, 2, and 3. And we're asked, what is the general solution. So why don't you take a moment and try and work these problems out, and I'll be back in a minute.

Hi, everyone. Welcome back. OK, so we're asked to find the general solution to x double dot plus 8x dot plus 7x equals 0. And we see that this is a differential equation, it's linear, and it has constant coefficients. And whenever we have a differential equation that's linear with constant coefficients, one of the standard ways to generate the solution is to seek what sometimes mathematicians call an ansatz, but it's to try a solution of the form x is equal to a constant times e to the s*t.

And if we substitute a solution of this form, we see that taking the second derivative of this function pulls down two s's. One derivative pulls down one s. We have no derivatives here. And we also have, on each term, a factor of c times e to the s*t. And we want this to be 0.

So specifically, c e to the s*t can't be 0 for all time. So the only way that this can hold is if s squared plus 8s plus 7 equals 0. So what this means is if we choose s to solve this polynomial, then x equals c e to the s*t will be the solution. And this will be the solution for any constant c.

OK, so what are the roots to this algebraic equation. Well, we can factorize it. The roots are going to be negative 7 and negative 1. And notice how this whole process has turned a differential equation into a simpler algebraic equation. So if we can solve the algebraic equation, then we can solve the differential equation.

OK, so the general solution. Well, we've just shown that we can take any constant times e to the s*t, provided s is equal to negative 1 or negative 7. So the general solution is going to be some constant, c_1, times e to the minus t, plus c_2, can be a different constant, e to the minus 7t.

So notice how there's two constants in the final solution. And the reason there's two constants is because we started out with a second-order differential equation. So, in some sense, for each order of the differential equation, we always have one constant. It's almost as if for each time we integrate, we have a constant of integration. So at the end of the day, we have two constants in our general solution.

As part of part A, we're also asked for any solution to this differential equation, does the solution go to 0 as t goes to infinity? Well, the general solution has this form. So for any constant c_1 and c_2, the solution is c_1 e to the minus t plus c_2 e to the minus 7t.

And we see that no matter what c_1 and c_2 are, this term, as t goes to infinity, is multiplied by e to the minus t, which goes to 0. And the second term also goes to 0. So as t goes to infinity, both e to the minus t and e to the minus 7t both go to 0. So that means that any constant times e to the minus t plus any constant times e to the minus 7t must also go to 0. So hence, x of t goes to 0 as t goes to infinity.
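The claims in part A are easy to confirm numerically. Here is a quick sketch (Python, with arbitrary constants c1 and c2 of my own choosing; not part of the original recitation):

```python
import numpy as np

# Characteristic polynomial of x'' + 8x' + 7x = 0 is s^2 + 8s + 7.
roots = sorted(np.roots([1, 8, 7]))
print(roots)                          # the two roots, -7 and -1

# Any solution c1 e^{-t} + c2 e^{-7t} decays, since both exponents
# are negative; sample it at a late time to see the decay.
c1, c2 = 2.0, -5.0                    # arbitrary constants
t = 20.0
x = c1 * np.exp(-t) + c2 * np.exp(-7 * t)
print(abs(x) < 1e-8)                  # -> True
```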

OK. For part B, we have the differential equation y dot equals negative k*y. And this is the first-order linear differential equation with constant coefficients. And we're going to use the same trick.

We let y equal c times e to the s*t. And we see that the characteristic equation in this case isn't quadratic. It's just s equals negative k. So we get that y equals c e to the negative k*t is the general solution.

And this is exactly what we had in previous recitations, when we used, for example, integrating factors to solve this very same differential equation. So this just shows that we can use the same method to solve first-order linear differential equations.
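As a quick numerical sanity check (my own sketch, with constants c and k chosen arbitrarily; not from the recitation), the part B answer can be substituted back into the equation:

```python
import numpy as np

# y = c e^{-kt} should satisfy y' = -k y for any constant c.
c, k = 3.2, 0.8
t = np.linspace(0.0, 5.0, 11)
y = c * np.exp(-k * t)
yprime = -k * c * np.exp(-k * t)       # derivative computed by hand
print(np.max(np.abs(yprime + k * y)))  # residual of y' + k y, ~0
```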

OK. Now, lastly, we're given eight roots to an eighth-order differential equation. An eighth-order differential equations with constant coefficients. So I'll just write out the roots again. So we're told the roots are negative 4, negative 3, negative 2, negative 1, 0, 1, 2, and 3. And in general, the solution to an eighth-order differential equation whose roots to the characteristic polynomial are negative 4 through 3, the general solution, x of t, is going to be a constant c_1 times e to the power of the first root, which will be minus 4t, plus c_2 e to the minus 3t.

And of course, we take different constants for each term. c_3 e to the minus 2t, plus c_4 e to the minus t, plus c_5. And now for this term, it should be e to the 0t, but e to the 0t is just 1. So the zero root is just going to give us a constant c_5. And we have c_6 e to the t, plus c_7 e to the 2t. And then plus c_8 e to the 3t.

So the solution has eight terms and eight constants. And just for fun, we can ask, does every solution to this differential equation go to 0 as t goes to infinity. And the answer is no. In fact, although each term with a negative root does go to zero as t goes to infinity, there are three terms that go to positive infinity as t goes to infinity, and there's one term that just stays constant. So in general, as t grows, goes to infinity, these terms will become very large and won't necessarily go to 0. Well, they'll never go to 0.
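The last answer can also be checked numerically. The sketch below (Python, my own illustration, with every constant set to 1) rebuilds the characteristic polynomial from the given roots and confirms that the solution does not decay:

```python
import numpy as np

# The eighth-order characteristic polynomial from the recitation.
roots = [-4, -3, -2, -1, 0, 1, 2, 3]
coeffs = np.poly(roots)               # monic polynomial with these roots
recovered = sorted(np.roots(coeffs).real)
print(np.allclose(recovered, roots))  # -> True

# With every constant c_i = 1, the positive roots dominate as t grows,
# so the general solution does not go to 0.
t = 10.0
x = sum(np.exp(r * t) for r in roots)
print(x > 1.0)                        # -> True
```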


Homogeneous Constant Coefficient Equations: Any Roots

Download the video from iTunes U or the Internet Archive.

PROFESSOR: Welcome back. So in this session we're going to cover homogeneous constant coefficient equations for any roots. So we're told to assume that z of t is equal to exponential of minus 3t times cosine t plus i sine t. And to assume that this complex function is a solution to this differential equation, which is second order with constant coefficients m, b, and k, which are real, and from this assumption give two real solutions to this equation.

In the second part, we're asked to find a general solution for this other differential equation, which is of the same form as that seen in part A, now with the real values b equal to 6, m equal to 1, and k equal to 10. And then we're asked whether the system captured by this differential equation is overdamped or underdamped.

In the last part we switch gears and we're given a series of roots, eight roots to an eighth degree polynomial, and we're asked to write down the general solution for that polynomial. Note that here we have repeated roots, which is basically where the trick is. So why don't you take a few minutes, and we'll come back to solve these problems.

Welcome back. So for the first part, we're given a complex function clearly split into its real part and its imaginary part. So this is the imaginary part of z, and this is the real part of z. Now, the differential equation that we are given is a constant-coefficient differential equation: second order, linear, and homogeneous. So there is no right-hand side. So clearly you can see that if a complex function that can be written as real part plus i times imaginary part is a solution to this equation, then both parts independently are also solutions to this equation.

So by giving us this complex function, we are actually given two solutions that we can then write down as two linearly independent solutions. So the general solution to this differential equation can then be written as one undetermined constant coefficient times e to the minus 3t cosine t, plus another constant coefficient times e to the minus 3t sine t. Where basically, we're introducing a linear combination of two linearly independent solutions that were just taken from the original complex function we were given.

So for the second part now, we are asked to find a general solution for the ODE. We'll just rewrite here for now. So again, constant coefficient, homogeneous equation. And we're just asked to find a general solution. So we learned from the class notes that we can use the characteristic polynomial method to solve this equation.

And the characteristic polynomial here will have this form. And basically, we just need to find the roots of this characteristic polynomial to be able then to express the solutions of this homogeneous constant coefficient equation in terms of exponentials of whatever roots we will find.

So I just want to write down the discriminant of this polynomial. So what do we have here? We have b squared minus 4mk, which is 36 minus 40, which is basically minus 4. So we have a discriminant that is negative. And the roots for our polynomial here are just going to be minus 3 plus or minus i: we take the square root of the absolute value of the discriminant, with the complex number i out, because we had a negative discriminant.

So these couple of solutions, plus or minus, tell us that we are going to have oscillations as solutions for this differential equation, and that it's also going to be actually damped oscillations, because we have the first-- the real part of the number being negative.

So let me just write down the pair of solutions that we would have. We will have the one that comes out of the real part, e to the minus 3t cosine t. The root of 4 is just 2, and over 2 that gives angular frequency 1, so we have cosine t. And 6/2, of course, is just 3, which gives the e to the minus 3t factor. The other solution is e to the minus 3t sine t.

So here I switched directly to writing them in cosine and sine, in the forms. But we could also have kept it in the complex form if we were solving the equation in complex space. And then we would just have complex exponentials here with the frequencies omega-- angular frequency 1.

So this is the homogeneous solution for this homogeneous constant coefficient differential equation. Now, what do we see? We can see that it has exactly the same form as the solution that we had previously.

And so the question here is this oscillator overdamped or underdamped? And as we see here, we're going to have oscillations that are damped by a pre-factor of decaying exponential minus 3t for both cases. So this is decaying. So there is definitely damping. And we can see that here, because we had a positive value for b, which is the damping term in the oscillator system.

But there are still oscillations, because we had a discriminant that was negative, which introduced complex numbers, hence cosines and sines, so basically, oscillations. Therefore we're in the case of an underdamped response, an underdamped system. The discriminant was negative.
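A quick numerical check of this discriminant computation (a Python sketch, not part of the recitation):

```python
import numpy as np

# For y'' + 6y' + 10y = 0 we have m, b, k = 1, 6, 10.
m, b, k = 1, 6, 10
disc = b**2 - 4 * m * k
print(disc)                  # -> -4: negative discriminant, underdamped

# The roots are (-b ± i sqrt(|disc|)) / (2m) = -3 ± i: decay rate 3,
# angular frequency 1, matching e^{-3t} cos t and e^{-3t} sin t.
roots = np.roots([m, b, k])
print(sorted(roots, key=lambda z: z.imag))
```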

So for the last part of this problem, we now leave this differential equation, which was second order, and move to higher-order differential equations of eighth order. And we're given eight roots.

We have a unique root 2, the root 3, which is real and repeated three times, and the pair 4 plus or minus 5i, which is repeated two times. So we saw before that when we have repeated roots in, let's say, a second-order differential equation, we would have a discriminant equal to 0. And we would need to introduce a different type of solution, because instead of two exponentials with distinct roots, we now have only one root. So we need to seek another type of solution. And we'll multiply the exponential by t.

So one type of solution, for the unique root, would be just e to the 2t. For the first of the tripled root, it would be e to the 3t. For the second one, we would have t e to the 3t. For the third one, we need to go higher order, t squared e to the 3t.

Now we have complex roots, so that gives us solutions of the same form that we found in questions a and b. So I'm going to just write them here, where now we have e^(4t) cosine 5t plus e^(4t) sine 5t. And for the repeated pair, it's the same game. Even though we're in complex roots, it's the same idea: we need to multiply all of it by t, giving t e^(4t) cosine 5t and t e^(4t) sine 5t.

So these are the families of linearly independent solutions that would come out of these roots. So now the general solution is simply the linear combination of all these linearly independent functions. So we will introduce different constants: c_5, c_6, c_7, and c_8. And we have eight undetermined coefficients to deal with, because these are the eight roots of an eighth-order polynomial. And so if we were to solve an initial value problem like that, we would need eight initial conditions to determine the values of these constants.
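The counting rule used here, an extra factor of t for each repetition of a real root, and a cosine/sine pair per repetition of a complex pair, can be sketched in code. The helper below is my own illustration (the function name and input format are invented for this sketch):

```python
# List the real solution basis generated by characteristic roots with
# multiplicities. Real roots are given as (root, multiplicity); a
# complex conjugate pair alpha ± i*beta as (alpha, beta, multiplicity).
def solution_basis(roots):
    basis = []
    for entry in roots:
        if len(entry) == 2:            # real root r of multiplicity m
            r, m = entry
            basis += [f"t^{j} e^({r}t)" for j in range(m)]
        else:                          # pair alpha ± i beta, multiplicity m
            a, b, m = entry
            for j in range(m):
                basis += [f"t^{j} e^({a}t) cos({b}t)",
                          f"t^{j} e^({a}t) sin({b}t)"]
    return basis

# The recitation's roots: 2 (simple), 3 (triple), 4 ± 5i (double).
basis = solution_basis([(2, 1), (3, 3), (4, 5, 2)])
print(len(basis))   # -> 8 independent solutions, as expected
print(basis)
```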

So to summarize, we used the characteristic polynomial method to solve these problems, introducing higher-order equations than the first-order differential equations we saw before. And what is important to remember is the different types of solutions that you get, depending on the roots of the characteristic polynomial and the sign of the discriminant. So whether the roots are real and distinct, real and repeated, or complex gives three types of behavior: overdamped if the roots are real and distinct, critically damped if they are repeated, and underdamped if they are complex.

Now when we are given different types of roots, with repeated roots it's important to know how to construct the solutions as well. And that's by introducing new types of functions where we're multiplying by polynomial in t in front of the exponential. So that ends the session for today.


18MATDIP41 ADDITIONAL MATHEMATICS – II Notes


Here you can download the VTU CBCS 2018 Scheme notes and study materials of 18MATDIP41 ADDITIONAL MATHEMATICS – II.

University
Visvesvaraya Technological University, Belagavi
Branch Name
Common to all branches
Subject Name
ADDITIONAL MATHEMATICS – II
Subject Code
18MATDIP41
Semester
4th Semester
Scheme of Examination
2018 Scheme

Important topics Topics Covered

Linear Algebra, Numerical Methods: Finite differences. Interpolation/extrapolation using Newton’s forward and backward difference formulae (Statements only)-problems. Higher order ODE’s: Linear differential equations of second and higher order equations with constant coefficients.

Higher order ODE’s, Homogeneous /non-homogeneous equations. Partial Differential Equations (PDE’s):- Formation of PDE’s by elimination of arbitrary constants and functions. Solution of non-homogeneous PDE by direct integration and Probability.

Click the below link to download 2018 Scheme VTU CBCS Notes of ADDITIONAL MATHEMATICS – II Notes

M-1 M-2 M-3 M-4 M-5 and Solutions

Summary

Here you can download the 2018 scheme VTU CBCS Notes of ADDITIONAL MATHEMATICS – II. If you like the material, share it with your friends. Like the Facebook page for regular updates and the YouTube channel for video tutorials.


Higher Order Linear Homogeneous Differential Equations with Constant Coefficients

Consider a linear homogeneous differential equation with constant coefficients:

\[y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1} y' + a_n y = 0,\]

where \(a_1, a_2, \ldots, a_n\) are constants which may be real or complex.

Using the linear differential operator \(L(D),\) this equation can be represented as

\[L(D) y(x) = 0,\]

where

\[L(D) = D^n + a_1 D^{n-1} + \cdots + a_{n-1} D + a_n.\]

For each differential operator with constant coefficients, we can introduce the characteristic polynomial

\[L(\lambda) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n.\]

The algebraic equation

\[L(\lambda) = 0\]

is called the characteristic equation of the differential equation.

According to the fundamental theorem of algebra, a polynomial of degree \(n\) has exactly \(n\) roots, counting multiplicity. In this case the roots can be both real and complex (even if all the coefficients \(a_1, a_2, \ldots, a_n\) are real).

Let us consider in more detail the different cases of the roots of the characteristic equation and the corresponding formulas for the general solution of differential equations.

Case (1.) All Roots of the Characteristic Equation are Real and Distinct

We assume that the characteristic equation \(L(\lambda) = 0\) has \(n\) distinct real roots \(\lambda_1, \lambda_2, \ldots, \lambda_n.\) In this case the general solution of the differential equation is written in a simple form:

\[y(x) = C_1 e^{\lambda_1 x} + C_2 e^{\lambda_2 x} + \cdots + C_n e^{\lambda_n x},\]

where \(C_1, C_2, \ldots, C_n\) are constants depending on initial conditions.

Case (2.) The Roots of the Characteristic Equation are Real and Multiple

Let the characteristic equation \(L(\lambda) = 0\) of degree \(n\) have \(m\) roots \(\lambda_1, \lambda_2, \ldots, \lambda_m,\) the multiplicity of which, respectively, is equal to \(k_1, k_2, \ldots, k_m.\) It is clear that the following condition holds:

\[k_1 + k_2 + \cdots + k_m = n.\]

Then the general solution of the homogeneous differential equation with constant coefficients has the form

\[y(x) = \sum_{i=1}^m \left( C_{i,0} + C_{i,1} x + \cdots + C_{i,k_i - 1} x^{k_i - 1} \right) e^{\lambda_i x}.\]

It is seen that the formula of the general solution has exactly \(k_i\) terms corresponding to each root \(\lambda_i\) of multiplicity \(k_i.\) These terms are formed by multiplying \(x\) to a certain degree by the exponential function \(e^{\lambda_i x}.\) The degree of \(x\) varies in the range from \(0\) to \(k_i - 1,\) where \(k_i\) is the multiplicity of the root \(\lambda_i.\)

Case (3.) The Roots of the Characteristic Equation are Complex and Distinct

If the coefficients of the differential equation are real numbers, the complex roots of the characteristic equation will be presented in the form of conjugate pairs of complex numbers:

\[\lambda_{1,2} = \alpha_1 \pm i\beta_1, \quad \lambda_{3,4} = \alpha_2 \pm i\beta_2, \; \ldots\]

In this case the general solution is written as

\[y(x) = e^{\alpha_1 x} \left( C_1 \cos \beta_1 x + C_2 \sin \beta_1 x \right) + e^{\alpha_2 x} \left( C_3 \cos \beta_2 x + C_4 \sin \beta_2 x \right) + \cdots\]

Case (4.) The Roots of the Characteristic Equation are Complex and Multiple

Here, each pair of complex conjugate roots \(\alpha \pm i\beta\) of multiplicity \(k\) produces \(2k\) particular solutions:

\[e^{\alpha x} \cos \beta x, \; e^{\alpha x} \sin \beta x, \; x e^{\alpha x} \cos \beta x, \; x e^{\alpha x} \sin \beta x, \; \ldots, \; x^{k-1} e^{\alpha x} \cos \beta x, \; x^{k-1} e^{\alpha x} \sin \beta x.\]

Then the part of the general solution of the differential equation corresponding to a given pair of complex conjugate roots is constructed as follows:

\[e^{\alpha x} \left[ \left( A_0 + A_1 x + \cdots + A_{k-1} x^{k-1} \right) \cos \beta x + \left( B_0 + B_1 x + \cdots + B_{k-1} x^{k-1} \right) \sin \beta x \right].\]

In general, when the characteristic equation has both real and complex roots of arbitrary multiplicity, the general solution is constructed as the sum of solutions of the forms 1-4 above.
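As an illustration of Case 2 (my own numerical sketch, not part of the original text), the polynomial \(\lambda^2 - 2\lambda + 1 = (\lambda - 1)^2\) has the double root \(\lambda = 1,\) so \(y = (C_1 + C_2 x) e^x\) should solve \(y'' - 2y' + y = 0.\) This can be verified by direct substitution:

```python
import numpy as np

# Check that y = (C1 + C2 x) e^x solves y'' - 2y' + y = 0 for
# arbitrary constants, sampling the residual on a grid.
C1, C2 = 1.3, -0.7                          # arbitrary constants
x = np.linspace(-2.0, 2.0, 9)
y   = (C1 + C2 * x) * np.exp(x)
yp  = (C1 + C2 + C2 * x) * np.exp(x)        # y'  (computed by hand)
ypp = (C1 + 2 * C2 + C2 * x) * np.exp(x)    # y'' (computed by hand)
residual = ypp - 2 * yp + y
print(np.max(np.abs(residual)))             # ~0, up to rounding
```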




Differential Equations with Theory (18.034)

Ravi Vakil, Dept. of Mathematics Rm. 2-271, [email protected]

FINAL EXAM AND COURSE GRADES ARE AVAILABLE HERE.

Remember to fill out a card with the various bits of information I asked for, and sign up to meet me. (Just ask if you have no idea what this is about.) If you missed the first recitation and class, be sure to e-mail me.

Lectures: Monday, Wednesday, Friday 1-2 in 2-146.

Recitations: Monday, Wednesday 10-11 in 2-146. If there is anything you would like to hear about in recitation, please e-mail suggestions (ideally two days in advance, but even the day before would be fine).

Office hours: Wednesday 11-12 and at one other hour in the week. You can also catch me after most recitations and classes.

Text: Boyce and DiPrima, Elementary Differential Equations and Boundary Value Problems, Sixth Edition. It should be available at Quantum Books. If you'd like something else to look at, you can check out M. Braun's Differential Equations and Their Applications. For a more beautiful and advanced (Russian) viewpoint, see Arnol'd's Ordinary Differential Equations.

Grading: Problem sets will be worth a total of 100, the two midterms (Mar. 10 and Apr. 14) will be worth 100 each, and the final exam will be worth 200.

Problem sets: The best 7 will count (out of an estimated 10, due on Fridays in class); we agreed that no late submissions will be accepted for any reason. Feel free to think about the problems in groups (collaboration is encouraged), but you must write up solutions on your own, and give credit for an idea to the person who thought of it. (You won't lose marks for doing so.)

Here is Problem Set 1 (in dvi, postscript, and pdf formats; the last one will most likely work for you).

Here is Problem Set 2 (dvi, ps, pdf), minus the figures for the last problem.

Here is Problem Set 3 (dvi, ps, pdf), minus the figure for the last problem.

Here is Problem Set 5 (dvi, ps, pdf), due Mar. 17. In problem 2, change the x's to t's.

Here is Problem Set 6 (dvi, ps, pdf), due Mar. 31.

Here is Problem Set 7 (dvi, ps, pdf), due Apr. 7.

Here is Problem Set 8 (dvi, ps, pdf), due Apr. 21.

Here is Problem Set 9 (dvi, ps, pdf), due Apr. 28.

Here is Problem Set 10 (dvi, ps, pdf), due May 5.

Here is Problem Set 11 (dvi, ps, pdf), consisting of Laplace transform practice problems, not to be handed in.

Hard copies (with figures) are available on the bulletin board beside my office door.

Midterms: There will be two midterms.

Here are some practice problems for Midterm 1, including a syllabus of the topics we've covered (dvi, ps, pdf). Here are answers to practice midterm 1.

Here is Midterm 1 (dvi, ps, pdf). Here are sketches of solutions (dvi, ps, pdf).

Here are some practice problems for Midterm 2, including a syllabus of the topics we've covered (dvi, ps, pdf). (A question on eigenvectors was added Apr. 12.) Here are answers to practice midterm 2.

Here is Midterm 2 (dvi, ps, pdf). Here are sketches of solutions (dvi, ps, pdf).

Final exam: Here are practice problems for the final (dvi, ps, pdf).

Here are (sketches of) solutions (minus figures) to the practice problems for the final (dvi, ps, pdf).

Syllabus

A syllabus (covering what we've done) will appear here, most likely with a time lag of a few days. References are to the text.

Wed. Feb. 2 (Recitation). Recurrences, a discrete analogue of differential equations.

Wed. Feb. 2. Definitions: ODE, PDE, order, solution, linear ODE. Direction field. Solving y' = 6t+4, y'/y=1. Chapter 1.

Fri. Feb. 4. Integrating factors for first-order linear ODEs. Existence and uniqueness of solutions in that case. Separable equations. Sections 2.1-2.3.

Mon. Feb. 7 (Rec). Problems with differentials. The Brachistochrone problem. Autonomous equation preview.

Mon. Feb. 7. Integral curves. The existence and uniqueness theorem. Examples showing pathologies: y' = -t/y, y' = y^2, y' = y, y' = y^(1/3). Section 2.4. Autonomous equations (done quickly; make sure you understand the phase line, stable equilibria, unstable equilibria). Section 2.6.

Wed. Feb. 9 (Rec). Three problems: the logistic equation (Verhulst equation, see p. 70 #18), linear equations (p. 31 #26), Bernoulli equations (p. 33).

Wed. Feb. 9. Exact equations, Section 2.8. The "closed" necessary condition sufficiency on a rectangle (or simply connected set). Where sufficiency fails on a non-simply connected set.

Fri. Feb. 11. Exact equations continued. The example (-y+xy')/(x^2+y^2) = d/dx arctan(y/x). Problem Set 1 due.

Mon. Feb. 14 (Rec). Homogeneous equations (Section 2.9): y' = f(x,y) where f(x,y) is a function purely of y/x, i.e. can be written F(y/x); then substitute v = y/x. Orthogonal trajectories, e.g. to families x^2+y^2 = c^2, (x-c)^2+y^2=c^2, x^2-xy+y^2=c^2 (problems 2.10.36-38).

Mon. Feb. 14. Solutions to homogeneous linear ODEs with constant coefficients. The toy model: recurrences. The characteristic polynomial.

Wed. Feb. 16 (Rec). Linear algebra. Complex numbers.

Wed. Feb. 16. The case where the characteristic polynomial has distinct real roots. Repeated roots (without proof). Complex roots (with proof). Sections 3.1, 3.2, 4.2.

Fri. Feb. 18. Introduction to Linear operators (and linear equations with constant coefficients). Problem Set 2 due.

Tues. Feb. 22 (Rec). Solving y'''-2y''-y'+2y=0 (with initial conditions) "inductively". (Working through the proof to be given in class.)

Tues. Feb. 22. Proof on existence and uniqueness of solutions to homogeneous linear equations with constant coefficients.

Wed. Feb. 23 (Rec). PS2#7: When is a differential equation exact on a region R? Asides on delta functions, distributions, Lebesgue integration, etc.

Wed. Feb. 23. Solving nonhomogeneous linear differential equations with constant coefficients p(D) y = f(x): (i) find solutions to p(D)y = 0, (ii) find one solution to p(D)y = f(x) (by guessing, i.e. the method of undetermined coefficients, Section 3.6), and (iii) add the answers to (i) and (ii). A useful trick to guess well: if g(x) is a degree d polynomial, then (D-a) g(x) e^(bx) = h(x) e^(bx), where deg h = deg g - 1 if a = b, and deg h = deg g otherwise. We did this when f(x) was an exponential, an exponential times a polynomial, and an exponential times a polynomial times a trig term. This is discussed in BD 4.3; see also 4.1 and 4.2.
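The degree trick above can be checked on one small case by hand; a minimal worked instance with g(x) = x^2 (the choice of g is mine, for illustration):

```latex
% One instance of the trick with g(x) = x^2 (degree 2):
% D[x^2 e^{bx}] = 2x e^{bx} + b x^2 e^{bx}, so subtracting a x^2 e^{bx} gives
\[
(D - b)\left[x^2 e^{bx}\right] = 2x\,e^{bx},
\qquad
(D - a)\left[x^2 e^{bx}\right] = \bigl((b-a)x^2 + 2x\bigr)e^{bx} \quad (a \neq b).
\]
% When a = b the top-degree term cancels: h(x) = 2x has degree 1 = deg g - 1.
% When a != b no cancellation occurs and deg h = deg g = 2, as claimed.
```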

Fri. Feb. 25. Problem Set 3 due. Applying our general machine to second-order linear equations (Section 3.8).

Mon. Feb. 28 (Rec). Quantitative and qualitative behavior of y'' + 4y' + ay = 0 (playing with the spring constant) when i) a=0, y(0)=-3, y'(0)=16; ii) a=3, y(0)=1, y'(0)=-5; iii) a=4, y(0)=2, y'(0)=-3; iv) a=5, y(0)=0, y'(0)=1.

Mon. Feb. 28. Further analysis of m y'' + gamma y' + k y = 0 when gamma is very small (BD 3.9). Damping decreases frequency. Forced vibrations: m y'' + k y = F_0 cos wt (no damping). If w ≠ root(k/m) = w_0, then the general solution is R cos(w_0 t - delta) + (F_0 / (m (w_0^2 - w^2))) cos wt. If y(0) = y'(0) = 0, we get beats when w and w_0 are very close: y = (F_0 / (m (w_0^2 - w^2))) (cos wt - cos w_0 t). Identity: cos a - cos b = -2 sin((a+b)/2) sin((a-b)/2).
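The product-to-sum identity behind beats (rewriting cos wt - cos w_0 t as a slow envelope times a fast oscillation) can be spot-checked numerically; the sample points below are arbitrary:

```python
import math

# Numeric spot-check of the identity used for beats:
#   cos a - cos b = -2 sin((a+b)/2) sin((a-b)/2)

def lhs(a, b):
    return math.cos(a) - math.cos(b)

def rhs(a, b):
    return -2 * math.sin((a + b) / 2) * math.sin((a - b) / 2)

pairs = [(0.3, 1.7), (2.0, -0.5), (5.1, 5.2)]  # arbitrary test angles
print(max(abs(lhs(a, b) - rhs(a, b)) for a, b in pairs))  # ~ machine epsilon
```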

Wed. Mar. 1 (Rec). Beats (BD3.9). Simplify sin at - sin bt. Qualitative behavior of cos t + epsilon sin t. Strobe lights. Frequency and music.

Wed. Mar. 1. Resonance: m y'' + k y = F_0 cos wt, w = root(k/m). Adding damping: m y'' + gamma y' + k y = F_0 cos wt. The amplitude drops, and there is a phase lag (either a little more than 0 if w is smaller than w_0, or a little less than 180 degrees if w is greater than w_0). BD 3.9.

Fri. Mar. 3. Forced vibrations with damping, BD 3.9. Problem Set 4 due. Practice midterm 1 (and syllabus for the midterm) out; see above to get it from this page.

Mon. Mar. 6 (Rec). Several questions, including an introduction to the Wronskian.

Mon. Mar. 6. Second-order equations without constant coefficients. Introduction to the Wronskian. BD3.2 and 3.3.

Wed. Mar. 8 (Rec). Abel's theorem. Wronskian practice.

Wed. Mar. 8. BD Section 3.3, including Abel's theorem. Application: using the Wronskian to find one solution to an order 2 homogeneous equation given another.

Fri. Mar. 10. First Midterm (in class).

Mon. Mar. 13 (Rec). Finding W(af+bg, cf+dg) given W(f,g), where a, b, c, d are constants. Using the Wronskian to find one solution to an order 2 homogeneous equation given another. Using "variation of parameters" to solve the following problem: (BD Ex. 4.1.26) given one solution y_1 to y''' + p_1 y'' + p_2 y' + p_3 y = 0, find a second-order equation from which you can find the rest of the solutions.

Mon. Mar. 13. Variation of parameters, BD3.7.

Wed. Mar. 15 (Rec). Variation of parameters: generalizing downwards (to first-order equations).

Wed. Mar. 15. Variation of parameters for higher-order equations, BD4.4.

Fri. Mar. 17. Problem Set 5 due. Differential equations and physics.

Mon. Mar. 27 (Rec). Existence and uniqueness theorems, and topology.

Mon. Mar. 27. Philosophy of the proof of the Existence and Uniqueness Theorem for First-Order Equations (BD 2.11). The Contraction Mapping Theorem.

Wed. Mar. 29 (Rec). More on topology and existence theorems: fixed-point theorems and applications.

Wed. Mar. 29. Using the Contraction Mapping Theorem to compute things such as root(2). Beginning the proof of the Existence and Uniqueness Theorem: translating the equation into integral form; Picard iterates.
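A minimal sketch of the root(2) computation from this lecture; the particular contraction used in class isn't recorded, so the map below (Newton's map for x^2 - 2, which is a contraction near root(2)) is my own choice:

```python
import math

# Iterate a map whose fixed point is root(2):
#   g(x) = x/2 + 1/x  satisfies  g(x) = x  iff  x^2 = 2,
# and |g'(x)| < 1 near root(2), so iteration converges (Contraction Mapping Theorem).

def g(x):
    return x / 2 + 1 / x

x = 1.0
for _ in range(10):
    x = g(x)

print(x, math.sqrt(2))
```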

Fri. Mar. 31. Continuing the proof of the Existence and Uniqueness Theorem: the Lipschitz condition, and Lipschitz constants. Problem Set 6 due.

Mon. Apr. 3 (Rec). The existence and uniqueness theorem applied to z'=z: the exponential function e^x exists.
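The recitation's z' = z example can be carried out concretely: Picard iterates y_(k+1)(t) = 1 + integral from 0 to t of y_k(s) ds are exactly the Taylor partial sums of e^t. A sketch working on polynomial coefficient lists (the representation is my own):

```python
import math

# Picard iteration for z' = z, z(0) = 1, on polynomial coefficient lists.
# Each step maps y(t) to 1 + \int_0^t y(s) ds; the iterates are Taylor
# partial sums of e^t, with coefficient 1/i! on t^i.

def picard_step(coeffs):
    """coeffs[i] is the t^i coefficient; return the coefficients of 1 + integral."""
    return [1.0] + [c / (i + 1) for i, c in enumerate(coeffs)]

def evaluate(coeffs, t):
    return sum(c * t**i for i, c in enumerate(coeffs))

y = [1.0]            # y_0(t) = 1
for _ in range(20):  # 20 steps -> degree-20 Taylor polynomial of e^t
    y = picard_step(y)

print(evaluate(y, 1.0), math.e)
```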

Mon. Apr. 3. Finishing the proof of the existence and uniqueness theorem.

Wed. Apr. 5 (Rec). Systems of first-order linear differential equations, e.g. x'=y, y'=-x. Phase portrait, direction fields. Practice sketching.

Wed. Apr. 5. Introduction to systems of first-order differential equations (7.1). Existence and uniqueness statements (Thm. 7.1.1, 7.1.2). Lightning introduction to linear algebra (7.2, 7.3).

Fri. Apr. 7. Problem Set 7 due. Qualitative analysis of systems of equations. Guest lecturer (Prof. Marcolli).

Mon. Apr. 10 (Rec). More examples of systems of linear first-order equations.

Mon. Apr. 10. Basic theory of systems of first-order linear equations (7.4). Principle of superposition (7.4.1). Wronskian. Fundamental set of solutions. Beginning study of systems with constant coefficients.

Wed. Apr. 12 (Rec). Review of Wronskians. Computing powers of matrices with distinct eigenvalues.
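The matrix-powers computation from this recitation can be sketched; the 2x2 matrix below is my own example, not necessarily the one used in class. With distinct eigenvalues, A = P diag(lambda_1, lambda_2) P^(-1) gives A^k = P diag(lambda_1^k, lambda_2^k) P^(-1).

```python
# Illustrative example: A = [[2,1],[1,2]] has eigenvalues 1 and 3 with
# eigenvectors (1,-1) and (1,1), so A^k = P diag(1, 3^k) P^{-1}, which
# works out entrywise to
#   A^k = (1/2) [[3^k + 1, 3^k - 1], [3^k - 1, 3^k + 1]].

def mat_mul(X, Y):
    """2x2 integer matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power_naive(A, k):
    """A^k by repeated multiplication, for comparison."""
    R = [[1, 0], [0, 1]]
    for _ in range(k):
        R = mat_mul(R, A)
    return R

def power_eigen(k):
    """A^k from the eigen-decomposition formula above."""
    return [[(3**k + 1) // 2, (3**k - 1) // 2],
            [(3**k - 1) // 2, (3**k + 1) // 2]]

A = [[2, 1], [1, 2]]
print(power_naive(A, 8), power_eigen(8))
```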

Wed. Apr. 12. Review for midterm. How to solve a system of differential equations with (i) distinct eigenvalues, (ii) repeated eigenvalues. A handout on Phase portraits in two dimensions (by Prof. Miller) is available outside my office.

Fri. Apr. 14. Second Midterm (in class).

Wed. Apr. 19 (Rec). Conservation laws.

Wed. Apr. 19. Solving a system of linear differential equations with complex eigenvalues.

Fri. Apr. 21. Problem Set 8 due. Fundamental matrices, BD 7.8. Power series introduction, BD 5.1.

Mon. Apr. 24 (Rec). Conservation laws and i^i. Fun with power series.

Mon. Apr. 24. Power series continued.

Wed. Apr. 26 (Rec). Fun with power series continued.

Wed. Apr. 26. Series solutions near an ordinary point, BD 5.2-5.3.

Fri. Apr. 28. Initial ideas on what to do near a regular singular point, BD 5.4. Problem set 9 due.

Mon. May 1 (Rec). Rationality and irrationality; algebraicity and transcendence. Why root(2), e, and pi are irrational. Some comments on why e is transcendental.

Mon. May 1. Series solutions near a regular singular point, BD 5.4-5.7.

Wed. May 3 (Rec). The gamma function. (1/2)! = root(pi)/2. What power series tell you about combinatorics.

Wed. May 3. The Laplace transform.

Fri. May 5. The Laplace transform (up to 6.4). Problem set 10 due.

Mon. May 8. Guest lecture (Prof. Marcolli). The delta function and impulse functions (6.5). Remarks on the theory of distributions and generalized functions.



The method of annihilators from constant coefficient ODEs is interesting. If $L = P(D) \in \mathbb{R}[D]$ where $D = d/dt$ and $\deg(P) = n$, then $L[y] = 0$ defines an $n$-th order homogeneous ODE. In this case, the solution set $\mathrm{Ann}(L)$ is a finite-dimensional subspace of the infinite-dimensional space of smooth functions on $\mathbb{R}$. So, one interesting thing is just that annihilators give a formalism to characterize solution sets as they relate to the structure of an operator. This alone is not terribly interesting; the method of annihilators typically refers to the following mathematical sleight of hand: given a nonhomogeneous $n$-th order ODE $L[y] = g$, we can apply the method of annihilators if there exists $A \in \mathbb{R}[D]$ with $\deg(A) = k$ for which $A[g] = 0$. Then $AL[y] = A[g] = 0$. By the theory of ODEs, there exists a fundamental solution set for $AL[y] = 0$, say $y_1, y_2, \dots, y_n, y_{n+1}, \dots, y_{n+k}$. Moreover, without loss of generality we may suppose $y_1, y_2, \dots, y_n$ is a fundamental solution set of $L[y] = 0$, since solutions of $L[y] = 0$ are also solutions of $AL[y] = 0$. In fact, we can argue that $y = \sum_{j=1}^{n+k} c_j y_j$ serves as the general solution of $L[y] = g$.

For example, $y'' - y = e^t$ has $L = D^2 - 1$, and $A = D - 1$ gives $A[e^t] = 0$. Note $AL = (D-1)(D^2-1) = (D+1)(D-1)^2$, thus $AL[y] = 0$ has solution $y = c_1 e^{-t} + c_2 e^t + c_3 t e^t$. Observe $L[y] = e^t$ gives $(D^2-1)(c_1 e^{-t} + c_2 e^t + c_3 t e^t) = e^t$, hence $c_3 D^2[t e^t] - c_3 t e^t = e^t$, or $c_3(t e^t + 2 e^t) - c_3 t e^t = e^t$; consequently $2 c_3 e^t = e^t$, and we find $c_3 = \frac{1}{2}$. Thus $y = c_1 e^{-t} + c_2 e^t + \frac{1}{2} t e^t$ is the general solution of $y'' - y = e^t$.
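The particular solution found above can be double-checked numerically; a quick sketch verifying that y(t) = (1/2) t e^t satisfies y'' - y = e^t:

```python
import math

# Numeric check of the worked example: y(t) = (1/2) t e^t should solve y'' - y = e^t.

def y(t):
    return 0.5 * t * math.exp(t)

def residual(t, h=1e-4):
    """y'' - y - e^t, estimating y'' by a central second difference."""
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return ypp - y(t) - math.exp(t)

print(max(abs(residual(t / 10)) for t in range(-20, 21)))  # small
```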

Notice the nonhomogeneous problem can be viewed as a finite-dimensional linear algebra problem, since $\mathrm{Ann}(AL)$ is an $(n+k)$-dimensional vector space by the existence and uniqueness theory for constant coefficient ODEs. I think this is fairly removed from direct matrix math. Of course, pick a basis, and we can be right back to the matrix.

Beyond this, the concept of annihilators plays a large role in the study of differential forms and integral submanifolds. But you might be looking for a very different sort of example, so I'll stop here.

