
IN THIS CHAPTER we study the integral calculus of real-valued functions of several variables.

- SECTION 7.1 defines multiple integrals, first over rectangular parallelepipeds in $ \mathbf{R}^n $ and then over more general sets. The discussion deals with the multiple integral of a function whose discontinuities form a set of Jordan content zero, over a set whose boundary has Jordan content zero.
- SECTION 7.2 deals with evaluation of multiple integrals by means of iterated integrals.
- SECTION 7.3 begins with the definition of Jordan measurability, followed by a derivation of the rule for change of content under a linear transformation, an intuitive formulation of the rule for change of variables in multiple integrals, and finally a careful statement and proof of the rule. This is a complicated proof.

## Holomorphic Functions and Integral Representations in Several Complex Variables

The subject of this book is Complex Analysis in Several Variables. This text begins at an elementary level with standard local results, followed by a thorough discussion of the various fundamental concepts of "complex convexity" related to the remarkable extension properties of holomorphic functions in more than one variable. It then continues with a comprehensive introduction to integral representations, and concludes with complete proofs of substantial global results on domains of holomorphy and on strictly pseudoconvex domains in $ \mathbf{C}^n $, including, for example, C. Fefferman's famous Mapping Theorem. The most important new feature of this book is the systematic inclusion of many of the developments of the last 20 years which centered around integral representations and estimates for the Cauchy-Riemann equations. In particular, integral representations are the principal tool used to develop the global theory, in contrast to many earlier books on the subject which involved methods from commutative algebra and sheaf theory, and/or partial differential equations. I believe that this approach offers several advantages: (1) it uses the several-variable version of tools familiar to the analyst in one complex variable, and therefore helps to bridge the often perceived gap between complex analysis in one and in several variables; (2) it leads quite directly to deep global results without introducing a lot of new machinery; and (3) concrete integral representations lend themselves to estimation, therefore opening the door to applications not accessible by the earlier methods.

## MATH 2204 Course Page

INTRODUCTION TO MULTIVARIABLE CALCULUS

Calculus of functions of several variables. Planes and surfaces, continuity, differentiation, chain rule, extreme values, Lagrange multipliers, double and triple integrals and applications, software-based techniques. A student can earn credit for at most one of 2204 and 2406H. A student can earn credit for at most one of 2024 and 2204. A student can earn credit for at most one of 2204 and CMDA 2005.

Admission to MATH 2204 is offered to students who have passed MATH 1226.

#### Textbook:

Text: Calculus: Early Transcendentals by Stewart (9th edition) with WebAssign access

Download the Complete Syllabus with Problem Assignments (PDF)

#### Syllabus: Topics & Chapters

#### Unit 1: Vectors, Surfaces and Functions of Several Variables

Section | Subject |
---|---|
12.1 | Three-Dimensional Coordinate Systems |
12.2 | Vectors |
12.3 | The Dot Product |
12.4 | The Cross Product |
12.5 | Equations of Lines and Planes |
12.6 | Cylinders and Quadric Surfaces |
14.1 | Functions of Several Variables |
14.2 | Limits and Continuity |
14.3 | Partial Derivatives |
14.4 | Tangent Planes and Linear Approximations |

#### Unit 2: Double Integrals and Triple Integrals

Section | Topic |
---|---|
15.1 | Double Integrals Over Rectangles |
15.2 | Double Integrals Over General Regions |
15.3 | Polar Coordinates |
15.4 | Applications of Double Integrals |
15.6 | Triple Integrals |
15.7 | Cylindrical Coordinates |
15.8 | Spherical Coordinates |

#### Unit 3: Optimization and Vector Functions

Section | Topic |
---|---|
14.5 | Chain Rule |
14.6 | Directional Derivatives and Gradients |
14.7 | Optimization |
14.8 | Lagrange Multipliers |
13.1 | Vector Functions and Space Curves |
13.2 | Derivatives and Integrals of Vector Functions |
13.3 | Arc Length and Curvature |
13.4 | Motion in Space |

#### Final Exam

The **final exam is a Common Time Exam**.

The exam consists of two parts:

- Common Exam
- This test is a multiple choice exam taken by all sections of MATH 2204. Samples of Common Time Final Exams given in previous years are available (koofers).

- Free Response Exam
- Your instructor will give you information on what to expect for the second portion of the exam.

Note: Both portions of this exam will be administered virtually.

Check the timetable or your instructor's Canvas course site for the date and time of the common final exam.

#### Instructors & Sections

See the Timetable of classes for information on current offerings of MATH 2204

#### Honor System Information

The Undergraduate Honor Code pledge that each member of the university community agrees to abide by states:

“As a Hokie, I will conduct myself with honor and integrity at all times. I will not lie, cheat, or steal, nor will I accept the actions of those who do.”

Students enrolled in this course are responsible for abiding by the Honor Code. A student who has doubts about how the Honor Code applies to any assignment is responsible for obtaining specific guidance from the course instructor before submitting the assignment for evaluation. Ignorance of the rules does not excuse any member of the University community from the requirements and expectations of the Honor Code.

## Elementary functions.

In mathematical analysis the elementary functions are of fundamental importance. Basically, in practice, one operates with the elementary functions, and more complicated functions are approximated by them. The elementary functions can be considered not only for real but also for complex $ x $; the conception of these functions then becomes, in some sense, complete. In this connection an important branch of mathematics has arisen, called the theory of functions of a complex variable, or the theory of analytic functions (cf. Analytic function).

## Optimum Design Concepts

### 4.3.3 Optimality Conditions for Functions of Several Variables

For the general case of a function of several variables *f*(**x**) where **x** is an n-vector, we can repeat the derivation of *necessary and sufficient* conditions using the multidimensional form of Taylor's expansion:

$ f(\mathbf{x}^* + \mathbf{d}) = f(\mathbf{x}^*) + \nabla f(\mathbf{x}^*)^T \mathbf{d} + \tfrac{1}{2} \mathbf{d}^T \mathbf{H}(\mathbf{x}^*) \mathbf{d} + R $

Or, the change in the function is given as

$ \Delta f = \nabla f(\mathbf{x}^*)^T \mathbf{d} + \tfrac{1}{2} \mathbf{d}^T \mathbf{H}(\mathbf{x}^*) \mathbf{d} + R \qquad (4.33) $

If we assume a local minimum at **x** * then Δ*f* must be nonnegative due to the definition of a local minimum given in Inequality (4.1), i.e., Δ*f* ≥ 0. Concentrating only on the first-order term in Eq. (4.33), we observe (as before) that Δ*f* can be nonnegative for all possible **d** only when

$ \nabla f(\mathbf{x}^*) = \mathbf{0} \qquad (4.34) $

That is, the gradient of the function at **x** * must be zero. In the component form, this necessary condition becomes

$ \frac{\partial f(\mathbf{x}^*)}{\partial x_i} = 0, \quad i = 1, \ldots, n \qquad (4.35) $

Points satisfying Eq. (4.35) are called *stationary points*. Considering the second term in Eq. (4.33) evaluated at a stationary point, the positivity of Δ*f* is assured if

$ \mathbf{d}^T \mathbf{H}(\mathbf{x}^*) \mathbf{d} > 0 \qquad (4.36) $

for all **d** ≠ **0**. This will be true if the Hessian **H**(**x** * ) is a positive definite matrix (see Section 4.2), which is then the sufficient condition for a local minimum of *f*(**x**) at **x** * . Conditions (4.35) and (4.36) are the multidimensional equivalent of Conditions (4.29) and (4.31), respectively. We summarize the development of this section in Theorem 4.4.

*Theorem 4.4 Necessary and Sufficient Conditions for Local Minimum*

*Necessary condition*. If *f*(**x**) has a local minimum at **x** * , then

$ \nabla f(\mathbf{x}^*) = \mathbf{0} $

*Second-order necessary condition*. If *f*(**x**) has a local minimum at **x** * , then the Hessian matrix of Eq. (4.5),

$ \mathbf{H}(\mathbf{x}^*) = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right] $

is positive semidefinite or positive definite at the point **x** * .

*Second-order sufficiency condition*. If the matrix **H**(**x** * ) is positive definite at the stationary point **x** * , then **x** * is a local minimum point for the function *f*(**x**).

Note that if **H**(**x** * ) at the stationary point **x** * is indefinite, then **x** * is neither a local minimum nor a local maximum point, because the second-order necessary condition is violated for both cases. Such stationary points are called *inflection points*. Also, if **H**(**x** * ) is at least positive semidefinite, then **x** * cannot be a local maximum, since that would violate the second-order necessary condition for a local maximum of *f*(**x**). In other words, a point cannot be a local minimum and a local maximum simultaneously. The optimality conditions for a function of a single variable and a function of several variables are summarized in Table 4-1.

TABLE 4-1 . Optimality Conditions for Unconstrained Problems

Function of one variable: minimize f(x) | Function of several variables: minimize f(x) |
---|---|
First-order necessary condition: f′ = 0. Any point satisfying this condition is called a stationary point; it can be a local minimum, a local maximum, or neither of the two (an inflection point) | First-order necessary condition: ∇f = 0. Any point satisfying this condition is called a stationary point; it can be a local minimum, a local maximum, or neither of the two (an inflection point) |
Second-order necessary condition for a local minimum: f″ ≥ 0 | Second-order necessary condition for a local minimum: H must be at least positive semidefinite |
Second-order necessary condition for a local maximum: f″ ≤ 0 | Second-order necessary condition for a local maximum: H must be at least negative semidefinite |
Second-order sufficient condition for a local minimum: f″ > 0 | Second-order sufficient condition for a local minimum: H must be positive definite |
Second-order sufficient condition for a local maximum: f″ < 0 | Second-order sufficient condition for a local maximum: H must be negative definite |
Higher-order necessary conditions for a local minimum or local maximum: calculate the lowest-order nonzero higher derivative; all odd-ordered derivatives below it must be zero | |
Higher-order sufficient condition for a local minimum: the first nonzero higher-order derivative must be even-ordered and positive | |
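The decision logic of Table 4-1 can be sketched numerically. The sketch below is not from the text: it approximates the gradient and Hessian by central finite differences (the chapter itself works analytically) and classifies a candidate point by the signs of the Hessian eigenvalues; the test functions are hypothetical examples.

```python
import numpy as np

def classify_stationary_point(f, x, h=1e-4, tol=1e-4):
    """Classify a candidate point via the conditions of Table 4-1:
    first check gradient ~ 0, then the eigenvalues of the Hessian,
    both approximated by central finite differences."""
    x = np.asarray(x, dtype=float)
    n = x.size
    grad = np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    if np.linalg.norm(grad) > tol:
        return "not stationary"
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > tol):
        return "local minimum"            # H positive definite (sufficient condition)
    if np.all(eig < -tol):
        return "local maximum"            # H negative definite
    if eig.min() < -tol and eig.max() > tol:
        return "inflection (indefinite Hessian)"
    return "inconclusive (semidefinite Hessian)"

# Hypothetical test functions:
print(classify_stationary_point(lambda x: x[0]**2 + x[1]**2, [0, 0]))  # local minimum
print(classify_stationary_point(lambda x: x[0]**2 - x[1]**2, [0, 0]))  # inflection (indefinite Hessian)
```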

Note also that these conditions involve derivatives of *f*(**x**) and not the value of the function. If we *add a constant* to *f*(**x**), the solution **x** * of the minimization problem remains unchanged, although the value of the cost function is altered. In a graph of *f*(**x**) versus **x**, adding a constant to *f*(**x**) changes the origin of the coordinate system but leaves the shape of the surface unchanged. Similarly, if we multiply *f*(**x**) by any positive constant the minimum point **x** * is unchanged but the value *f*(**x** * ) is altered. In a graph of *f*(**x**) versus **x** this is equi valent to a uniform change of scale of the graph along the *f*(**x**) axis, which again leaves the shape of the surface unaltered. Multiplying *f*(**x**) by a negative constant changes the minimum at **x** * to a maximum. We may use this property to convert maximization problems to minimization problems by multiplying *f*(**x**) by −1. The effect of scaling and adding a constant to a function is shown in Example 4.19 . In Examples 4.20 and 4.23 , the local minima for a function are found using optimality conditions, while in Examples 4.21 and 4.22 , the use of necessary conditions is explored.

Effects of Scaling or Adding a Constant to a Function

Discuss the effect of the preceding variations for the function *f*(*x*) = *x*² − 2*x* + 2.

**Solution**. Consider the graphs of Fig. 4-9. Figure 4-9(A) represents the function *f*(*x*) = *x*² − 2*x* + 2, which has a minimum at *x** = 1. Figures 4-9(B), (C), and (D) show the effect of adding a constant to the function [*f*(*x*) + 1], multiplying *f*(*x*) by a positive number [2*f*(*x*)], and multiplying it by a negative number [−*f*(*x*)]. In all cases, the stationary point remains unchanged.
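These invariances are easy to confirm numerically for the example function f(x) = x² − 2x + 2, whose minimum is at x* = 1. The grid search below is my own illustrative sketch, not a method from the text:

```python
def argmin_scan(g, lo=-3.0, hi=3.0, steps=60001):
    """Brute-force grid search for the minimizer; a sketch, not a real optimizer."""
    xs = [lo + k * (hi - lo) / (steps - 1) for k in range(steps)]
    return min(xs, key=g)

def argmax_scan(g, lo=-3.0, hi=3.0, steps=60001):
    """Maximizing g is minimizing -g."""
    return argmin_scan(lambda x: -g(x), lo, hi, steps)

f = lambda x: x * x - 2 * x + 2

print(round(argmin_scan(f), 6))                   # 1.0
print(round(argmin_scan(lambda x: f(x) + 1), 6))  # 1.0  (adding a constant)
print(round(argmin_scan(lambda x: 2 * f(x)), 6))  # 1.0  (positive scaling)
print(round(argmax_scan(lambda x: -f(x)), 6))     # 1.0  (minimum becomes a maximum)
```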

Local Minima for a Function of Two Variables Using Optimality Conditions

** Solution**. The necessary conditions for the problem give

These equations are linear in variables *x*_{1} and *x*_{2}. Solving the equations simultaneously, we get the stationary point as **x** * = (2.5, −1.5). To check if the stationary point is a local minimum, we evaluate **H** at **x** * .

By either of the tests of Theorems 4.2 and 4.3 (*M*_{1} = 2 > 0, *M*_{2} = 4 > 0, or λ_{1} = 5.236 > 0, λ_{2} = 0.764 > 0), **H** is positive definite at the stationary point **x** * . Thus, it is a local minimum with *f*(**x** * ) = 4.75. Figure 4-10 shows a few iso-cost curves for the function of this problem. It can be seen that the point (2.5, −1.5) is the minimum for the function.

As noted earlier, the optimality conditions can also be used to check the optimality of a given point. To illustrate this, let us check the optimality of the point (1, 2). At this point, the gradient vector is calculated as (4, 11), which is not zero. Therefore the first-order necessary condition for a local minimum or a local maximum is violated and the point is not a stationary point.
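The arithmetic of this example can be verified numerically. The quadratic below is not printed in this excerpt; it is an assumed reconstruction, chosen to be consistent with every value quoted above (stationary point (2.5, −1.5), f(x*) = 4.75, M₁ = 2, M₂ = 4, eigenvalues 5.236 and 0.764, and gradient (4, 11) at (1, 2)):

```python
import numpy as np

# Assumed reconstruction of the example's cost function (not shown in the excerpt):
def f(x1, x2):
    return x1**2 + 2*x1*x2 + 2*x2**2 - 2*x1 + x2 + 8

def grad(x1, x2):
    return np.array([2*x1 + 2*x2 - 2, 2*x1 + 4*x2 + 1])

H = np.array([[2.0, 2.0], [2.0, 4.0]])  # Hessian (constant for a quadratic)

print(grad(2.5, -1.5))                      # [0. 0.]  -> stationary point
print(f(2.5, -1.5))                         # 4.75
print(np.round(np.linalg.eigvalsh(H), 3))   # [0.764 5.236] -> positive definite
print(grad(1.0, 2.0))                       # [ 4. 11.]  -> (1, 2) is not stationary
```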

Local Minima for a Function of Two Variables Using Optimality Conditions

Find a local minimum point for the function

** Solution**. The necessary conditions for optimality are

Since neither *x*_{1} nor *x*_{2} can be zero (the function has singularity at *x*_{1} = 0 or *x*_{2} = 0), the preceding equation gives *x*_{1} = 250*x*_{2}. Substituting this into Eq. (b), we obtain *x*_{2} = 4. Therefore, *x*_{1} * = 1000, and *x*_{2} * = 4 is a stationary point for the function *f*(**x**). Using Eqs. (a) and (b), the Hessian matrix for *f*(**x**) at the point **x** * is given as

Eigenvalues of the above Hessian (without the constant of 1/4) are λ_{1} = 0.006 and λ_{2} = 500.002. Since both eigenvalues are positive, the Hessian of *f*(**x**) at the point **x** * is positive definite. Therefore, **x** * = (1000, 4) is a local minimum point with *f*(**x** * ) = 3000. Figure 4-12 shows some isocost curves for the function of this problem. It can be seen that *x*_{1} = 1000 and *x*_{2} = 4 is the minimum point. (Note that the horizontal and vertical scales are quite different in Fig. 4-12; this is done to obtain reasonable isocost curves.)

Cylindrical Tank Design Using Necessary Conditions

In Section 2.8, a minimum-cost cylindrical storage tank problem is formulated. The tank is closed at both ends and is required to have volume *V*. The radius *R* and height *H* are selected as design variables. It is desired to design the tank having minimum surface area. For the solution we may simplify the cost function to the total surface area:

$ f = 2 \pi R^2 + 2 \pi R H \qquad (a) $

The volume constraint is an equality,

$ \pi R^2 H = V \qquad (b) $

This constraint cannot be satisfied if either *R* or *H* is zero. We may then neglect the non-negativity constraints on *R* and *H* if we agree to choose only the positive value for them. We may further use the equality constraint (b) to eliminate *H* from the cost function,

$ H = \frac{V}{\pi R^2} \qquad (c) $

Therefore, the cost function of Eq. (a) becomes

$ \bar{f} = 2 \pi R^2 + \frac{2V}{R} \qquad (d) $

This is an unconstrained problem in terms of *R* for which the necessary condition gives

$ \frac{d \bar{f}}{dR} = 4 \pi R - \frac{2V}{R^2} = 0 \qquad (e) $

so that the stationary point and the corresponding height are

$ R^* = \left( \frac{V}{2 \pi} \right)^{1/3} \qquad (f) $

$ H^* = \frac{V}{\pi (R^*)^2} = 2 R^* \qquad (g) $

Using Eq. (e), the second derivative of $ \bar{f} $ with respect to *R* at the stationary point is

$ \frac{d^2 \bar{f}}{dR^2} = 4 \pi + \frac{4V}{R^3} = 12 \pi $

Since the second derivative is positive for all positive *R*, the solution in Eqs. (f) and (g) is a local minimum. Using Eqs. (a) or (d), the cost function at the optimum is given as

$ \bar{f}^* = 6 \pi (R^*)^2 $
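For the tank problem, eliminating H via the volume constraint leaves the area as a function of R alone; setting its derivative to zero gives R* = (V/2π)^(1/3), and the optimal height works out to twice the radius. A quick numeric check (V = 100 is an arbitrary illustrative choice, not a value from the text):

```python
import math

V = 100.0  # required volume; an arbitrary value chosen for illustration

def area(R):
    """Total surface area with H eliminated: 2*pi*R^2 + 2*V/R."""
    return 2 * math.pi * R**2 + 2 * V / R

# Necessary condition d(area)/dR = 4*pi*R - 2*V/R**2 = 0:
R_star = (V / (2 * math.pi)) ** (1 / 3)
H_star = V / (math.pi * R_star**2)        # height recovered from the constraint

print(round(H_star / R_star, 6))          # 2.0 -- optimal height equals the diameter
d2 = 4 * math.pi + 4 * V / R_star**3      # second derivative, positive for R > 0
print(d2 > 0)                             # True
```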

Numerical Solution of Necessary Conditions

Find stationary points for the following function and check sufficiency conditions for them:

**Solution**. The function is plotted in Fig. 4-11. It can be seen that there are three stationary points: *x* = 0 (Point A), *x* between 1 and 2 (Point C), and *x* between −1 and −2 (Point B). The point *x* = 0 is a local maximum for the function, and the other two are local minima.

The necessary condition is

$ f^{\prime}(x) = \frac{2x}{3} - \sin x = 0 \qquad (b) $

It can be seen that *x* = 0 satisfies Eq. (b), so it is a stationary point. We must find the other roots of Eq. (b). Finding an analytical solution for the equation is difficult, so we must use numerical methods. We can either plot *f*′(*x*) versus *x* and locate the point where *f*′(*x*) = 0, or use a numerical method for solving nonlinear equations. One such method, the *Newton-Raphson method*, is given in Appendix C. By either of the two methods, we find that *x* * = 1.496 and −1.496 satisfy *f*′(*x*) = 0 in Eq. (b), so these are additional stationary points. To determine whether they are local minimum, maximum, or inflection points, we must evaluate *f*″ at the stationary points and use the sufficient conditions of Theorem 4.4. Since *f*″ = 2/3 − cos *x*, we have:

1. *x* * = 0: *f*″ = −1/3 < 0, so this is a local maximum with *f*(0) = 1.
2. *x* * = 1.496: *f*″ = 0.592 > 0, so this is a local minimum with *f*(1.496) = 0.821.
3. *x* * = −1.496: *f*″ = 0.592 > 0, so this is a local minimum with *f*(−1.496) = 0.821.

These results agree with the graphical solutions observed in Fig. 4-11. Note that *x* * = 1.496 and −1.496 are actually global minimum points for the function, even though the function is unbounded and the feasible set is not closed. Note also that there is no global maximum point for the function, since the function is unbounded and *x* is allowed to have any value.
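The Newton-Raphson search described above can be sketched as follows. The function of this example is not reproduced in the excerpt; f(x) = x²/3 + cos x is an assumed reconstruction, consistent with the quoted values f(0) = 1, f″(x) = 2/3 − cos x, and the roots ±1.496:

```python
import math

# Assumed reconstruction of the example's function (not printed in this excerpt):
# f(x) = x**2/3 + cos(x), so f'(x) = 2x/3 - sin(x) and f''(x) = 2/3 - cos(x).
f   = lambda x: x**2 / 3 + math.cos(x)
fp  = lambda x: 2 * x / 3 - math.sin(x)
fpp = lambda x: 2 / 3 - math.cos(x)

def newton(x, iters=50, tol=1e-12):
    """Newton-Raphson iteration for f'(x) = 0 (cf. Appendix C in the source)."""
    for _ in range(iters):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

root = newton(1.0)
print(round(root, 3), fpp(root) > 0, round(f(root), 3))   # 1.496 True 0.821
```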

## Contents

If a function $ f $ is Riemann integrable on an interval $ [ a , b ] $, then the function $ F $ defined by

$ F(x) = \int\limits_a^x f(t) \, dt, \quad a \leq x \leq b, $

is continuous on this interval. If, in addition, $ f $ is continuous at a point $ x_0 $, then $ F $ is differentiable at this point and $ F^{\prime}(x_0) = f(x_0) $. In other words, at the points of continuity of a function the following formula holds:

$ \frac{d}{dx} \int\limits_a^x f(t) \, dt = f(x). $

Consequently, this formula holds for every Riemann-integrable function on an interval $ [ a , b ] $, except perhaps at a set of points having Lebesgue measure zero, since if a function is Riemann integrable on some interval, then its set of points of discontinuity has measure zero. Thus, if the function $ f $ is continuous on $ [ a , b ] $, then the function $ F $ defined by

$ F(x) = \int\limits_a^x f(t) \, dt $

is a primitive of $ f $ on this interval. This theorem shows that the operation of differentiation is inverse to that of taking the definite integral with a variable upper limit, and in this way a relationship is established between definite and indefinite integrals:

$ \int f(x) \, dx = \int\limits_a^x f(t) \, dt + C. $

The geometric meaning of this relationship is that the problem of finding the tangent to a curve and the calculation of the area of plane figures are inverse operations in the above sense.

The following Newton–Leibniz formula holds for any primitive $ F $ of an integrable function $ f $ on an interval $ [ a , b] $:

$ \int\limits_a^b f(x) \, dx = F(b) - F(a). $

It shows that the definite integral of a continuous function over some interval is equal to the difference of the values at the end points of this interval of any primitive of it. This formula is sometimes taken as the definition of the definite integral. Then it is proved that the integral $ \int\limits_a^b f(x) \, dx $ introduced in this way is equal to the limit of the corresponding integral sums.
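The Newton–Leibniz formula is easy to check numerically: approximate the left-hand side by a Riemann sum and compare with F(b) − F(a). A small sketch with f = cos and primitive F = sin (my choice of example, not the article's):

```python
import math

def riemann(f, a, b, n=100000):
    """Midpoint Riemann sum approximating the definite integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# Newton-Leibniz: integral_a^b f(x) dx = F(b) - F(a), with f = cos and F = sin.
a, b = 0.0, math.pi / 2
lhs = riemann(math.cos, a, b)
rhs = math.sin(b) - math.sin(a)
print(abs(lhs - rhs) < 1e-8)   # True
```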


For definite integrals, the formulas for change of variables and integration by parts hold. Suppose, for example, that the function $ f $ is continuous on the interval $ (a, b) $ and that $ \phi $ is continuous together with its derivative $ \phi^{\prime} $ on the interval $ (\alpha, \beta) $, where $ (\alpha, \beta) $ is mapped by $ \phi $ into $ (a, b) $: $ a < \phi(t) < b $ for $ \alpha < t < \beta $, so that the composite $ f \circ \phi $ is meaningful on $ (\alpha, \beta) $. Then, for $ \alpha_0, \beta_0 \in (\alpha, \beta) $, the following formula for change of variables holds:

$ \int\limits_{\phi(\alpha_0)}^{\phi(\beta_0)} f(x) \, dx = \int\limits_{\alpha_0}^{\beta_0} f(\phi(t)) \, \phi^{\prime}(t) \, dt. $

The formula for integration by parts is:

$ \int\limits_a^b u(x) \, dv(x) = \left. u(x) v(x) \right|_a^b - \int\limits_a^b v(x) \, du(x), $

where the functions $ u $ and $ v $ have Riemann-integrable derivatives on $ [ a , b ] $.
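The integration-by-parts formula can likewise be checked numerically. Here u(x) = x and v(x) = eˣ are an illustrative choice of mine, not from the article:

```python
import math

def riemann(f, a, b, n=200000):
    """Midpoint Riemann sum approximating the definite integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# u = x, dv = e^x dx:  integral u dv = [u v]_a^b - integral v du
a, b = 0.0, 1.0
lhs = riemann(lambda x: x * math.exp(x), a, b)   # integral of x e^x
boundary = b * math.exp(b) - a * math.exp(a)     # [x e^x]_a^b
rhs = boundary - riemann(math.exp, a, b)         # minus integral of e^x du = e^x dx
print(abs(lhs - rhs) < 1e-7)                     # True
```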

The Newton–Leibniz formula reduces the calculation of a definite integral to finding the values of a primitive. Since the problem of finding a primitive is intrinsically a difficult one, other methods of finding definite integrals are of great importance, among which one should mention the method of residues (cf. Residue of an analytic function; Complex integration, method of) and the method of differentiation or integration with respect to the parameter of a parameter-dependent integral. Numerical methods for the approximate computation of integrals have also been developed.

Generalizing the notion of an integral to the case of unbounded functions and to the case of an unbounded interval leads to the notion of the improper integral, which is defined by yet one more limit transition.

The notions of the indefinite and the definite integral carry over to complex-valued functions. The representation of any holomorphic function of a complex variable in the form of a Cauchy integral over a contour played an important role in the development of the theory of analytic functions.

The generalization of the notion of the definite integral of a function of a single variable to the case of a function of several variables leads to the notion of a multiple integral.

For unbounded sets and unbounded functions of several variables, one is led to the notion of the improper integral, as in the one-dimensional case.

The extension of the practical applications of integral calculus necessitated the introduction of the notions of the curvilinear integral, i.e. the integral along a curve, the surface integral, i.e. the integral over a surface, and, more generally, the integral over a manifold, which are reducible in some sense to a definite integral (the curvilinear integral reduces to an integral over an interval, the surface integral to an integral over a (plane) region, and the integral over an $ n $-dimensional manifold to an integral over an $ n $-dimensional region). Integrals over manifolds, in particular curvilinear and surface integrals, play an important role in the integral calculus of functions of several variables; by this means a relationship is established between integration over a region and integration over its boundary or, in the general case, over a manifold and its boundary. This relationship is established by the Stokes formula (see also Ostrogradski formula; Green formulas), which is a generalization of the Newton–Leibniz formula to the multi-dimensional case.

Multiple, curvilinear and surface integrals find direct application in mathematical physics, particularly in field theory. Multiple integrals and concepts related to them are widely used in the solution of specific applied problems. The theory of cubature formulas (cf. Cubature formula) has been developed for the numerical calculation of multiple integrals.

The theory and methods of integral calculus of real- or complex-valued functions of a finite number of real or complex variables carry over to more general objects. For example, the theory of integration of functions whose values lie in a normed linear space, functions defined on topological groups, generalized functions, and functions of an infinite number of variables (integrals over trajectories). Finally, a new direction in integral calculus is related to the emergence and development of constructive mathematics.

Integral calculus is applied in many branches of mathematics (in the theory of differential and integral equations, in probability theory and mathematical statistics, in the theory of optimal processes, etc.), and in its applications. For references, see also Differential calculus.

## Math 118 - Fundamental Principles of the Calculus

We use a custom version of this textbook that includes two additional sections on Double Integrals at the end. In bound copies of the 5th edition, this will appear as a supplemental booklet from the publisher.

**Course Description:** **(4 units)** Derivatives; extrema. The definite integral; the fundamental theorem of calculus. Extrema and definite integrals for functions of several variables. Not available for credit toward a degree in mathematics.

**Prerequisites:** MATH 108 or MATH 117 or placement exam in MATH.

Sections | Topics | Approx. Lectures |
---|---|---|
1.1-1.9 | Prerequisites, Intro to Business Vocabulary | 4 |
2.1-2.5 | The Derivative: Definitions and Interpretations | 4 |
3.1-3.4 | The Derivative Rules | 3 |
4.1-4.5 | Applications of the Derivative, Extrema | 8 |
5.1-5.6 | The Definite Integral: Definitions and Interpretations | 5 |
6.1-6.3, 6.5-6.7 | Antiderivatives, the Fundamental Theorem, and Applications | 7 |
8.1-8.5 | Functions of Two Variables, Partial Derivatives, and Extrema | 6 |
11.1-11.2/16.1-16.2* | Double Integrals | 3 |
Total | | 40 |

**Optional:** 4.6, 4.7, 4.8, 6.4, 8.6

**Omitted:** 1.10, 3.5; Chapters 7, 9, 10.

*These section numbers refer to the USC custom printing/Wiley original versions

There are 42 lecture days in each semester, so 2 lectures are available for hour exams.

On the Fall '19 final exam, students were allowed to use, and were told to expect to need, a scientific calculator (e.g., the TI-30X). Graphing calculators and phones were not permitted. Students were allowed to prepare and bring to the final exam both sides of an 8.5 x 11 sheet of notes, written in their own hand. The group of MATH 118 instructors agreed that material on the optional sections would be included on the final if all instructors of the course covered those sections.

## Math 231 Calculus of Several Variables

**Blue Book Description:** Analytic geometry in space; partial differentiation and applications. Students who have passed MATH 230 may not schedule this course.

**Pre-requisites:** MATH 141 or MATH 141H

**Pre-requisite for:** MATH 412, MATH 414, MATH 419, MATH 451

Bachelor of Arts: Quantification

**Suggested Textbook:**

Multivariate Calculus: Early Transcendentals, 8th edition, by James Stewart, published by Cengage. *Check with your instructor to make sure this is the textbook used for your section.*

**Topics:**

Chapter 12: Vectors and the Geometry of Space

12.1 Three-Dimensional Coordinate Systems

12.2 Vectors

12.3 The Dot Product

12.4 The Cross Product

12.5 Equations of Lines and Planes

12.6 Cylinders and Quadric Surfaces

Chapter 13: Vector Functions

13.1 Vector Functions and Space Curves

13.2 Derivatives and Integrals of Vector Functions

13.3 Arc Length and Curvature

13.4 Motion in Space: Velocity and Acceleration

Chapter 14: Partial Derivatives

14.1 Functions of Several Variables

14.2 Limits and Continuity

14.3 Partial Derivatives

14.4 Tangent Planes and Linear Approximations

14.5 The Chain Rule

14.6 Directional Derivatives and the Gradient Vector

14.7 Maximum and Minimum Values

14.8 Lagrange Multipliers

## Strategy for Evaluating

**(a)** If the power of sine is odd (m = 2k + 1), save one sine factor and use the identity sin²x + cos²x = 1 to convert the remaining factors in terms of cosine; then substitute u = cos x. **(b)** If the power of cosine is odd (n = 2k + 1), save one cosine factor and use the identity sin²x + cos²x = 1 to convert the remaining factors in terms of sine; then substitute u = sin x. **(c)** If the powers of both sine and cosine are even, then use the half-angle identities.

In some cases it may be helpful to use the identity sin x cos x = (1/2) sin 2x.

Now that we have learned strategies for evaluating integrals with factors of sine and cosine, we can use similar techniques to evaluate integrals with factors of tangent and secant. Using the identity **sec²x = 1 + tan²x**, we are able to convert even powers of secant to tangent and vice versa. We now consider two examples to illustrate two common strategies used to evaluate integrals of the form ∫ tan^m x sec^n x dx.

Suppose we have an integral such as

Observing that (d/dx) tan x = sec²x, we can separate a factor of sec²x and still be left with an even power of secant. Using the identity **sec²x = 1 + tan²x**, we can convert the remaining sec²x to an expression involving tangent. Thus we have:

Then substitute u = tan x to obtain:

__Note:__ Suppose we tried to use the substitution u = sec x; then du = sec x tan x dx. When we separate out a factor of sec x tan x, we are left with an odd power of tangent, which is not easily converted to secant.

Since (d/dx) sec x = sec x tan x, we can separate a factor of sec x tan x and still be left with an even power of tangent, which we can easily convert to an expression involving secant using the identity **sec²x = 1 + tan²x**. Thus we have:

Then substitute u = sec x to obtain:

__Note:__ Suppose we tried to use the substitution u = tan x; then du = sec²x dx. When we separate out a factor of sec²x, we are left with an odd power of secant, which is not easily converted to tangent.
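The u = tan x strategy can be sanity-checked numerically. The specific integrals worked above are not reproduced in this excerpt, so ∫ sec⁴x dx is my own example: splitting off sec²x gives sec⁴x = (1 + tan²x) sec²x, whose antiderivative is tan x + tan³x/3 + C.

```python
import math

def riemann(f, a, b, n=200000):
    """Midpoint Riemann sum approximating the definite integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

sec4 = lambda x: 1 / math.cos(x) ** 4                 # integrand sec^4(x)
F = lambda x: math.tan(x) + math.tan(x) ** 3 / 3      # antiderivative via u = tan x

a, b = 0.0, 1.0
print(abs(riemann(sec4, a, b) - (F(b) - F(a))) < 1e-6)   # True
```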

## Antiderivatives

Three examples of a type of problem that arises in various contexts are the following: find the cost function C(x) if the marginal cost C′(x) is known; find the population P(t) of a biological colony if the rate P′(t) at which the population is changing is known; find the displacement s(t) of an object at time t if the velocity v(t) = s′(t) is known.

Notice that all these problems share the same basic format: to find f(x), given f′(x). All such problems are solved by antidifferentiation. An elementary example from business is the case of a manufacturer who determines that, over an initial period of production, the marginal cost of production increases linearly and is given by C′(x) = 2x. We shall try to find a corresponding cost function C(x) for which C′(x) = 2x. Although we have no analytical procedures for finding such a C(x), it should be clear that the cost function C(x) = x² will give us the known marginal cost C′(x) = 2x. But other cost functions will work as well. For example,

and in fact for any number a ,

Thus, any cost function of the form C(x) = x² + a will give the desired marginal cost C′(x) = 2x; more information is needed to determine a specific value for a. We shall return to this in a moment. The process we are now considering is called antidifferentiation. In a general setting, it can be stated as follows:

**Definition** For a given function f(x), a function g such that

g′(x) = f(x)

is called an antiderivative of f. The process of finding such a function g is called antidifferentiation. Some mathematicians prefer to call this process indefinite integration, or simply integration, for reasons that will become apparent in later sections.

In our introductory example, each of the cost functions x², x² + 1, and x² + 10 is an antiderivative of f(x) = 2x; moreover, C(x) = x² + a is an antiderivative of f(x) = 2x for any choice of a. In general, whenever g(x) is an antiderivative of f(x), so is g(x) + a for any number a, since

It is possible to prove the following even stronger result:

If g is any antiderivative of f, then every other antiderivative of f must have the form g(x) + a for some number a.

Thus, we can think of g(x) + a as the most general antiderivative of f. Consequently, the most general antiderivative of f is not a single function but rather a class of functions g (x) + a that depend on a.

The German mathematician Gottfried Wilhelm Leibniz (1646-1716) introduced the notation

∫ f(x) dx

(read as "the antiderivative of f" or "the indefinite integral of f") to represent the most general antiderivative of f. Thus, if g is any antiderivative of f, then

∫ f(x) dx = g(x) + a

for any number a.

**Example 2 **

**Example 3 **

The number a that arises in antidifferentiation is often called an "arbitrary constant." (For reasons which will become apparent later, it is also called a "constant of integration.") In our examples we have used the letter *a* to designate this constant, but in practice c is usually used. (We used the letter *a* instead of *c* for our initial illustration involving cost since c was used to denote cost.) The following example gives us an insight into the significance of this arbitrary constant.

Suppose that during the initial stages of production the marginal cost to produce a commodity is C '(x) = 2x dollars per unit. This time, suppose the manufacturer also knows that the fixed cost of production, C(0), is $500. Find the corresponding cost function C (x).

We have already seen that any cost function for this marginal cost must be of the form C(x) = x² + a for some constant a. Since

C(0) = 0² + a = 500,

we have a = 500. Thus, the cost function is given by C(x) = x² + 500.
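The example can be replayed numerically: integrate the marginal cost C′(x) = 2x from 0 and add the fixed cost C(0) = 500. This Riemann-sum sketch is mine, not the book's method:

```python
def cost(x, n=100000):
    """C(x) = C(0) + integral_0^x C'(t) dt with C'(t) = 2t and C(0) = 500,
    approximated by a midpoint Riemann sum."""
    Cprime = lambda t: 2 * t
    h = x / n
    return 500 + sum(Cprime((k + 0.5) * h) for k in range(n)) * h

print(round(cost(10.0), 6))   # 600.0, matching C(10) = 10**2 + 500
```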

From this example, we see that the arbitrary constant is the fixed cost of production. Knowing only the marginal cost cannot tell us what that fixed cost is; the fixed cost is additional information. Each of the cost functions corresponding to a marginal cost of C′(x) = 2x will have the form C(x) = x² + c for some constant c.

The following two results are very useful in the evaluation of antiderivatives. Here, n denotes a real number and c is a constant of integration.

∫ x^n dx = x^(n+1)/(n + 1) + c, for n ≠ −1 (2)

∫ x^(−1) dx = ln |x| + c (3)

Note that Rule (2) holds for n ≠ −1, and Rule (3) covers the case n = −1. To verify Rule (2), we use Definition (1) as follows:
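A quick numerical verification of Rules (2) and (3): differentiate the claimed antiderivatives by a central difference and compare with the integrands. This is a sketch of mine; the text's own verification applies Definition (1) directly.

```python
import math

def deriv(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

n, x = 3, 2.0
g2 = lambda t: t ** (n + 1) / (n + 1)      # Rule (2) antiderivative of t**n
print(abs(deriv(g2, x) - x ** n) < 1e-5)   # True

g3 = lambda t: math.log(abs(t))            # Rule (3) antiderivative of 1/t
print(abs(deriv(g3, x) - 1 / x) < 1e-5)    # True
```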