1. Partial Derivatives

1.1 Definition

Let $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$. The partial derivative of $f$ with respect to $x_i$ at $\mathbf{a} = (a_1, \ldots, a_n)$ is

$$\frac{\partial f}{\partial x_i}(\mathbf{a}) = \lim_{h \to 0} \frac{f(a_1, \ldots, a_i + h, \ldots, a_n) - f(a_1, \ldots, a_n)}{h},$$

provided the limit exists. This is the rate of change of $f$ in the direction of the $x_i$-axis, holding all other variables fixed.
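The limit definition translates directly into a finite-difference estimate. A minimal Python sketch (the helper `partial` and the test function are our own choices for illustration):

```python
def partial(f, a, i, h=1e-6):
    """Central-difference estimate of the i-th partial derivative of f at a (a tuple)."""
    ap = list(a); ap[i] += h
    am = list(a); am[i] -= h
    return (f(*ap) - f(*am)) / (2 * h)

# Illustrative test function: f(x, y) = x^2 y, so f_x = 2xy and f_y = x^2.
f = lambda x, y: x**2 * y
px = partial(f, (3.0, 2.0), 0)
py = partial(f, (3.0, 2.0), 1)
print(px, py)   # ≈ 12.0, 9.0
```

The central difference is exact for quadratics, so the agreement here is limited only by floating-point rounding.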
1.2 Clairaut's Theorem

Theorem 1.1 (Clairaut's Theorem / Schwarz's Theorem). If $f_{xy}$ and $f_{yx}$ are continuous on an open set containing $(a, b)$, then

$$\frac{\partial^2 f}{\partial x\, \partial y}(a,b) = \frac{\partial^2 f}{\partial y\, \partial x}(a,b).$$
1.3 Differentiability

Definition. $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$ is differentiable at $\mathbf{a}$ if there exists a linear map $L : \mathbb{R}^n \to \mathbb{R}$ such that

$$\lim_{\mathbf{h} \to \mathbf{0}} \frac{f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) - L(\mathbf{h})}{\lVert \mathbf{h} \rVert} = 0.$$

When $f$ is differentiable at $\mathbf{a}$, the linear map $L$ is given by the gradient: $L(\mathbf{h}) = \nabla f(\mathbf{a}) \cdot \mathbf{h}$.
1.4 The Gradient

The gradient of $f$ at $\mathbf{a}$ is

$$\nabla f(\mathbf{a}) = \left(\frac{\partial f}{\partial x_1}(\mathbf{a}), \ldots, \frac{\partial f}{\partial x_n}(\mathbf{a})\right).$$

The linear approximation of $f$ near $\mathbf{a}$ is

$$f(\mathbf{a} + \mathbf{h}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a}) \cdot \mathbf{h}.$$

Theorem 1.2. If all partial derivatives of $f$ exist and are continuous in a neighbourhood of $\mathbf{a}$, then $f$ is differentiable at $\mathbf{a}$.
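The linear approximation can be checked numerically: estimate the gradient by central differences, then compare $f(\mathbf{a} + \mathbf{h})$ against the first-order prediction. A sketch with a smooth test function of our own choosing:

```python
import math

def grad(f, a, h=1e-6):
    """Central-difference gradient of f at the point a (a tuple)."""
    g = []
    for i in range(len(a)):
        ap = list(a); ap[i] += h
        am = list(a); am[i] -= h
        g.append((f(*ap) - f(*am)) / (2 * h))
    return g

f = lambda x, y: math.exp(x) * math.sin(y)   # ∇f = (e^x sin y, e^x cos y)
a = (0.3, 1.1)
g = grad(f, a)

step = (1e-3, -2e-3)
approx = f(*a) + g[0]*step[0] + g[1]*step[1]   # f(a) + ∇f(a)·h
exact = f(a[0] + step[0], a[1] + step[1])
print(abs(exact - approx))   # second-order small in ‖h‖
```

The residual is on the order of $\lVert\mathbf{h}\rVert^2$, exactly as the definition of differentiability predicts.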
1.5 Directional Derivatives

The directional derivative of $f$ at $\mathbf{a}$ in the direction of a unit vector $\mathbf{u}$ is

$$D_{\mathbf{u}} f(\mathbf{a}) = \lim_{h \to 0} \frac{f(\mathbf{a} + h\mathbf{u}) - f(\mathbf{a})}{h}.$$

Theorem 1.3. If $f$ is differentiable at $\mathbf{a}$, then

$$D_{\mathbf{u}} f(\mathbf{a}) = \nabla f(\mathbf{a}) \cdot \mathbf{u}.$$

Corollary 1.4. The gradient points in the direction of steepest ascent, and $\lVert \nabla f \rVert$ is the rate of steepest ascent.
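Theorem 1.3 can be verified by comparing the difference-quotient definition against the dot-product formula. A sketch using a quadratic test function (our choice, for which central differences are essentially exact):

```python
def directional(f, a, u, h=1e-6):
    """Central-difference estimate of D_u f(a) for a unit vector u."""
    fp = f(*(ai + h*ui for ai, ui in zip(a, u)))
    fm = f(*(ai - h*ui for ai, ui in zip(a, u)))
    return (fp - fm) / (2 * h)

f = lambda x, y: x**2 + 3*x*y        # ∇f = (2x + 3y, 3x)
a, u = (1.0, 2.0), (0.6, 0.8)        # u is a unit vector: 0.36 + 0.64 = 1
via_gradient = (2*1.0 + 3*2.0)*0.6 + (3*1.0)*0.8   # ∇f(a)·u = 7.2
print(directional(f, a, u), via_gradient)   # both ≈ 7.2
```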
1.6 Chain Rule

Theorem 1.5 (Multivariable Chain Rule). If $\mathbf{g} : \mathbb{R}^m \to \mathbb{R}^n$ is differentiable at $\mathbf{a}$ and $f : \mathbb{R}^n \to \mathbb{R}$ is differentiable at $\mathbf{g}(\mathbf{a})$, then

$$\nabla (f \circ \mathbf{g})(\mathbf{a}) = J\mathbf{g}(\mathbf{a})^T\, \nabla f(\mathbf{g}(\mathbf{a})),$$

where $J\mathbf{g}$ is the Jacobian matrix of $\mathbf{g}$.
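The formula can be sanity-checked against direct differentiation of the composite. A sketch with hand-picked maps (our choice): $\mathbf{g}(s,t) = (st,\, s + t^2)$ and $f(x,y) = x^2 y$:

```python
# g(s, t) = (s·t, s + t²), so Jg = [[t, s], [1, 2t]];  f(x, y) = x²y, ∇f = (2xy, x²).
def composed(s, t):
    x, y = s*t, s + t*t
    return x*x*y

s, t = 1.5, 0.5
x, y = s*t, s + t*t                      # g(a)
grad_f = (2*x*y, x*x)                    # ∇f(g(a))
chain = (t*grad_f[0] + 1*grad_f[1],      # Jg(a)^T ∇f(g(a))
         s*grad_f[0] + 2*t*grad_f[1])

h = 1e-6
num = ((composed(s+h, t) - composed(s-h, t)) / (2*h),
       (composed(s, t+h) - composed(s, t-h)) / (2*h))
print(chain, num)   # both ≈ (1.875, 4.5)
```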
1.7 Worked Example

Problem. Let $f(x, y) = x^2 y + \sin(xy)$. Compute $\nabla f$ and find the directional derivative at $(1, \pi)$ in the direction $\mathbf{u} = (1/\sqrt{2}, 1/\sqrt{2})$.

Solution.

$$\frac{\partial f}{\partial x} = 2xy + y\cos(xy), \qquad \frac{\partial f}{\partial y} = x^2 + x\cos(xy)$$

$$\nabla f(1, \pi) = (2\pi + \pi\cos\pi,\; 1 + \cos\pi) = (2\pi - \pi,\; 1 - 1) = (\pi, 0)$$

$$D_{\mathbf{u}} f(1, \pi) = \nabla f(1, \pi) \cdot \mathbf{u} = \pi \cdot \frac{1}{\sqrt{2}} + 0 = \frac{\pi}{\sqrt{2}} \qquad \blacksquare$$
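The worked example is easy to double-check numerically (a sketch; step size and tolerances are our choice):

```python
import math

f = lambda x, y: x*x*y + math.sin(x*y)
h = 1e-6
fx = (f(1.0 + h, math.pi) - f(1.0 - h, math.pi)) / (2*h)   # ≈ π
fy = (f(1.0, math.pi + h) - f(1.0, math.pi - h)) / (2*h)   # ≈ 0
D_u = fx/math.sqrt(2) + fy/math.sqrt(2)
print(fx, fy, D_u, math.pi/math.sqrt(2))   # D_u ≈ π/√2 ≈ 2.2214
```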
2. Multiple Integrals

2.1 Double Integrals

The double integral of $f$ over a rectangle $R = [a,b] \times [c,d]$ is defined as the limit of Riemann sums:

$$\iint_R f(x,y)\, dA = \lim_{\lVert P \rVert \to 0} \sum_{i,j} f(x_{ij}^*, y_{ij}^*)\, \Delta A_{ij}.$$

Theorem 2.1 (Fubini's Theorem). If $f$ is continuous on $R = [a,b] \times [c,d]$, then

$$\iint_R f(x,y)\, dA = \int_a^b \left(\int_c^d f(x,y)\, dy\right) dx = \int_c^d \left(\int_a^b f(x,y)\, dx\right) dy.$$
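Fubini's theorem can be illustrated by computing both iterated integrals with a one-dimensional midpoint rule. A sketch (the integrand $x e^y$ over $[0,1]^2$ is our choice; its exact value is $(e-1)/2$):

```python
import math

def integral_1d(g, a, b, n=500):
    """Composite midpoint rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5)*h) for i in range(n)) * h

f = lambda x, y: x * math.exp(y)
dy_then_dx = integral_1d(lambda x: integral_1d(lambda y: f(x, y), 0, 1), 0, 1)
dx_then_dy = integral_1d(lambda y: integral_1d(lambda x: f(x, y), 0, 1), 0, 1)
print(dy_then_dx, dx_then_dy, (math.e - 1)/2)   # all ≈ 0.859141
```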
2.2 General Regions

For a general region $D$ in $\mathbb{R}^2$:

Type I region: $D = \{(x,y) : a \leq x \leq b,\; g_1(x) \leq y \leq g_2(x)\}$

$$\iint_D f\, dA = \int_a^b \int_{g_1(x)}^{g_2(x)} f(x,y)\, dy\, dx$$

Type II region: $D = \{(x,y) : c \leq y \leq d,\; h_1(y) \leq x \leq h_2(y)\}$

$$\iint_D f\, dA = \int_c^d \int_{h_1(y)}^{h_2(y)} f(x,y)\, dx\, dy$$
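A Type I region handled numerically, with the $y$-limits of the inner midpoint rule depending on $x$. Sketch (the example $\iint_D xy\, dA$ over the triangle $0 \leq y \leq x \leq 1$ is our choice; the exact value is $1/8$):

```python
def inner(x, n=500):
    """Midpoint rule for the inner integral of x·y dy from 0 to x, at fixed x."""
    h = x / n
    return sum(x * (j + 0.5) * h for j in range(n)) * h

n = 500
h = 1.0 / n
result = sum(inner((i + 0.5) * h) for i in range(n)) * h
print(result)   # ≈ 0.125
```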
2.3 Triple Integrals

Triple integrals extend naturally to $\mathbb{R}^3$:

$$\iiint_E f(x,y,z)\, dV = \iint_D \left(\int_{g_1(x,y)}^{g_2(x,y)} f(x,y,z)\, dz\right) dA.$$
2.4 Change of Variables

Theorem 2.2 (Change of Variables). Let $T : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a $C^1$ diffeomorphism with Jacobian determinant $J_T$. Then

$$\int_{T(D)} f(\mathbf{u})\, d\mathbf{u} = \int_D f(T(\mathbf{x}))\, |J_T(\mathbf{x})|\, d\mathbf{x}.$$

Polar coordinates: $x = r\cos\theta$, $y = r\sin\theta$, $|J| = r$.

$$\iint_D f(x,y)\, dA = \iint_{D'} f(r\cos\theta, r\sin\theta)\, r\, dr\, d\theta$$

Cylindrical coordinates: $x = r\cos\theta$, $y = r\sin\theta$, $z = z$, $|J| = r$.

Spherical coordinates: $x = \rho\sin\phi\cos\theta$, $y = \rho\sin\phi\sin\theta$, $z = \rho\cos\phi$, $|J| = \rho^2 \sin\phi$.
2.5 Worked Example

Problem. Compute $\iint_D (x^2 + y^2)\, dA$ where $D$ is the region bounded by $x^2 + y^2 = 4$.

Solution. Use polar coordinates. The region $D'$ is $0 \leq r \leq 2$, $0 \leq \theta \leq 2\pi$.

$$\iint_D (x^2 + y^2)\, dA = \int_0^{2\pi} \int_0^2 r^2 \cdot r\, dr\, d\theta = \int_0^{2\pi} \int_0^2 r^3\, dr\, d\theta = \int_0^{2\pi} \left[\frac{r^4}{4}\right]_0^2 d\theta = \int_0^{2\pi} 4\, d\theta = 8\pi \qquad \blacksquare$$
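The polar-coordinate computation can be confirmed with a midpoint rule over $(r, \theta)$, including the Jacobian factor $r$ (a sketch; the grid size is our choice):

```python
import math

n = 400
dr, dth = 2.0/n, 2*math.pi/n
total = 0.0
for i in range(n):
    r = (i + 0.5) * dr
    for j in range(n):
        th = (j + 0.5) * dth
        x, y = r*math.cos(th), r*math.sin(th)
        total += (x*x + y*y) * r * dr * dth   # integrand times the Jacobian r
print(total, 8*math.pi)   # both ≈ 25.1327
```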
3. Vector Calculus

3.1 Vector Fields

A vector field on $\mathbb{R}^n$ is a function $\mathbf{F} : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$. A vector field $\mathbf{F} = (P, Q, R)$ on $\mathbb{R}^3$ is conservative if there exists a scalar potential $\phi$ such that $\mathbf{F} = \nabla \phi$.

Theorem 3.1. On a simply connected domain, $\mathbf{F}$ is conservative if and only if $\nabla \times \mathbf{F} = \mathbf{0}$.
3.2 Line Integrals

Definition. The line integral of a vector field $\mathbf{F}$ along a curve $C$ parameterised by $\mathbf{r}(t)$ for $a \leq t \leq b$ is

$$\int_C \mathbf{F} \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\, dt.$$

Theorem 3.2 (Fundamental Theorem for Line Integrals). If $\mathbf{F} = \nabla \phi$ and $C$ is a piecewise smooth curve from $A$ to $B$, then

$$\int_C \mathbf{F} \cdot d\mathbf{r} = \phi(B) - \phi(A).$$

Corollary 3.3. The line integral of a conservative field around any closed curve is zero.
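Theorem 3.2 can be verified on a concrete conservative field: take $\phi(x,y) = x^2 y + \cos y$, so $\mathbf{F} = \nabla\phi = (2xy,\, x^2 - \sin y)$, and integrate along a quarter circle. A sketch (field and curve are our choice):

```python
import math

phi = lambda x, y: x*x*y + math.cos(y)

n = 2000
a, b = 0.0, math.pi/2        # r(t) = (cos t, sin t): from A = (1, 0) to B = (0, 1)
h = (b - a) / n
total = 0.0
for i in range(n):
    t = a + (i + 0.5)*h
    x, y = math.cos(t), math.sin(t)
    Fx, Fy = 2*x*y, x*x - math.sin(y)      # F = ∇φ
    dx, dy = -math.sin(t), math.cos(t)     # r'(t)
    total += (Fx*dx + Fy*dy) * h

print(total, phi(0.0, 1.0) - phi(1.0, 0.0))   # both ≈ cos(1) − 1 ≈ −0.4597
```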
3.3 Green's Theorem

Theorem 3.4 (Green's Theorem). Let $C$ be a positively oriented, piecewise smooth, simple closed curve bounding a region $D$. If $P$ and $Q$ have continuous partial derivatives on an open set containing $D$, then

$$\oint_C P\, dx + Q\, dy = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA.$$
Worked Example. Evaluate $\oint_C (x^2 - y)\, dx + (y^2 + x)\, dy$ where $C$ is the unit circle traversed counterclockwise.

Solution. By Green's theorem with $P = x^2 - y$ and $Q = y^2 + x$:

$$\frac{\partial Q}{\partial x} = 1, \qquad \frac{\partial P}{\partial y} = -1$$

$$\oint_C P\, dx + Q\, dy = \iint_D (1 - (-1))\, dA = 2 \iint_D dA = 2 \cdot \pi \cdot 1^2 = 2\pi \qquad \blacksquare$$
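The answer can be cross-checked by computing the line integral directly from the parameterisation $\mathbf{r}(t) = (\cos t, \sin t)$ (a sketch):

```python
import math

# ∮_C (x² − y) dx + (y² + x) dy around the unit circle; Green's theorem predicts 2π.
n = 2000
h = 2*math.pi/n
total = 0.0
for i in range(n):
    t = (i + 0.5)*h
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)     # r'(t)
    P, Q = x*x - y, y*y + x
    total += (P*dx + Q*dy) * h
print(total, 2*math.pi)   # both ≈ 6.28319
```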
3.4 Stokes' Theorem

Theorem 3.5 (Stokes' Theorem). Let $S$ be an oriented surface with piecewise smooth boundary curve $C$ (positively oriented). If $\mathbf{F}$ has continuous partial derivatives on an open set containing $S$, then

$$\oint_C \mathbf{F} \cdot d\mathbf{r} = \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S},$$

where $d\mathbf{S} = \mathbf{n}\, dS$ is the vector surface element with unit normal $\mathbf{n}$.
3.5 Divergence Theorem

Theorem 3.6 (Divergence Theorem / Gauss's Theorem). Let $E$ be a solid region bounded by a closed surface $S$ with outward normal $\mathbf{n}$. If $\mathbf{F}$ has continuous partial derivatives on an open set containing $E$, then

$$\iint_S \mathbf{F} \cdot d\mathbf{S} = \iiint_E \nabla \cdot \mathbf{F}\, dV,$$

where $\nabla \cdot \mathbf{F} = \dfrac{\partial P}{\partial x} + \dfrac{\partial Q}{\partial y} + \dfrac{\partial R}{\partial z}$ is the divergence of $\mathbf{F}$.
Worked Example. Compute the flux of $\mathbf{F} = (x^3, y^3, z^3)$ through the unit sphere $S$.

Solution. By the divergence theorem:

$$\nabla \cdot \mathbf{F} = 3x^2 + 3y^2 + 3z^2 = 3(x^2 + y^2 + z^2) = 3\rho^2$$

Using spherical coordinates:

$$\iiint_E 3\rho^2 \cdot \rho^2 \sin\phi\, d\rho\, d\phi\, d\theta = 3 \int_0^{2\pi} \int_0^{\pi} \int_0^1 \rho^4 \sin\phi\, d\rho\, d\phi\, d\theta = 3 \cdot 2\pi \cdot 2 \cdot \frac{1}{5} = \frac{12\pi}{5} \qquad \blacksquare$$
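The flux value can be confirmed by evaluating the spherical-coordinate triple integral with a midpoint rule; the $\theta$-integrand is constant, so that integral contributes a factor $2\pi$. A sketch:

```python
import math

# Flux of F = (x³, y³, z³) through the unit sphere via ∭ div F dV, div F = 3ρ².
n = 200
drho, dphi = 1.0/n, math.pi/n
total = 0.0
for i in range(n):
    rho = (i + 0.5) * drho
    for j in range(n):
        phi = (j + 0.5) * dphi
        # 3ρ² times the volume element ρ² sin φ dρ dφ dθ, with ∫dθ = 2π
        total += 3 * rho**4 * math.sin(phi) * drho * dphi * 2*math.pi
print(total, 12*math.pi/5)   # both ≈ 7.53982
```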
When applying Green's, Stokes', or the Divergence theorem, verify that the field has continuous partial
derivatives on the region (including interior). If there are singularities inside the region, the
theorems do not apply directly; the singularity must be handled separately.
4. Optimization

4.1 Local Extrema

Theorem 4.1 (First Derivative Test). If $f$ has a local extremum at an interior point $\mathbf{a}$ and $\nabla f(\mathbf{a})$ exists, then $\nabla f(\mathbf{a}) = \mathbf{0}$.

Points where $\nabla f = \mathbf{0}$ are called critical points (or stationary points).
4.2 Second Derivative Test

Theorem 4.2 (Second Derivative Test). Let $f$ have continuous second partial derivatives near a critical point $(a,b)$ with $f_x(a,b) = f_y(a,b) = 0$. Let

$$D = f_{xx}(a,b)\, f_{yy}(a,b) - [f_{xy}(a,b)]^2$$

be the Hessian determinant. Then:

If $D > 0$ and $f_{xx}(a,b) > 0$: local minimum.
If $D > 0$ and $f_{xx}(a,b) < 0$: local maximum.
If $D < 0$: saddle point.
If $D = 0$: the test is inconclusive.
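The case analysis translates into a small classifier; a sketch (the helper name `classify` is ours):

```python
def classify(fxx, fyy, fxy):
    """Second-derivative test at a critical point, given the three second partials there."""
    D = fxx*fyy - fxy**2
    if D > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"

# f(x, y) = x² + y² at (0, 0): fxx = fyy = 2, fxy = 0
print(classify(2, 2, 0))    # local minimum
# f(x, y) = x² − y² at (0, 0): fxx = 2, fyy = −2, fxy = 0
print(classify(2, -2, 0))   # saddle point
```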
4.3 Lagrange Multipliers

Theorem 4.3 (Method of Lagrange Multipliers). To find the extrema of $f(x,y,z)$ subject to the constraint $g(x,y,z) = 0$, solve the system

$$\nabla f = \lambda \nabla g, \qquad g = 0.$$

More generally, for $k$ constraints $g_1 = 0, \ldots, g_k = 0$:

$$\nabla f = \lambda_1 \nabla g_1 + \cdots + \lambda_k \nabla g_k.$$
4.4 Worked Example

Problem. Find the maximum of $f(x,y) = xy$ subject to $x^2 + y^2 = 1$.

Solution. Set $g(x,y) = x^2 + y^2 - 1$. The Lagrange multiplier equations:

$$\nabla f = \lambda \nabla g \implies (y, x) = \lambda(2x, 2y)$$

This gives $y = 2\lambda x$ and $x = 2\lambda y$. Multiplying the two equations: $xy = 4\lambda^2 xy$.

Case 1: $xy \neq 0$. Then $4\lambda^2 = 1$, so $\lambda = \pm 1/2$.

$\lambda = 1/2$: $y = x$, and $x^2 + x^2 = 1$, so $x = \pm 1/\sqrt{2}$. Points: $(1/\sqrt{2}, 1/\sqrt{2})$ and $(-1/\sqrt{2}, -1/\sqrt{2})$, with $f = 1/2$.

$\lambda = -1/2$: $y = -x$, and $x^2 + x^2 = 1$, so $x = \pm 1/\sqrt{2}$. Points: $(1/\sqrt{2}, -1/\sqrt{2})$ and $(-1/\sqrt{2}, 1/\sqrt{2})$, with $f = -1/2$.

Case 2: $xy = 0$. Then either $x = 0$ or $y = 0$. From the constraint: $(0, \pm 1)$ or $(\pm 1, 0)$, with $f = 0$.

Maximum: $f = 1/2$ at $(\pm 1/\sqrt{2}, \pm 1/\sqrt{2})$. Minimum: $f = -1/2$ at $(\pm 1/\sqrt{2}, \mp 1/\sqrt{2})$. $\blacksquare$
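Since the constraint set is the unit circle, it can be parameterised as $(\cos t, \sin t)$ and $f$ scanned over a fine grid, confirming the Lagrange candidates. A sketch:

```python
import math

# f(x, y) = xy on the unit circle; as a function of t this is cos t · sin t = sin(2t)/2.
ts = [2*math.pi*i/10000 for i in range(10000)]
best = max(ts, key=lambda t: math.cos(t)*math.sin(t))
x, y = math.cos(best), math.sin(best)
print(x*y)    # ≈ 0.5, attained where |x| = |y| = 1/√2
```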
The Lagrange multiplier method finds candidates for constrained extrema but does not guarantee they
are extrema. Always check which candidate gives the maximum/minimum, or use additional reasoning (e.g.,
compactness of the constraint set).