# Definition

The method of Lagrange multipliers finds the extrema of a differentiable function of several variables subject to one or more constraints (the equation(s) that may describe the boundary of a region).

Suppose $f(x_1,x_2,...,x_n)$ is a function subject to the constraint $g(x_1,x_2,...,x_n)=k$. Then extrema of $f$ on the constraint can occur only at points where $\nabla f$ is parallel to $\nabla g$, that is, where $$\label{eq:elm}\nabla{f(x_1,x_2,...,x_n)}= \lambda\, \nabla{g(x_1,x_2,...,x_n)}\tag{1}$$ where $\lambda$ is the Lagrange multiplier.

$\nabla f = [f_{x_{1}}(x_1,x_2,...,x_n),f_{x_{2}}(x_1,x_2,...,x_n),...,f_{x_{n}}(x_1,x_2,...,x_n)]$ and

$\nabla g= [g_{x_{1}}(x_1,x_2,...,x_n),g_{x_{2}}(x_1,x_2,...,x_n),...,g_{x_{n}}(x_1,x_2,...,x_n)]$
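As a concrete sketch, equation (1) can be written componentwise and solved together with the constraint. The function and constraint below are assumed illustrative choices, not taken from the text:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x**2 + y**2        # assumed example function (illustrative)
g, k = x + y, 1        # assumed constraint g(x, y) = k

# Componentwise form of grad f = lam * grad g, plus the constraint itself
eqs = [sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),
       sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),
       sp.Eq(g, k)]
sol = sp.solve(eqs, [x, y, lam], dict=True)[0]
print(sol)   # x = y = 1/2, lam = 1
```

Here the constrained minimum of $x^2+y^2$ on the line $x+y=1$ sits at $(\tfrac12,\tfrac12)$, where the two gradients are parallel.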


# Motivation

The method of Lagrange multipliers is useful in solving constrained-optimization problems. For example, the production of two different types of goods must be maximized subject to a given budget. This can be formulated as a constrained optimization problem:

the production function of the goods is given by $$f(x,y)=x^{\frac{2}{3}} y^{\frac{1}{3}}$$ where $x$ and $y$ are the quantities of the two different goods produced, and $f$ is to be maximized subject to the budget constraint $$g(x,y)=p_1 x+p_2 y = B$$ where $p_1$ and $p_2$ are the fixed prices of $x$ and $y$ respectively and $B$ is the available budget. This problem can be solved using Lagrange multipliers: $$\nabla f(x,y) = \lambda \nabla g(x,y)$$ $$\implies \left[\frac{2}{3}x^{-\frac{1}{3}}y^{\frac{1}{3}},\ \frac{1}{3}x^{\frac{2}{3}}y^{-\frac{2}{3}}\right] = \lambda\, [p_1,p_2]$$

Solving these equations together with the budget constraint gives the values of $x$ and $y$ that maximize production (the largest output of the two goods attainable within the given budget).
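A minimal sketch of this solution with sympy, using assumed prices and budget (illustrative values, not given in the text). Dividing the first Lagrange condition by the second eliminates $\lambda$ and leaves a tangency condition, which is then solved together with the budget line:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
p1, p2, B = 1, 1, 90   # assumed prices and budget (illustrative values)

f = x**sp.Rational(2, 3) * y**sp.Rational(1, 3)   # production function

# Dividing f_x = lam*p1 by f_y = lam*p2 eliminates lam and leaves the
# tangency condition f_x / f_y = p1 / p2, which simplifies to 2y/x = p1/p2.
tangency = sp.Eq(2*y/x, sp.Rational(p1, p2))
budget = sp.Eq(p1*x + p2*y, B)

sol = sp.solve([tangency, budget], [x, y], dict=True)[0]
print(sol[x], sol[y])   # x = 60, y = 30
max_output = f.subs(sol)  # production achieved at the optimum
```

With equal unit prices, two-thirds of the budget goes to the good with the larger exponent, matching the Cobb–Douglas rule of thumb.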

Many practical problems in science, engineering, economics, and everyday life can be formulated as constrained optimization problems. In constrained optimization, there are restrictions on which points within the domain of $f$ are to be analyzed for extrema. The method of Lagrange multipliers handles non-linear equality constraints directly; inequality constraints are handled by extensions of the method such as the Karush–Kuhn–Tucker (KKT) conditions.


# Bird's Eye View

The method of Lagrange multipliers is a useful way to determine the extrema of a multivariable function subject to one or more constraints. Suppose the constraint $g(x,y)=k$ describes a closed curve; the following animation visualizes the extrema of $f(x,y)=x^2+y^2+x^3-y^3$ subject to the constraint $g(x,y)=k$.


Video 1: Extrema over $g(x,y)=k$

# Context of Definition

Taking the same example as in the Bird's Eye View, the following animation shows the contour plot of the surface of the function $f(x,y)= x^2+y^2+x^3-y^3$ and the constraint circle. The contour curves are obtained by holding the function constant, i.e., $f(x,y)=c$ for $c = 1,\dots,4$; the level of $f(x,y)$ increases on moving away from the centre. The constrained minimum occurs at the point where the constraint touches contour curve 1 (i.e., $f=1$). Similarly, the maximum occurs at the point where the constraint touches contour curve 4 (i.e., $f=4$).


Video 2: Constraint circle with contour plot of the surface $x^2+y^2+x^3-y^3$

##### Proof:

Consider a function $w=f(x,y)$ subject to the constraint $g(x,y)=c$, where the level curves of $f$ are drawn and the constraint curve $g(x,y)=c$ is a circle. Any constrained extremum of $f$ must lie on this circle.


Video 3: Geometric Proof

Now, the level curve $w = 17$ has no points on the circle, so the maximum value of $w$ on the constraint circle is less than 17. Move down the level curves until one first touches the circle, i.e., at $w = 14$. Let P be the point where this level curve touches the circle. P gives a local maximum of $w$ on $g = c$, because moving away from P in either direction along the circle leads to level curves of smaller value.

Since the circle is a level curve of $g$, $\nabla g$ is perpendicular to it (the gradient is perpendicular to the tangent of any curve lying on the level set). Similarly, $\nabla f$ is perpendicular to the level curve $w = 14$. Since the two curves are tangent at P, the two gradients must be parallel there. Likewise, moving down the level curves, the last one to touch the circle ($w = 4$) gives a local minimum, and the same argument applies.


### Using Lagrange Multiplier to solve constrained optimization problems

Problem: To find the extrema of a multivariable function $f(x,y,z)$ subject to constraints $g(x,y,z) = k$ and $h(x,y,z)=0$

Solution: The unknowns, in this case, would be $x,y,z,\lambda$ and $\mu$. Therefore, 5 equations would be required to find the values of the unknowns.

$$\nabla f = \lambda \nabla g+\mu \nabla h$$

The set of equations would be:

$$f_x(x,y,z)= \lambda g_x(x,y,z)+\mu h_x(x,y,z)$$

$$f_y(x,y,z)= \lambda g_y(x,y,z)+\mu h_y(x,y,z)$$

$$f_z(x,y,z)= \lambda g_z(x,y,z)+\mu h_z(x,y,z)$$

and the equations of the constraints: $$g(x,y,z)=k$$ $$h(x,y,z)=0$$

Solving these equations will give the extrema of the function $f$.


##### Example:

Problem: The plane $x+y−z=1$ intersects the cylinder $x^2+y^2=1$ in an ellipse. Find the points on the ellipse closest to and farthest from the origin.

Solution: optimize $f(x,y,z)= x^2+y^2+z^2$ (the squared distance from the origin) subject to the constraints $g(x,y,z)= x^2+y^2-1 =0$ and $h(x,y,z)= x+y-z-1 =0$

The equation for the same would be: $$\nabla f = \lambda \nabla g+\mu \nabla h$$ where $\nabla f = [2x,2y,2z]$,  $\nabla g = [2x,2y,0]$ and $\nabla h = [1,1,-1]$

To determine the five unknowns $x,y,z,\lambda$, and $\mu$, a set of 5 equations is required.

$$\label{eq: eq1} f_x(x,y,z)= \lambda g_x(x,y,z)+\mu h_x(x,y,z): \hspace{10mm} 2x = 2\lambda x+\mu \tag{2.a}$$

$$\label{eq: eq2} f_y(x,y,z)= \lambda g_y(x,y,z)+\mu h_y(x,y,z): \hspace{10mm} 2y= 2\lambda y+\mu \tag{2.b}$$

$$\label{eq: eq3} f_z(x,y,z)= \lambda g_z(x,y,z)+\mu h_z(x,y,z): \hspace{10mm} 2z= -\mu \tag{2.c}$$

and the equations of the constraints: $$\label{eq: eq4} x^2+y^2=1 \tag{2.d}$$ $$\label{eq: eq5} x+y-z=1\tag{2.e}$$

Subtracting eq ($\ref{eq: eq2}$) from eq ($\ref{eq: eq1}$) gives $2(x-y)=2\lambda(x-y)$, so $\lambda =1$ or $x=y$.

Case 1: $\lambda =1$. Then eq ($\ref{eq: eq1}$) gives $\mu =0$, and eq ($\ref{eq: eq3}$) gives $z =0$.

Solving eq ($\ref{eq: eq4}$) and eq ($\ref{eq: eq5}$) with $z=0$ gives $x = 1, y=0$ or $x =0, y=1$. So the points are $(1,0,0)$ and $(0,1,0)$, both at distance $1$ from the origin.

Case 2: $x=y$

From eq ($\ref{eq: eq4}$): $2x^2=1$, which gives $x=y= \pm \frac{1}{\sqrt{2}}$

From eq ($\ref{eq: eq5}$): $z = x+y-1$, so $z=\sqrt{2}-1$ when $x=y=\frac{1}{\sqrt{2}}$, and $z=-\sqrt{2}-1$ when $x=y=-\frac{1}{\sqrt{2}}$

The points obtained are $(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},\sqrt{2}-1)$, at distance $\sqrt{4-2\sqrt{2}}\approx 1.08$ from the origin, and $(-\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}},-\sqrt{2}-1)$, at distance $\sqrt{4+2\sqrt{2}}\approx 2.61$

Thus, the points closest to the origin are $(1,0,0)$ and $(0,1,0)$;

the point farthest from the origin is $(-\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}},-1-\sqrt{2})$
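The candidate points from the two cases (with $x=y=\pm\frac{1}{\sqrt{2}}$ in case 2) can be checked numerically. The short sketch below verifies that each point satisfies both constraints (2.d) and (2.e) and compares their distances from the origin:

```python
import numpy as np

r = 1 / np.sqrt(2)
# The four candidate points found from cases 1 and 2
candidates = [(1, 0, 0), (0, 1, 0),
              (r, r, np.sqrt(2) - 1), (-r, -r, -np.sqrt(2) - 1)]

for px, py, pz in candidates:
    assert abs(px**2 + py**2 - 1) < 1e-12    # on the cylinder, eq (2.d)
    assert abs(px + py - pz - 1) < 1e-12     # on the plane, eq (2.e)

dists = [np.linalg.norm(p) for p in candidates]
print(dists)   # distances ~ [1.0, 1.0, 1.08, 2.61]
```

The smallest distance (1) picks out the two closest points, and the largest ($\sqrt{4+2\sqrt{2}} \approx 2.61$) picks out the farthest point.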

The following animation visualizes the constraints mentioned in the above example.


Video 4: Constraints $g$ and $h$

##### Lagrangian Function

The Lagrangian function is used to convert a constrained optimization problem to an unconstrained problem. It is denoted by $\mathcal{L}$.

For example, the Lagrangian function to optimize $f(x,y)$ subject to the constraint $g(x,y)=c$ would be: $$\mathcal{L} (x,y,\lambda)=f(x,y)-\lambda(g(x,y)-c)$$

where $\mathcal{L} (x,y,\lambda)$ is the Lagrangian function,

$f(x,y)$ is the objective function,

$(g(x,y)-c)$ is a term that involves the functional constraint and $\lambda$ is the Lagrange multiplier.

Now, to find a critical point of the Lagrangian function $\mathcal{L} (x,y,\lambda)$:

First, compute the partial derivatives of $\mathcal{L}$ with respect to $x$, $y$, and $\lambda$:

$$\mathcal{L}_x = f_x-\lambda g_x$$ $$\mathcal{L}_y = f_y-\lambda g_y$$ $$\mathcal{L}_\lambda =-(g(x,y)-c)$$

Assuming that $(x_0,y_0)$ is a critical point of $f$ subject to constraint $g(x,y)=c$ and $\lambda_0$ is the corresponding Lagrange multiplier, then at the point $(x_0,y_0,\lambda_0)$

$$\mathcal{L}_x =\mathcal{L}_y =\mathcal{L}_\lambda=0$$

Therefore, $(x_0,y_0,\lambda_0)$ is a critical point for the unconstrained Lagrangian function $\mathcal{L} (x,y,\lambda)$.
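A minimal sketch of this procedure, using an assumed objective $f = xy$ and constraint $x + y = 4$ (illustrative choices, not from the text): form $\mathcal{L}$, set its partial derivatives to zero, and solve the resulting unconstrained system.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x * y                 # assumed objective function (illustrative)
g, c = x + y, 4           # assumed constraint g(x, y) = c

L = f - lam * (g - c)     # Lagrangian function

# Setting all partials of L to zero recovers the Lagrange conditions
# plus the constraint itself.
grad = [sp.diff(L, v) for v in (x, y, lam)]
crit = sp.solve(grad, [x, y, lam], dict=True)[0]
print(crit)   # x = 2, y = 2, lam = 2
```

Note that $\mathcal{L}_\lambda = 0$ is exactly the constraint $g(x,y)=c$, which is why the constrained problem becomes an unconstrained one in the enlarged variable set $(x, y, \lambda)$.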


# Applications

• The $\lambda$-dispatch problem (economic dispatch), in which the objective is to minimize generating costs subject to the power-balance constraint. This is an important application of the Lagrange multipliers method in power systems.
• Computational programming methods such as the barrier and interior-point methods and the penalty and augmented Lagrangian methods were developed from the basic rules of the Lagrange multipliers method.
• With the introduction of the multipliers and the use of the principle of virtual work, Lagrange was able to reduce problems of equilibrium to determining the conditions under which a function (for a point or a system of points) or a functional (for a rigid body) has a maximum or a minimum.


# History

In his book Mécanique Analytique (Analytical Mechanics), Lagrange pointed out that it is important to reduce mechanics to purely analytical operations and to free it from intuitive geometrical considerations. For this reason, he introduced the method of multipliers (méthode des multiplicateurs). In this way, Lagrange was able to study the equilibrium of a system of points and analyze the conditions of translational and rotational equilibrium. He also introduced the method of multipliers in his article "Essai d'une nouvelle méthode pour déterminer les maxima et les minima des formules intégrales indéfinies".

The method of multipliers allowed Lagrange to treat the questions of maxima and minima in differential calculus and in the calculus of variations in the same way as problems of statics: "if the equilibrium of a point or a system of points is required, there is an analogy between statics and differential calculus; if the equilibrium of a rigid body is required, there is an analogy between statics and calculus of variations".


# Pause and Ponder

• The Method of Lagrange Multipliers is an alternative method for determining the extrema of a surface subject to a constraint.


# References

$[1]$ https://math.etsu.edu/multicalc/prealpha/Chap2/Chap2-9/part1.htm

$[2]$ https://www.whitman.edu/mathematics/calculus_online/section14.08.html

$[3]$ https://math.la.asu.edu/~surgent/mat272/Sect_6_4.pdf
