# Interior-point method for NLP


## Latest revision as of 01:22, 7 June 2015

Author names: Cindy Chen

Steward: Dajun Yue and Fengqi You


# Introduction

The interior point (IP) method for nonlinear programming was pioneered by Anthony V. Fiacco and Garth P. McCormick in the early 1960s. The basis of the IP method is to incorporate the constraints into the objective function by creating a barrier function. This restricts the iterates to the interior of the feasible region, resulting in a much more efficient algorithm with respect to time complexity.

# Algorithm

To ensure the program remains within the feasible region, a perturbation factor, <math>\mu</math>, is added to "penalize" close approaches to the boundaries. This approach is analogous to an invisible fence used to keep a dog in an unfenced yard: the closer the dog moves to the boundary, the more shock it feels. In the case of the IP method, the amount of "shock" is determined by <math>\mu</math>. A large value of <math>\mu</math> gives the analytic center of the feasible region. As <math>\mu</math> decreases and approaches 0, the optimal value is found by tracing out a central path. Decreasing <math>\mu</math> in small increments at each iteration generates a smooth central path; this is accurate but time-consuming and computationally intense. Instead, Newton's method is often used to approximate the central path for nonlinear programming. Taking one Newton step for each decrease in <math>\mu</math> achieves polynomial time complexity, producing a slightly zig-zagging central path that converges to the optimal solution.

The logarithmic barrier function is based on the logarithmic interior function. For an objective <math>f(x)</math> with inequality constraints <math>g_i(x) \le 0</math>, it takes the form:

<math>B(x, \mu) = f(x) - \mu \sum_{i=1}^{m} \ln\left(-g_i(x)\right)</math>
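The central-path idea above can be sketched in a few lines of code. This is a minimal illustration on a hypothetical one-dimensional problem (minimize <math>x</math> subject to <math>x \ge 1</math>, my own example, not from the article): for each value of <math>\mu</math>, Newton's method minimizes the barrier function <math>B(x,\mu) = x - \mu \ln(x-1)</math>, and shrinking <math>\mu</math> traces the central path toward the constrained optimum <math>x^* = 1</math>.

```python
# Log-barrier sketch for: minimize x subject to x >= 1 (illustrative
# problem and step rule chosen by the editor, not from the article).
# The barrier subproblem B(x) = x - mu*ln(x - 1) has exact minimizer
# x = 1 + mu, so the central path approaches x* = 1 as mu -> 0.

def barrier_solve(mu_min=1e-6, x=2.0):
    mu = 1.0
    while mu > mu_min:
        for _ in range(50):                   # damped Newton on B'(x) = 0
            grad = 1.0 - mu / (x - 1.0)       # B'(x)
            hess = mu / (x - 1.0) ** 2        # B''(x), positive in the interior
            x_new = x - grad / hess
            if x_new <= 1.0:                  # fraction-to-boundary safeguard
                x_new = 1.0 + 0.5 * (x - 1.0)
            x = x_new
            if abs(grad) < 1e-12:
                break
        mu *= 0.5                             # shrink the perturbation factor
    return x

print(barrier_solve())   # approaches the constrained optimum x* = 1
```

The full Newton step can overshoot the boundary, so the sketch falls back to a fraction-to-boundary step, a standard safeguard in interior-point codes.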

# Application

The IP method for NLP has been commonly used to solve Optimal Power Flow (OPF) problems, in which a set of nonlinear equations is solved to find the optimal operating point of a power network in terms of speed and reliability. To solve these problems, the perturbation factor is used in addition to the typical Karush-Kuhn-Tucker (KKT) conditions.

Starting with a general optimization problem:

<math>
\begin{align}
\text{min} & ~~ f(x)\\
\text{s.t.} & ~~ h(x) = 0\\
 & ~~ g(x) \le 0\\
\end{align}
</math>

Modify the KKT conditions, adding slack variables and the perturbation factor to improve convergence:

<math>\nabla_x L (x, \lambda_h, \lambda_g)=0</math>

<math>h(x)=0</math>

<math>g(x) + s = 0</math>

<math>[s]\lambda_g = \mu e</math>

<math>(s, \lambda_g, \mu) \ge 0</math>

Solve the nonlinear equations iteratively by Newton's method. First determine <math>\Delta x</math> and <math>\Delta \lambda_h</math> with the reduced linear equations.

Next, calculate the slack variables and corresponding multipliers with:

<math>\Delta s = -g(x) - s - \nabla g(x)\, \Delta x</math>

<math>\Delta \lambda_g = -\lambda_g + [s^{-1}]\left(\mu e - [\lambda_g]\, \Delta s\right)</math>
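These two updates can be sketched directly in code. In this minimal sketch, <math>[s]</math> is treated as a diagonal matrix (so products become elementwise), and every numeric value is made up for illustration; the function name is the editor's own.

```python
import numpy as np

# Sketch of the slack/multiplier updates: with [s] = diag(s), the
# updates reduce to elementwise operations. All values are illustrative.
def slack_multiplier_step(g_x, grad_g, s, lam_g, dx, mu):
    e = np.ones_like(s)
    ds = -g_x - s - grad_g @ dx                 # Delta s
    dlam = -lam_g + (mu * e - lam_g * ds) / s   # Delta lambda_g
    return ds, dlam

g_x    = np.array([-1.0, -2.0])       # g(x) at the current iterate
grad_g = np.array([[1.0, -1.0],
                   [0.5,  2.0]])      # Jacobian of g
s      = np.array([1.0, 2.0])         # slack variables (positive)
lam_g  = np.array([0.5, 0.25])        # inequality multipliers
dx     = np.array([0.1, -0.2])        # step from the reduced system
mu     = 0.1

ds, dlam = slack_multiplier_step(g_x, grad_g, s, lam_g, dx, mu)
# By construction, the step satisfies the linearized perturbed
# complementarity condition [lam_g]*ds + [s]*dlam = mu*e - [s]*lam_g.
print(ds, dlam)
```

The design point here is that the step is obtained in closed form once <math>\Delta x</math> is known, which is why the method first solves only the reduced linear system.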

To calculate the perturbation factor, <math>\mu</math>, use primal-dual distances:

<math>\mu = \sigma \cdot pdad = \sigma \cdot \dfrac{\lambda_g^T s}{niq}</math>

where <math>\sigma</math> defines the trajectory toward the optimal solution, <math>pdad</math> is the primal-dual average distance, and <math>niq</math> is the number of inequality constraints.

<math>\sigma</math> ranges between 0 and 1. For the extreme conditions:

<math>\sigma = 0</math> corresponds to the *affine-scaling direction*, where the optimal point is obtained from the non-perturbed solution of the KKT conditions

<math>\sigma = 1</math> corresponds to the *centralization direction*, where a non-optimal solution is found with a primal-dual distance equal to the initial value of <math>\mu</math>

In a conventional primal-dual IP method, a constant value is assigned to <math>\sigma</math> (usually close to 0.1) for all iterations. This yields a search direction in which 90% of the step is aimed at the optimal point and 10% at the centralization trajectory.
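The perturbation-factor rule above is a one-liner in code. The sketch below uses made-up multiplier and slack values; the function name is the editor's own.

```python
# Perturbation factor mu = sigma * (lambda_g^T s) / niq, with sigma
# fixed near 0.1 as in a conventional primal-dual IP method.
# Numeric values are illustrative only.
def perturbation_factor(lam_g, s, sigma=0.1):
    niq = len(s)                                       # number of inequality constraints
    pdad = sum(l * si for l, si in zip(lam_g, s)) / niq  # primal-dual average distance
    return sigma * pdad

lam_g = [0.5, 0.25, 1.0]
s     = [2.0, 4.0, 0.5]
print(perturbation_factor(lam_g, s))   # 0.1 * 2.5 / 3 ~= 0.0833
```

As the iterates converge, <math>\lambda_g^T s</math> shrinks, so <math>\mu</math> is driven toward zero automatically.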

# Illustrative Example

Perform 1 iteration of the IP method to solve the following NLP:

<math>
\begin{align}
\text{min} & ~~ f = 0.25x_1^2 + x_2^2\\
\text{s.t.} & ~~ 1 \le x_1 - x_2 \le 7\\
\end{align}
</math>

To solve, first form the Lagrange function:

<math>L(x,\lambda)=f(x) + \lambda F(x) = 0.25x_1^2 + x_2^2 + \lambda(x_1 - x_2)</math>

<math>U(x,z) = f_x^T (x) + zF_x^T (x) = 0</math>

<math>f_x = x_1 - x_2</math>

<math>F_x = \begin{bmatrix}
0.5x_1 & 2x_2 \\
\end{bmatrix}</math>

<math>S = U_x (x,z) = f_{xx} + \sum_{i=1}^3 z_i f_{xx} (x) = \begin{bmatrix}
\lambda^T (1-x_2)+1 & \lambda^T (x_1-1)+1 \\
\lambda^T (1-x_2)-1 & \lambda^T (x_1-1)-1 \\
\end{bmatrix}</math>

Using the initial solution <math>x=[1,0]</math>, <math>\lambda = -2</math>, <math>k = 3</math>:

<math>S = \begin{bmatrix}
2x_2-1 & 3-2x_1 \\
2x_2-3 & 1-2x_1 \\
\end{bmatrix} = \begin{bmatrix}
-1 & 1 \\
-3 & -1 \\
\end{bmatrix}</math>

Solving with Newton's method:

<math>
\begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix}^{new} = \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix}^{old} + \begin{bmatrix} \delta x_1 \\ \delta x_2 \\ \end{bmatrix} =
\begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} + \begin{bmatrix} 5.25 \\ -0.25 \\ \end{bmatrix} = \begin{bmatrix} 6.25 \\ -0.25 \\ \end{bmatrix}</math>

So after 1 iteration:

<math>f(x_1, x_2) = f(6.25,-0.25) = 0.25(6.25)^2 + (-0.25)^2 = 9.828</math>

Over subsequent iterations, the perturbation factor is driven toward zero, bringing the iterate close to the true optimal solution.
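The first-iteration arithmetic can be checked with a short script, taking the Newton step (5.25, -0.25) from the worked example as given rather than re-deriving it:

```python
# Verify the example's first-iteration arithmetic. The step delta is
# taken from the worked example above, not recomputed here.
def f(x1, x2):
    return 0.25 * x1**2 + x2**2

x_old = (1.0, 0.0)
delta = (5.25, -0.25)
x_new = (x_old[0] + delta[0], x_old[1] + delta[1])
print(x_new, round(f(*x_new), 3))   # (6.25, -0.25) 9.828
```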

# Conclusion

The IP method was later adapted for linear programming by Karmarkar in 1984. As a polynomial-time linear programming method, it solved complex linear problems 50 times faster than the simplex method. Multiple solvers, such as IPOPT and KNITRO (both developed by IEMS professors at Northwestern University), use the IP method for nonlinear programming. Although successful, the IP method has become less popular since the development of competitive alternatives such as sequential quadratic programming.

# References

1. Forsgren, Anders; Gill, Philip E.; Wright, Margaret H. "Interior Methods for Nonlinear Optimization." Society for Industrial Applied Mathematics Review. 44.4: 525-597. Link.

2. Shanno, David. "Who Invented the Interior Point Method?" Documenta Mathematica Extra Volume ISMP (2012): 55-64. Link

3. Castronuovo, Edgardo D.; Campagnolo, Jorge M.; Salgado, Roberto. "New Versions of Interior Point Methods Applied to the Optimal Power Flow Problem." Optimization Online (2011). Link

4. Momoh, James A. *Electric Power System Applications of Optimization.* Second Edition. CRC Press, 2008. 233-256. Online Link