Rosenbrock

Minimize a 2-variable nonsmooth Rosenbrock function, subject to simple bound constraints. Taken from: GRANSO demo examples 1, 2, & 3.

Problem Description

\[\min_{x_1,x_2} w|x_1^2-x_2|+(1-x_1)^2,\]
\[\text{s.t. } c_1(x_1,x_2) = \sqrt{2}x_1-1 \leq 0, \quad c_2(x_1,x_2)=2x_2-1\leq0,\]

where \(w\) is a constant (e.g., \(w=8\)).
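Before running the solver, note that the minimizer can be derived by hand: both bound constraints are active at the solution, giving \(x_1^\star = 1/\sqrt{2}\) and \(x_2^\star = 1/2\); since \((x_1^\star)^2 = x_2^\star\), the nonsmooth term vanishes and the optimal value is \((1-1/\sqrt{2})^2 = 3/2 - \sqrt{2} \approx 0.0858\). A quick plain-Python sanity check, independent of PyGRANSO:

```python
import math

w = 8  # weight on the nonsmooth term, as in this demo

def objective(x1, x2):
    return w * abs(x1**2 - x2) + (1 - x1)**2

# Candidate optimum: both inequality constraints active.
x1_star = 1 / math.sqrt(2)   # sqrt(2)*x1 - 1 = 0
x2_star = 0.5                # 2*x2 - 1 = 0

f_star = objective(x1_star, x2_star)
print(f_star)  # ~0.08578643763, the objective value PyGRANSO reports below
```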

Importing Modules

Import all necessary modules and add the PyGRANSO src folder to the system path.

[1]:
import time
import torch
from pygranso.pygranso import pygranso
from pygranso.pygransoStruct import pygransoStruct

Function Set-Up

Encode the optimization variables and the objective and constraint functions.

Note: please strictly follow the format of comb_fn, which will be used in the PyGRANSO main algorithm.

[2]:
device = torch.device('cpu')
# variables and corresponding dimensions.
var_in = {"x1": [1], "x2": [1]}

def comb_fn(X_struct):
    x1 = X_struct.x1
    x2 = X_struct.x2

    # objective function
    f = (8 * abs(x1**2 - x2) + (1 - x1)**2)

    # inequality constraints, as fields of a pygransoStruct
    ci = pygransoStruct()
    ci.c1 = (2**0.5)*x1-1
    ci.c2 = 2*x2-1

    # equality constraint
    ce = None

    return [f,ci,ce]
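As a sketch of what comb_fn computes, here is a plain-Python mock (using SimpleNamespace in place of the torch-based X_struct and pygransoStruct, purely for illustration) evaluated at the initial point (1, 1) used below. The values line up with the iteration-0 row of the log: objective 0, worst inequality violation 1, and penalty value \(\sqrt{2} \approx 1.41421356\), i.e. the objective plus the l1 total of the constraint violations with mu = 1.

```python
import math
from types import SimpleNamespace

def comb_fn_mock(X_struct):
    # same formulas as comb_fn above, with plain floats
    x1, x2 = X_struct.x1, X_struct.x2
    f = 8 * abs(x1**2 - x2) + (1 - x1)**2
    ci = SimpleNamespace(c1=math.sqrt(2) * x1 - 1, c2=2 * x2 - 1)
    ce = None
    return [f, ci, ce]

f, ci, ce = comb_fn_mock(SimpleNamespace(x1=1.0, x2=1.0))
viol = [max(c, 0.0) for c in (ci.c1, ci.c2)]  # only positive ci values violate
print(f, max(viol), f + sum(viol))            # 0.0, 1.0, ~1.41421356 (mu = 1)
```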

User Options

Specify user-defined options for PyGRANSO.

[3]:
opts = pygransoStruct()
# Option for switching the QP solver. osqp is the only QP solver available in the current version, and it is the default.
# opts.QPsolver = 'osqp'

# Set an initial point.
# All user-provided data (vectors/matrices/tensors) must be in torch tensor format.
# PyTorch tensors are single precision (float32) by default, so one must explicitly set `dtype=torch.double`.
# Also, please make sure the device of any provided torch tensor matches opts.torch_device.
opts.x0 = torch.ones((2,1), device=device, dtype=torch.double)
opts.torch_device = device

Main Algorithm

[4]:
start = time.time()
soln = pygranso(var_spec=var_in, combined_fn=comb_fn, user_opts=opts)
end = time.time()
print("Total Wall Time: {}s".format(end - start))
print(soln.final.x)


╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║  PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface,  ║
║  the default is osqp. Users may provide their own wrapper for the QP solver.                  ║
║  To disable this notice, set opts.quadprog_info_msg = False                                   ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation                                             ║
Version 1.0.0                                                                                                    ║
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang                                       ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications:                                                                                          ║
 # of variables                     :   2                                                                        ║
 # of inequality constraints        :   2                                                                        ║
 # of equality constraints          :   0                                                                        ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
   0 ║ 1.000000 │  1.41421356237 ║  0.00000000000 ║ 1.000000 │   -  ║ -  │     1 │ 0.000000 ║     1 │ 0.579471   ║
   1 ║ 1.000000 │  0.70773811042 ║  0.70773811042 ║ 0.000000 │   -  ║ S  │     3 │ 1.500000 ║     1 │ 10.07366   ║
   2 ║ 1.000000 │  0.25401310554 ║  0.25401310554 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.198885   ║
   3 ║ 1.000000 │  0.21478744238 ║  0.21478744238 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.136352   ║
   4 ║ 1.000000 │  0.21422378595 ║  0.21422378595 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.332997   ║
   5 ║ 1.000000 │  0.15330884270 ║  0.15330884270 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.122691   ║
   6 ║ 1.000000 │  0.14804462353 ║  0.14804462353 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 0.012623   ║
   7 ║ 1.000000 │  0.10856024489 ║  0.10856024489 ║ 0.000000 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.042111   ║
   8 ║ 1.000000 │  0.10482600240 ║  0.10482593538 ║ 6.70e-08 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.003212   ║
   9 ║ 0.590490 │  0.05930348165 ║  0.09776366663 ║ 0.001575 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.028507   ║
  10 ║ 0.590490 │  0.05288121922 ║  0.08955480909 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 0.013736   ║
  11 ║ 0.590490 │  0.05256976230 ║  0.08902735406 ║ 0.000000 │   -  ║ S  │     6 │ 0.031250 ║     1 │ 0.005027   ║
  12 ║ 0.590490 │  0.05213546649 ║  0.08829187029 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.001898   ║
  13 ║ 0.590490 │  0.05097806280 ║  0.08624466915 ║ 5.14e-05 │   -  ║ S  │     4 │ 1.750000 ║     1 │ 2.18e-05   ║
  14 ║ 0.590490 │  0.05079563998 ║  0.08602286233 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     2 │ 3.00e-05   ║
  15 ║ 0.590490 │  0.05077733823 ║  0.08599186816 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 3.47e-04   ║
  16 ║ 0.590490 │  0.05071543228 ║  0.08588702943 ║ 2.70e-10 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 1.05e-05   ║
  17 ║ 0.590490 │  0.05067270160 ║  0.08581466510 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     2 │ 3.74e-06   ║
  18 ║ 0.590490 │  0.05066273369 ║  0.08579778436 ║ 0.000000 │   -  ║ S  │     4 │ 0.125000 ║     3 │ 1.64e-05   ║
  19 ║ 0.590490 │  0.05066184116 ║  0.08579627286 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     4 │ 1.09e-05   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
  20 ║ 0.590490 │  0.05065742201 ║  0.08578809116 ║ 2.85e-07 │   -  ║ S  │     3 │ 1.500000 ║     4 │ 8.47e-07   ║
  21 ║ 0.228768 │  0.01962567651 ║  0.08578802937 ║ 1.27e-07 │   -  ║ SI │    24 │ 1.19e-07 ║     4 │ 2.88e-07   ║
  22 ║ 0.228768 │  0.01962563551 ║  0.08578840563 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     4 │ 8.95e-07   ║
  23 ║ 0.228768 │  0.01962558257 ║  0.08578817425 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     4 │ 6.35e-08   ║
  24 ║ 0.228768 │  0.01962531715 ║  0.08578701404 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     4 │ 6.13e-08   ║
  25 ║ 0.228768 │  0.01962530099 ║  0.08578694338 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     4 │ 6.35e-08   ║
  26 ║ 0.228768 │  0.01962527238 ║  0.08578681833 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     4 │ 6.36e-08   ║
  27 ║ 0.228768 │  0.01962526814 ║  0.08578679978 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     4 │ 7.48e-08   ║
  28 ║ 0.228768 │  0.01962525407 ║  0.08578673829 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     4 │ 9.96e-08   ║
  29 ║ 0.228768 │  0.01962521414 ║  0.08578643555 ║ 1.94e-08 │   -  ║ S  │     8 │ 4.250000 ║     4 │ 2.00e-07   ║
  30 ║ 0.228768 │  0.01962521395 ║  0.08578643827 ║ 1.94e-08 │   -  ║ SI │    33 │ 2.33e-10 ║     4 │ 2.52e-08   ║
  31 ║ 0.109419 │  0.00938668494 ║  0.08578653377 ║ 9.14e-09 │   -  ║ SI │    27 │ 1.49e-08 ║     4 │ 1.22e-08   ║
  32 ║ 0.109419 │  0.00938667939 ║  0.08578656654 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     4 │ 8.19e-09   ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
F = final iterate, B = Best (to tolerance), MF = Most Feasible                                                   ║
Optimization results:                                                                                            ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
   F ║          │                ║  0.08578656654 ║ 0.000000 │   -  ║    │       │          ║       │            ║
   B ║          │                ║  0.08578632948 ║ 5.69e-07 │   -  ║    │       │          ║       │            ║
  MF ║          │                ║  0.08578643763 ║ 0.000000 │   -  ║    │       │          ║       │            ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations:              32                                                                                      ║
Function evaluations:    153                                                                                     ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances.                           ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
Total Wall Time: 0.4226830005645752s
tensor([[0.7071],
        [0.5000]], dtype=torch.float64)

PyGRANSO Restarting

(Optional) The following example shows how to set various PyGRANSO options (such as simpler ASCII printing) and how to restart PyGRANSO.

[5]:
opts = pygransoStruct()
opts.torch_device = device
# set an infeasible initial point
opts.x0 = 5.5*torch.ones((2,1), device=device, dtype=torch.double)

# By default, PyGRANSO prints using extended ASCII characters to 'draw' table borders, along with some colored output.
# To create a plain-text log file of the console output, set opts.print_ascii = True.
opts.print_ascii = True

# By default, PyGRANSO prints an info message about QP solvers, since
# PyGRANSO can be used with any QP solver that has a quadprog-compatible
# interface.  Let's disable this message since we've already seen it
# hundreds of times and can now recite it from memory.  ;-)
opts.quadprog_info_msg  = False

# Try a very short run.
opts.maxit = 10 # default is 1000

# PyGRANSO's penalty parameter is on the *objective* function, thus
# higher penalty parameter values favor objective minimization more
# highly than attaining feasibility.  Let's set PyGRANSO to start off
# with a higher initial value of the penalty parameter.  PyGRANSO will
# automatically tune the penalty parameter to promote progress towards
# feasibility.  PyGRANSO only adjusts the penalty parameter in a
# monotonically decreasing fashion.
opts.mu0 = 100  # default is 1

# start main algorithm
soln = pygranso(var_spec=var_in, combined_fn=comb_fn, user_opts=opts)


==================================================================================================================
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation                                             |
Version 1.0.0                                                                                                    |
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang                                       |
==================================================================================================================
Problem specifications:                                                                                          |
 # of variables                     :   2                                                                        |
 # of inequality constraints        :   2                                                                        |
 # of equality constraints          :   0                                                                        |
==================================================================================================================
     | <--- Penalty Function --> |                | Total Violation | <--- Line Search ---> | <- Stationarity -> |
Iter |    Mu    |      Value     |    Objective   |   Ineq   |  Eq  | SD | Evals |     t    | Grads |    Value   |
=====|===========================|================|=================|=======================|====================|
   0 | 100.0000 |  21841.7781746 |  218.250000000 | 10.00000 |   -  | -  |     1 | 0.000000 |     1 | 9732.768   |
   1 | 34.86784 |  1509.66611872 |  42.9789783006 | 11.08181 |   -  | S  |    10 | 0.001953 |     1 | 546.6038   |
   2 | 34.86784 |  1378.57842221 |  39.2815768212 | 8.914529 |   -  | S  |     3 | 1.500000 |     1 | 4.455384   |
   3 | 12.15767 |  285.452828054 |  22.6935200208 | 9.552604 |   -  | S  |     2 | 2.000000 |     1 | 0.297144   |
   4 | 12.15767 |  264.999595731 |  21.0732808630 | 8.797697 |   -  | S  |     2 | 2.000000 |     1 | 0.603629   |
   5 | 4.239116 |  60.5144787250 |  12.1478493778 | 9.018338 |   -  | S  |     2 | 2.000000 |     1 | 0.111610   |
   6 | 4.239116 |  53.5399399407 |  10.5181947367 | 8.952094 |   -  | S  |     2 | 0.500000 |     1 | 0.164082   |
   7 | 3.815204 |  48.9917031616 |  10.4947962860 | 8.951912 |   -  | S  |     4 | 0.125000 |     1 | 0.033640   |
   8 | 3.815204 |  48.7011303503 |  10.4372013183 | 8.881076 |   -  | S  |     2 | 2.000000 |     1 | 0.018555   |
   9 | 3.815204 |  48.2564717826 |  10.3422772655 | 8.798572 |   -  | S  |     2 | 2.000000 |     1 | 0.057946   |
  10 | 3.815204 |  39.4225027901 |  9.27057783616 | 4.053355 |   -  | S  |     5 | 16.00000 |     1 | 0.001796   |
==================================================================================================================
F = final iterate, B = Best (to tolerance), MF = Most Feasible                                                   |
Optimization results:                                                                                            |
==================================================================================================================
   F |          |                |  9.27057783616 | 4.053355 |   -  |    |       |          |       |            |
  MF |          |                |  9.27057783616 | 4.053355 |   -  |    |       |          |       |            |
==================================================================================================================
Iterations:              10                                                                                      |
Function evaluations:    35                                                                                      |
PyGRANSO termination code: 4 --- max iterations reached.                                                         |
==================================================================================================================
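The iteration-0 row of the table above can be reproduced by hand, which makes the role of mu0 concrete: the penalty function value is mu times the objective plus the l1 total of the constraint violations. A plain-Python illustration (not PyGRANSO internals):

```python
import math

mu0 = 100
x1 = x2 = 5.5                              # infeasible initial point from opts.x0

f  = 8 * abs(x1**2 - x2) + (1 - x1)**2     # objective: 218.25
c1 = math.sqrt(2) * x1 - 1                 # ~6.778 (violated)
c2 = 2 * x2 - 1                            # 10.0   (violated)
tv_l1  = max(c1, 0.0) + max(c2, 0.0)       # l1 total violation: ~16.778
tv_inf = max(c1, c2, 0.0)                  # inf-norm violation: 10.0

penalty = mu0 * f + tv_l1
print(f, tv_inf, penalty)  # 218.25  10.0  ~21841.778, matching Iter 0 above
```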

Let’s restart PyGRANSO from the last iterate of the previous run.

[6]:
opts = pygransoStruct()
opts.torch_device = device
# set the initial point and penalty parameter to their final values from the previous run
opts.x0 = soln.final.x
opts.mu0 = soln.final.mu
opts.opt_tol = 1e-6

# PREPARE TO RESTART PyGRANSO IN FULL-MEMORY MODE
# Set the last BFGS inverse Hessian approximation as the initial
# Hessian for the next run.  Generally this is a good thing to do, and
# often it is necessary to retain this information when restarting (as
# on difficult nonsmooth problems, PyGRANSO may not be able to restart
# without it).  However, your mileage may vary.  In the test, with
# the above settings, omitting H0 causes PyGRANSO to take an additional
# 16 iterations to converge on this problem.
opts.H0 = soln.H_final     # try running with this commented out

# When restarting, soln.H_final may fail PyGRANSO's initial check to
# assess whether or not the user-provided H0 is positive definite.  If
# it fails this test, the test may be disabled by setting opts.checkH0
# to False.
# opts.checkH0 = False       # Not needed for this example

# If one desires to restart PyGRANSO as if it had never stopped (e.g.
# to continue optimization after it hit its maxit limit), then one must
# also disable scaling the initial BFGS inverse Hessian approximation
# on the very first iterate.
opts.scaleH0 = False

# Restart PyGRANSO
opts.maxit = 100 # increase maximum allowed iterations

# Main algorithm
soln = pygranso(var_spec=var_in, combined_fn=comb_fn, user_opts=opts)


╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║  PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface,  ║
║  the default is osqp. Users may provide their own wrapper for the QP solver.                  ║
║  To disable this notice, set opts.quadprog_info_msg = False                                   ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation                                             ║
Version 1.0.0                                                                                                    ║
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang                                       ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications:                                                                                          ║
 # of variables                     :   2                                                                        ║
 # of inequality constraints        :   2                                                                        ║
 # of equality constraints          :   0                                                                        ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
   0 ║ 3.815204 │  39.4225027901 ║  9.27057783616 ║ 4.053355 │   -  ║ -  │     1 │ 0.000000 ║     1 │ 0.161642   ║
   1 ║ 2.503156 │  27.1660390047 ║  9.40556725606 ║ 3.622442 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.052571   ║
   2 ║ 2.252840 │  24.6945188331 ║  9.60502524451 ║ 3.055934 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.123979   ║
   3 ║ 2.027556 │  22.2891608988 ║  9.93655318601 ║ 2.142243 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.327632   ║
   4 ║ 1.642320 │  17.8466452739 ║  10.2830257908 ║ 0.958623 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.398523   ║
   5 ║ 1.642320 │  13.1488213993 ║  8.00624651871 ║ 0.000000 │   -  ║ S  │     4 │ 8.000000 ║     1 │ 1.154525   ║
   6 ║ 1.642320 │  11.5460120591 ║  6.65815886309 ║ 0.611182 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 1.564767   ║
   7 ║ 1.642320 │  9.64643631604 ║  5.87366310852 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 1.133714   ║
   8 ║ 1.642320 │  3.21798697724 ║  1.95941493549 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.753217   ║
   9 ║ 1.642320 │  3.20698340411 ║  1.95271491909 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.220366   ║
  10 ║ 1.642320 │  2.59256069646 ║  1.57859624223 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.043082   ║
  11 ║ 1.642320 │  2.09671972770 ║  1.27668134739 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.278937   ║
  12 ║ 1.642320 │  1.73910377054 ║  1.05893091751 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 0.273996   ║
  13 ║ 1.642320 │  1.64784761383 ║  1.00336553528 ║ 0.000000 │   -  ║ S  │     3 │ 1.500000 ║     1 │ 0.197039   ║
  14 ║ 1.642320 │  1.52746248381 ║  0.93006367811 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.016653   ║
  15 ║ 1.642320 │  1.39572672366 ║  0.84985048340 ║ 0.000000 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.150180   ║
  16 ║ 1.642320 │  1.12391051398 ║  0.68434305758 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.020073   ║
  17 ║ 1.642320 │  0.91199315687 ║  0.55530772041 ║ 0.000000 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.265923   ║
  18 ║ 1.642320 │  0.73621537898 ║  0.44827757835 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.274315   ║
  19 ║ 1.642320 │  0.66922940171 ║  0.40749017763 ║ 0.000000 │   -  ║ S  │     3 │ 1.500000 ║     1 │ 0.166023   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
  20 ║ 1.642320 │  0.65673189601 ║  0.39988051374 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.012112   ║
  21 ║ 1.642320 │  0.46186812731 ║  0.28122901468 ║ 0.000000 │   -  ║ S  │     4 │ 8.000000 ║     1 │ 0.086222   ║
  22 ║ 1.642320 │  0.39915827413 ║  0.24304532290 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 0.127273   ║
  23 ║ 1.642320 │  0.35431501107 ║  0.21574050158 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.004660   ║
  24 ║ 1.642320 │  0.33232210837 ║  0.20234914160 ║ 0.000000 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.164148   ║
  25 ║ 1.642320 │  0.25925238218 ║  0.15785737894 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.062718   ║
  26 ║ 1.642320 │  0.25600945034 ║  0.15588277522 ║ 0.000000 │   -  ║ S  │     3 │ 1.500000 ║     1 │ 0.118204   ║
  27 ║ 1.642320 │  0.21667523919 ║  0.13193238594 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.003458   ║
  28 ║ 1.642320 │  0.17790982347 ║  0.10832833313 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.050594   ║
  29 ║ 1.642320 │  0.17051247205 ║  0.10382412570 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.001414   ║
  30 ║ 1.642320 │  0.16160174138 ║  0.09672898541 ║ 0.002740 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.001201   ║
  31 ║ 1.642320 │  0.14809398483 ║  0.08991753054 ║ 4.21e-04 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.071809   ║
  32 ║ 1.642320 │  0.14522491546 ║  0.08842666871 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.006831   ║
  33 ║ 1.642320 │  0.14411244414 ║  0.08774929092 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.001069   ║
  34 ║ 1.642320 │  0.14302671565 ║  0.08565902896 ║ 0.001581 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 9.51e-04   ║
  35 ║ 1.642320 │  0.14225247491 ║  0.08656823696 ║ 7.97e-05 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.008330   ║
  36 ║ 1.642320 │  0.14173105591 ║  0.08629927646 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.003548   ║
  37 ║ 1.642320 │  0.14147487321 ║  0.08614328819 ║ 0.000000 │   -  ║ S  │     4 │ 0.125000 ║     2 │ 4.60e-04   ║
  38 ║ 1.642320 │  0.14129124284 ║  0.08603147664 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 4.36e-04   ║
  39 ║ 1.642320 │  0.14102011969 ║  0.08586637047 ║ 3.41e-08 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 9.41e-06   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
  40 ║ 1.642320 │  0.14092146669 ║  0.08580631945 ║ 4.09e-09 │   -  ║ S  │     1 │ 1.000000 ║     2 │ 7.95e-07   ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
F = final iterate, B = Best (to tolerance), MF = Most Feasible                                                   ║
Optimization results:                                                                                            ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
   F ║          │                ║  0.08580631945 ║ 4.09e-09 │   -  ║    │       │          ║       │            ║
   B ║          │                ║  0.08580631945 ║ 4.09e-09 │   -  ║    │       │          ║       │            ║
  MF ║          │                ║  0.08603147664 ║ 0.000000 │   -  ║    │       │          ║       │            ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations:              40                                                                                      ║
Function evaluations:    83                                                                                      ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances.                           ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
[7]:
soln.final.x
[7]:
tensor([[0.7071],
        [0.5000]], dtype=torch.float64)

Results Logs

(Optional) The opts below show the importance of choosing an initial point that is neither on nor near a nonsmooth manifold; that is, the objective and constraint functions should be smooth at and around the initial point.

[8]:
opts = pygransoStruct()
opts.torch_device = device
# Set a randomly generated starting point.  In theory, with probability
# one, a randomly selected point will not be on a nonsmooth manifold.
opts.x0 = torch.randn((2,1), device=device, dtype=torch.double)   # randomly generated is okay
opts.maxit = 100  # we'll use this value of maxit later
opts.opt_tol = 1e-6

# However, (0,0) or (1,1) are on the nonsmooth manifold and if PyGRANSO
# is started at either of them, it will break down on the first
# iteration.  This example highlights that it is imperative to start
# PyGRANSO at a point where the functions are smooth.

# Uncomment either of the following two lines to try starting PyGRANSO
# from (0,0) or (1,1), where the functions are not differentiable.

# opts.x0 = torch.ones((2,1), device=device, dtype=torch.double)     # uncomment this line to try this point
# opts.x0 = torch.zeros((2,1), device=device, dtype=torch.double)    # uncomment this line to try this point

# Uncomment the following two lines to try starting PyGRANSO from a
# uniformly perturbed version of (1,1).  pert_level needs to be at
# least 1e-3 or so to get consistently reliable optimization quality.

# pert_level = 1e-3
# opts.x0 = (torch.ones((2,1)) + pert_level * (torch.randn((2,1)) - 0.5)).to(device=device, dtype=torch.double)
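To see why (1, 1) is problematic: it lies on the manifold \(x_1^2 = x_2\), where the term \(8|x_1^2 - x_2|\) has a kink, so the objective is not differentiable there. A plain-Python illustration using one-sided difference quotients along \(x_2\):

```python
def f(x1, x2):
    # nonsmooth Rosenbrock objective with w = 8
    return 8 * abs(x1**2 - x2) + (1 - x1)**2

h = 2.0**-20   # power of two, so the floating-point arithmetic below is exact
# One-sided difference quotients of x2 -> f(1, x2) at x2 = 1:
slope_right = (f(1.0, 1.0 + h) - f(1.0, 1.0)) / h   # +8.0
slope_left  = (f(1.0, 1.0) - f(1.0, 1.0 - h)) / h   # -8.0
print(slope_right, slope_left)  # the one-sided slopes disagree: a kink at (1, 1)
```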

The opts below show how to use opts.halt_log_fn to create a history of iterates.

NOTE: NO NEED TO CHANGE ANYTHING BELOW

[9]:
# SETUP THE LOGGING FEATURES

# Set up PyGRANSO's logging functions; pass opts.maxit to it so that
# storage can be preallocated for efficiency.

class HaltLog:
    def __init__(self):
        pass

    def haltLog(self, iteration, x, penaltyfn_parts, d,get_BFGS_state_fn, H_regularized,
                ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level):

        # DON'T CHANGE THIS
        # increment the index/count
        self.index += 1

        # EXAMPLE:
        # store history of x iterates in a preallocated cell array
        self.x_iterates.append(x)
        self.f.append(penaltyfn_parts.f)
        self.tv.append(penaltyfn_parts.tv)

        # keep this false unless you want to implement a custom termination
        # condition
        halt = False
        return halt

    # Once PyGRANSO has run, you may call this function to retrieve all
    # the logging data stored in the shared variables, which are populated
    # by haltLog being called on every iteration of PyGRANSO.
    def getLog(self):
        # EXAMPLE
        # return x_iterates, trimmed to correct size
        log = pygransoStruct()
        log.x   = self.x_iterates[0:self.index]
        log.f   = self.f[0:self.index]
        log.tv  = self.tv[0:self.index]
        return log

    def makeHaltLogFunctions(self,maxit):
        # don't change these lambda functions
        halt_log_fn = lambda iteration, x, penaltyfn_parts, d,get_BFGS_state_fn, H_regularized, ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level: self.haltLog(iteration, x, penaltyfn_parts, d,get_BFGS_state_fn, H_regularized, ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level)

        get_log_fn = lambda : self.getLog()

        # Make your shared variables here to store PyGRANSO history data
        # EXAMPLE - store history of iterates x_0,x_1,...,x_k
        self.index       = 0
        self.x_iterates  = []
        self.f           = []
        self.tv          = []

        # Only modify the body of logIterate(), not its name or arguments.
        # Store whatever data you wish from the current PyGRANSO iteration info,
        # given by the input arguments, into shared variables of
        # makeHaltLogFunctions, so that this data can be retrieved after PyGRANSO
        # has been terminated.
        #
        # DESCRIPTION OF INPUT ARGUMENTS
        #   iter                current iteration number
        #   x                   current iterate x
        #   penaltyfn_parts     struct containing the following
        #       OBJECTIVE AND CONSTRAINTS VALUES
        #       .f              objective value at x
        #       .f_grad         objective gradient at x
        #       .ci             inequality constraint at x
        #       .ci_grad        inequality gradient at x
        #       .ce             equality constraint at x
        #       .ce_grad        equality gradient at x
        #       TOTAL VIOLATION VALUES (inf norm, for determining feasibility)
        #       .tvi            total violation of inequality constraints at x
        #       .tve            total violation of equality constraints at x
        #       .tv             total violation of all constraints at x
        #       TOTAL VIOLATION VALUES (one norm, for L1 penalty function)
        #       .tvi_l1         total violation of inequality constraints at x
        #       .tvi_l1_grad    its gradient
        #       .tve_l1         total violation of equality constraints at x
        #       .tve_l1_grad    its gradient
        #       .tv_l1          total violation of all constraints at x
        #       .tv_l1_grad     its gradient
        #       PENALTY FUNCTION VALUES
        #       .p              penalty function value at x
        #       .p_grad         penalty function gradient at x
        #       .mu             current value of the penalty parameter
        #       .feasible_to_tol logical indicating whether x is feasible
        #   d                   search direction
        #   get_BFGS_state_fn   function handle to get the (L)BFGS state data
        #                       FULL MEMORY:
        #                       - returns BFGS inverse Hessian approximation
        #                       LIMITED MEMORY:
        #                       - returns a struct with current L-BFGS state:
        #                           .S          matrix of the BFGS s vectors
        #                           .Y          matrix of the BFGS y vectors
        #                           .rho        row vector of the 1/sty values
        #                           .gamma      H0 scaling factor
        #   H_regularized       regularized version of H
        #                       [] if no regularization was applied to H
        #   fn_evals            number of function evaluations incurred during
        #                       this iteration
        #   alpha               size of the accepted step
        #   n_gradients         number of previous gradients used for computing
        #                       the termination QP
        #   stat_vec            stationarity measure vector, i.e., the
        #                       smallest vector in the convex hull of current
        #                       and past gradients (result of termination QP)
        #   stat_val            approximate value of stationarity:
        #                           norm(stat_vec)
        #   fallback_level      level of fallback strategy that was needed
        #                       for a successful step to be taken.  See
        #                       bfgssqpOptionsAdvanced.
        #
        # OUTPUT ARGUMENT
        #   halt                set this to true if you wish optimization to
        #                       be halted at the current iterate.  This can be
        #                       used to create a custom termination condition.
        return [halt_log_fn, get_log_fn]
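The mechanism above is an ordinary closure pattern: one function records per-iteration data and can request a halt by returning True, while a companion function returns whatever was logged, even if the loop stopped early. A minimal sketch of that pattern with a toy loop (all names and the simplified signature here are illustrative, not part of the PyGRANSO API):

```python
def make_halt_log_functions(f_target):
    """Toy analogue of makeHaltLogFunctions: logs (iteration, x, f) tuples
    and halts the caller's loop once the objective reaches f_target."""
    history = []

    def halt_log_fn(iteration, x, f_value):
        history.append((iteration, x, f_value))
        # Returning True plays the role of halt = True: stop optimizing.
        return f_value <= f_target

    def get_log_fn():
        # Usable even if the loop stopped early: returns whatever was logged.
        return list(history)

    return halt_log_fn, get_log_fn

# Toy "optimizer": halve the objective until the halt condition fires.
halt_log_fn, get_log_fn = make_halt_log_functions(f_target=1.0)
x, f = 8.0, 8.0
for k in range(100):
    if halt_log_fn(k, x, f):
        break
    x, f = x / 2.0, f / 2.0

log = get_log_fn()   # entries for k = 0, 1, 2, 3; last f is 1.0
```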

mHLF_obj = HaltLog()
[halt_log_fn, get_log_fn] = mHLF_obj.makeHaltLogFunctions(opts.maxit)

#  Set PyGRANSO's logging function in opts
opts.halt_log_fn = halt_log_fn

# Main algorithm with logging enabled.
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)

# GET THE HISTORY OF ITERATES
# Even if an error is thrown, the log generated until the error can be
# obtained by calling get_log_fn()
log = get_log_fn()


╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║  PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface,  ║
║  the default is osqp. Users may provide their own wrapper for the QP solver.                  ║
║  To disable this notice, set opts.quadprog_info_msg = False                                   ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation                                             ║
Version 1.0.0                                                                                                    ║
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang                                       ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications:                                                                                          ║
 # of variables                     :   2                                                                        ║
 # of inequality constraints        :   2                                                                        ║
 # of equality constraints          :   0                                                                        ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
   0 ║ 1.000000 │  5.91478160154 ║  5.51653284672 ║ 0.398249 │   -  ║ -  │     1 │ 0.000000 ║     1 │ 9.144089   ║
   1 ║ 0.348678 │  0.62468632153 ║  1.79158287318 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 2.153127   ║
   2 ║ 0.348678 │  0.53421825751 ║  1.53212299950 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.269070   ║
   3 ║ 0.348678 │  0.19520379803 ║  0.55983902524 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.138582   ║
   4 ║ 0.348678 │  0.17201277586 ║  0.49332782323 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 0.074744   ║
   5 ║ 0.348678 │  0.13420574786 ║  0.38489832585 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.033149   ║
   6 ║ 0.348678 │  0.12140505653 ║  0.34818630166 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.061771   ║
   7 ║ 0.348678 │  0.10616098112 ║  0.30446672036 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.025510   ║
   8 ║ 0.348678 │  0.09861370484 ║  0.28282134339 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.004640   ║
   9 ║ 0.348678 │  0.07435712849 ║  0.21325416183 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.033337   ║
  10 ║ 0.348678 │  0.07349145772 ║  0.21077144232 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.081071   ║
  11 ║ 0.348678 │  0.06086053106 ║  0.17454629843 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.060785   ║
  12 ║ 0.348678 │  0.05528000599 ║  0.15854150882 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.006487   ║
  13 ║ 0.348678 │  0.05415635969 ║  0.15531892272 ║ 0.000000 │   -  ║ S  │     5 │ 0.187500 ║     1 │ 0.142723   ║
  14 ║ 0.348678 │  0.05137077396 ║  0.14732994087 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.012874   ║
  15 ║ 0.348678 │  0.04882162145 ║  0.14001904285 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.007669   ║
  16 ║ 0.348678 │  0.04687607525 ║  0.13443927084 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 4.45e-04   ║
  17 ║ 0.348678 │  0.04220315351 ║  0.12103746218 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.020577   ║
  18 ║ 0.348678 │  0.04085127012 ║  0.08516696610 ║ 0.007515 │   -  ║ S  │     4 │ 1.750000 ║     1 │ 0.006555   ║
  19 ║ 0.348678 │  0.03800712919 ║  0.09930640794 ║ 0.002332 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 0.020930   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
  20 ║ 0.348678 │  0.03233074008 ║  0.09272365699 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 8.68e-04   ║
  21 ║ 0.121577 │  0.01110758374 ║  0.09136280137 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.003355   ║
  22 ║ 0.121577 │  0.01060454734 ║  0.08722519451 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 4.25e-04   ║
  23 ║ 0.121577 │  0.01049931349 ║  0.08635961832 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 3.08e-04   ║
  24 ║ 0.121577 │  0.01047461989 ║  0.08615650700 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     2 │ 6.81e-05   ║
  25 ║ 0.121577 │  0.01046947623 ║  0.08611419906 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     2 │ 7.01e-05   ║
  26 ║ 0.121577 │  0.01045898771 ║  0.08602792815 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     2 │ 7.99e-05   ║
  27 ║ 0.121577 │  0.01045690779 ║  0.08601082031 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     2 │ 1.08e-04   ║
  28 ║ 0.121577 │  0.01044739431 ║  0.08580158663 ║ 1.20e-05 │   -  ║ S  │     9 │ 2.218750 ║     1 │ 6.63e-06   ║
  29 ║ 0.109419 │  0.00940025575 ║  0.08587490291 ║ 3.91e-06 │   -  ║ SI │    17 │ 1.53e-05 ║     2 │ 6.54e-07   ║
  30 ║ 0.109419 │  0.00939588755 ║  0.08587072155 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     3 │ 6.46e-07   ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
F = final iterate, B = Best (to tolerance), MF = Most Feasible                                                   ║
Optimization results:                                                                                            ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
   F ║          │                ║  0.08587072155 ║ 0.000000 │   -  ║    │       │          ║       │            ║
   B ║          │                ║  0.08580454545 ║ 0.000000 │   -  ║    │       │          ║       │            ║
  MF ║          │                ║  0.08580454545 ║ 0.000000 │   -  ║    │       │          ║       │            ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations:              30                                                                                      ║
Function evaluations:    75                                                                                      ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances.                           ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
[10]:
print(log.f[0:3])
print(log.x[0:3])
[5.516532846720021, 1.7915828731811967, 1.5321229994988945]
[tensor([[0.2745],
        [0.6991]], dtype=torch.float64), tensor([[0.4303],
        [0.0018]], dtype=torch.float64), tensor([[0.3361],
        [0.2494]], dtype=torch.float64)]
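As a sanity check, the logged objective values can be recomputed from the logged iterates via the objective formula \(f(x) = 8|x_1^2 - x_2| + (1 - x_1)^2\). The numbers below are copied from the printout above; since the iterates are printed to only four decimals, agreement holds only to roughly that precision:

```python
# Recompute f at the first three logged iterates and compare with log.f.
def obj(x1, x2, w=8):
    return w * abs(x1**2 - x2) + (1 - x1)**2

logged_f = [5.516532846720021, 1.7915828731811967, 1.5321229994988945]
logged_x = [(0.2745, 0.6991), (0.4303, 0.0018), (0.3361, 0.2494)]

for f_val, (x1, x2) in zip(logged_f, logged_x):
    # x is printed to only 4 decimals, hence the loose tolerance.
    assert abs(obj(x1, x2) - f_val) < 1e-3
```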

L-BFGS Restarting

(Optional)

(Note that this example problem only has two variables!)

If PyGRANSO runs in limited-memory mode, that is, if opts.limited_mem_size > 0, then PyGRANSO’s restart procedure is slightly different from full-memory BFGS restarting, since soln.H_final will instead contain the most recent L-BFGS state, not a full inverse Hessian approximation.

Instead of the standard BFGS procedure, users should do the following:

  1. If you set a specific H0, set opts.H0 to whatever you used previously. By default, PyGRANSO uses the identity for H0.

  2. Warm-start PyGRANSO with the most recent L-BFGS data by setting: opts.limited_mem_warm_start = soln.H_final

NOTE: how to set opts.scaleH0 so that PyGRANSO will be restarted as if it had never terminated depends on the previously used values of opts.scaleH0 and opts.limited_mem_fixed_scaling.

[11]:
opts = pygransoStruct()
opts.torch_device = device
# set an infeasible initial point
opts.x0 = 5.5*torch.ones((2,1), device=device, dtype=torch.double)

opts.print_ascii = True
opts.quadprog_info_msg  = False
opts.maxit = 10 # default is 1000
opts.mu0 = 100  # default is 1
opts.print_frequency = 2


# By default, PyGRANSO uses full-memory BFGS updating.  For nonsmooth
# problems, full-memory BFGS is generally recommended.  However, if
# this is not feasible, one may optionally enable limited-memory BFGS
# updating by setting opts.limited_mem_size to a positive integer
# (significantly) less than the number of variables.
opts.limited_mem_size = 1

# start main algorithm
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)


==================================================================================================================
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation                                             |
Version 1.0.0                                                                                                    |
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang                                       |
==================================================================================================================
Problem specifications:                                                                                          |
 # of variables                     :   2                                                                        |
 # of inequality constraints        :   2                                                                        |
 # of equality constraints          :   0                                                                        |
==================================================================================================================
Limited-memory mode enabled with size = 1.                                                                       |
NOTE: limited-memory mode is generally NOT                                                                       |
recommended for nonsmooth problems.                                                                              |
==================================================================================================================
     | <--- Penalty Function --> |                | Total Violation | <--- Line Search ---> | <- Stationarity -> |
Iter |    Mu    |      Value     |    Objective   |   Ineq   |  Eq  | SD | Evals |     t    | Grads |    Value   |
=====|===========================|================|=================|=======================|====================|
   0 | 100.0000 |  21841.7781746 |  218.250000000 | 10.00000 |   -  | -  |     1 | 0.000000 |     1 | 9732.768   |
   2 | 34.86784 |  1378.57842221 |  39.2815768212 | 8.914529 |   -  | S  |     3 | 1.500000 |     1 | 4.455384   |
   4 | 12.15767 |  262.610250639 |  20.8774186596 | 8.789579 |   -  | S  |     2 | 2.000000 |     1 | 0.604009   |
   6 | 4.239116 |  57.9458175917 |  11.5708792009 | 8.895520 |   -  | S  |     3 | 0.750000 |     1 | 0.165224   |
   8 | 1.642320 |  26.2056730861 |  10.5532019880 | 8.873935 |   -  | S  |     1 | 1.000000 |     1 | 0.027766   |
  10 | 1.642320 |  25.9163909350 |  10.4174724535 | 8.807564 |   -  | S  |     2 | 2.000000 |     1 | 0.021455   |
==================================================================================================================
F = final iterate, B = Best (to tolerance), MF = Most Feasible                                                   |
Optimization results:                                                                                            |
==================================================================================================================
   F |          |                |  10.4174724535 | 8.807564 |   -  |    |       |          |       |            |
  MF |          |                |  75.6886238113 | 8.192103 |   -  |    |       |          |       |            |
==================================================================================================================
Iterations:              10                                                                                      |
Function evaluations:    29                                                                                      |
PyGRANSO termination code: 4 --- max iterations reached.                                                         |
==================================================================================================================
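Before restarting, it may help to see what the L-BFGS state in soln.H_final (.S, .Y, .rho, .gamma) is used for. Below is a toy sketch of the standard L-BFGS two-loop recursion on a smooth 2-variable quadratic (not PyGRANSO's implementation; the helper names are illustrative), with memory 1 and a gamma recomputed every iteration, mirroring opts.limited_mem_size = 1 and opts.limited_mem_fixed_scaling = False:

```python
# Toy sketch of the standard L-BFGS two-loop recursion.  It consumes the
# same kind of state PyGRANSO stores: S holds s_k = x_{k+1} - x_k, Y holds
# y_k = g_{k+1} - g_k, rho[k] = 1/(s_k . y_k), and gamma scales H0 = gamma*I.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def two_loop_direction(g, S, Y, rho, gamma):
    # Returns d = -H*g, where H is the L-BFGS inverse Hessian approximation.
    q = list(g)
    alphas = []
    for s, y, r in zip(reversed(S), reversed(Y), reversed(rho)):
        a = r * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    q = [gamma * qi for qi in q]              # apply H0 = gamma * I
    for s, y, r, a in zip(S, Y, rho, reversed(alphas)):
        b = r * dot(y, q)
        q = [qi + (a - b) * si for qi, si in zip(q, s)]
    return [-qi for qi in q]

# Minimize the smooth quadratic f(x) = 0.5*(x1^2 + 10*x2^2) with memory 1
# (cf. opts.limited_mem_size = 1) and gamma recomputed every iteration
# (cf. opts.limited_mem_fixed_scaling = False).
def grad(x):
    return [x[0], 10.0 * x[1]]

x = [1.0, 1.0]
S, Y, rho, m = [], [], [], 1
for _ in range(50):
    g = grad(x)
    if dot(g, g) ** 0.5 < 1e-10:
        break
    gamma = dot(S[-1], Y[-1]) / dot(Y[-1], Y[-1]) if S else 1.0
    d = two_loop_direction(g, S, Y, rho, gamma)
    t = -dot(g, d) / (d[0] ** 2 + 10.0 * d[1] ** 2)   # exact line search
    x_new = [xi + t * di for xi, di in zip(x, d)]
    g_new = grad(x_new)
    s = [a - b for a, b in zip(x_new, x)]
    y = [a - b for a, b in zip(g_new, g)]
    S.append(s); Y.append(y); rho.append(1.0 / dot(s, y))
    S, Y, rho = S[-m:], Y[-m:], rho[-m:]
    x = x_new
```

Warm-starting simply means handing a saved (S, Y, rho, gamma) back to the updating scheme so the first restarted step already uses curvature information from the previous run.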
[12]:
# Restart
opts = pygransoStruct()
opts.torch_device = device
# set the initial point and penalty parameter to their final values from the previous run
opts.x0 = soln.final.x
opts.mu0 = soln.final.mu
opts.limited_mem_size = 1
opts.quadprog_info_msg  = False
opts.print_frequency = 2

opts.limited_mem_warm_start = soln.H_final
opts.scaleH0 = False

# In contrast to full-memory BFGS updating, limited-memory BFGS
# permits H0 to be scaled on every iteration.  By default,
# PyGRANSO will reuse the scaling parameter that is calculated on the
# very first iteration for all subsequent iterations as well.  Set
# this option to false to force PyGRANSO to calculate a new scaling
# parameter on every iteration.  Note that opts.scaleH0 has no effect
# when opts.limited_mem_fixed_scaling is set to false.
opts.limited_mem_fixed_scaling = False

# Restart PyGRANSO
opts.maxit = 100 # increase maximum allowed iterations

# Main algorithm
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)


═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation                                             ║
Version 1.0.0                                                                                                    ║
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang                                       ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications:                                                                                          ║
 # of variables                     :   2                                                                        ║
 # of inequality constraints        :   2                                                                        ║
 # of equality constraints          :   0                                                                        ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Limited-memory mode enabled with size = 1.                                                                       ║
NOTE: limited-memory mode is generally NOT                                                                       ║
recommended for nonsmooth problems.                                                                              ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
   0 ║ 1.642320 │  25.9163909350 ║  10.4174724535 ║ 8.807564 │   -  ║ -  │     1 │ 0.000000 ║     1 │ 0.142170   ║
   2 ║ 1.642320 │  13.8167164344 ║  6.50516654793 ║ 3.133149 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 3.836712   ║
   4 ║ 1.642320 │  6.43976368442 ║  3.92113741712 ║ 1.49e-13 │   -  ║ S  │     4 │ 8.000000 ║     1 │ 3.895256   ║
   6 ║ 1.642320 │  4.83429798431 ║  2.94357800080 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.051607   ║
   8 ║ 1.642320 │  4.65433194405 ║  2.83399764834 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.018191   ║
  10 ║ 1.642320 │  3.93915291951 ║  2.39852899290 ║ 0.000000 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.007871   ║
  12 ║ 1.642320 │  3.11071061794 ║  1.89409493820 ║ 0.000000 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.062960   ║
  14 ║ 1.642320 │  2.65454712546 ║  1.61633944493 ║ 0.000000 │   -  ║ S  │     7 │ 0.046875 ║     1 │ 0.586153   ║
  16 ║ 1.642320 │  2.32279937155 ║  1.41434002467 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.126551   ║
  18 ║ 1.642320 │  2.07095045978 ║  1.26099057897 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.018684   ║
  20 ║ 1.642320 │  1.90771826108 ║  1.16159937250 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.021229   ║
  22 ║ 1.642320 │  1.59963185116 ║  0.97400721713 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.037486   ║
  24 ║ 1.642320 │  1.37732033554 ║  0.83864293283 ║ 0.000000 │   -  ║ S  │     3 │ 0.750000 ║     1 │ 0.208485   ║
  26 ║ 1.642320 │  1.21815586560 ║  0.74172854449 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.033736   ║
  28 ║ 1.642320 │  1.01220061824 ║  0.61632350383 ║ 0.000000 │   -  ║ S  │     3 │ 4.000000 ║     1 │ 0.002341   ║
  30 ║ 1.642320 │  0.84526282661 ║  0.51467598178 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.031024   ║
  32 ║ 1.642320 │  0.72210442469 ║  0.43968549429 ║ 0.000000 │   -  ║ S  │     4 │ 0.375000 ║     1 │ 0.165416   ║
  34 ║ 1.642320 │  0.68375411607 ║  0.41633419797 ║ 0.000000 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 0.010354   ║
  36 ║ 1.642320 │  0.62473059481 ║  0.38039509382 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.018070   ║
  38 ║ 1.642320 │  0.54461333491 ║  0.33161212585 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.533683   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║   Ineq   │  Eq  ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
  40 ║ 1.642320 │  0.38148532649 ║  0.23228436028 ║ 0.000000 │   -  ║ S  │     4 │ 0.125000 ║     1 │ 0.005209   ║
  42 ║ 1.642320 │  0.34267381138 ║  0.20865223780 ║ 0.000000 │   -  ║ S  │     2 │ 0.500000 ║     1 │ 0.099888   ║
  44 ║ 1.642320 │  0.23778293036 ║  0.14478474538 ║ 0.000000 │   -  ║ S  │     4 │ 0.125000 ║     1 │ 0.084472   ║
  46 ║ 1.642320 │  0.22843538334 ║  0.13909307436 ║ 0.000000 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 0.002119   ║
  48 ║ 1.642320 │  0.19511910847 ║  0.11744870451 ║ 0.002231 │   -  ║ S  │     1 │ 1.000000 ║     1 │ 2.756868   ║
  50 ║ 1.642320 │  0.14343687320 ║  0.08733793941 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     1 │ 0.001760   ║
  52 ║ 1.642320 │  0.14327776834 ║  0.08653941321 ║ 6.90e-04 │   -  ║ S  │     2 │ 2.000000 ║     1 │ 4.74e-04   ║
  54 ║ 1.197252 │  0.10274997930 ║  0.08582154855 ║ 0.000000 │   -  ║ S  │     3 │ 0.250000 ║     2 │ 9.34e-06   ║
  56 ║ 1.197252 │  0.10272513127 ║  0.08580003011 ║ 9.15e-07 │   -  ║ S  │     1 │ 1.000000 ║     2 │ 6.54e-08   ║
  58 ║ 1.197252 │  0.10271231607 ║  0.08578987986 ║ 2.52e-07 │   -  ║ S  │     1 │ 1.000000 ║     4 │ 1.14e-07   ║
  60 ║ 1.077526 │  0.09243733513 ║  0.08578659987 ║ 1.19e-08 │   -  ║ S  │     2 │ 0.500000 ║     4 │ 4.63e-09   ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
F = final iterate, B = Best (to tolerance), MF = Most Feasible                                                   ║
Optimization results:                                                                                            ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
   F ║          │                ║  0.08578659987 ║ 1.19e-08 │   -  ║    │       │          ║       │            ║
   B ║          │                ║  0.08578659987 ║ 1.19e-08 │   -  ║    │       │          ║       │            ║
  MF ║          │                ║  0.08580650897 ║ 0.000000 │   -  ║    │       │          ║       │            ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations:              60                                                                                      ║
Function evaluations:    137                                                                                     ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances.                           ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝