Rosenbrock¶
Minimize a 2-variable nonsmooth Rosenbrock function, subject to simple bound constraints. Taken from: GRANSO demo examples 1, 2, & 3.
Problem Description¶
\[
\min_{x_1, x_2} \; w\,|x_1^2 - x_2| + (1 - x_1)^2 \quad \text{subject to} \quad \sqrt{2}\,x_1 \le 1, \quad 2 x_2 \le 1,
\]
where \(w\) is a constant (e.g., \(w=8\)).
Modules Importing¶
Import all necessary modules and add PyGRANSO src folder to system path.
[1]:
import time
import torch
from pygranso.pygranso import pygranso
from pygranso.pygransoStruct import pygransoStruct
Function Set-Up¶
Encode the optimization variables, and objective and constraint functions.
Note: please strictly follow the format of comb_fn, which will be used in the PyGRANSO main algorithm.
[2]:
device = torch.device('cpu')
# variables and corresponding dimensions.
var_in = {"x1": [1], "x2": [1]}
def comb_fn(X_struct):
    x1 = X_struct.x1
    x2 = X_struct.x2
    # objective function
    f = 8 * abs(x1**2 - x2) + (1 - x1)**2
    # inequality constraints, one scalar constraint per field: c_i(x) <= 0
    ci = pygransoStruct()
    ci.c1 = (2**0.5) * x1 - 1
    ci.c2 = 2 * x2 - 1
    # equality constraints: none
    ce = None
    return [f, ci, ce]
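(Optional) As a quick sanity check, one can evaluate comb_fn by hand at a candidate point. This is only a minimal sketch of what PyGRANSO does internally, using a hypothetical struct X built the same way PyGRANSO builds X_struct:

X = pygransoStruct()
X.x1 = torch.ones((1,1), dtype=torch.double)
X.x2 = torch.ones((1,1), dtype=torch.double)
f, ci, ce = comb_fn(X)
print(f)      # objective at (1,1): 8*|1 - 1| + (1 - 1)^2 = 0
print(ci.c1)  # sqrt(2)*1 - 1 ≈ 0.414 > 0, so c1 is violated at (1,1)
print(ci.c2)  # 2*1 - 1 = 1 > 0, so c2 is violated at (1,1)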
User Options¶
Specify user-defined options for PyGRANSO
[3]:
opts = pygransoStruct()
# option for switching the QP solver. osqp is currently the only supported
# QP solver, and it is the default.
# opts.QPsolver = 'osqp'
# set an initial point
# All user-provided data (vectors/matrices/tensors) must be torch tensors.
# Since PyTorch tensors are single precision by default, one must explicitly set `dtype=torch.double`.
# Also, make sure the device of any provided torch tensor matches opts.torch_device.
opts.x0 = torch.ones((2,1), device=device, dtype=torch.double)
opts.torch_device = device
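(Optional) The same setup also runs on GPU; per the note above, opts.torch_device and every user-provided tensor must refer to the same device. A minimal sketch, assuming a CUDA device is available (the logs below were produced on CPU):

if torch.cuda.is_available():
    device = torch.device('cuda')
    opts.torch_device = device
    opts.x0 = torch.ones((2,1), device=device, dtype=torch.double)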
Main Algorithm¶
[4]:
start = time.time()
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)
end = time.time()
print("Total Wall Time: {}s".format(end - start))
print(soln.final.x)
╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║ PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface, ║
║ the default is osqp. Users may provide their own wrapper for the QP solver. ║
║ To disable this notice, set opts.quadprog_info_msg = False ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║
Version 1.2.0 ║
Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications: ║
# of variables : 2 ║
# of inequality constraints : 2 ║
# of equality constraints : 0 ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
0 ║ 1.000000 │ 1.41421356237 ║ 0.00000000000 ║ 1.000000 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 0.579471 ║
1 ║ 1.000000 │ 0.70773811042 ║ 0.70773811042 ║ 0.000000 │ - ║ S │ 3 │ 1.500000 ║ 1 │ 10.07366 ║
2 ║ 1.000000 │ 0.25401310554 ║ 0.25401310554 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.198885 ║
3 ║ 1.000000 │ 0.21478744238 ║ 0.21478744238 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.135710 ║
4 ║ 1.000000 │ 0.21422378595 ║ 0.21422378595 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.332997 ║
5 ║ 1.000000 │ 0.15330884270 ║ 0.15330884270 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.122691 ║
6 ║ 1.000000 │ 0.14804462353 ║ 0.14804462353 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.012623 ║
7 ║ 1.000000 │ 0.10856024489 ║ 0.10856024489 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.042111 ║
8 ║ 1.000000 │ 0.10482595154 ║ 0.10482595154 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.003211 ║
9 ║ 0.810000 │ 0.07758251262 ║ 0.09438278485 ║ 0.001132 │ - ║ S │ 3 │ 1.500000 ║ 1 │ 0.038778 ║
10 ║ 0.810000 │ 0.07197699268 ║ 0.08886048479 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.012307 ║
11 ║ 0.810000 │ 0.07055904204 ║ 0.08710992844 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.003100 ║
12 ║ 0.810000 │ 0.07048871361 ║ 0.08702310322 ║ 0.000000 │ - ║ S │ 7 │ 0.046875 ║ 1 │ 0.003061 ║
13 ║ 0.810000 │ 0.07020995506 ║ 0.08667895687 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.001026 ║
14 ║ 0.810000 │ 0.06962027906 ║ 0.08595096180 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 5.51e-06 ║
15 ║ 0.810000 │ 0.06952233581 ║ 0.08581975963 ║ 8.33e-06 │ - ║ S │ 3 │ 4.000000 ║ 2 │ 5.10e-06 ║
16 ║ 0.729000 │ 0.06255153247 ║ 0.08579440422 ║ 4.17e-06 │ - ║ S │ 2 │ 0.500000 ║ 3 │ 3.98e-06 ║
17 ║ 0.729000 │ 0.06254666970 ║ 0.08579790082 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 4 │ 7.59e-19 ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Optimization results: ║
F = final iterate, B = Best (to tolerance), MF = Most Feasible ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
F ║ │ ║ 0.08579790082 ║ 0.000000 │ - ║ │ │ ║ │ ║
B ║ │ ║ 0.08578643763 ║ 0.000000 │ - ║ │ │ ║ │ ║
MF ║ │ ║ 0.08578643763 ║ 0.000000 │ - ║ │ │ ║ │ ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations: 17 ║
Function evaluations: 40 ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
Total Wall Time: 0.18392133712768555s
tensor([[0.7071],
[0.5000]], dtype=torch.float64)
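The final iterate agrees with the known constrained minimizer \(x^* = (1/\sqrt{2},\, 1/2)\): both inequality constraints are active there, \(x_1^2 = x_2\) makes the nonsmooth term vanish, and \(f(x^*) = (1 - 1/\sqrt{2})^2 \approx 0.085786\), matching the “B” row of the results table. A quick numerical check:

import math
x_star = torch.tensor([[1/math.sqrt(2)], [0.5]], device=device, dtype=torch.double)
print(torch.norm(soln.final.x - x_star).item())  # small residual, ~1e-4 or less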
PyGRANSO Restarting¶
(Optional) The following example shows how to set various PyGRANSO options (such as simpler ASCII printing) and how to restart PyGRANSO.
[5]:
opts = pygransoStruct()
opts.torch_device = device
# set an infeasible initial point
opts.x0 = 5.5*torch.ones((2,1), device=device, dtype=torch.double)
# By default, PyGRANSO prints using extended ASCII characters to 'draw' table borders, along with some colored output.
# To create a plain-text log of the console output, set opts.print_ascii = True.
opts.print_ascii = True
# By default, PyGRANSO prints an info message about QP solvers, since
# PyGRANSO can be used with any QP solver that has a quadprog-compatible
# interface. Let's disable this message since we've already seen it
# hundreds of times and can now recite it from memory. ;-)
opts.quadprog_info_msg = False
# Try a very short run.
opts.maxit = 10 # default is 1000
# PyGRANSO's penalty parameter is on the *objective* function, thus
# higher penalty parameter values favor objective minimization more
# highly than attaining feasibility. Let's set PyGRANSO to start off
# with a higher initial value of the penalty parameter. PyGRANSO will
# automatically tune the penalty parameter to promote progress towards
# feasibility. PyGRANSO only adjusts the penalty parameter in a
# monotonically decreasing fashion.
opts.mu0 = 100 # default is 1
# start main algorithm
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)
==================================================================================================================
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation |
Version 1.2.0 |
Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang |
==================================================================================================================
Problem specifications: |
# of variables : 2 |
# of inequality constraints : 2 |
# of equality constraints : 0 |
==================================================================================================================
| <--- Penalty Function --> | | Total Violation | <--- Line Search ---> | <- Stationarity -> |
Iter | Mu | Value | Objective | Ineq | Eq | SD | Evals | t | Grads | Value |
=====|===========================|================|=================|=======================|====================|
0 | 100.0000 | 21841.7781746 | 218.250000000 | 10.00000 | - | - | 1 | 0.000000 | 1 | 9732.770 |
1 | 34.86784 | 1509.66611872 | 42.9789783006 | 11.08181 | - | S | 10 | 0.001953 | 1 | 546.6040 |
2 | 34.86784 | 1378.57842221 | 39.2815768212 | 8.914529 | - | S | 3 | 1.500000 | 1 | 4.455384 |
3 | 12.15767 | 285.452828054 | 22.6935200208 | 9.552604 | - | S | 2 | 2.000000 | 1 | 0.297144 |
4 | 12.15767 | 264.999595731 | 21.0732808630 | 8.797697 | - | S | 2 | 2.000000 | 1 | 0.603629 |
5 | 4.239116 | 60.5144787250 | 12.1478493778 | 9.018338 | - | S | 2 | 2.000000 | 1 | 0.111610 |
6 | 4.239116 | 53.5399399407 | 10.5181947367 | 8.952094 | - | S | 2 | 0.500000 | 1 | 0.164082 |
7 | 3.815204 | 48.9917031616 | 10.4947962860 | 8.951912 | - | S | 4 | 0.125000 | 1 | 0.033640 |
8 | 3.815204 | 48.7011303503 | 10.4372013183 | 8.881076 | - | S | 2 | 2.000000 | 1 | 0.018555 |
9 | 3.815204 | 48.2564717826 | 10.3422772655 | 8.798572 | - | S | 2 | 2.000000 | 1 | 0.057946 |
10 | 3.815204 | 39.4225027901 | 9.27057783616 | 4.053355 | - | S | 5 | 16.00000 | 1 | 0.001796 |
==================================================================================================================
Optimization results: |
F = final iterate, B = Best (to tolerance), MF = Most Feasible |
==================================================================================================================
F | | | 9.27057783616 | 4.053355 | - | | | | | |
MF | | | 9.27057783616 | 4.053355 | - | | | | | |
==================================================================================================================
Iterations: 10 |
Function evaluations: 35 |
PyGRANSO termination code: 4 --- max iterations reached. |
==================================================================================================================
Let’s restart PyGRANSO from the last iterate of the previous run
[6]:
opts = pygransoStruct()
opts.torch_device = device
# set the initial point and penalty parameter to their final values from the previous run
opts.x0 = soln.final.x
opts.mu0 = soln.final.mu
opts.opt_tol = 1e-6
# PREPARE TO RESTART PyGRANSO IN FULL-MEMORY MODE
# Set the last BFGS inverse Hessian approximation as the initial
# Hessian for the next run. Generally this is a good thing to do, and
# often it is necessary to retain this information when restarting (as
# on difficult nonsmooth problems, PyGRANSO may not be able to restart
# without it). However, your mileage may vary. In our testing, with
# the above settings, omitting H0 caused PyGRANSO to take an additional
# 16 iterations to converge on this problem.
opts.H0 = soln.H_final # try running with this commented out
# When restarting, soln.H_final may fail PyGRANSO's initial check to
# assess whether or not the user-provided H0 is positive definite. If
# it fails this test, the test may be disabled by setting opts.checkH0
# to false.
# opts.checkH0 = False   # not needed for this example
# If one desires to restart PyGRANSO as if it had never stopped (e.g.
# to continue optimization after it hit its maxit limit), then one must
# also disable scaling the initial BFGS inverse Hessian approximation
# on the very first iterate.
opts.scaleH0 = False
# Restart PyGRANSO
opts.maxit = 100 # increase maximum allowed iterations
# Main algorithm
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)
╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║ PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface, ║
║ the default is osqp. Users may provide their own wrapper for the QP solver. ║
║ To disable this notice, set opts.quadprog_info_msg = False ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║
Version 1.2.0 ║
Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications: ║
# of variables : 2 ║
# of inequality constraints : 2 ║
# of equality constraints : 0 ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
0 ║ 3.815204 │ 39.4225027901 ║ 9.27057783616 ║ 4.053355 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 0.161642 ║
1 ║ 2.503156 │ 27.1659262914 ║ 9.40498130422 ║ 3.623796 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.052069 ║
2 ║ 2.252840 │ 24.6931839860 ║ 9.60278347893 ║ 3.059650 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.121925 ║
3 ║ 2.027556 │ 22.3469956308 ║ 9.74369172380 ║ 2.591115 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.107446 ║
4 ║ 2.027556 │ 21.7313246108 ║ 9.83432474079 ║ 1.791681 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.249443 ║
5 ║ 2.027556 │ 18.8070548305 ║ 9.27572664351 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 1.149786 ║
6 ║ 2.027556 │ 15.5789878011 ║ 7.13239951765 ║ 1.117649 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 1.812487 ║
7 ║ 2.027556 │ 5.32456682805 ║ 2.62610104757 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.324572 ║
8 ║ 2.027556 │ 5.06514188742 ║ 2.49815146400 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.840053 ║
9 ║ 2.027556 │ 4.34855971809 ║ 2.14472981556 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.161357 ║
10 ║ 2.027556 │ 4.17458588015 ║ 2.05892511204 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.438844 ║
11 ║ 2.027556 │ 3.88777301389 ║ 1.91746767656 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.026078 ║
12 ║ 2.027556 │ 3.02755206246 ║ 1.49320271480 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.341024 ║
13 ║ 2.027556 │ 2.82661618788 ║ 1.39410020980 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.138146 ║
14 ║ 2.027556 │ 2.61942625386 ║ 1.29191317368 ║ 0.000000 │ - ║ S │ 4 │ 3.000000 ║ 1 │ 0.271047 ║
15 ║ 2.027556 │ 2.41981099682 ║ 1.19346200337 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.022472 ║
16 ║ 2.027556 │ 1.68687275803 ║ 0.83197346564 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.273910 ║
17 ║ 2.027556 │ 1.62715982554 ║ 0.80252277047 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.038027 ║
18 ║ 2.027556 │ 1.53557783057 ║ 0.75735410592 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.015193 ║
19 ║ 2.027556 │ 1.04198868285 ║ 0.51391364968 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.094309 ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
20 ║ 2.027556 │ 0.93149970489 ║ 0.45941997346 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.110416 ║
21 ║ 2.027556 │ 0.82149497454 ║ 0.40516513040 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.029324 ║
22 ║ 2.027556 │ 0.71807054560 ║ 0.35415572251 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.173107 ║
23 ║ 2.027556 │ 0.71364643723 ║ 0.35197373175 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.012969 ║
24 ║ 2.027556 │ 0.54336780853 ║ 0.26799152256 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.049872 ║
25 ║ 2.027556 │ 0.52920946359 ║ 0.26100856119 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.090110 ║
26 ║ 2.027556 │ 0.38297864798 ║ 0.18888684491 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.078443 ║
27 ║ 2.027556 │ 0.34625417862 ║ 0.17077416634 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.148051 ║
28 ║ 2.027556 │ 0.33665070462 ║ 0.16603768844 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.003684 ║
29 ║ 2.027556 │ 0.26789115070 ║ 0.13212515763 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.089877 ║
30 ║ 2.027556 │ 0.25588826429 ║ 0.12620527840 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.054336 ║
31 ║ 2.027556 │ 0.19518453283 ║ 0.09626591659 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.028392 ║
32 ║ 2.027556 │ 0.19483151740 ║ 0.09609180774 ║ 0.000000 │ - ║ S │ 5 │ 0.062500 ║ 1 │ 0.026507 ║
33 ║ 2.027556 │ 0.17882219593 ║ 0.08819593616 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 7.43e-05 ║
34 ║ 2.027556 │ 0.17424853428 ║ 0.08592330867 ║ 3.42e-05 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 4.88e-05 ║
35 ║ 1.077526 │ 0.09246135145 ║ 0.08580889928 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 2 │ 6.04e-18 ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Optimization results: ║
F = final iterate, B = Best (to tolerance), MF = Most Feasible ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
F ║ │ ║ 0.08580889928 ║ 0.000000 │ - ║ │ │ ║ │ ║
B ║ │ ║ 0.08580889928 ║ 0.000000 │ - ║ │ │ ║ │ ║
MF ║ │ ║ 0.08580889928 ║ 0.000000 │ - ║ │ │ ║ │ ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations: 35 ║
Function evaluations: 77 ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
[7]:
soln.final.x
[7]:
tensor([[0.7071],
[0.5000]], dtype=torch.float64)
Results Logs¶
(Optional) The opts below illustrate the importance of using an initial point that is neither on nor near a nonsmooth manifold; that is, the functions (objective and constraints) should be smooth at and around the initial point.
[8]:
opts = pygransoStruct()
opts.torch_device = device
# Set a randomly generated starting point. In theory, with probability
# one, a randomly selected point will not be on a nonsmooth manifold.
opts.x0 = torch.randn((2,1), device=device, dtype=torch.double) # randomly generated is okay
opts.maxit = 100 # we'll use this value of maxit later
opts.opt_tol = 1e-6
# However, (0,0) or (1,1) are on the nonsmooth manifold and if PyGRANSO
# is started at either of them, it will break down on the first
# iteration. This example highlights that it is imperative to start
# PyGRANSO at a point where the functions are smooth.
# Uncomment either of the following two lines to try starting PyGRANSO
# from (0,0) or (1,1), where the functions are not differentiable.
# opts.x0 = torch.ones((2,1), device=device, dtype=torch.double) # uncomment this line to try this point
# opts.x0 = torch.zeros((2,1), device=device, dtype=torch.double) # uncomment this line to try this point
# Uncomment the following two lines to try starting PyGRANSO from a
# uniformly perturbed version of (1,1). pert_level needs to be at
# least 1e-3 or so to get consistently reliable optimization quality.
# pert_level = 1e-3
# opts.x0 = (torch.ones((2,1)) + pert_level * (torch.randn((2,1)) - 0.5)).to(device=device, dtype=torch.double)
The opts below show how to use opts.halt_log_fn to create a history of iterates.
NOTE: NO NEED TO CHANGE ANYTHING BELOW
[9]:
# SETUP THE LOGGING FEATURES
# Set up PyGRANSO's logging functions; pass opts.maxit to it so that
# storage can be preallocated for efficiency.
class HaltLog:
    def __init__(self):
        pass

    def haltLog(self, iteration, x, penaltyfn_parts, d, get_BFGS_state_fn, H_regularized,
                ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level):
        # DON'T CHANGE THIS
        # increment the index/count
        self.index += 1
        # EXAMPLE:
        # store history of x iterates in a preallocated cell array
        self.x_iterates.append(x)
        self.f.append(penaltyfn_parts.f)
        self.tv.append(penaltyfn_parts.tv)
        # keep this false unless you want to implement a custom termination
        # condition
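        # For example, a hypothetical early-stopping rule might be:
        #   halt = (penaltyfn_parts.f < 0.09) and penaltyfn_parts.feasible_to_tol
        # which would stop PyGRANSO at the first feasible-to-tolerance iterate
        # whose objective value is below 0.09.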
        halt = False
        return halt
    # Once PyGRANSO has run, you may call this function to retrieve all
    # the logging data stored in the shared variables, which are populated
    # by haltLog being called on every iteration of PyGRANSO.
    def getLog(self):
        # EXAMPLE
        # return x_iterates, trimmed to correct size
        log = pygransoStruct()
        log.x = self.x_iterates[0:self.index]
        log.f = self.f[0:self.index]
        log.tv = self.tv[0:self.index]
        return log
    def makeHaltLogFunctions(self, maxit):
        # don't change these lambda functions
        halt_log_fn = lambda iteration, x, penaltyfn_parts, d, get_BFGS_state_fn, H_regularized, ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level: self.haltLog(iteration, x, penaltyfn_parts, d, get_BFGS_state_fn, H_regularized, ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level)
        get_log_fn = lambda: self.getLog()
        # Make your shared variables here to store PyGRANSO history data
        # EXAMPLE - store history of iterates x_0, x_1, ..., x_k
        self.index = 0
        self.x_iterates = []
        self.f = []
        self.tv = []
        # Only modify the body of haltLog(), not its name or arguments.
        # Store whatever data you wish from the current PyGRANSO iteration info,
        # given by the input arguments, into shared variables of
        # makeHaltLogFunctions, so that this data can be retrieved after PyGRANSO
        # has been terminated.
        #
        # DESCRIPTION OF INPUT ARGUMENTS
        #   iteration           current iteration number
        #   x                   current iterate x
        #   penaltyfn_parts     struct containing the following
        #       OBJECTIVE AND CONSTRAINTS VALUES
        #       .f              objective value at x
        #       .f_grad         objective gradient at x
        #       .ci             inequality constraint at x
        #       .ci_grad        inequality gradient at x
        #       .ce             equality constraint at x
        #       .ce_grad        equality gradient at x
        #       TOTAL VIOLATION VALUES (inf norm, for determining feasibility)
        #       .tvi            total violation of inequality constraints at x
        #       .tve            total violation of equality constraints at x
        #       .tv             total violation of all constraints at x
        #       TOTAL VIOLATION VALUES (one norm, for L1 penalty function)
        #       .tvi_l1         total violation of inequality constraints at x
        #       .tvi_l1_grad    its gradient
        #       .tve_l1         total violation of equality constraints at x
        #       .tve_l1_grad    its gradient
        #       .tv_l1          total violation of all constraints at x
        #       .tv_l1_grad     its gradient
        #       PENALTY FUNCTION VALUES
        #       .p              penalty function value at x
        #       .p_grad         penalty function gradient at x
        #       .mu             current value of the penalty parameter
        #       .feasible_to_tol    logical indicating whether x is feasible
        #   d                   search direction
        #   get_BFGS_state_fn   function handle to get the (L)BFGS state data
        #       FULL MEMORY:
        #       - returns BFGS inverse Hessian approximation
        #       LIMITED MEMORY:
        #       - returns a struct with current L-BFGS state:
        #           .S          matrix of the BFGS s vectors
        #           .Y          matrix of the BFGS y vectors
        #           .rho        row vector of the 1/(s'y) values
        #           .gamma      H0 scaling factor
        #   H_regularized       regularized version of H;
        #                       [] if no regularization was applied to H
        #   ls_evals            number of function evaluations incurred during
        #                       this iteration
        #   alpha               size of the accepted step
        #   n_gradients         number of previous gradients used for computing
        #                       the termination QP
        #   stat_vec            stationarity measure vector
        #   stat_val            approximate value of stationarity:
        #                       norm(stat_vec) (result of the termination QP)
        #   fallback_level      level of fallback strategy needed for a
        #                       successful step to be taken. See
        #                       bfgssqpOptionsAdvanced.
        #
        # OUTPUT ARGUMENT
        #   halt                set this to true if you wish optimization to
        #                       be halted at the current iterate. This can be
        #                       used to create a custom termination condition.
        return [halt_log_fn, get_log_fn]
mHLF_obj = HaltLog()
[halt_log_fn, get_log_fn] = mHLF_obj.makeHaltLogFunctions(opts.maxit)
# Set PyGRANSO's logging function in opts
opts.halt_log_fn = halt_log_fn
# Main algorithm with logging enabled.
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)
# GET THE HISTORY OF ITERATES
# Even if an error is thrown, the log generated until the error can be
# obtained by calling get_log_fn()
log = get_log_fn()
╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║ PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface, ║
║ the default is osqp. Users may provide their own wrapper for the QP solver. ║
║ To disable this notice, set opts.quadprog_info_msg = False ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║
Version 1.2.0 ║
Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications: ║
# of variables : 2 ║
# of inequality constraints : 2 ║
# of equality constraints : 0 ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
0 ║ 1.000000 │ 16.4067374414 ║ 16.4067374414 ║ 0.000000 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 16.89371 ║
1 ║ 1.000000 │ 12.1088353467 ║ 11.5118588213 ║ 0.596977 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 19.28340 ║
2 ║ 1.000000 │ 1.11708224486 ║ 1.11708224486 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.478277 ║
3 ║ 1.000000 │ 0.55748343428 ║ 0.55748343428 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.825482 ║
4 ║ 1.000000 │ 0.54324826834 ║ 0.54324826834 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.072797 ║
5 ║ 1.000000 │ 0.43272406722 ║ 0.43272406722 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.029511 ║
6 ║ 1.000000 │ 0.39143728403 ║ 0.39143728403 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.165019 ║
7 ║ 1.000000 │ 0.33585127390 ║ 0.33585127390 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.111788 ║
8 ║ 1.000000 │ 0.27769332848 ║ 0.27769332848 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.016351 ║
9 ║ 1.000000 │ 0.27206257507 ║ 0.27206257507 ║ 0.000000 │ - ║ S │ 6 │ 0.093750 ║ 1 │ 0.185187 ║
10 ║ 1.000000 │ 0.25369049840 ║ 0.25369049840 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.018059 ║
11 ║ 1.000000 │ 0.24095795441 ║ 0.24095795441 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.002996 ║
12 ║ 1.000000 │ 0.22956867248 ║ 0.22956867248 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.014434 ║
13 ║ 1.000000 │ 0.16940732170 ║ 0.16940732170 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.080774 ║
14 ║ 1.000000 │ 0.14128118763 ║ 0.14128118763 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.100849 ║
15 ║ 1.000000 │ 0.13056475076 ║ 0.13056475076 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.005391 ║
16 ║ 1.000000 │ 0.12030008103 ║ 0.12030008103 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.035053 ║
17 ║ 1.000000 │ 0.09388862233 ║ 0.09388862233 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.001793 ║
18 ║ 1.000000 │ 0.08668315753 ║ 0.08668315753 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.001835 ║
19 ║ 1.000000 │ 0.08658279210 ║ 0.08658279210 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.001426 ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
20 ║ 1.000000 │ 0.08597722224 ║ 0.08597722224 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 3.76e-05 ║
21 ║ 0.900000 │ 0.07729405862 ║ 0.08588228736 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 2 │ 1.39e-05 ║
22 ║ 0.900000 │ 0.07725449555 ║ 0.08583832839 ║ 0.000000 │ - ║ S │ 5 │ 0.062500 ║ 1 │ 1.42e-04 ║
23 ║ 0.900000 │ 0.07725013449 ║ 0.08583348277 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 2 │ 5.96e-05 ║
24 ║ 0.900000 │ 0.07723808110 ║ 0.08580155461 ║ 9.62e-06 │ - ║ S │ 4 │ 1.250000 ║ 2 │ 7.31e-06 ║
25 ║ 0.900000 │ 0.07721211695 ║ 0.08579124105 ║ 6.63e-13 │ - ║ S │ 1 │ 1.000000 ║ 3 │ 4.06e-13 ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Optimization results: ║
F = final iterate, B = Best (to tolerance), MF = Most Feasible ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
F ║ │ ║ 0.08579124105 ║ 6.63e-13 │ - ║ │ │ ║ │ ║
B ║ │ ║ 0.08579124105 ║ 6.63e-13 │ - ║ │ │ ║ │ ║
MF ║ │ ║ 0.08580793766 ║ 0.000000 │ - ║ │ │ ║ │ ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations: 25 ║
Function evaluations: 60 ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
[10]:
print(log.f[0:3])
print(log.x[0:3])
[16.406737441369938, 11.51185882133243, 1.117082244864862]
[tensor([[-0.8448],
[-0.9117]], dtype=torch.float64), tensor([[ 1.1292],
[-0.1617]], dtype=torch.float64), tensor([[0.3125],
[0.0171]], dtype=torch.float64)]
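(Optional) With the logged history in hand, one can, for example, plot the objective value per iteration. A minimal sketch, assuming matplotlib is installed (it is not a PyGRANSO dependency):

import matplotlib.pyplot as plt
plt.semilogy(log.f)           # log-scale y-axis; f stays positive on this problem
plt.xlabel('iteration')
plt.ylabel('objective value')
plt.show()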
L-BFGS Restarting¶
(Optional)
(Note that this example problem only has two variables!)
If PyGRANSO runs in limited-memory mode, that is, if opts.limited_mem_size > 0, then PyGRANSO’s restart procedure is slightly different from full-memory BFGS restarting, since soln.H_final will instead contain the most recent L-BFGS state, not a full inverse Hessian approximation.
Instead of the standard BFGS procedure, users should do the following: 1) If you set a specific H0, you will need to set opts.H0 to whatever you used previously. By default, PyGRANSO uses the identity for H0.
2) Warm-start PyGRANSO with the most recent L-BFGS data by setting: opts.limited_mem_warm_start = soln.H_final
NOTE: how to set opts.scaleH0 so that PyGRANSO will be restarted as if it had never terminated depends on the previously used values of opts.scaleH0 and opts.limited_mem_fixed_scaling.
[11]:
opts = pygransoStruct()
opts.torch_device = device
# set an infeasible initial point
opts.x0 = 5.5*torch.ones((2,1), device=device, dtype=torch.double)
opts.print_ascii = True
opts.quadprog_info_msg = False
opts.maxit = 10 # default is 1000
opts.mu0 = 100 # default is 1
opts.print_frequency = 2
# By default, PyGRANSO uses full-memory BFGS updating. For nonsmooth
# problems, full-memory BFGS is generally recommended. However, if
# this is not feasible, one may optionally enable limited-memory BFGS
# updating by setting opts.limited_mem_size to a positive integer
# (significantly) less than the number of variables.
opts.limited_mem_size = 1
# start main algorithm
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)
==================================================================================================================
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation |
Version 1.2.0 |
Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang |
==================================================================================================================
Problem specifications: |
# of variables : 2 |
# of inequality constraints : 2 |
# of equality constraints : 0 |
==================================================================================================================
Limited-memory mode enabled with size = 1. |
NOTE: limited-memory mode is generally NOT |
recommended for nonsmooth problems. |
==================================================================================================================
| <--- Penalty Function --> | | Total Violation | <--- Line Search ---> | <- Stationarity -> |
Iter | Mu | Value | Objective | Ineq | Eq | SD | Evals | t | Grads | Value |
=====|===========================|================|=================|=======================|====================|
0 | 100.0000 | 21841.7781746 | 218.250000000 | 10.00000 | - | - | 1 | 0.000000 | 1 | 9732.770 |
2 | 34.86784 | 1378.57842221 | 39.2815768212 | 8.914529 | - | S | 3 | 1.500000 | 1 | 4.455384 |
4 | 12.15767 | 262.610250639 | 20.8774186596 | 8.789579 | - | S | 2 | 2.000000 | 1 | 0.604009 |
6 | 4.239116 | 57.9458175917 | 11.5708792009 | 8.895520 | - | S | 3 | 0.750000 | 1 | 0.165224 |
8 | 1.642320 | 26.2056730861 | 10.5532019880 | 8.873935 | - | S | 1 | 1.000000 | 1 | 0.027766 |
10 | 1.642320 | 25.9163909350 | 10.4174724535 | 8.807564 | - | S | 2 | 2.000000 | 1 | 0.021455 |
==================================================================================================================
Optimization results: |
F = final iterate, B = Best (to tolerance), MF = Most Feasible |
==================================================================================================================
F | | | 10.4174724535 | 8.807564 | - | | | | | |
MF | | | 75.6886238113 | 8.192103 | - | | | | | |
==================================================================================================================
Iterations: 10 |
Function evaluations: 29 |
PyGRANSO termination code: 4 --- max iterations reached. |
==================================================================================================================
[12]:
# Restart
opts = pygransoStruct()
opts.torch_device = device
# set the initial point and penalty parameter to their final values from the previous run
opts.x0 = soln.final.x
opts.mu0 = soln.final.mu
opts.limited_mem_size = 1
opts.quadprog_info_msg = False
opts.print_frequency = 2
opts.limited_mem_warm_start = soln.H_final
opts.scaleH0 = False
# In contrast to full-memory BFGS updating, limited-memory BFGS
# permits H0 to be rescaled on every iteration. By default,
# PyGRANSO will reuse the scaling parameter that is calculated on the
# very first iteration for all subsequent iterations as well. Set
# this option to false to force PyGRANSO to calculate a new scaling
# parameter on every iteration. Note that opts.scaleH0 has no effect
# when opts.limited_mem_fixed_scaling is set to true.
opts.limited_mem_fixed_scaling = False
# Restart PyGRANSO
opts.maxit = 100 # increase maximum allowed iterations
# Main algorithm
soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║
Version 1.2.0 ║
Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications: ║
# of variables : 2 ║
# of inequality constraints : 2 ║
# of equality constraints : 0 ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Limited-memory mode enabled with size = 1. ║
NOTE: limited-memory mode is generally NOT ║
recommended for nonsmooth problems. ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
0 ║ 1.642320 │ 25.9163909350 ║ 10.4174724535 ║ 8.807564 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 0.142170 ║
2 ║ 1.642320 │ 13.8167164344 ║ 6.50516654793 ║ 3.133149 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 3.838176 ║
4 ║ 1.642320 │ 6.43976368442 ║ 3.92113741712 ║ 0.000000 │ - ║ S │ 4 │ 8.000000 ║ 1 │ 3.895256 ║
6 ║ 1.642320 │ 4.81165411491 ║ 2.92979027070 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.052493 ║
8 ║ 1.642320 │ 4.63155845135 ║ 2.82013099132 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.017665 ║
10 ║ 1.642320 │ 4.03639569243 ║ 2.45773959349 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.013749 ║
12 ║ 1.642320 │ 3.00639292821 ║ 1.83057645887 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.028975 ║
14 ║ 1.642320 │ 2.28891992127 ║ 1.39371100989 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.048897 ║
16 ║ 1.642320 │ 1.95780587454 ║ 1.19209745051 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.265478 ║
18 ║ 1.642320 │ 1.49471266813 ║ 0.91012249177 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.084726 ║
20 ║ 1.642320 │ 1.34989358124 ║ 0.82194292989 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.216509 ║
22 ║ 1.642320 │ 1.13057014213 ║ 0.68839806928 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.018354 ║
24 ║ 1.642320 │ 1.00727210829 ║ 0.61332256067 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.009416 ║
26 ║ 1.642320 │ 0.91876697706 ║ 0.55943226303 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.012977 ║
28 ║ 1.642320 │ 0.79484835473 ║ 0.48397888143 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.096405 ║
30 ║ 1.642320 │ 0.65714700682 ║ 0.40013327247 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.142885 ║
32 ║ 1.642320 │ 0.57064376781 ║ 0.34746191622 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.005519 ║
34 ║ 1.642320 │ 0.50072121946 ║ 0.30488645320 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.002562 ║
36 ║ 1.642320 │ 0.46398321205 ║ 0.28251687839 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.010241 ║
38 ║ 1.642320 │ 0.38322497032 ║ 0.23334362003 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.145492 ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
40 ║ 1.642320 │ 0.31487530037 ║ 0.19172587420 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.006704 ║
42 ║ 1.642320 │ 0.30050834308 ║ 0.18297791129 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.007435 ║
44 ║ 1.642320 │ 0.26424400929 ║ 0.16089675380 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.795419 ║
46 ║ 1.642320 │ 0.20338505133 ║ 0.12384006214 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.004990 ║
48 ║ 1.642320 │ 0.17520417328 ║ 0.10668087731 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.003565 ║
50 ║ 1.077526 │ 0.10049441692 ║ 0.08991368997 ║ 0.002780 │ - ║ S │ 6 │ 0.031250 ║ 1 │ 0.033750 ║
52 ║ 1.077526 │ 0.09397044638 ║ 0.08720941715 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 2.31e-04 ║
54 ║ 1.077526 │ 0.09254093483 ║ 0.08588275676 ║ 0.000000 │ - ║ S │ 5 │ 0.062500 ║ 2 │ 2.09e-04 ║
56 ║ 1.077526 │ 0.09244295357 ║ 0.08579182510 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 2 │ 1.78e-06 ║
58 ║ 1.077526 │ 0.09243954005 ║ 0.08578865717 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 4 │ 6.93e-15 ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Optimization results: ║
F = final iterate, B = Best (to tolerance), MF = Most Feasible ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
F ║ │ ║ 0.08578865717 ║ 0.000000 │ - ║ │ │ ║ │ ║
B ║ │ ║ 0.08578865717 ║ 0.000000 │ - ║ │ │ ║ │ ║
MF ║ │ ║ 0.08578865717 ║ 0.000000 │ - ║ │ │ ║ │ ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations: 58 ║
Function evaluations: 127 ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝