{ "cells": [ { "cell_type": "markdown", "id": "5257bb27", "metadata": {}, "source": [ "# Rosenbrock\n", "\n", "Minimize 2-variable nonsmooth Rosenbrock function, subject to a simple bound constraint. Taken from: [GRANSO](http://www.timmitchell.com/software/GRANSO/) demo examples 1, 2, & 3 " ] }, { "cell_type": "markdown", "id": "eaeebfa4", "metadata": {}, "source": [ "## Problem Description" ] }, { "cell_type": "markdown", "id": "7a94952c", "metadata": {}, "source": [ "$$\\min_{x_1,x_2} w|x_1^2-x_2|+(1-x_1)^2,$$\n", "$$\\text{s.t. }c_1(x_1,x_2) = \\sqrt{2}x_1-1 \\leq 0, c_(x_1,x_2)=2x_2-1\\leq0,$$\n", "\n", "where $w$ is a constant (e.g., $w=8$)" ] }, { "cell_type": "markdown", "id": "08dfdd50", "metadata": {}, "source": [ "## Modules Importing\n", "Import all necessary modules and add PyGRANSO src folder to system path." ] }, { "cell_type": "code", "execution_count": 1, "id": "90ed32f9", "metadata": {}, "outputs": [], "source": [ "import time\n", "import torch\n", "from pygranso.pygranso import pygranso\n", "from pygranso.pygransoStruct import pygransoStruct" ] }, { "cell_type": "markdown", "id": "ec80716b", "metadata": {}, "source": [ "## Function Set-Up\n", "\n", "Encode the optimization variables, and objective and constraint functions.\n", "\n", "Note: please strictly follow the format of comb_fn, which will be used in the PyGRANSO main algortihm." 
] }, { "cell_type": "code", "execution_count": 2, "id": "fb360e75", "metadata": {}, "outputs": [], "source": [ "device = torch.device('cpu')\n", "# variables and corresponding dimensions.\n", "var_in = {\"x1\": [1], \"x2\": [1]}\n", "\n", "def comb_fn(X_struct):\n", " x1 = X_struct.x1\n", " x2 = X_struct.x2\n", " \n", " # objective function\n", " f = (8 * abs(x1**2 - x2) + (1 - x1)**2)\n", "\n", " # inequality constraint, matrix form\n", " ci = pygransoStruct()\n", " ci.c1 = (2**0.5)*x1-1 \n", " ci.c2 = 2*x2-1 \n", "\n", " # equality constraint \n", " ce = None\n", "\n", " return [f,ci,ce]" ] }, { "cell_type": "markdown", "id": "f0f55ace", "metadata": {}, "source": [ "## User Options\n", "Specify user-defined options for PyGRANSO" ] }, { "cell_type": "code", "execution_count": 3, "id": "f3a65b57", "metadata": {}, "outputs": [], "source": [ "opts = pygransoStruct()\n", "# option for switching QP solver. We only have osqp as the only qp solver in current version. Default is osqp\n", "# opts.QPsolver = 'osqp'\n", "\n", "# set an intial point\n", "# All the user-provided data (vector/matrix/tensor) must be in torch tensor format. \n", "# As PyTorch tensor is single precision by default, one must explicitly set `dtype=torch.double`.\n", "# Also, please make sure the device of provided torch tensor is the same as opts.torch_device.\n", "opts.x0 = torch.ones((2,1), device=device, dtype=torch.double)\n", "opts.torch_device = device" ] }, { "cell_type": "markdown", "id": "8bca18c7", "metadata": {}, "source": [ "## Main Algorithm" ] }, { "cell_type": "code", "execution_count": 4, "id": "632976b3", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[33m╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗\n", "\u001b[0m\u001b[33m║ PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface, ║\n", "\u001b[0m\u001b[33m║ the default is osqp. 
Users may provide their own wrapper for the QP solver. ║\n", "\u001b[0m\u001b[33m║ To disable this notice, set opts.quadprog_info_msg = False ║\n", "\u001b[0m\u001b[33m╚═══════════════════════════════════════════════════════════════════════════════════════════════╝\n", "\u001b[0m═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗\n", "PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║ \n", "Version 1.2.0 ║ \n", "Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣\n", "Problem specifications: ║ \n", " # of variables : 2 ║ \n", " # of inequality constraints : 2 ║ \n", " # of equality constraints : 0 ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " ║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║ \n", "Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " 0 ║ 1.000000 │ 1.41421356237 ║ 0.00000000000 ║ 1.000000 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 0.579471 ║ \n", " 1 ║ 1.000000 │ 0.70773811042 ║ 0.70773811042 ║ 0.000000 │ - ║ S │ 3 │ 1.500000 ║ 1 │ 10.07366 ║ \n", " 2 ║ 1.000000 │ 0.25401310554 ║ 0.25401310554 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.198885 ║ \n", " 3 ║ 1.000000 │ 0.21478744238 ║ 0.21478744238 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.135710 ║ \n", " 4 ║ 1.000000 │ 0.21422378595 ║ 0.21422378595 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.332997 ║ \n", " 5 ║ 1.000000 │ 0.15330884270 ║ 0.15330884270 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.122691 ║ \n", " 6 ║ 1.000000 │ 0.14804462353 ║ 0.14804462353 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.012623 ║ \n", " 7 ║ 
1.000000 │ 0.10856024489 ║ 0.10856024489 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.042111 ║ \n", " 8 ║ 1.000000 │ 0.10482595154 ║ 0.10482595154 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.003211 ║ \n", " 9 ║ 0.810000 │ 0.07758251262 ║ 0.09438278485 ║ 0.001132 │ - ║ S │ 3 │ 1.500000 ║ 1 │ 0.038778 ║ \n", " 10 ║ 0.810000 │ 0.07197699268 ║ 0.08886048479 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.012307 ║ \n", " 11 ║ 0.810000 │ 0.07055904204 ║ 0.08710992844 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.003100 ║ \n", " 12 ║ 0.810000 │ 0.07048871361 ║ 0.08702310322 ║ 0.000000 │ - ║ S │ 7 │ 0.046875 ║ 1 │ 0.003061 ║ \n", " 13 ║ 0.810000 │ 0.07020995506 ║ 0.08667895687 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.001026 ║ \n", " 14 ║ 0.810000 │ 0.06962027906 ║ 0.08595096180 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 5.51e-06 ║ \n", " 15 ║ 0.810000 │ 0.06952233581 ║ 0.08581975963 ║ 8.33e-06 │ - ║ S │ 3 │ 4.000000 ║ 2 │ 5.10e-06 ║ \n", " 16 ║ 0.729000 │ 0.06255153247 ║ 0.08579440422 ║ 4.17e-06 │ - ║ S │ 2 │ 0.500000 ║ 3 │ 3.98e-06 ║ \n", " 17 ║ 0.729000 │ 0.06254666970 ║ 0.08579790082 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 4 │ 7.59e-19 ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Optimization results: ║ \n", "F = final iterate, B = Best (to tolerance), MF = Most Feasible ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " F ║ │ ║ 0.08579790082 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", " B ║ │ ║ 0.08578643763 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", " MF ║ │ ║ 0.08578643763 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Iterations: 17 ║ \n", "Function evaluations: 40 ║ \n", "PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. 
║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝\n", "Total Wall Time: 0.18392133712768555s\n", "tensor([[0.7071],\n", " [0.5000]], dtype=torch.float64)\n" ] } ], "source": [ "start = time.time()\n", "soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)\n", "end = time.time()\n", "print(\"Total Wall Time: {}s\".format(end - start))\n", "print(soln.final.x)" ] }, { "cell_type": "markdown", "id": "790abfe3", "metadata": {}, "source": [ "## PyGRANSO Restarting\n", "**(Optional)** The following example shows how to set various PyGRANSO options (such as simpler ASCII printing) and how to restart PyGRANSO" ] }, { "cell_type": "code", "execution_count": 5, "id": "69083bd3", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "==================================================================================================================\n", "PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation | \n", "Version 1.2.0 | \n", "Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang | \n", "==================================================================================================================\n", "Problem specifications: | \n", " # of variables : 2 | \n", " # of inequality constraints : 2 | \n", " # of equality constraints : 0 | \n", "==================================================================================================================\n", " | <--- Penalty Function --> | | Total Violation | <--- Line Search ---> | <- Stationarity -> | \n", "Iter | Mu | Value | Objective | Ineq | Eq | SD | Evals | t | Grads | Value | \n", "=====|===========================|================|=================|=======================|====================|\n", " 0 | 100.0000 | 21841.7781746 | 218.250000000 | 10.00000 | - | - | 1 | 0.000000 | 1 | 9732.770 | \n", " 1 | 34.86784 | 1509.66611872 | 
42.9789783006 | 11.08181 | - | S | 10 | 0.001953 | 1 | 546.6040 | \n", " 2 | 34.86784 | 1378.57842221 | 39.2815768212 | 8.914529 | - | S | 3 | 1.500000 | 1 | 4.455384 | \n", " 3 | 12.15767 | 285.452828054 | 22.6935200208 | 9.552604 | - | S | 2 | 2.000000 | 1 | 0.297144 | \n", " 4 | 12.15767 | 264.999595731 | 21.0732808630 | 8.797697 | - | S | 2 | 2.000000 | 1 | 0.603629 | \n", " 5 | 4.239116 | 60.5144787250 | 12.1478493778 | 9.018338 | - | S | 2 | 2.000000 | 1 | 0.111610 | \n", " 6 | 4.239116 | 53.5399399407 | 10.5181947367 | 8.952094 | - | S | 2 | 0.500000 | 1 | 0.164082 | \n", " 7 | 3.815204 | 48.9917031616 | 10.4947962860 | 8.951912 | - | S | 4 | 0.125000 | 1 | 0.033640 | \n", " 8 | 3.815204 | 48.7011303503 | 10.4372013183 | 8.881076 | - | S | 2 | 2.000000 | 1 | 0.018555 | \n", " 9 | 3.815204 | 48.2564717826 | 10.3422772655 | 8.798572 | - | S | 2 | 2.000000 | 1 | 0.057946 | \n", " 10 | 3.815204 | 39.4225027901 | 9.27057783616 | 4.053355 | - | S | 5 | 16.00000 | 1 | 0.001796 | \n", "==================================================================================================================\n", "Optimization results: | \n", "F = final iterate, B = Best (to tolerance), MF = Most Feasible | \n", "==================================================================================================================\n", " F | | | 9.27057783616 | 4.053355 | - | | | | | | \n", " MF | | | 9.27057783616 | 4.053355 | - | | | | | | \n", "==================================================================================================================\n", "Iterations: 10 | \n", "Function evaluations: 35 | \n", "PyGRANSO termination code: 4 --- max iterations reached. 
| \n", "==================================================================================================================\n" ] } ], "source": [ "opts = pygransoStruct()\n", "opts.torch_device = device\n", "# set an infeasible initial point\n", "opts.x0 = 5.5*torch.ones((2,1), device=device, dtype=torch.double)\n", "\n", "# By default PyGRANSO will print using extended ASCII characters to 'draw' table borders and some color prints. \n", "# If user wants to create a log txt file of the console output, please set opts.print_ascii = True\n", "opts.print_ascii = True\n", "\n", "# By default, PyGRANSO prints an info message about QP solvers, since\n", "# PyGRANSO can be used with any QP solver that has a quadprog-compatible\n", "# interface. Let's disable this message since we've already seen it \n", "# hundreds of times and can now recite it from memory. ;-)\n", "opts.quadprog_info_msg = False\n", "\n", "# Try a very short run. \n", "opts.maxit = 10 # default is 1000\n", "\n", "# PyGRANSO's penalty parameter is on the *objective* function, thus\n", "# higher penalty parameter values favor objective minimization more\n", "# highly than attaining feasibility. Let's set PyGRANSO to start off\n", "# with a higher initial value of the penalty parameter. PyGRANSO will\n", "# automatically tune the penalty parameter to promote progress towards \n", "# feasibility. 
PyGRANSO only adjusts the penalty parameter in a\n", "# monotonically decreasing fashion.\n", "opts.mu0 = 100 # default is 1\n", "\n", "# start main algorithm\n", "soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)" ] }, { "cell_type": "markdown", "id": "167e4901", "metadata": {}, "source": [ "Let's restart PyGRANSO from the last iterate of the previous run" ] }, { "cell_type": "code", "execution_count": 6, "id": "53087da4", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[33m╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗\n", "\u001b[0m\u001b[33m║ PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface, ║\n", "\u001b[0m\u001b[33m║ the default is osqp. Users may provide their own wrapper for the QP solver. ║\n", "\u001b[0m\u001b[33m║ To disable this notice, set opts.quadprog_info_msg = False ║\n", "\u001b[0m\u001b[33m╚═══════════════════════════════════════════════════════════════════════════════════════════════╝\n", "\u001b[0m═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗\n", "PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║ \n", "Version 1.2.0 ║ \n", "Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣\n", "Problem specifications: ║ \n", " # of variables : 2 ║ \n", " # of inequality constraints : 2 ║ \n", " # of equality constraints : 0 ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " ║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║ \n", "Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║ \n", 
"═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " 0 ║ 3.815204 │ 39.4225027901 ║ 9.27057783616 ║ 4.053355 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 0.161642 ║ \n", " 1 ║ 2.503156 │ 27.1659262914 ║ 9.40498130422 ║ 3.623796 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.052069 ║ \n", " 2 ║ 2.252840 │ 24.6931839860 ║ 9.60278347893 ║ 3.059650 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.121925 ║ \n", " 3 ║ 2.027556 │ 22.3469956308 ║ 9.74369172380 ║ 2.591115 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.107446 ║ \n", " 4 ║ 2.027556 │ 21.7313246108 ║ 9.83432474079 ║ 1.791681 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.249443 ║ \n", " 5 ║ 2.027556 │ 18.8070548305 ║ 9.27572664351 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 1.149786 ║ \n", " 6 ║ 2.027556 │ 15.5789878011 ║ 7.13239951765 ║ 1.117649 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 1.812487 ║ \n", " 7 ║ 2.027556 │ 5.32456682805 ║ 2.62610104757 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.324572 ║ \n", " 8 ║ 2.027556 │ 5.06514188742 ║ 2.49815146400 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.840053 ║ \n", " 9 ║ 2.027556 │ 4.34855971809 ║ 2.14472981556 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.161357 ║ \n", " 10 ║ 2.027556 │ 4.17458588015 ║ 2.05892511204 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.438844 ║ \n", " 11 ║ 2.027556 │ 3.88777301389 ║ 1.91746767656 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.026078 ║ \n", " 12 ║ 2.027556 │ 3.02755206246 ║ 1.49320271480 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.341024 ║ \n", " 13 ║ 2.027556 │ 2.82661618788 ║ 1.39410020980 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.138146 ║ \n", " 14 ║ 2.027556 │ 2.61942625386 ║ 1.29191317368 ║ 0.000000 │ - ║ S │ 4 │ 3.000000 ║ 1 │ 0.271047 ║ \n", " 15 ║ 2.027556 │ 2.41981099682 ║ 1.19346200337 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.022472 ║ \n", " 16 ║ 2.027556 │ 1.68687275803 ║ 0.83197346564 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.273910 ║ \n", " 17 ║ 2.027556 │ 1.62715982554 ║ 0.80252277047 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.038027 ║ \n", " 18 ║ 2.027556 │ 
1.53557783057 ║ 0.75735410592 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.015193 ║ \n", " 19 ║ 2.027556 │ 1.04198868285 ║ 0.51391364968 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.094309 ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " ║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║ \n", "Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " 20 ║ 2.027556 │ 0.93149970489 ║ 0.45941997346 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.110416 ║ \n", " 21 ║ 2.027556 │ 0.82149497454 ║ 0.40516513040 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.029324 ║ \n", " 22 ║ 2.027556 │ 0.71807054560 ║ 0.35415572251 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.173107 ║ \n", " 23 ║ 2.027556 │ 0.71364643723 ║ 0.35197373175 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.012969 ║ \n", " 24 ║ 2.027556 │ 0.54336780853 ║ 0.26799152256 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.049872 ║ \n", " 25 ║ 2.027556 │ 0.52920946359 ║ 0.26100856119 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.090110 ║ \n", " 26 ║ 2.027556 │ 0.38297864798 ║ 0.18888684491 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.078443 ║ \n", " 27 ║ 2.027556 │ 0.34625417862 ║ 0.17077416634 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.148051 ║ \n", " 28 ║ 2.027556 │ 0.33665070462 ║ 0.16603768844 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.003684 ║ \n", " 29 ║ 2.027556 │ 0.26789115070 ║ 0.13212515763 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.089877 ║ \n", " 30 ║ 2.027556 │ 0.25588826429 ║ 0.12620527840 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.054336 ║ \n", " 31 ║ 2.027556 │ 0.19518453283 ║ 0.09626591659 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.028392 ║ \n", " 32 ║ 2.027556 │ 0.19483151740 ║ 0.09609180774 ║ 0.000000 │ - ║ S │ 5 │ 0.062500 ║ 1 │ 0.026507 ║ \n", " 33 ║ 2.027556 │ 0.17882219593 
║ 0.08819593616 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 7.43e-05 ║ \n", " 34 ║ 2.027556 │ 0.17424853428 ║ 0.08592330867 ║ 3.42e-05 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 4.88e-05 ║ \n", " 35 ║ 1.077526 │ 0.09246135145 ║ 0.08580889928 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 2 │ 6.04e-18 ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Optimization results: ║ \n", "F = final iterate, B = Best (to tolerance), MF = Most Feasible ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " F ║ │ ║ 0.08580889928 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", " B ║ │ ║ 0.08580889928 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", " MF ║ │ ║ 0.08580889928 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Iterations: 35 ║ \n", "Function evaluations: 77 ║ \n", "PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. ║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝\n" ] } ], "source": [ "opts = pygransoStruct()\n", "opts.torch_device = device\n", "# set the initial point and penalty parameter to their final values from the previous run\n", "opts.x0 = soln.final.x\n", "opts.mu0 = soln.final.mu\n", "opts.opt_tol = 1e-6\n", "\n", "# PREPARE TO RESTART PyGRANSO IN FULL-MEMORY MODE\n", "# Set the last BFGS inverse Hessian approximation as the initial\n", "# Hessian for the next run. Generally this is a good thing to do, and\n", "# often it is necessary to retain this information when restarting (as\n", "# on difficult nonsmooth problems, PyGRANSO may not be able to restart\n", "# without it). However, your mileage may vary. In the test, with\n", "# the above settings, omitting H0 causes PyGRANSO to take an additional \n", "# 16 iterations to converge on this problem. 
\n", "opts.H0 = soln.H_final # try running with this commented out\n", "\n", "# When restarting, soln.H_final may fail PyGRANSO's initial check to\n", "# assess whether or not the user-provided H0 is positive definite. If\n", "# it fails this test, the test may be disabled by setting opts.checkH0 \n", "# to false.\n", "# opts.checkH0 = False % Not needed for this example \n", "\n", "# If one desires to restart PyGRANSO as if it had never stopped (e.g.\n", "# to continue optimization after it hit its maxit limit), then one must\n", "# also disable scaling the initial BFGS inverse Hessian approximation \n", "# on the very first iterate. \n", "opts.scaleH0 = False\n", "\n", "# Restart PyGRANSO\n", "opts.maxit = 100 # increase maximum allowed iterations\n", "\n", "# Main algorithm\n", "soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)" ] }, { "cell_type": "code", "execution_count": 7, "id": "6ef5310c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([[0.7071],\n", " [0.5000]], dtype=torch.float64)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "soln.final.x" ] }, { "cell_type": "markdown", "id": "f1d015a6", "metadata": {}, "source": [ "## Results Logs\n", "\n", "**(Optional)** opts below shows the importance of using an initial point that is neither near\n", "nor on a nonsmooth manifold, that is, the functions \n", "(objective and constraints) should be smooth at and *about* \n", "the initial point." ] }, { "cell_type": "code", "execution_count": 8, "id": "a6c968a6", "metadata": {}, "outputs": [], "source": [ "opts = pygransoStruct()\n", "opts.torch_device = device\n", "# Set a randomly generated starting point. 
In theory, with probability \n", "# one, a randomly selected point will not be on a nonsmooth manifold.\n", "opts.x0 = torch.randn((2,1), device=device, dtype=torch.double) # randomly generated is okay\n", "opts.maxit = 100 # we'll use this value of maxit later\n", "opts.opt_tol = 1e-6\n", "\n", "# However, (0,0) or (1,1) are on the nonsmooth manifold and if PyGRANSO\n", "# is started at either of them, it will break down on the first\n", "# iteration. This example highlights that it is imperative to start\n", "# PyGRANSO at a point where the functions are smooth.\n", "\n", "# Uncomment either of the following two lines to try starting PyGRANSO\n", "# from (0,0) or (1,1), where the functions are not differentiable. \n", " \n", "# opts.x0 = torch.ones((2,1), device=device, dtype=torch.double) # uncomment this line to try this point\n", "# opts.x0 = torch.zeros((2,1), device=device, dtype=torch.double) # uncomment this line to try this point\n", "\n", "# Uncomment the following two lines to try starting PyGRANSO from a\n", "# randomly perturbed version of (1,1). pert_level needs to be at\n", "# least 1e-3 or so to get consistently reliable optimization quality.\n", "\n", "# pert_level = 1e-3\n", "# opts.x0 = (torch.ones((2,1)) + pert_level * (torch.randn((2,1)) - 0.5)).to(device=device, dtype=torch.double)" ] }, { "cell_type": "markdown", "id": "bcf39d83", "metadata": {}, "source": [ "The options below show how to use opts.halt_log_fn to create a history of iterates.\n", "\n", "NOTE: NO NEED TO CHANGE ANYTHING BELOW" ] }, { "cell_type": "code", "execution_count": 9, "id": "42b9acec", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[33m╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗\n", "\u001b[0m\u001b[33m║ PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface, ║\n", "\u001b[0m\u001b[33m║ the default is osqp. 
Users may provide their own wrapper for the QP solver. ║\n", "\u001b[0m\u001b[33m║ To disable this notice, set opts.quadprog_info_msg = False ║\n", "\u001b[0m\u001b[33m╚═══════════════════════════════════════════════════════════════════════════════════════════════╝\n", "\u001b[0m═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗\n", "PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║ \n", "Version 1.2.0 ║ \n", "Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣\n", "Problem specifications: ║ \n", " # of variables : 2 ║ \n", " # of inequality constraints : 2 ║ \n", " # of equality constraints : 0 ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " ║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║ \n", "Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " 0 ║ 1.000000 │ 16.4067374414 ║ 16.4067374414 ║ 0.000000 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 16.89371 ║ \n", " 1 ║ 1.000000 │ 12.1088353467 ║ 11.5118588213 ║ 0.596977 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 19.28340 ║ \n", " 2 ║ 1.000000 │ 1.11708224486 ║ 1.11708224486 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.478277 ║ \n", " 3 ║ 1.000000 │ 0.55748343428 ║ 0.55748343428 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.825482 ║ \n", " 4 ║ 1.000000 │ 0.54324826834 ║ 0.54324826834 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.072797 ║ \n", " 5 ║ 1.000000 │ 0.43272406722 ║ 0.43272406722 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.029511 ║ \n", " 6 ║ 1.000000 │ 0.39143728403 ║ 0.39143728403 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.165019 ║ \n", " 7 ║ 
1.000000 │ 0.33585127390 ║ 0.33585127390 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.111788 ║ \n", " 8 ║ 1.000000 │ 0.27769332848 ║ 0.27769332848 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.016351 ║ \n", " 9 ║ 1.000000 │ 0.27206257507 ║ 0.27206257507 ║ 0.000000 │ - ║ S │ 6 │ 0.093750 ║ 1 │ 0.185187 ║ \n", " 10 ║ 1.000000 │ 0.25369049840 ║ 0.25369049840 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.018059 ║ \n", " 11 ║ 1.000000 │ 0.24095795441 ║ 0.24095795441 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.002996 ║ \n", " 12 ║ 1.000000 │ 0.22956867248 ║ 0.22956867248 ║ 0.000000 │ - ║ S │ 2 │ 2.000000 ║ 1 │ 0.014434 ║ \n", " 13 ║ 1.000000 │ 0.16940732170 ║ 0.16940732170 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.080774 ║ \n", " 14 ║ 1.000000 │ 0.14128118763 ║ 0.14128118763 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.100849 ║ \n", " 15 ║ 1.000000 │ 0.13056475076 ║ 0.13056475076 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.005391 ║ \n", " 16 ║ 1.000000 │ 0.12030008103 ║ 0.12030008103 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.035053 ║ \n", " 17 ║ 1.000000 │ 0.09388862233 ║ 0.09388862233 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.001793 ║ \n", " 18 ║ 1.000000 │ 0.08668315753 ║ 0.08668315753 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.001835 ║ \n", " 19 ║ 1.000000 │ 0.08658279210 ║ 0.08658279210 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.001426 ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " ║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║ \n", "Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " 20 ║ 1.000000 │ 0.08597722224 ║ 0.08597722224 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 3.76e-05 ║ \n", " 21 ║ 0.900000 │ 0.07729405862 ║ 0.08588228736 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 2 │ 1.39e-05 ║ \n", " 22 ║ 0.900000 │ 
0.07725449555 ║ 0.08583832839 ║ 0.000000 │ - ║ S │ 5 │ 0.062500 ║ 1 │ 1.42e-04 ║ \n", " 23 ║ 0.900000 │ 0.07725013449 ║ 0.08583348277 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 2 │ 5.96e-05 ║ \n", " 24 ║ 0.900000 │ 0.07723808110 ║ 0.08580155461 ║ 9.62e-06 │ - ║ S │ 4 │ 1.250000 ║ 2 │ 7.31e-06 ║ \n", " 25 ║ 0.900000 │ 0.07721211695 ║ 0.08579124105 ║ 6.63e-13 │ - ║ S │ 1 │ 1.000000 ║ 3 │ 4.06e-13 ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Optimization results: ║ \n", "F = final iterate, B = Best (to tolerance), MF = Most Feasible ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " F ║ │ ║ 0.08579124105 ║ 6.63e-13 │ - ║ │ │ ║ │ ║ \n", " B ║ │ ║ 0.08579124105 ║ 6.63e-13 │ - ║ │ │ ║ │ ║ \n", " MF ║ │ ║ 0.08580793766 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Iterations: 25 ║ \n", "Function evaluations: 60 ║ \n", "PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. 
║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝\n" ] } ], "source": [ "# SETUP THE LOGGING FEATURES\n", " \n", "# Set up PyGRANSO's logging functions; pass opts.maxit to it so that\n", "# storage can be preallocated for efficiency.\n", "\n", "class HaltLog:\n", " def __init__(self):\n", " pass\n", "\n", " def haltLog(self, iteration, x, penaltyfn_parts, d,get_BFGS_state_fn, H_regularized,\n", " ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level):\n", "\n", " # DON'T CHANGE THIS\n", " # increment the index/count \n", " self.index += 1 \n", "\n", " # EXAMPLE:\n", " # store history of x iterates in a preallocated cell array\n", " self.x_iterates.append(x)\n", " self.f.append(penaltyfn_parts.f)\n", " self.tv.append(penaltyfn_parts.tv)\n", "\n", " # keep this false unless you want to implement a custom termination\n", " # condition\n", " halt = False\n", " return halt\n", " \n", " # Once PyGRANSO has run, you may call this function to get retreive all\n", " # the logging data stored in the shared variables, which is populated \n", " # by haltLog being called on every iteration of PyGRANSO.\n", " def getLog(self):\n", " # EXAMPLE\n", " # return x_iterates, trimmed to correct size \n", " log = pygransoStruct()\n", " log.x = self.x_iterates[0:self.index]\n", " log.f = self.f[0:self.index]\n", " log.tv = self.tv[0:self.index]\n", " return log\n", "\n", " def makeHaltLogFunctions(self,maxit):\n", " # don't change these lambda functions \n", " halt_log_fn = lambda iteration, x, penaltyfn_parts, d,get_BFGS_state_fn, H_regularized, ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level: self.haltLog(iteration, x, penaltyfn_parts, d,get_BFGS_state_fn, H_regularized, ls_evals, alpha, n_gradients, stat_vec, stat_val, fallback_level)\n", " \n", " get_log_fn = lambda : self.getLog()\n", "\n", " # Make your shared variables here to store PyGRANSO history data\n", " # EXAMPLE - store 
history of iterates x_0,x_1,...,x_k\n", " self.index = 0\n", " self.x_iterates = []\n", " self.f = []\n", " self.tv = []\n", "\n", " # Only modify the body of haltLog(), not its name or arguments.\n", " # Store whatever data you wish from the current PyGRANSO iteration info,\n", " # given by the input arguments, into shared variables of\n", " # makeHaltLogFunctions, so that this data can be retrieved after PyGRANSO\n", " # has been terminated.\n", " # \n", " # DESCRIPTION OF INPUT ARGUMENTS\n", " # iteration current iteration number\n", " # x current iterate x \n", " # penaltyfn_parts struct containing the following\n", " # OBJECTIVE AND CONSTRAINTS VALUES\n", " # .f objective value at x\n", " # .f_grad objective gradient at x\n", " # .ci inequality constraint at x\n", " # .ci_grad inequality gradient at x\n", " # .ce equality constraint at x\n", " # .ce_grad equality gradient at x\n", " # TOTAL VIOLATION VALUES (inf norm, for determining feasibility)\n", " # .tvi total violation of inequality constraints at x\n", " # .tve total violation of equality constraints at x\n", " # .tv total violation of all constraints at x\n", " # TOTAL VIOLATION VALUES (one norm, for L1 penalty function)\n", " # .tvi_l1 total violation of inequality constraints at x\n", " # .tvi_l1_grad its gradient\n", " # .tve_l1 total violation of equality constraints at x\n", " # .tve_l1_grad its gradient\n", " # .tv_l1 total violation of all constraints at x\n", " # .tv_l1_grad its gradient\n", " # PENALTY FUNCTION VALUES \n", " # .p penalty function value at x\n", " # .p_grad penalty function gradient at x\n", " # .mu current value of the penalty parameter\n", " # .feasible_to_tol logical indicating whether x is feasible\n", " # d search direction\n", " # get_BFGS_state_fn function handle to get the (L)BFGS state data \n", " # FULL MEMORY: \n", " # - returns BFGS inverse Hessian approximation \n", " # LIMITED MEMORY:\n", " # - returns a struct with current L-BFGS state:\n", " # .S matrix of the BFGS 
s vectors\n", " # .Y matrix of the BFGS y vectors\n", " # .rho row vector of the 1/sty values\n", " # .gamma H0 scaling factor\n", " # H_regularized regularized version of H \n", " # [] if no regularization was applied to H\n", " # fn_evals number of function evaluations incurred during\n", " # this iteration\n", " # alpha size of the accepted step\n", " # n_gradients number of previous gradients used for computing\n", " # the termination QP\n", " # stat_vec stationarity measure vector \n", " # stat_val approximate value of stationarity:\n", " # norm(stat_vec)\n", " # gradients (result of termination QP)\n", " # fallback_level level of fallback strategy needed for a successful step\n", " # to be taken. See bfgssqpOptionsAdvanced.\n", " #\n", " # OUTPUT ARGUMENT\n", " # halt set this to true if you wish optimization to \n", " # be halted at the current iterate. This can be \n", " # used to create a custom termination condition.\n", " return [halt_log_fn, get_log_fn]\n", "\n", "mHLF_obj = HaltLog()\n", "[halt_log_fn, get_log_fn] = mHLF_obj.makeHaltLogFunctions(opts.maxit)\n", "\n", "# Set PyGRANSO's logging function in opts\n", "opts.halt_log_fn = halt_log_fn\n", "\n", "# Main algorithm with logging enabled.\n", "soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)\n", "\n", "# GET THE HISTORY OF ITERATES\n", "# Even if an error is thrown, the log generated until the error can be\n", "# obtained by calling get_log_fn()\n", "log = get_log_fn()" ] }, { "cell_type": "code", "execution_count": 10, "id": "557f5a1a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[16.406737441369938, 11.51185882133243, 1.117082244864862]\n", "[tensor([[-0.8448],\n", " [-0.9117]], dtype=torch.float64), tensor([[ 1.1292],\n", " [-0.1617]], dtype=torch.float64), tensor([[0.3125],\n", " [0.0171]], dtype=torch.float64)]\n" ] } ], "source": [ "print(log.f[0:3])\n", "print(log.x[0:3])" ] }, { "cell_type": "markdown", "id": "fc946bc6", "metadata": 
{}, "source": [ "## LFBGS Restarting\n", " \n", "**(Optional)**\n", "\n", " (Note that this example problem only has two variables!)\n", " \n", " If PyGRANSO runs in limited-memory mode, that is, if \n", " opts.limited_mem_size > 0, then PyGRANSO's restart procedure is \n", " slightly different from the BFGS restarting, as soln.H_final will instead contain the most \n", " current L-BFGS state, not a full inverse Hessian approximation. \n", " \n", " Instead the BFGS standard procedure, users should do the following: \n", " 1) If you set a specific H0, you will need to set opts.H0 to whatever\n", " you used previously. By default, PyGRANSO uses the identity for H0.\n", " \n", " 2) Warm-start PyGRANSO with the most recent L-BFGS data by setting:\n", " opts.limited_mem_warm_start = soln.H_final;\n", " \n", " NOTE: how to set opts.scaleH0 so that PyGRANSO will be restarted as if\n", " it had never terminated depends on the previously used values of \n", " opts.scaleH0 and opts.limited_mem_fixed_scaling. " ] }, { "cell_type": "code", "execution_count": 11, "id": "8f78321f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "==================================================================================================================\n", "PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation | \n", "Version 1.2.0 | \n", "Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang | \n", "==================================================================================================================\n", "Problem specifications: | \n", " # of variables : 2 | \n", " # of inequality constraints : 2 | \n", " # of equality constraints : 0 | \n", "==================================================================================================================\n", "Limited-memory mode enabled with size = 1. 
| \n", "NOTE: limited-memory mode is generally NOT | \n", "recommended for nonsmooth problems. | \n", "==================================================================================================================\n", " | <--- Penalty Function --> | | Total Violation | <--- Line Search ---> | <- Stationarity -> | \n", "Iter | Mu | Value | Objective | Ineq | Eq | SD | Evals | t | Grads | Value | \n", "=====|===========================|================|=================|=======================|====================|\n", " 0 | 100.0000 | 21841.7781746 | 218.250000000 | 10.00000 | - | - | 1 | 0.000000 | 1 | 9732.770 | \n", " 2 | 34.86784 | 1378.57842221 | 39.2815768212 | 8.914529 | - | S | 3 | 1.500000 | 1 | 4.455384 | \n", " 4 | 12.15767 | 262.610250639 | 20.8774186596 | 8.789579 | - | S | 2 | 2.000000 | 1 | 0.604009 | \n", " 6 | 4.239116 | 57.9458175917 | 11.5708792009 | 8.895520 | - | S | 3 | 0.750000 | 1 | 0.165224 | \n", " 8 | 1.642320 | 26.2056730861 | 10.5532019880 | 8.873935 | - | S | 1 | 1.000000 | 1 | 0.027766 | \n", " 10 | 1.642320 | 25.9163909350 | 10.4174724535 | 8.807564 | - | S | 2 | 2.000000 | 1 | 0.021455 | \n", "==================================================================================================================\n", "Optimization results: | \n", "F = final iterate, B = Best (to tolerance), MF = Most Feasible | \n", "==================================================================================================================\n", " F | | | 10.4174724535 | 8.807564 | - | | | | | | \n", " MF | | | 75.6886238113 | 8.192103 | - | | | | | | \n", "==================================================================================================================\n", "Iterations: 10 | \n", "Function evaluations: 29 | \n", "PyGRANSO termination code: 4 --- max iterations reached. 
| \n", "==================================================================================================================\n" ] } ], "source": [ "opts = pygransoStruct()\n", "opts.torch_device = device\n", "# set an infeasible initial point\n", "opts.x0 = 5.5*torch.ones((2,1), device=device, dtype=torch.double)\n", "\n", "opts.print_ascii = True\n", "opts.quadprog_info_msg = False\n", "opts.maxit = 10 # default is 1000\n", "opts.mu0 = 100 # default is 1\n", "opts.print_frequency = 2\n", "\n", "\n", "# By default, PyGRANSO uses full-memory BFGS updating. For nonsmooth\n", "# problems, full-memory BFGS is generally recommended. However, if\n", "# this is not feasible, one may optionally enable limited-memory BFGS\n", "# updating by setting opts.limited_mem_size to a positive integer\n", "# (significantly) less than the number of variables.\n", "opts.limited_mem_size = 1\n", "\n", "# start main algorithm\n", "soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)" ] }, { "cell_type": "code", "execution_count": 12, "id": "e363ffda", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗\n", "PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║ \n", "Version 1.2.0 ║ \n", "Licensed under the AGPLv3, Copyright (C) 2021-2022 Tim Mitchell and Buyun Liang ║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣\n", "Problem specifications: ║ \n", " # of variables : 2 ║ \n", " # of inequality constraints : 2 ║ \n", " # of equality constraints : 0 ║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣\n", "\u001b[33mLimited-memory mode enabled with size = 1. 
\u001b[0m ║ \n", "\u001b[33mNOTE: limited-memory mode is generally NOT \u001b[0m ║ \n", "\u001b[33mrecommended for nonsmooth problems. \u001b[0m ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " ║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║ \n", "Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " 0 ║ 1.642320 │ 25.9163909350 ║ 10.4174724535 ║ 8.807564 │ - ║ - │ 1 │ 0.000000 ║ 1 │ 0.142170 ║ \n", " 2 ║ 1.642320 │ 13.8167164344 ║ 6.50516654793 ║ 3.133149 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 3.838176 ║ \n", " 4 ║ 1.642320 │ 6.43976368442 ║ 3.92113741712 ║ 0.000000 │ - ║ S │ 4 │ 8.000000 ║ 1 │ 3.895256 ║ \n", " 6 ║ 1.642320 │ 4.81165411491 ║ 2.92979027070 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.052493 ║ \n", " 8 ║ 1.642320 │ 4.63155845135 ║ 2.82013099132 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.017665 ║ \n", " 10 ║ 1.642320 │ 4.03639569243 ║ 2.45773959349 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.013749 ║ \n", " 12 ║ 1.642320 │ 3.00639292821 ║ 1.83057645887 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.028975 ║ \n", " 14 ║ 1.642320 │ 2.28891992127 ║ 1.39371100989 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.048897 ║ \n", " 16 ║ 1.642320 │ 1.95780587454 ║ 1.19209745051 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.265478 ║ \n", " 18 ║ 1.642320 │ 1.49471266813 ║ 0.91012249177 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.084726 ║ \n", " 20 ║ 1.642320 │ 1.34989358124 ║ 0.82194292989 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.216509 ║ \n", " 22 ║ 1.642320 │ 1.13057014213 ║ 0.68839806928 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.018354 ║ \n", " 24 ║ 1.642320 │ 1.00727210829 ║ 0.61332256067 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.009416 ║ \n", " 26 ║ 1.642320 │ 0.91876697706 ║ 0.55943226303 ║ 0.000000 │ - ║ S │ 1 │ 
1.000000 ║ 1 │ 0.012977 ║ \n", " 28 ║ 1.642320 │ 0.79484835473 ║ 0.48397888143 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.096405 ║ \n", " 30 ║ 1.642320 │ 0.65714700682 ║ 0.40013327247 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.142885 ║ \n", " 32 ║ 1.642320 │ 0.57064376781 ║ 0.34746191622 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.005519 ║ \n", " 34 ║ 1.642320 │ 0.50072121946 ║ 0.30488645320 ║ 0.000000 │ - ║ S │ 3 │ 4.000000 ║ 1 │ 0.002562 ║ \n", " 36 ║ 1.642320 │ 0.46398321205 ║ 0.28251687839 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.010241 ║ \n", " 38 ║ 1.642320 │ 0.38322497032 ║ 0.23334362003 ║ 0.000000 │ - ║ S │ 3 │ 0.250000 ║ 1 │ 0.145492 ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " ║ <--- Penalty Function --> ║ ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║ \n", "Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║ \n", "═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣\n", " 40 ║ 1.642320 │ 0.31487530037 ║ 0.19172587420 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.006704 ║ \n", " 42 ║ 1.642320 │ 0.30050834308 ║ 0.18297791129 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.007435 ║ \n", " 44 ║ 1.642320 │ 0.26424400929 ║ 0.16089675380 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 1 │ 0.795419 ║ \n", " 46 ║ 1.642320 │ 0.20338505133 ║ 0.12384006214 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 0.004990 ║ \n", " 48 ║ 1.642320 │ 0.17520417328 ║ 0.10668087731 ║ 0.000000 │ - ║ S │ 4 │ 0.125000 ║ 1 │ 0.003565 ║ \n", " 50 ║ 1.077526 │ 0.10049441692 ║ 0.08991368997 ║ 0.002780 │ - ║ S │ 6 │ 0.031250 ║ 1 │ 0.033750 ║ \n", " 52 ║ 1.077526 │ 0.09397044638 ║ 0.08720941715 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 1 │ 2.31e-04 ║ \n", " 54 ║ 1.077526 │ 0.09254093483 ║ 0.08588275676 ║ 0.000000 │ - ║ S │ 5 │ 0.062500 ║ 2 │ 2.09e-04 ║ \n", " 56 ║ 1.077526 │ 0.09244295357 ║ 0.08579182510 ║ 0.000000 │ - ║ S │ 2 │ 0.500000 ║ 2 │ 
1.78e-06 ║ \n", " 58 ║ 1.077526 │ 0.09243954005 ║ 0.08578865717 ║ 0.000000 │ - ║ S │ 1 │ 1.000000 ║ 4 │ 6.93e-15 ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Optimization results: ║ \n", "F = final iterate, B = Best (to tolerance), MF = Most Feasible ║ \n", "═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣\n", " F ║ │ ║ 0.08578865717 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", " B ║ │ ║ 0.08578865717 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", " MF ║ │ ║ 0.08578865717 ║ 0.000000 │ - ║ │ │ ║ │ ║ \n", "═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣\n", "Iterations: 58 ║ \n", "Function evaluations: 127 ║ \n", "PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances. ║ \n", "═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝\n" ] } ], "source": [ "# Restart\n", "opts = pygransoStruct()\n", "opts.torch_device = device\n", "# set the initial point and penalty parameter to their final values from the previous run\n", "opts.x0 = soln.final.x\n", "opts.mu0 = soln.final.mu\n", "opts.limited_mem_size = 1\n", "opts.quadprog_info_msg = False\n", "opts.print_frequency = 2\n", "\n", "opts.limited_mem_warm_start = soln.H_final\n", "opts.scaleH0 = False\n", "\n", "# In contrast to full-memory BFGS updating, limited-memory BFGS\n", "# permits that H0 can be scaled on every iteration. By default,\n", "# PyGRANSO will reuse the scaling parameter that is calculated on the\n", "# very first iteration for all subsequent iterations as well. Set\n", "# this option to false to force PyGRANSO to calculate a new scaling\n", "# parameter on every iteration. 
Note that opts.scaleH0 has no effect\n", "# when opts.limited_mem_fixed_scaling is set to true.\n", "opts.limited_mem_fixed_scaling = False\n", "\n", "# Restart PyGRANSO\n", "opts.maxit = 100 # increase maximum allowed iterations\n", "\n", "# Main algorithm\n", "soln = pygranso(var_spec = var_in,combined_fn = comb_fn, user_opts = opts)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 5 }