Dictionary Learning

Solve the orthogonal dictionary learning problem taken from: Yu Bai, Qijia Jiang, and Ju Sun. “Subgradient descent learns orthogonal dictionaries.” arXiv preprint arXiv:1810.10702 (2018).

Problem Description

Given data \(\{y_i \}_{i \in[m]}\) generated as \(y_i = A x_i\), where \(A \in R^{n \times n}\) is a fixed unknown orthogonal matrix and each \(x_i \in R^n\) is an iid Bernoulli-Gaussian random vector with parameter \(\theta \in (0,1)\), recover \(A\).
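
For concreteness, data from this model could be generated as in the following sketch (not part of the original notebook; here the orthogonal \(A\) is obtained from a QR factorization, whereas the demo code later in this notebook effectively takes \(A\) to be the identity, so that \(Y = X\)):

import numpy as np

np.random.seed(0)
n, m, theta = 30, 9000, 0.3
A, _ = np.linalg.qr(np.random.randn(n, n))                    # random orthogonal dictionary
X = np.random.randn(n, m) * (np.random.rand(n, m) <= theta)   # Bernoulli-Gaussian coefficients
Y = A @ X                                                     # observed data y_i = A x_i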

Write \(Y \doteq [y_1,...,y_m]\) and \(X \doteq [x_1,...,x_m]\). To find one column of \(A\), one can solve the following optimization problem:

\[\min_{q \in R^n} f(q) \doteq \frac{1}{m} ||q^T Y||_{1} = \frac{1}{m} \sum_{i=1}^m |q^T y_i|,\]
\[\text{s.t. } ||q||_2 = 1.\]

This problem is nonconvex due to the constraint and nonsmooth due to the objective.

Under the above statistical model, \(q^T Y = q^T A X\) is sparsest when \(q\) is a column of \(A\) (up to sign), since then \(q^T A\) is 1-sparse.
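
To see why, suppose \(q = a_j\), the \(j\)-th column of \(A\). Since \(A\) is orthogonal,

\[q^T Y = a_j^T A X = e_j^T X = X_{j,:},\]

i.e. the \(j\)-th row of \(X\), whose entries are nonzero only with probability \(\theta\). Any other unit-norm \(q\) mixes several rows of \(X\) and yields a denser \(q^T Y\), so minimizing the \(\ell_1\) objective favors \(q\) aligned (up to sign) with a column of \(A\).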

Modules Importing

Import all necessary modules.

[1]:
import time
import numpy as np
import torch
import numpy.linalg as la
from scipy.stats import norm
from pygranso.pygranso import pygranso
from pygranso.pygransoStruct import pygransoStruct

from pygranso.private.getNvar import getNvarTorch
import torch.nn as nn

Initialization

Specify the torch device, create the torch model, and generate the data.

Use the GPU for this problem. If no CUDA device is available, set device = torch.device('cpu').
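
A portable alternative (not in the original notebook) is to select the device automatically:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')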

[2]:
device = torch.device('cuda')

class Dict_Learning(nn.Module):

    def __init__(self,n):
        super().__init__()
        np.random.seed(1)
        q0 = norm.ppf(np.random.rand(n,1))            # random Gaussian initialization
        q0 /= la.norm(q0,2)                           # normalize to the unit sphere
        self.q = nn.Parameter( torch.from_numpy(q0) )

    def forward(self, Y,m):
        qtY = self.q.T @ Y
        f = 1/m * torch.norm(qtY, p = 1)              # f(q) = (1/m) * ||q^T Y||_1
        return f

## Data initialization
n = 30
np.random.seed(1)
m = 10*n**2   # sample complexity
theta = 0.3   # sparsity level
Y = norm.ppf(np.random.rand(n,m)) * (norm.ppf(np.random.rand(n,m)) <= theta)  # Bernoulli-Gaussian model
# All user-provided data (vectors/matrices/tensors) must be torch tensors.
# PyTorch tensors are single precision by default, so explicitly set dtype=torch.double.
# Also make sure the provided tensors live on the same device as opts.torch_device.
Y = torch.from_numpy(Y).to(device=device, dtype=torch.double)

torch.manual_seed(0)

model = Dict_Learning(n).to(device=device, dtype=torch.double)
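
As a quick sanity check (an illustrative snippet, not part of the original notebook), one can confirm that the initial iterate lies on the unit sphere and that the model returns a scalar objective:

with torch.no_grad():
    print(torch.norm(model.q).item())   # should be 1.0, since q0 was normalized
    print(model(Y, m).item())           # objective value f(q0) at the starting point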

Function Set-Up

Encode the optimization variables, the objective function, and the constraint functions.

Note: please strictly follow the format of comb_fn, which will be used in the PyGRANSO main algorithm.

[3]:
def user_fn(model,Y,m):
    # objective function
    f = model(Y,m)

    q = list(model.parameters())[0]

    # inequality constraint
    ci = None

    # equality constraint
    ce = pygransoStruct()
    ce.c1 = q.T @ q - 1

    return [f,ci,ce]

comb_fn = lambda model : user_fn(model,Y,m)
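
Optionally (an illustrative check, not part of the original notebook), comb_fn can be called directly to confirm it returns the objective and constraints in the expected [f, ci, ce] format:

f0, ci0, ce0 = comb_fn(model)
print(f0.item())       # objective value at the current q
print(ci0)             # None: there are no inequality constraints
print(ce0.c1.item())   # equality residual q^T q - 1 (approximately 0 at the unit-norm start)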

User Options

Specify user-defined options for PyGRANSO

[4]:
opts = pygransoStruct()
opts.torch_device = device
opts.maxit = 500
np.random.seed(1)
nvar = getNvarTorch(model.parameters())
opts.x0 = torch.nn.utils.parameters_to_vector(model.parameters()).detach().reshape(nvar,1)

opts.print_frequency = 10
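
Other options can be set on the same struct in the same way. For example (shown commented out, as an illustration only; quadprog_info_msg is referenced in the QP solver notice printed below, and opt_tol is assumed to carry over from GRANSO's option set):

# opts.quadprog_info_msg = False   # suppress the QP solver notice banner
# opts.opt_tol = 1e-8              # stationarity tolerance (assumed GRANSO option name)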

Main Algorithm

[5]:
start = time.time()
soln = pygranso(var_spec= model, combined_fn = comb_fn, user_opts = opts)
end = time.time()
print("Total Wall Time: {}s".format(end - start))
print(max(abs(soln.final.x))) # should be close to 1


╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║  PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface,  ║
║  the default is osqp. Users may provide their own wrapper for the QP solver.                  ║
║  To disable this notice, set opts.quadprog_info_msg = False                                   ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation                                             ║
Version 1.0.0                                                                                                    ║
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang                                       ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications:                                                                                          ║
 # of variables                     :   30                                                                       ║
 # of inequality constraints        :    0                                                                       ║
 # of equality constraints          :    1                                                                       ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
     ║ <--- Penalty Function --> ║                ║ Total Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║    Mu    │      Value     ║    Objective   ║ Ineq │    Eq    ║ SD │ Evals │     t    ║ Grads │    Value   ║
═════╬═══════════════════════════╬════════════════╬═════════════════╬═══════════════════════╬════════════════════╣
   0 ║ 1.000000 │  0.61751624522 ║  0.61751624522 ║   -  │ 0.000000 ║ -  │     1 │ 0.000000 ║     1 │ 0.054664   ║
  10 ║ 1.000000 │  0.60573380055 ║  0.60513582468 ║   -  │ 5.98e-04 ║ S  │     1 │ 1.000000 ║     1 │ 0.024968   ║
  20 ║ 1.000000 │  0.58456516016 ║  0.58301955756 ║   -  │ 0.001546 ║ S  │     1 │ 1.000000 ║     1 │ 0.043517   ║
  30 ║ 1.000000 │  0.50113197499 ║  0.49475409554 ║   -  │ 0.006378 ║ S  │     3 │ 0.250000 ║     1 │ 0.121253   ║
  40 ║ 1.000000 │  0.49278124194 ║  0.49260444460 ║   -  │ 1.77e-04 ║ S  │     4 │ 0.125000 ║     1 │ 0.037304   ║
  50 ║ 1.000000 │  0.49225009818 ║  0.49217494723 ║   -  │ 7.52e-05 ║ S  │     5 │ 0.062500 ║     1 │ 0.032163   ║
  60 ║ 1.000000 │  0.49212731751 ║  0.49208854433 ║   -  │ 3.88e-05 ║ S  │     4 │ 0.125000 ║     1 │ 0.051779   ║
  70 ║ 1.000000 │  0.49203371691 ║  0.49201049130 ║   -  │ 2.32e-05 ║ S  │     4 │ 0.125000 ║     1 │ 0.054529   ║
  80 ║ 1.000000 │  0.49197689465 ║  0.49197679422 ║   -  │ 1.00e-07 ║ S  │     2 │ 0.500000 ║     1 │ 0.001300   ║
  90 ║ 1.000000 │  0.49194701030 ║  0.49194698105 ║   -  │ 2.93e-08 ║ S  │     5 │ 0.062500 ║     5 │ 1.02e-04   ║
 100 ║ 1.000000 │  0.49194382838 ║  0.49194381415 ║   -  │ 1.42e-08 ║ S  │     6 │ 0.031250 ║    10 │ 5.71e-05   ║
 110 ║ 1.000000 │  0.49194277900 ║  0.49194277111 ║   -  │ 7.88e-09 ║ S  │     5 │ 0.062500 ║    18 │ 9.98e-06   ║
 120 ║ 1.000000 │  0.49194243076 ║  0.49194242538 ║   -  │ 5.38e-09 ║ S  │     6 │ 0.031250 ║    27 │ 2.47e-06   ║
 130 ║ 1.000000 │  0.49194218055 ║  0.49194217869 ║   -  │ 1.87e-09 ║ S  │     4 │ 0.125000 ║    37 │ 5.14e-07   ║
 140 ║ 1.000000 │  0.49194213249 ║  0.49194213160 ║   -  │ 8.82e-10 ║ S  │     5 │ 0.062500 ║    40 │ 1.15e-07   ║
 150 ║ 1.000000 │  0.49194211795 ║  0.49194211747 ║   -  │ 4.78e-10 ║ S  │     5 │ 0.062500 ║    40 │ 4.44e-08   ║
 160 ║ 1.000000 │  0.49194211356 ║  0.49194211328 ║   -  │ 2.77e-10 ║ S  │     5 │ 0.062500 ║    40 │ 2.16e-08   ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Optimization results:                                                                                            ║
F = final iterate, B = Best (to tolerance), MF = Most Feasible                                                   ║
═════╦═══════════════════════════╦════════════════╦═════════════════╦═══════════════════════╦════════════════════╣
   F ║          │                ║  0.49194211312 ║   -  │ 2.59e-10 ║    │       │          ║       │            ║
   B ║          │                ║  0.49194211312 ║   -  │ 2.59e-10 ║    │       │          ║       │            ║
  MF ║          │                ║  0.61751624522 ║   -  │ 0.000000 ║    │       │          ║       │            ║
═════╩═══════════════════════════╩════════════════╩═════════════════╩═══════════════════════╩════════════════════╣
Iterations:              161                                                                                     ║
Function evaluations:    664                                                                                     ║
PyGRANSO termination code: 0 --- converged to stationarity and feasibility tolerances.                           ║
═════════════════════════════════════════════════════════════════════════════════════════════════════════════════╝
Total Wall Time: 2.669196605682373s
tensor([1.0000], device='cuda:0', dtype=torch.float64)
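
Since the data in this demo are generated without applying an explicit dictionary (so \(A\) is effectively the identity and \(Y = X\)), the recovered \(q\) should be close to a signed standard basis vector. A short check of this (an illustrative snippet, not part of the original notebook):

q_est = soln.final.x.reshape(-1)
vals, idx = torch.sort(q_est.abs(), descending=True)
print(vals[:3])        # largest magnitude should be close to 1, the rest close to 0
print(idx[0].item())   # index of the recovered standard basis vector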