Unconstrained Deep Learning¶
Train an unconstrained deep learning model for CIFAR-10 classification, using a modified LeNet5 based on this PyTorch tutorial.
Problem Description¶
We have a simple feed-forward network (a demo image of LeNet5 is shown below). The input is an image, which is fed through several layers to obtain the logit output, and the label of the input image is decided by the largest entry of that logit vector.
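As a minimal, standalone illustration of that decision step (the numbers below are made up), the predicted label is simply the index of the largest logit:
import torch

logits = torch.tensor([[0.2, 1.5, -0.3]])  # logits for one image over three hypothetical classes
pred = logits.argmax(dim=1)                # index of the largest logit = predicted label
print(pred)                                # tensor([1])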
Modules Importing¶
Import all necessary modules. If PyGRANSO is not installed as a package, add its src folder to the system path first (a sketch is given after the cell below).
[1]:
import time
import torch
from pygranso.pygranso import pygranso
from pygranso.pygransoStruct import pygransoStruct
from pygranso.private.getNvar import getNvarTorch
import torch.nn as nn
import torchvision.transforms as transforms
import torch.nn.functional as F
import torchvision
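If PyGRANSO is installed with pip or conda, the imports above work as-is. If it is instead used from a cloned repository, a sketch like the following (the path is hypothetical) prepends the source folder to the system path before importing:
import sys
sys.path.append('/path/to/PyGRANSO')  # hypothetical location of the cloned PyGRANSO repository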
Data Initialization¶
Specify the torch device and the neural network architecture, and generate the data.
NOTE: please specify the path for downloading the data.
Use a GPU for this problem. If no CUDA device is available, please set device = torch.device('cpu').
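A device-selection sketch that falls back to the CPU automatically when no CUDA device is present:
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')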
[2]:
device = torch.device('cuda')
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv1_bn = nn.BatchNorm2d(6)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 8, 9)
        self.conv2_bn = nn.BatchNorm2d(8)
        self.fc1 = nn.Linear(8 * 3 * 3, 30)
        self.fc1_bn = nn.BatchNorm1d(30)
        self.fc2 = nn.Linear(30, 20)
        self.fc2_bn = nn.BatchNorm1d(20)
        self.fc3 = nn.Linear(20, 10)

    def forward(self, x):
        x = self.pool(F.elu(self.conv1_bn(self.conv1(x))))
        x = self.pool(F.elu(self.conv2_bn(self.conv2(x))))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.elu(self.fc1_bn(self.fc1(x)))
        x = F.elu(self.fc2_bn(self.fc2(x)))
        x = self.fc3(x)
        return x
# fix the random seed so the model's initial parameters are reproducible
torch.manual_seed(0)
model = Net().to(device=device, dtype=torch.double)
transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 1000
trainset = torchvision.datasets.CIFAR10(root='/home/buyun/Documents/GitHub/PyGRANSO/examples', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False, num_workers=2)
# data_in
for i, data in enumerate(trainloader, 0):
    if i >= 1:
        break
    # get the inputs; data is a list of [inputs, labels]
    inputs, labels = data

# All the user-provided data (vectors/matrices/tensors) must be in torch tensor format.
# As PyTorch tensors are single precision by default, one must explicitly set `dtype=torch.double`.
# Also, please make sure the device of the provided torch tensors matches opts.torch_device.
labels = labels.to(device=device)                      # labels/targets [1000]
inputs = inputs.to(device=device, dtype=torch.double)  # input data [1000, 3, 32, 32]
Files already downloaded and verified
Function Set-Up¶
Encode the optimization variables, and the objective and constraint functions.
Note: please strictly follow the format of comb_fn, which will be used in the PyGRANSO main algorithm.
[3]:
def user_fn(model, inputs, labels):
    # objective function: cross-entropy loss of the network on this batch
    outputs = model(inputs)
    criterion = nn.CrossEntropyLoss()
    f = criterion(outputs, labels)

    # no inequality or equality constraints in this unconstrained problem
    ci = None
    ce = None
    return [f, ci, ce]
comb_fn = lambda model : user_fn(model,inputs,labels)
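For reference, the same [f, ci, ce] format also carries constraints: ci and ce are pygransoStruct objects whose fields hold the constraint values. The sketch below only illustrates the format (the norm bound c1 is arbitrary and is not part of this unconstrained example):
def constrained_user_fn(model, inputs, labels):
    outputs = model(inputs)
    f = nn.CrossEntropyLoss()(outputs, labels)
    # illustrative inequality constraint c1 <= 0: keep the parameter norm below 10
    ci = pygransoStruct()
    ci.c1 = torch.norm(torch.nn.utils.parameters_to_vector(model.parameters())) - 10
    ce = None
    return [f, ci, ce]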
User Options¶
Specify user-defined options for PyGRANSO
[4]:
opts = pygransoStruct()
opts.torch_device = device
nvar = getNvarTorch(model.parameters())
opts.x0 = torch.nn.utils.parameters_to_vector(model.parameters()).detach().reshape(nvar,1)
opts.opt_tol = 1e-3
# opts.fvalquit = 1e-6
opts.print_level = 1
opts.print_frequency = 10
# opts.print_ascii = True
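A few other options can be handy for larger runs; the lines below are a hedged sketch (option names assumed from the PyGRANSO settings, values illustrative) and are left commented out since the defaults suffice here:
# opts.maxit = 2000             # cap the number of iterations
# opts.limited_mem_size = 100   # switch to limited-memory BFGS updates for very large problems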
Initial Test¶
Check initial accuracy of the modified LeNet5 model
[5]:
outputs = model(inputs)
acc = (outputs.max(1)[1] == labels).sum().item()/labels.size(0)
print("Initial acc = {}".format(acc))
Initial acc = 0.102
/home/buyun/anaconda3/envs/cuosqp_pygranso/lib/python3.9/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448255797/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
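Since the same accuracy computation is repeated after training, a small helper (the name get_accuracy is ours) could factor it out:
def get_accuracy(model, inputs, labels):
    # fraction of inputs whose largest logit matches the true label
    outputs = model(inputs)
    return (outputs.max(1)[1] == labels).sum().item() / labels.size(0)

print("Initial acc = {}".format(get_accuracy(model, inputs, labels)))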
Main Algorithm¶
[6]:
start = time.time()
soln = pygranso(var_spec= model, combined_fn = comb_fn, user_opts = opts)
end = time.time()
print("Total Wall Time: {}s".format(end - start))
╔═════ QP SOLVER NOTICE ════════════════════════════════════════════════════════════════════════╗
║ PyGRANSO requires a quadratic program (QP) solver that has a quadprog-compatible interface, ║
║ the default is osqp. Users may provide their own wrapper for the QP solver. ║
║ To disable this notice, set opts.quadprog_info_msg = False ║
╚═══════════════════════════════════════════════════════════════════════════════════════════════╝
══════════════════════════════════════════════════════════════════════════════════════════════╗
PyGRANSO: A PyTorch-enabled port of GRANSO with auto-differentiation ║
Version 1.0.0 ║
Licensed under the AGPLv3, Copyright (C) 2021 Tim Mitchell and Buyun Liang ║
══════════════════════════════════════════════════════════════════════════════════════════════╣
Problem specifications: ║
# of variables : 7500 ║
# of inequality constraints : 0 ║
# of equality constraints : 0 ║
═════╦════════════╦════════════════╦═════════════╦═══════════════════════╦════════════════════╣
║ Penalty Fn ║ ║ Violation ║ <--- Line Search ---> ║ <- Stationarity -> ║
Iter ║ Mu │ Value ║ Objective ║ Ineq │ Eq ║ SD │ Evals │ t ║ Grads │ Value ║
═════╬════════════╬════════════════╬═════════════╬═══════════════════════╬════════════════════╣
0 ║ - │ - ║ 2.39541706590 ║ - │ - ║ - │ 1 │ 0.000000 ║ 1 │ 1.473948 ║
10 ║ - │ - ║ 1.63783103375 ║ - │ - ║ QN │ 1 │ 1.000000 ║ 1 │ 0.269115 ║
20 ║ - │ - ║ 1.26055629664 ║ - │ - ║ QN │ 1 │ 1.000000 ║ 1 │ 0.863316 ║
30 ║ - │ - ║ 0.84374002729 ║ - │ - ║ QN │ 1 │ 1.000000 ║ 1 │ 0.222527 ║
40 ║ - │ - ║ 0.39057278862 ║ - │ - ║ QN │ 1 │ 1.000000 ║ 1 │ 0.224242 ║
50 ║ - │ - ║ 0.08692818754 ║ - │ - ║ QN │ 1 │ 1.000000 ║ 1 │ 0.498670 ║
60 ║ - │ - ║ 0.00766217133 ║ - │ - ║ QN │ 3 │ 4.000000 ║ 1 │ 0.106859 ║
70 ║ - │ - ║ 0.00111545451 ║ - │ - ║ QN │ 2 │ 2.000000 ║ 1 │ 0.028518 ║
80 ║ - │ - ║ 3.9223359e-04 ║ - │ - ║ QN │ 2 │ 2.000000 ║ 1 │ 0.005710 ║
═════╩════════════╩════════════════╩═════════════╩═══════════════════════╩════════════════════╣
Optimization results: ║
F = final iterate, B = Best (to tolerance), MF = Most Feasible ║
═════╦════════════╦════════════════╦═════════════╦═══════════════════════╦════════════════════╣
F ║ │ ║ 1.9363262e-04 ║ - │ - ║ │ │ ║ │ ║
B ║ │ ║ 1.9363262e-04 ║ - │ - ║ │ │ ║ │ ║
═════╩════════════╩════════════════╩═════════════╩═══════════════════════╩════════════════════╣
Iterations: 89 ║
Function evaluations: 154 ║
PyGRANSO termination code: 0 --- converged to stationarity tolerance. ║
══════════════════════════════════════════════════════════════════════════════════════════════╝
Total Wall Time: 6.698632478713989s
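The returned soln object bundles the iterates summarized in the log above (F = final, B = best, MF = most feasible); a sketch of inspecting it, assuming the solution struct mirrors GRANSO's:
print(soln.final.f)        # objective value at the final iterate
print(soln.best.f)         # objective value at the best iterate (to tolerance)
print(soln.final.x.shape)  # flattened parameter vector, one entry per optimization variable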
Train Accuracy¶
[7]:
torch.nn.utils.vector_to_parameters(soln.final.x, model.parameters())
outputs = model(inputs)
acc = (outputs.max(1)[1] == labels).sum().item()/labels.size(0)
print("Train acc = {}".format(acc))
Train acc = 1.0
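Note that this accuracy is measured on the single 1000-image training batch the model was fit to. A hedged sketch for checking generalization on the CIFAR-10 test split, reusing the transform, batch size, and device handling from above:
testset = torchvision.datasets.CIFAR10(root='/home/buyun/Documents/GitHub/PyGRANSO/examples', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2)

model.eval()  # use the running BatchNorm statistics at evaluation time
correct, total = 0, 0
with torch.no_grad():
    for test_inputs, test_labels in testloader:
        test_inputs = test_inputs.to(device=device, dtype=torch.double)
        test_labels = test_labels.to(device=device)
        test_outputs = model(test_inputs)
        correct += (test_outputs.max(1)[1] == test_labels).sum().item()
        total += test_labels.size(0)
print("Test acc = {}".format(correct / total))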