subgradient {snnR}    R Documentation

subgradient

Description

This function obtains the minimum-norm subgradient of the approximated squared error with an L1-norm or L2-norm penalty.
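
As a point of reference, a penalized objective of this general form (a hedged sketch in LaTeX notation; the exact scaling and parameterization used by snnR may differ) is

f(\mathbf{w}) = \frac{1}{2}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i(\mathbf{w}; X)\bigr)^2 + \lambda \lVert \mathbf{w} \rVert_1 + \frac{\lambda_2}{2}\lVert \mathbf{w} \rVert_2^2

where \hat{y}_i(\mathbf{w}; X) denotes the network prediction for the i-th observation.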

Usage

subgradient(w, X, y, nHidden, lambda, lambda2)

Arguments

w

(numeric, n) weights and biases.

X

(numeric, n x p) incidence matrix.

y

(numeric, n) the response data-vector.

nHidden

(positive integer, 1 x h) matrix; h indicates the number of hidden layers and nHidden[1, h] indicates the number of neurons in the h-th hidden layer.

lambda

(numeric, n) Lagrange multiplier for the L1-norm penalty on the parameters.

lambda2

(numeric, n) Lagrange multiplier for the L2-norm penalty on the parameters.

Details

The method is based on choosing the subgradient with minimum norm as a steepest-descent direction and taking a step in this direction that resembles a Newton iteration, using a Hessian approximation.
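
As an illustration of the minimum-norm choice for the L1 part only, the sketch below is not the snnR implementation; the function name and the gradient g of the smooth loss are assumptions.

## A minimal sketch (not the snnR implementation) of the minimum-norm
## subgradient of loss(w) + lambda * sum(abs(w)), given the gradient g
## of the smooth loss at w.
min_norm_subgrad <- function(g, w, lambda) {
  sg <- g + lambda * sign(w)                      # unique subgradient where w != 0
  at_zero <- (w == 0)
  ## where w == 0, take the element of [g - lambda, g + lambda] closest to zero
  sg[at_zero] <- sign(g[at_zero]) * pmax(abs(g[at_zero]) - lambda, 0)
  sg
}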

Value

A vector with the subgradient values.
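
Examples

## A hypothetical sketch of a call to subgradient(); the layout and length
## of w (assumed here to be input weights, hidden biases, output weights
## and an output bias for a single hidden layer) are assumptions, so
## consult the package sources for the exact parameter ordering.
library(snnR)
set.seed(1)
n <- 50; p <- 5; h <- 3
X <- matrix(rnorm(n * p), n, p)   # incidence matrix
y <- rnorm(n)                     # response vector
nHidden <- matrix(h, 1, 1)        # one hidden layer with h neurons
nw <- p * h + h + h + 1           # assumed number of weights and biases
w <- rnorm(nw)
g <- subgradient(w, X, y, nHidden, lambda = 1, lambda2 = 0)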


[Package snnR version 1.0 Index]