template
1 np = number of predictors
2 pwr = power used in loss, 2 for mean, 1 for median or quantile
0.5 alpha = quantile probability, 0 < alpha < 1, alpha = 0.5 for median
0 ctfm = nonnegative transformation parameter, positive to reduce correlations
Initial number of basis functions, smoothing parameter
nk lambda
1 .0
0 0 !must finish with row of zeros
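For example, to have the program consider several initial basis-function counts with different smoothing parameters, the table holds one nk lambda pair per row (the values below are illustrative only), terminated by the required row of zeros:

```
10 .01
20 .05
0 0
```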
Cross-validation options
0 indcv = 0/1 indicator for cross-validation
10 ncv = number of subsets for cross-validation partition
1 cvopt = option for partition: 1 sequential, 2 input permutation, 3 random varying
Output options
1 iprint = 0/1 indicator to print output
1 ipredict = 0/1 indicator to print predictions
-1 ngcl = number of gradient cluster projections, if < 0 then min{np, nk-1}
1000 nprint = upper bound on number of cases output
GCV search options (lambda should be 0 if GCV search is used)
1 isearch = 0/1 indicator for initial selection of nk
0 idr = 0/1 indicator for dimension reduction option
3 nfail = number of successive failures before stopping search
Initialization: gain(i) = gain(0)*w/(i + w), i=1,nm
3000 nm1 determines number of iterations nm = nm1*nk**.5
1.0 rpt0 = initial gain, gain(0)
100 rpt1 determines w = rpt1*nk**.5
1 ctau (tau = ctau*snn; if ctau <= 0 then default is 1)
0 ywt determines weight assigned to (dlta-yy)**2 in vq
Training algorithm tuning parameters:
50000 nm1 determines number of iterations nm = nm1*nk**.5
.25 rpt0 = initial gain, gain(0)
500 rpt1 determines w = rpt1*nk**.5
10 nrep = number of repetitions of first prop*nm stochastic approximation iterations
.10 prop = proportion of training repeated
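The gain schedule shared by the initialization and training blocks above can be sketched as follows. This is a minimal illustration, not the program itself; the function name is hypothetical, and it assumes rpt0 supplies gain(0) while rpt1 and nm1 scale with nk**.5 as stated.

```python
import math

def gain_schedule(nk, nm1, rpt0, rpt1):
    """Sketch of gain(i) = gain(0)*w/(i + w), i = 1,...,nm.

    Assumptions (not confirmed by the template): rpt0 is gain(0),
    nm = nm1*nk**.5 iterations, and w = rpt1*nk**.5.
    """
    nm = int(round(nm1 * math.sqrt(nk)))   # nm = nm1*nk**.5
    w = rpt1 * math.sqrt(nk)               # w = rpt1*nk**.5
    gain0 = rpt0                           # assumed: gain(0) = rpt0
    return [gain0 * w / (i + w) for i in range(1, nm + 1)]

# With the initialization defaults (nk = 1): gains start just below
# gain(0) and decay monotonically toward gain(0)*w/(nm + w).
gains = gain_schedule(nk=1, nm1=3000, rpt0=1.0, rpt1=100)
```

The schedule is a standard Robbins-Monro-style decaying step size: larger rpt1 (hence larger w) keeps the gain high for more iterations before it decays.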
training sample - rows of yy, xx(1), ..., xx(np) [, indx optional]
1.0D99 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
test sample - rows of yy, xx(1), ..., xx(np)
1.0D99 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
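A sketch of a writer for the sample layout above: one whitespace-separated record of yy, xx(1), ..., xx(np) per case, terminated by a sentinel row beginning with 1.0D99. The function name is hypothetical, and it assumes free-format input where the sentinel row need only start with 1.0D99 followed by zeros.

```python
def write_sample(path, rows, np_):
    """Write (yy, [xx1, ..., xxnp]) cases in the template's row layout.

    Assumed layout: whitespace-separated values, one case per line,
    followed by a sentinel line '1.0D99 0 ... 0' ending the sample.
    """
    with open(path, "w") as f:
        for yy, xs in rows:
            if len(xs) != np_:
                raise ValueError("each case needs np predictor values")
            f.write(" ".join(format(v, "g") for v in (yy, *xs)) + "\n")
        f.write("1.0D99" + " 0" * np_ + "\n")  # sentinel row

# Example: np = 1, two training cases.
write_sample("train.dat", [(1.2, [0.5]), (0.7, [1.1])], np_=1)
```

The 1.0D99 literal is written as text so it reads back as a Fortran double-precision constant.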