Delaunay-based Derivative-free Optimization via Global Surrogates (ΔDOGS)
(work supported by AFOSR grant FA 9550-12-1-0046, in collaboration with Prof. David Meyer)
Alongside derivative-based methods, which scale better to higher-dimensional problems, derivative-free methods play an essential role in the optimization of many practical engineering systems, especially those in which function evaluations are determined by statistical averaging, and those for which the function of interest is nonconvex in the adjustable parameters. We are developing an entirely new family of surrogate-based derivative-free optimization schemes. The idea unifying this efficient and (under the appropriate assumptions) provably-globally-convergent family of schemes is the minimization of a search function that linearly combines a computationally inexpensive “surrogate” with a synthetic piecewise-quadratic “uncertainty function”. The surrogate (an interpolation, or in some cases a regression, of recent function evaluations; we generally favor some variant of polyharmonic splines for this purpose) summarizes the trends evident in the data available thus far. The uncertainty function (built on the framework of a Delaunay triangulation of the existing datapoints) characterizes the reliability of the surrogate by quantifying the distance of any given point in parameter space from the nearest function evaluations.
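The interplay between the surrogate and the uncertainty function can be sketched in a few lines of Python. This is a minimal illustrative sketch, not our production code: a cubic radial basis function stands in for the polyharmonic-spline surrogate, the uncertainty within each Delaunay simplex is modeled as the squared circumradius minus the squared distance to the circumcenter (so it vanishes at the datapoints and grows away from them), and the names `surrogate`, `uncertainty`, `search`, and the weight `K` are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import RBFInterpolator

# Hypothetical datapoints in parameter space and their function values.
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
y = np.array([1.0, 0.8, 0.9, 0.3, 0.6])

tri = Delaunay(X)                                  # Delaunay triangulation of the datapoints
surrogate = RBFInterpolator(X, y, kernel='cubic')  # stand-in polyharmonic-spline interpolant

def uncertainty(x):
    """Piecewise-quadratic uncertainty model: within each simplex, the squared
    circumradius minus the squared distance to the circumcenter. It is zero at
    every datapoint and positive in the interior of each simplex."""
    s = int(tri.find_simplex(np.atleast_2d(x))[0])
    V = X[tri.simplices[s]]                        # vertices of the containing simplex
    # Circumcenter x0 solves 2 (V_i - V_0) . x0 = |V_i|^2 - |V_0|^2 for i = 1..n.
    A = 2.0 * (V[1:] - V[0])
    b = np.sum(V[1:]**2, axis=1) - np.sum(V[0]**2)
    x0 = np.linalg.solve(A, b)
    r2 = np.sum((V[0] - x0)**2)                    # squared circumradius
    return r2 - np.sum((np.asarray(x) - x0)**2)

K = 5.0  # tradeoff weight: larger K favors exploration over exploitation

def search(x):
    """Search function: surrogate prediction reduced by the scaled uncertainty."""
    return surrogate(np.atleast_2d(x))[0] - K * uncertainty(x)
```

Minimizing `search` over the feasible domain then selects the next point to evaluate, balancing exploitation of the surrogate's predicted minima against exploration of poorly sampled regions.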
Schemes developed in the family thus far include:

ΔDOGS, the original scheme of this class, designed for bound or linear constraints [BCB16],

ΔDOGS(C), designed for convex feasible domains defined by computable constraint functions [BB16],

ΔDOGS(Ω), designed for nonconvex (even disconnected!) feasible domains defined by computable constraint functions [ABB_omega],

ΔDOGS(Z) and ΔDOGS(Ω Z), which accelerate convergence by coordinating function evaluations, restricting them at each iteration to lie on a Cartesian grid that is successively refined as convergence is approached [BB_Z],

ΔDOGS(𝚲), which modifies the ΔDOGS(Z) idea by using a dense lattice (derived from an n-dimensional sphere packing), instead of a Cartesian grid, to coordinate the search [ABB_Lambda],

α-DOGS, which is specifically designed to efficiently handle approximate function evaluations obtained via statistical sampling of an ergodic function, the uncertainty of which is reduced as the sampling time T is increased (that is, the scheme automatically increases the accuracy of the function evaluations performed as convergence is approached) [BB_alpha], and

α-DOGS(extended), designed to simultaneously increase the sampling time T (as in α-DOGS) and refine the numerical approximation (reducing Δx and/or Δt) as convergence is approached [bab_alphaX].
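The grid and lattice coordination used by ΔDOGS(Z) and ΔDOGS(𝚲) can be illustrated with two simple quantizers. This is an illustrative sketch, not the schemes themselves: the function names are assumptions, the Cartesian quantizer snaps a point onto a grid of spacing 2^(-ℓ) that is halved at each refinement, and the lattice quantizer uses the D_n lattice (one classic dense packing, via the Conway–Sloane rounding rule) as a stand-in for the lattices derived from sphere packings.

```python
import numpy as np

def quantize_cartesian(x, level):
    """Snap a point in [0,1]^n onto the Cartesian grid of spacing 2**-level;
    incrementing `level` halves the spacing, successively refining the grid."""
    h = 2.0 ** (-level)
    return np.round(np.asarray(x, dtype=float) / h) * h

def quantize_Dn(x):
    """Closest point of the D_n lattice (integer vectors with even coordinate
    sum), via the Conway-Sloane rule: round componentwise; if the coordinate
    sum is odd, re-round the worst coordinate in the opposite direction."""
    x = np.asarray(x, dtype=float)
    f = np.round(x)
    if int(round(f.sum())) % 2 != 0:
        i = int(np.argmax(np.abs(x - f)))     # coordinate with largest rounding error
        step = np.sign(x[i] - f[i])
        f[i] += step if step != 0 else 1.0    # break exact ties upward
    return f
```

Coordinating the search points with such a quantizer bounds how closely successive function evaluations can cluster at each refinement level, which is what accelerates convergence in the grid- and lattice-based variants.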
Related efforts in this project include:

gradient-based acceleration of the (globally-convergent) algorithms in the ΔDOGS family, combining derivative-free global exploration with derivative-based local refinement [abb17],

the development of the Multivariate Adaptive Polyharmonic Splines (MAPS) method, which scales the parameter domain under consideration based on the variation seen in the data computed thus far in the optimization, thereby obtaining significantly smoother interpolants [abmb17],

the judicious use of MAPS to identify and, for a number of iterations, neglect the less significant parameters in a given optimization problem, thereby speeding convergence [ABB_reduction],

the introduction of some practical data-based approaches to Uncertainty Quantification (UQ), and

the development of methods to optimize an objective function within a feasible domain defined only via binary oracle calls (feasible or not), rather than by computable constraint functions.
Ongoing practical applications of this effort being explored in our lab include:

the design of airfoils and hydrofoils [abmb17], and

the design of low-storage implicit/explicit Runge–Kutta (IMEX-RK) schemes for high-performance computing (HPC) problems such as the numerical simulation of turbulence [ACBB_IMEXRK_via_DDOGS].