NLopt
1 Installation
2 High Level Interface
minimize/flvector
maximize/flvector
optimize/flvector
minimize/f64vector
maximize/f64vector
optimize/f64vector
minimize/vector
maximize/vector
optimize/vector
minimize/list
maximize/list
optimize/list
minimize/args
maximize/args
optimize/args
3 Safe Interface
3.1 Basics
create
copy
get-algorithm
get-dimension
optimize
nlopt-opt?
3.2 Constraints
set-lower-bounds
set-upper-bounds
set-lower-bounds1
set-upper-bounds1
get-lower-bounds
get-upper-bounds
remove-inequality-constraints
add-inequality-constraint
remove-equality-constraints
add-equality-constraint
3.3 Stopping Criteria
set-stopval
get-stopval
set-ftol-rel
get-ftol-rel
set-ftol-abs
get-ftol-abs
set-xtol-rel
get-xtol-rel
set-xtol-abs1
set-xtol-abs
get-xtol-abs
set-maxeval
get-maxeval
set-maxtime
get-maxtime
force-stop
set-force-stop
get-force-stop
3.4 Algorithm-Specific Parameters
set-local-optimizer
set-population
get-population
set-vector-storage
get-vector-storage
set-default-initial-step
set-initial-step1
set-initial-step
get-initial-step
4 Unsafe Interface
4.1 Differences in the Basics
optimize
set-min-objective
set-max-objective
4.2 Differences in the Constraints
set-lower-bounds
set-upper-bounds
get-lower-bounds
get-upper-bounds
add-inequality-constraint
add-equality-constraint
4.3 Differences in the Stopping Criteria
set-xtol-abs
get-xtol-abs
4.4 Differences in the Algorithm-Specific Parameters
set-default-initial-step
set-initial-step
get-initial-step
5 Algorithms
5.1 Global Optimization
GN_DIRECT
GN_DIRECT_L
GLOBAL_DIRECT_L_RAND
GLOBAL_DIRECT_NOSCAL
GLOBAL_DIRECT_L_NOSCAL
GLOBAL_DIRECT_L_RAND_NOSCAL
GN_ORIG_DIRECT
GN_ORIG_DIRECT_L
GN_CRS2_LM
GN_MLSL
GD_MLSL
GN_MLSL_LDS
GD_MLSL_LDS
G_MLSL
G_MLSL_LDS
5.2 Local derivative-free optimization
5.3 Local gradient-based optimization
5.4 Augmented Lagrangian algorithm
AUGLAG
AUGLAG_EQ
Bibliography

NLopt

Jay Kominek <[email protected]>

 (require nlopt) package: nlopt

I consider this package to be in a somewhat beta state. I don’t yet promise to keep the API from changing. It needs some feedback yet. Feel free to comment on it.

This package provides a wrapper for the NLopt nonlinear optimization package[NLopt], which is a common interface for a number of different optimization routines.

The nlopt module currently reexports the contents of nlopt/highlevel. The interface provided by nlopt/highlevel will be more stable than the interface provided by nlopt. The goal of nlopt will be to facilitate interactive programming, scripting, and generally getting things done quickly with a minimal amount of typing. If nlopt/highlevel is no longer the best way to do that, nlopt will provide different stuff which better satisfies those goals, while nlopt/highlevel will be left alone. (Perhaps nlopt would reexport a hypothetical new "nlopt/highlevel2".)

1 Installation

This Racket wrapper was developed and tested against NLopt 2.4.2; I’d expect anything later in the 2.4.x series to suffice as well, but there are no guarantees.

You’ll need to get the appropriate NLopt shared library for your Racket installation. Precompiled Windows binaries are available from the NLopt website; make sure you download the appropriate bitness.

Many Linux distributions provide NLopt. Look for libnlopt0 or similar.

In FreeBSD, NLopt is available in the ports collection as nlopt.

And on Mac OS X, I believe you’re stuck compiling it for yourself.

If you have to compile it yourself, or on Windows, where you’re handed a DLL, you’ll need to ensure that the shared library ends up somewhere Racket will be able to find it with minimal effort. Placing it in the same directory as your code might work, or you can modify your PATH or LD_LIBRARY_PATH environment variables to point to it.

2 High Level Interface

 (require nlopt/highlevel) package: nlopt

This is the most unstable part of the package. Not only will things here change, they might not even work right now.

procedure

(minimize/flvector ...)  ...

procedure

(maximize/flvector ...)  ...

procedure

(optimize/flvector fun 
  x0 
  #:maximize maximize 
  #:minimize minimize 
  #:method method 
  #:jac jac 
  #:bounds bounds 
  #:ineq-constraints ineq-constraints 
  #:eq-constraints eq-constraints 
  #:tolerance tolerance 
  #:epsilon epsilon 
  #:maxeval maxeval 
  #:maxtime maxtime) 
  
real? flvector?
  fun : (-> flvector? any/c flonum?)
  x0 : flvector?
  maximize : boolean?
  minimize : boolean?
  method : (or/c symbol? #f)
  jac : (or/c (-> flonum? flvector? flvector? any/c) #f)
  bounds : (or/c (sequence/c (pair/c real? real?)) #f)
  ineq-constraints : 
(or/c #f
  (sequence/c
   (or/c (-> flvector? any/c flonum?)
         (cons/c
          (-> flvector? any/c flonum?)
          (-> flonum? flvector? flvector? any/c)))))
  eq-constraints : 
(or/c #f
  (sequence/c
   (or/c (-> flvector? any/c flonum?)
         (cons/c
          (-> flvector? any/c flonum?)
          (-> flonum? flvector? flvector? any/c)))))
  tolerance : real?
  epsilon : real?
  maxeval : natural-number/c
  maxtime : (and/c positive? real?)
This super-convenient procedure does pretty much everything for you. minimize/flvector and maximize/flvector behave the same as optimize/flvector, and take all the same arguments, except for #:minimize and #:maximize.

These /flvector variants require flonum? values. (Which is largely enforced by the flvectors themselves.) /flvector objective functions should be of the form (fun x) where x is an flvector? and the Jacobians should be of the form (jac y x grad) where y is (fun x), and grad is an flvector? to be populated with the gradient.

fun is the procedure that will be optimized. It shouldn’t be invoked significantly more than maxeval times, over maxtime seconds. x0 is your initial guess for the optimization; some algorithms are more sensitive to the quality of your initial guess than others. data will be passed to every invocation of fun or jac. You may use force-stop inside of fun.

#:maximize and #:minimize determine whether a maximization or a minimization will be performed. Exactly one of them must be #t. Anything else will result in an exception being raised.

method is a symbol indicating which optimization algorithm to run. See the Algorithms section for your options. If you omit it, or set it to #f, an algorithm will be chosen automatically based on jac, bounds, ineq-constraints and eq-constraints. It should run without error; performance is not guaranteed.

jac is the Jacobian of fun. If you omit it, or supply #f then a very simple approximation will be constructed, by determining how much fun varies when the current x is varied by epsilon. If you provide a Jacobian, or it is not used by the algorithm you select, then epsilon is unused.

bounds may be #f in which case no upper or lower bounds are applied. If it isn’t #f then it should be a sequence of the same length as x0, each element in the sequence should be a pair. The car of each pair will be the lower bound, and the cdr the upper bound. You may supply +max.0 and -max.0 if you don’t wish to bound a dimension above or below, respectively.

ineq-constraints and eq-constraints are sequences of constraint functions (or #f). They must have the same interface as an objective function. An inequality constraint f will constrain the optimization so that (<= (f x _) 0.0) is #t. An equality constraint requires that (= (f x _) 0.0) remain #t. You may provide just the constraint function itself, or a pair, containing the constraint function in car, and the Jacobian of the constraint function in cdr.
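For instance, minimizing x² + y² subject to x + y ≥ 1 can be expressed with a single inequality constraint g(x) = 1 − x − y ≤ 0. This is a sketch, assuming the /flvector constraint interface described above (a two-argument (g x data) constraint supplied in a list):

```racket
#lang racket
(require nlopt/highlevel racket/flonum)

;; Inequality constraint: (<= (g x data) 0.0) must hold,
;; i.e. 1 - x - y <= 0, so x + y >= 1.
(define (g x data)
  (fl- 1.0 (fl+ (flvector-ref x 0) (flvector-ref x 1))))

;; Minimize x^2 + y^2 over the constrained region; the minimum
;; should land near (0.5, 0.5).
(minimize/flvector
 (lambda (x data)
   (fl+ (fl* (flvector-ref x 0) (flvector-ref x 0))
        (fl* (flvector-ref x 1) (flvector-ref x 1))))
 (flvector 1.0 1.0)
 #:ineq-constraints (list g)
 #:bounds '((-10.0 . 10.0) (-10.0 . 10.0)))
```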

tolerance is not currently used. Sorry!

This procedure’s interface was based on scipy’s optimize function.
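As a concrete sketch of the /flvector interface (assuming objectives of the form (fun x data) and Jacobians of the form (jac y x grad), as described above), here is a one-dimensional minimization of (x − 3)²:

```racket
#lang racket
(require nlopt/highlevel racket/flonum)

;; Objective: (x - 3)^2, minimized at x = 3.
(define (f x data)
  (let ([v (fl- (flvector-ref x 0) 3.0)])
    (fl* v v)))

;; Jacobian: receives the precomputed y = (f x data) and
;; writes the gradient into grad.
(define (jac y x grad)
  (flvector-set! grad 0 (fl* 2.0 (fl- (flvector-ref x 0) 3.0))))

(minimize/flvector f (flvector 0.0)
                   #:jac jac
                   #:bounds '((-10.0 . 10.0))
                   #:maxeval 1000)
;; should converge toward x = 3.0
```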

procedure

(minimize/f64vector ...)  ...

procedure

(maximize/f64vector ...)  ...

procedure

(optimize/f64vector ...)  ...

Takes different arguments. Needs docs.

The /f64vector variants should perform about as well as the /flvector variants. They accept any real? values. /f64vector objective functions should be of the form (fun x) where x is an f64vector? and the Jacobians should be of the form (jac y x grad) where y is (fun x), and grad is an f64vector? to be populated with the gradient.

procedure

(minimize/vector ...)  ...

procedure

(maximize/vector ...)  ...

procedure

(optimize/vector ...)  ...

Takes different arguments. Needs docs.

The /vector variants will be less efficient than the /flvector variants. They accept any real? values. /vector objective functions should be of the form (fun x) where x is a vector? and the Jacobians should be of the form (jac y x grad) where y is (fun x), and grad is a vector? to be populated with the gradient.

procedure

(minimize/list ...)  ...

procedure

(maximize/list ...)  ...

procedure

(optimize/list ...)  ...

Takes different arguments. Needs docs.

The /list variants will be the slowest. Like the /vector variants, they will handle any real? values. /list objective functions should be of the form (fun x) where x is a list? and the Jacobians should be of the form (jac y x) where y is (fun x). They should return the gradient as a list?.

procedure

(minimize/args ...)  ...

procedure

(maximize/args ...)  ...

procedure

(optimize/args ...)  ...

Takes different arguments. Needs docs.

The /args variants will be among the slowest. Like the /vector and /list variants, they will handle any real? values. /args objective functions should be of the form (fun x ...) where x ... are the elements of the x vector, and the Jacobians should be of the form (jac x ...). They should return the gradient as a list?. Note that /args Jacobian functions do not receive the precomputed y. This allows you to quickly set up an optimization problem using existing mathematical functions:

(maximize/args sin
               '(0.0)
               #:jac cos
               #:bounds '((-inf.0 0.0)))

3 Safe Interface

 (require nlopt/safe) package: nlopt

This module is the safe, fully-contracted version of the interface to the C library. For the unsafe, contractless version, see nlopt/unsafe.

3.1 Basics

procedure

(create algorithm dimension)  nlopt-opt?

  algorithm : symbol?
  dimension : (and/c natural-number/c positive?)
Creates a new NLopt options structure. The algorithm and dimension of the optimization problem cannot be changed later. Everything else can be.

The general pattern for using this library is to create an options structure, apply the various setup options to it (making sure to include a stopping condition!), and then run optimize.
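That pattern might look like the following sketch. The algorithm symbol 'LN_NELDERMEAD and the (n x grad data) callback shape are assumptions; set-min-objective is documented alongside its unsafe counterpart in the Unsafe Interface section.

```racket
#lang racket
(require nlopt/safe racket/flonum)

;; create -> configure -> optimize.
(define opt (create 'LN_NELDERMEAD 1))   ; algorithm symbol is an assumption
(set-xtol-rel opt 1e-8)                  ; don't forget a stopping condition!
(set-min-objective opt
                   (lambda (n x grad data)
                     (let ([v (fl- (flvector-ref x 0) 3.0)])
                       (fl* v v)))
                   #f)
(define x (flvector 0.0))                ; initial guess, updated in place
(define-values (res f) (optimize opt x))
```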

procedure

(copy opt)  nlopt-opt?

  opt : nlopt-opt?
Copies an existing options object. Racket objects stored as data arguments for functions are not copied.

procedure

(get-algorithm opt)  symbol?

  opt : nlopt-opt?
Returns the algorithm being used by the options structure.

procedure

(get-dimension opt)  (and/c natural-number/c positive?)

  opt : nlopt-opt?
Returns the dimension the options structure is set up to handle.

procedure

(optimize opt x)  
[res symbol?] [f flonum?]
  opt : nlopt-opt?
  x : flvector?
Runs the optimization problem, with an initial guess provided in x. The status of the optimization is returned in res. If it was successful, x will contain the optimized values of the parameters, and f will be the corresponding value of the objective function.

x must be at least as large as the dimension of opt.

procedure

(nlopt-opt? v)  boolean?

  v : any/c
Returns #t if v is a NLopt options structure, #f otherwise.

3.2 Constraints

procedure

(set-lower-bounds opt bounds)  nlopt-result/c

  opt : nlopt-opt?
  bounds : (flvector/length? (get-dimension opt))

procedure

(set-upper-bounds opt bounds)  nlopt-result/c

  opt : nlopt-opt?
  bounds : (flvector/length? (get-dimension opt))

procedure

(set-lower-bounds1 opt lower-bound)  nlopt-result/c

  opt : nlopt-opt?
  lower-bound : real?
Sets all of the lower bounds to the same value, lower-bound.

procedure

(set-upper-bounds1 opt upper-bound)  nlopt-result/c

  opt : nlopt-opt?
  upper-bound : real?
Sets all of the upper bounds to the same value, upper-bound.

procedure

(get-lower-bounds opt)

  
[res nlopt-result/c]
[bounds (flvector/length? (get-dimension opt))]
  opt : nlopt-opt?

procedure

(get-upper-bounds opt)

  
[res nlopt-result/c]
[bounds (flvector/length? (get-dimension opt))]
  opt : nlopt-opt?

procedure

(remove-inequality-constraints opt)  nlopt-result/c

  opt : nlopt-opt?
Removes all inequality constraints.

procedure

(add-inequality-constraint opt    
  f    
  data    
  tolerance)  nlopt-result/c
  opt : nlopt-opt?
  f : 
(-> (=/c (get-dimension opt))
    flvector?
    (or/c flvector? #f)
    any/c
    flonum?)
  data : any/c
  tolerance : real?
The optimization will be constrained such that (<= (f x data) 0.0) is #t.
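Given an options structure opt for a 2-dimensional problem, a constraint x + y ≥ 1 might be added as in this fragment (a sketch assuming the contract above, where the constraint receives the dimension, x, an optional gradient flvector, and the data argument):

```racket
;; Constrain so that 1 - x0 - x1 <= 0, i.e. x0 + x1 >= 1.
;; opt is assumed to be a 2-dimensional nlopt-opt? created earlier.
(add-inequality-constraint
 opt
 (lambda (n x grad data)
   (when grad                       ; gradient requested by the algorithm
     (flvector-set! grad 0 -1.0)
     (flvector-set! grad 1 -1.0))
   (fl- 1.0 (fl+ (flvector-ref x 0) (flvector-ref x 1))))
 #f                                 ; data
 1e-8)                              ; tolerance
```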

procedure

(remove-equality-constraints opt)  nlopt-result/c

  opt : nlopt-opt?
Removes all equality constraints.

procedure

(add-equality-constraint opt    
  f    
  data    
  tolerance)  nlopt-result/c
  opt : nlopt-opt?
  f : 
(-> (=/c (get-dimension opt))
    flvector?
    (or/c flvector? #f)
    any/c
    flonum?)
  data : any/c
  tolerance : real?
The optimization will be constrained such that (= (f x data) 0.0) is #t.

3.3 Stopping Criteria

procedure

(set-stopval opt stopval)  nlopt-result/c

  opt : nlopt-opt?
  stopval : real?

procedure

(get-stopval opt)  flonum?

  opt : nlopt-opt?

procedure

(set-ftol-rel opt ftol-rel)  nlopt-result/c

  opt : nlopt-opt?
  ftol-rel : real?

procedure

(get-ftol-rel opt)  flonum?

  opt : nlopt-opt?

procedure

(set-ftol-abs opt ftol-abs)  nlopt-result/c

  opt : nlopt-opt?
  ftol-abs : real?

procedure

(get-ftol-abs opt)  flonum?

  opt : nlopt-opt?

procedure

(set-xtol-rel opt xtol-rel)  nlopt-result/c

  opt : nlopt-opt?
  xtol-rel : real?

procedure

(get-xtol-rel opt)  flonum?

  opt : nlopt-opt?

procedure

(set-xtol-abs1 opt xtol-abs)  nlopt-result/c

  opt : nlopt-opt?
  xtol-abs : real?

procedure

(set-xtol-abs opt xtols)  nlopt-result/c

  opt : nlopt-opt?
  xtols : (flvector/length? (get-dimension opt))

procedure

(get-xtol-abs opt)  
nlopt-result/c
(flvector/length? (get-dimension opt))
  opt : nlopt-opt?

procedure

(set-maxeval opt maxeval)  nlopt-result/c

  opt : nlopt-opt?
  maxeval : natural-number/c

procedure

(get-maxeval opt)  natural-number/c

  opt : nlopt-opt?

procedure

(set-maxtime opt maxtime)  nlopt-result/c

  opt : nlopt-opt?
  maxtime : real?

procedure

(get-maxtime opt)  flonum?

  opt : nlopt-opt?

procedure

(force-stop opt)  nlopt-result/c

  opt : nlopt-opt?
When called within an objective function evaluation, the optimization will stop at the next opportunity. Calling set-force-stop will save an integer value which can be retrieved after optimize returns with get-force-stop.
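For example, an objective might abort the optimization early once it sees a good-enough value. This is a sketch; opt is the enclosing options structure, and the threshold and stored value are arbitrary:

```racket
;; Stop as soon as the objective drops below a threshold.
(set-min-objective opt
                   (lambda (n x grad data)
                     (define y (fl* (flvector-ref x 0) (flvector-ref x 0)))
                     (when (fl< y 1e-10)
                       (set-force-stop opt 42)  ; retrievable via get-force-stop
                       (force-stop opt))
                     y)
                   #f)
```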

procedure

(set-force-stop opt value)  nlopt-result/c

  opt : nlopt-opt?
  value : integer?
value is stored within the optimization options, and can be retrieved by get-force-stop later.

procedure

(get-force-stop opt)  integer?

  opt : nlopt-opt?
Retrieves the value stored by set-force-stop.

3.4 Algorithm-Specific Parameters

procedure

(set-local-optimizer opt local-opt)  nlopt-result/c

  opt : nlopt-opt?
  local-opt : nlopt-opt?

procedure

(set-population opt pop)  nlopt-result/c

  opt : nlopt-opt?
  pop : natural-number/c

procedure

(get-population opt)  natural-number/c

  opt : nlopt-opt?

procedure

(set-vector-storage opt vectors)  nlopt-result/c

  opt : nlopt-opt?
  vectors : natural-number/c

procedure

(get-vector-storage opt)  natural-number/c

  opt : nlopt-opt?

procedure

(set-default-initial-step opt bounds)  nlopt-result/c

  opt : nlopt-opt?
  bounds : (flvector/length? (get-dimension opt))

procedure

(set-initial-step1 opt initial-step)  nlopt-result/c

  opt : nlopt-opt?
  initial-step : real?

procedure

(set-initial-step opt bounds)  nlopt-result/c

  opt : nlopt-opt?
  bounds : (flvector/length? (get-dimension opt))

procedure

(get-initial-step opt)  
nlopt-result/c
(flvector/length? (get-dimension opt))
  opt : nlopt-opt?

4 Unsafe Interface

 (require nlopt/unsafe) package: nlopt

This module is the unsafe, contractless version of the interface to the C library. For the safe, fully-contracted version, see nlopt/safe.

The one bit of safety provided by this module is that nlopt_opt structures will be cleaned up properly, and Racket values passed to NLopt procedures will be held onto until NLopt no longer refers to them.

The API of this module is probably the most stable thing in the whole package, as it is a direct mapping of the C API.

Failure to obey any of the requirements mentioned below will probably cause Racket to crash.

4.1 Differences in the Basics

procedure

(optimize opt x)  
[res symbol?] [f flonum?]
  opt : nlopt-opt?
  x : cpointer?
As with the safe version of optimize, but x is provided as a bare pointer. It must point to a block of memory containing (get-dimension opt) double-precision floats.

procedure

(set-min-objective opt f data)  symbol?

  opt : nlopt-opt?
  f : 
(-> natural-number/c
    cpointer?
    (or/c cpointer? #f)
    any/c
    flonum?)
  data : any/c
As with the safe version of set-min-objective, but the objective function f receives only bare pointers instead of flvectors.
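A sketch of the pointer-based callback, reading doubles via ptr-ref from ffi/unsafe (the algorithm symbol 'LN_NELDERMEAD is an assumption):

```racket
#lang racket
(require nlopt/unsafe ffi/unsafe)

(define opt (create 'LN_NELDERMEAD 2))
(set-xtol-rel opt 1e-8)
(set-min-objective opt
                   (lambda (n x grad data)
                     ;; x is a bare pointer to n doubles; grad may be #f
                     ;; for derivative-free algorithms.
                     (let ([x0 (ptr-ref x _double 0)]
                           [x1 (ptr-ref x _double 1)])
                       (+ (* x0 x0) (* x1 x1))))
                   #f)
;; x must point to (get-dimension opt) doubles.
(define x (malloc _double 2 'atomic))
(ptr-set! x _double 0 1.0)
(ptr-set! x _double 1 1.0)
(define-values (res f) (optimize opt x))
```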

procedure

(set-max-objective opt f data)  symbol?

  opt : nlopt-opt?
  f : 
(-> natural-number/c
    cpointer?
    (or/c cpointer? #f)
    any/c
    flonum?)
  data : any/c
As with the safe version of set-max-objective, but the objective function f receives only bare pointers instead of flvectors.

4.2 Differences in the Constraints

procedure

(set-lower-bounds opt bounds)  symbol?

  opt : nlopt-opt?
  bounds : cpointer?
As with the safe version of set-lower-bounds, but bounds is provided as a bare pointer. It must point to a block of memory containing (get-dimension opt) double-precision floats.

procedure

(set-upper-bounds opt bounds)  symbol?

  opt : nlopt-opt?
  bounds : cpointer?
As with the safe version of set-upper-bounds, but bounds is provided as a bare pointer. It must point to a block of memory containing (get-dimension opt) double-precision floats.

procedure

(get-lower-bounds opt bounds)  symbol?

  opt : nlopt-opt?
  bounds : cpointer?
As with the safe version of get-lower-bounds, but bounds is provided as a bare pointer. It must point to a block of memory large enough to contain (get-dimension opt) double-precision floats.

procedure

(get-upper-bounds opt bounds)  symbol?

  opt : nlopt-opt?
  bounds : cpointer?
As with the safe version of get-upper-bounds, but bounds is provided as a bare pointer. It must point to a block of memory large enough to contain (get-dimension opt) double-precision floats.

procedure

(add-inequality-constraint opt    
  f    
  data    
  tolerance)  symbol?
  opt : nlopt-opt?
  f : 
(-> nlopt-opt?
    cpointer?
    (or/c cpointer? #f)
    any/c
    flonum?)
  data : any/c
  tolerance : real?
As with the safe version of add-inequality-constraint, but the constraint function receives only bare pointers instead of flvectors.

procedure

(add-equality-constraint opt    
  f    
  data    
  tolerance)  symbol?
  opt : nlopt-opt?
  f : 
(-> nlopt-opt?
    cpointer?
    (or/c cpointer? #f)
    any/c
    flonum?)
  data : any/c
  tolerance : real?
As with the safe version of add-equality-constraint, but the constraint function receives only bare pointers instead of flvectors.

4.3 Differences in the Stopping Criteria

procedure

(set-xtol-abs opt xtols)  symbol?

  opt : nlopt-opt?
  xtols : cpointer?
As with the safe version of set-xtol-abs, but xtols is provided as a bare pointer. It must point to a block of memory containing (get-dimension opt) double-precision floats.

procedure

(get-xtol-abs opt bounds)  symbol?

  opt : nlopt-opt?
  bounds : cpointer?
As with the safe version of get-xtol-abs, but bounds is provided as a bare pointer. It must point to a block of memory large enough to contain (get-dimension opt) double-precision floats.

4.4 Differences in the Algorithm-Specific Parameters

procedure

(set-default-initial-step opt stepsizes)  symbol?

  opt : nlopt-opt?
  stepsizes : cpointer?
As with the safe version of set-default-initial-step, but stepsizes is provided as a bare pointer. It must point to a block of memory containing (get-dimension opt) double-precision floats.

procedure

(set-initial-step opt stepsizes)  symbol?

  opt : nlopt-opt?
  stepsizes : cpointer?
As with the safe version of set-initial-step, but stepsizes is provided as a bare pointer. It must point to a block of memory containing (get-dimension opt) double-precision floats.

procedure

(get-initial-step opt stepsizes)  symbol?

  opt : nlopt-opt?
  stepsizes : cpointer?
As with the safe version of get-initial-step, but stepsizes is provided as a bare pointer. It must point to a block of memory large enough to contain (get-dimension opt) double-precision floats.

5 Algorithms

This section is a rough sketch of what I intend it to be when the package is complete: Categorize the algorithms, list their Racket names, briefly indicate what they are, provide citations, and include DOI in the bibliography.

This section is not intended to completely describe the algorithms. Rather, the goal is to quickly give you a sense of what your options are. That way, if you have simple needs, you won’t have to consult external documentation. If you anticipate needing to know the exact details of the optimization algorithm you use, consider consulting the NLopt website.

5.1 Global Optimization

value

GN_DIRECT : symbol?

value

GN_DIRECT_L : symbol?

value

GLOBAL_DIRECT_L_RAND : symbol?

value

GLOBAL_DIRECT_NOSCAL : symbol?

value

GLOBAL_DIRECT_L_NOSCAL : symbol?

value

GLOBAL_DIRECT_L_RAND_NOSCAL : symbol?

value

GN_ORIG_DIRECT : symbol?

value

GN_ORIG_DIRECT_L : symbol?

GN_DIRECT is the DIviding RECTangles algorithm described in [Jones93]. GN_DIRECT_L is the "locally biased" variant proposed in [Gablonsky01]. They are deterministic search algorithms based on systematic division of the search domain into smaller and smaller hyperrectangles. The NLopt documentation suggests starting with GN_DIRECT_L first.

value

GN_CRS2_LM : symbol?

GN_CRS2_LM is Controlled Random Search with local mutation. It is one of the algorithms which makes use of set-population. The NLopt version is described in [Kaelo06].

value

GN_MLSL : symbol?

value

GD_MLSL : symbol?

value

GN_MLSL_LDS : symbol?

value

GD_MLSL_LDS : symbol?

value

G_MLSL : symbol?

value

G_MLSL_LDS : symbol?

These are all variations on the "Multi-Level Single-Linkage" (MLSL) algorithm proposed in [Kan87-1] and [Kan87-2].

5.2 Local derivative-free optimization

5.3 Local gradient-based optimization

5.4 Augmented Lagrangian algorithm

value

AUGLAG : symbol?

value

AUGLAG_EQ : symbol?

Requires the specification of a subsidiary optimization algorithm via set-local-optimizer. AUGLAG converts the objective function and all constraints into a single function, and uses the subsidiary algorithm to optimize it. AUGLAG_EQ only converts the objective and equality constraints; the inequality constraints are passed through to the subsidiary algorithm. (Which must be able to handle them.)
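Setting that up might look like the following sketch (the algorithm symbols 'AUGLAG and 'LN_NELDERMEAD are assumptions):

```racket
#lang racket
(require nlopt/safe)

;; The subsidiary optimizer does the actual work; AUGLAG folds the
;; constraints into the single objective it hands over.
(define local (create 'LN_NELDERMEAD 2))
(set-xtol-rel local 1e-6)

(define opt (create 'AUGLAG 2))
(set-local-optimizer opt local)
(set-xtol-rel opt 1e-6)
```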

Described in [Conn91] and [Birgin08].

Bibliography

[NLopt] “The NLopt nonlinear-optimization package.” http://ab-initio.mit.edu/nlopt
[Jones93] D. R. Jones, C. D. Perttunen, and B. E. Stuckman, “Lipschitzian optimization without the Lipschitz constant,” J. Optimization Theory and Applications 79(1), pp. 157–181, 1993. http://dx.doi.org/10.1007/BF00941892
[Gablonsky01] J. M. Gablonsky and C. T. Kelley, “A locally-biased form of the DIRECT algorithm,” Journal of Global Optimization 21(1), pp. 27–37, 2001. http://dx.doi.org/10.1023/A:1017930332101
[Kaelo06] P. Kaelo and M. M. Ali, “Some variants of the controlled random search algorithm for global optimization,” J. Optim. Theory Appl. 130(2), pp. 253–264, 2006. http://dx.doi.org/10.1007/s10957-006-9101-0
[Kan87-1] A. H. G. Rinnooy Kan and G. T. Timmer, “Stochastic global optimization methods part I: Clustering methods,” Mathematical Programming 39(1), pp. 27–56, 1987. http://dx.doi.org/10.1007/BF02592070
[Kan87-2] A. H. G. Rinnooy Kan and G. T. Timmer, “Stochastic global optimization methods part II: Multi level methods,” Mathematical Programming 39(1), pp. 57–78, 1987. http://dx.doi.org/10.1007/BF02592071
[Conn91] Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint, “A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds,” SIAM J. Numer. Anal. 28(2), pp. 545–572, 1991. http://dx.doi.org/10.1137/0728030
[Birgin08] E. G. Birgin and J. M. Martínez, “Improving ultimate convergence of an augmented Lagrangian method,” Optimization Methods and Software 23(2), pp. 177–195, 2008. http://dx.doi.org/10.1080/10556780701577730