Rationale for black-box optimization

Since Julia has powerful automatic-differentiation libraries (such as Zygote.jl) and useful optimization packages (convex or otherwise) such as JuMP.jl, one may think black-box optimization is hardly ever useful. However, even within the pure-Julia ecosystem, the function at hand may have a noisy/spiky derivative, or may take too long to compute, never mind its derivatives.
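
As a hypothetical illustration (this objective is made up for this post), consider a landscape that is a simple bowl overall, but whose derivative oscillates wildly:

# a simple bowl plus a tiny high-frequency ripple
f(x) = (x[1] - 1)^2 + 0.01 * sin(1000 * x[1])

# the ripple contributes 0.01 * 1000 * cos(1000 * x[1]) = 10 * cos(...)
# to the derivative, swamping the smooth 2 * (x[1] - 1) term near the
# minimum; gradient-based methods chase the noise, a black-box search does not care.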

Another place black-box optimization comes in handy is when dealing with external programs, especially closed-source or legacy (read: fossil) software.

A short example

Say we have some C code with the following function:

// ./test.c
#include <math.h>

double parab(double x){
    return pow((3 + x), 2);  // parabola with its minimum at x = -3
}

We can compile it into a shared library:

gcc  -c -Wall -fPIC test.c -o test.o
gcc  -shared -fPIC -o test.so test.o

# gives us a ./test.so
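
If you want to double-check that Julia can load the library before calling into it, the Libdl standard library provides dlopen:

julia> using Libdl

julia> handle = dlopen("./test.so");  # throws if the library cannot be loaded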

We can use @ccall (available since Julia 1.5) to access this function:

julia> @ccall "./test".parab(2.0::Float64)::Float64
25.0
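
For reference, the equivalent call in the older ccall syntax looks like this:

julia> ccall((:parab, "./test"), Float64, (Float64,), 2.0)
25.0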

Now, we wrap this into a function:[1]

julia> parab_c(x) = @ccall "./test".parab(x[1]::Float64)::Float64
parab_c (generic function with 1 method)
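
A quick sanity check against the direct call above:

julia> parab_c([2.0])
25.0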

then we can black-box optimize it!

julia> using BlackBoxOptim

julia> res = bboptimize(parab_c; SearchRange=(-5.0, 5.0), NumDimensions=1)
...
julia> best_candidate(res)
1-element Array{Float64,1}:
 -3.0
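
BlackBoxOptim.jl also reports the objective value at the best candidate:

julia> best_fitness(res)  # should be ≈ 0.0 for this parabola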

In fact, we don't really care how the external program is invoked, as long as it reliably returns a number; you might as well parse the stdout of some weird external program into a Float64 and feed that into bboptimize.
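A minimal sketch of that idea, assuming a hypothetical ./simulator executable that takes a parameter on its command line and prints a single objective value to stdout:

using BlackBoxOptim

# wrap the external program: run it, capture stdout, parse the result
function external_objective(x)
    out = read(`./simulator $(x[1])`, String)  # capture stdout as a String
    return parse(Float64, strip(out))          # turn it into a fitness value
end

res = bboptimize(external_objective; SearchRange=(-5.0, 5.0), NumDimensions=1)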

[1] BlackBoxOptim.jl expects the objective function to take an array (a vector of parameters), even if your function is conceptually a scalar one.