Ipopt not using ComputeDerivative? -> what should I set? null? 0?

Apr 3, 2014 at 10:05 AM
Hi all

I am trying to get the IpoptOptimizer working for portfolio optimization.
In this type of problem we do not have the derivative in functional form, so I chose Ipopt over BFGS since, to my knowledge, it does not need the derivatives.

Here's my question: what should I return from the method ComputeDerivative?
Since returning null leads to a NullReferenceException, I tried the following, but I doubt that it is correct:

    protected override Function ComputeDerivative(Variable variable)
    {
        Function ret = new ConstantFunction(0);
        return ret;
    }

Does anyone have similar experience and can help me out?
Apr 4, 2014 at 9:58 AM
Edited Apr 4, 2014 at 9:58 AM
Actually, IPOPT itself requires first and second order derivatives of the objective and nonlinear constraint functions. For the second-order derivatives you can for example use a limited-memory Hessian approximation, which is controlled with one of the Ipopt options (can't remember which one off the top of my head).
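(For reference, I believe the option in question is Ipopt's hessian_approximation setting, which can be placed in an ipopt.opt file in the working directory; csipopt should also let you set it programmatically via something like problem.AddOption("hessian_approximation", "limited-memory") — check the csipopt examples for the exact call.)

```
# ipopt.opt — Ipopt reads this file from the working directory at startup.
# Replaces exact second derivatives with a limited-memory quasi-Newton update:
hessian_approximation limited-memory
```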

If I am not mistaken, FuncLib provides automatic differentiation, so you should be able to use that functionality if your optimization problem is not too complicated.

If you want to avoid calculating derivatives altogether, you could for example take a look at the derivative-free optimizer COBYLA2, written by Professor Michael Powell. I have translated COBYLA2 to C#; you can find it here: https://github.com/cureos/cscobyla

Are your constraint functions linear? In that case you could also consider Michael Powell's new algorithm for nonlinear-objective, linearly constrained optimization, LINCOA. I have also ported this algorithm to C#; you'll find it here: https://github.com/cureos/csnumerics

Anders @ Cureos
Apr 8, 2014 at 11:03 AM
Edited Apr 8, 2014 at 11:08 AM
The idea in the framework is that you should not need to specify derivatives yourself (i.e. implement the abstract class Function), but instead express functions as compositions of primitive functions and operators. This is why FuncLib defines overloaded operators for +, -, *, / etc. as well as primitive functions. The framework can then apply the rules of differentiation (mostly the product and chain rules) to work out derivatives automatically. However, if you choose to define functions yourself, you should define mathematically consistent derivatives, not just return 0, as you have done in your code.
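To make the principle concrete, here is a minimal self-contained sketch of how operator overloading lets composed expressions differentiate themselves. This is an illustration only, not FuncLib's actual classes (the names Expr, X, Const, Product, Sum are all hypothetical):

```csharp
using System;

// Each node knows its value and how to build its own symbolic derivative,
// so any expression composed via overloaded operators gets a derivative
// for free from the differentiation rules.
abstract class Expr
{
    public abstract double Value(double x);   // evaluate at x
    public abstract Expr Derivative();        // symbolic derivative w.r.t. x

    public static Expr operator *(Expr f, Expr g) => new Product(f, g);
    public static Expr operator +(Expr f, Expr g) => new Sum(f, g);
}

// The independent variable x; its derivative is the constant 1.
class X : Expr
{
    public override double Value(double x) => x;
    public override Expr Derivative() => new Const(1.0);
}

// A constant; its derivative is 0.
class Const : Expr
{
    private readonly double c;
    public Const(double c) { this.c = c; }
    public override double Value(double x) => c;
    public override Expr Derivative() => new Const(0.0);
}

// Product rule: (f * g)' = f' * g + f * g'
class Product : Expr
{
    private readonly Expr f, g;
    public Product(Expr f, Expr g) { this.f = f; this.g = g; }
    public override double Value(double x) => f.Value(x) * g.Value(x);
    public override Expr Derivative() =>
        new Sum(new Product(f.Derivative(), g), new Product(f, g.Derivative()));
}

// Sum rule: (f + g)' = f' + g'
class Sum : Expr
{
    private readonly Expr f, g;
    public Sum(Expr f, Expr g) { this.f = f; this.g = g; }
    public override double Value(double x) => f.Value(x) + g.Value(x);
    public override Expr Derivative() => new Sum(f.Derivative(), g.Derivative());
}
```

For example, with f = new X() * new X(), the product rule gives f.Derivative().Value(3.0) == 6.0, i.e. the derivative of x² evaluated at 3, without the user ever writing a derivative by hand.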

Also keep in mind whether your function is actually continuous (or, more precisely, differentiable). From your mail it seems like you do some sorting (value-at-risk computations, I guess). Sorting is inherently not a smooth (differentiable) operation, so you might end up with an ill-defined mathematical function that is difficult for Ipopt to handle.

Also, if you define some very complicated functions, you may need to switch to forward-mode automatic differentiation, as that process doesn't require keeping all intermediate steps in memory. This is implemented in the DualNumber class, with some examples on the web page.
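The essence of forward mode is easy to see in isolation. The sketch below is not FuncLib's actual DualNumber API, just a minimal self-contained illustration of the idea: each quantity carries its value and its derivative together, so no expression tree of intermediate steps has to be kept in memory.

```csharp
using System;

// A dual number (value, derivative) propagated through arithmetic:
// forward-mode automatic differentiation in a nutshell.
struct Dual
{
    public readonly double Value;  // f(x)
    public readonly double Deriv;  // f'(x)

    public Dual(double value, double deriv) { Value = value; Deriv = deriv; }

    // Seed the independent variable with dx/dx = 1.
    public static Dual Variable(double x) => new Dual(x, 1.0);
    public static Dual Constant(double c) => new Dual(c, 0.0);

    public static Dual operator +(Dual a, Dual b) =>
        new Dual(a.Value + b.Value, a.Deriv + b.Deriv);

    // Product rule applied to the derivative components.
    public static Dual operator *(Dual a, Dual b) =>
        new Dual(a.Value * b.Value, a.Deriv * b.Value + a.Value * b.Deriv);

    // Chain rule: (exp f)' = exp(f) * f'
    public static Dual Exp(Dual a) =>
        new Dual(Math.Exp(a.Value), Math.Exp(a.Value) * a.Deriv);
}
```

For f(x) = x² + exp(x), evaluating with Dual.Variable(1.0) yields the value 1 + e and the derivative 2 + e in a single forward pass, with no symbolic expression ever being built.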

As a side note, Ipopt requires second order derivatives (which are computed automatically), whereas BFGS only requires first order derivatives. However, my experience is that Ipopt converges significantly faster to accurate solutions.
Apr 8, 2014 at 12:07 PM

Does this mean I cannot use the FuncLib / csipopt framework for value-at-risk computations?!

I think this is a very common optimization problem. Using MATLAB's fmincon (with opts2 = optimset('MaxFunEvals', 7000, 'Algorithm','interior-point','Display','off');) this was straightforward.

My missing piece is calculating the derivative.
Can you give me some code snippets showing how to do this, either with Function or with DualNumber/DualFunction?

Currently I have this: GoalFunction:

protected override double ComputeValue(IEvaluation evaluation)
{
        //index of the (1 - alpha) quantile scenario
        int position = (int)Math.Round((1 - varAlpha) * rows);

        //portfolio loss per scenario: weighted sum of the scenario's asset losses
        tmpLosses = new double[rows];
        for (int row = 0; row < rows; row++)
            for (int col = 0; col < cols; col++)
                tmpLosses[row] += xs[col].Value(evaluation) * losses[row, col];

        //NOTE: indexing at position assumes tmpLosses is sorted; sort first if not
        varValue = tmpLosses[position];

        return varValue;
}


    protected override Function ComputeDerivative(Variable variable)
    {
        //earlier attempts, kept for reference:
        //Variable muVariable = new GoalFunction()
        //Function ret = xs[0] * xs[1];
        //return ret;
        //return new Variable();

        return new GoalFunctionDerivative(xs, varAlpha);
    }

public GoalFunction(List<Variable> x, double varAlpha)
{
        this.varAlpha = varAlpha;
        this.xs = x;
        //for the moment we store all the loss statistics here
        //see: K:\Acuance\Analysis\MATLAB\Optimization\Optimization\Portfolio_Optimization_Suite
        if (losses == null || losses.Length == 0)
        {
            //NOTE: the inner row braces of the initializer were lost when posting
            losses = new double[,]
                    0.0012, 0.036086, 0.03342, 0.004083, 0.004353, 0.001472, 0.026058, 0.001254, 0.050607799, 0.031378,
                    0.031286, -0.0001, -0.0001, -0.0001, -0.0001, -0.0001, -0.0001, -0.028133583, -0.043495773,
                    0.016751, -0.022359, -0.015513, -0.005492, -0.049644, 0.044698, 0.06596284, 0.030038, 0.123486,
                    0.079428, 0.009936, 0.029188, 0.010248298, 0.024747, 0.003156667, 0.035702219, 0.006826604
        }
}

I had a look at the examples Dual.cs and FunctionVersusDual.cs, but they do not seem to help with my problem.
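One observation that may help: the portfolio loss of each scenario is linear in the weights, so once the quantile scenario is fixed, the derivative of the VaR with respect to each weight is just that scenario's row of losses. The sketch below is independent of FuncLib (the class and method names are hypothetical) and only illustrates this reasoning; as noted earlier in the thread, the gradient jumps whenever the quantile scenario changes, which is exactly the non-smoothness Ipopt may struggle with.

```csharp
using System;
using System.Linq;

// For weights x and scenario losses L[row, col], the portfolio loss of
// scenario r is sum_col x[col] * L[r, col] — linear in x. If r* is the
// scenario at the (1 - alpha) quantile of the sorted losses, then wherever
// the sort order is locally stable, d(VaR)/d(x[col]) = L[r*, col].
static class VarGradient
{
    public static (double varValue, double[] grad) Compute(
        double[] x, double[,] losses, double alpha)
    {
        int rows = losses.GetLength(0), cols = losses.GetLength(1);

        // portfolio loss per scenario
        var portfolio = new double[rows];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                portfolio[r] += x[c] * losses[r, c];

        // scenario index sitting at the (1 - alpha) quantile after sorting
        int[] order = Enumerable.Range(0, rows).OrderBy(r => portfolio[r]).ToArray();
        int position = (int)Math.Round((1 - alpha) * rows);
        position = Math.Min(Math.Max(position, 0), rows - 1);
        int rStar = order[position];

        // gradient: the loss row of the quantile scenario
        var grad = new double[cols];
        for (int c = 0; c < cols; c++)
            grad[c] = losses[rStar, c];

        return (portfolio[rStar], grad);
    }
}
```

Something along these lines could back a hand-written derivative class (like the GoalFunctionDerivative above), but it is piecewise constant, which is worth keeping in mind when judging Ipopt's convergence behaviour.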