
All optimizers are designed to search for a local minimum given some starting point. No guarantee of global convergence is provided in general. If the problem is to find a maximum instead, simply multiply the function by -1 (there is also a property for doing this automatically).
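For instance, given a `Function f` and an `Optimizer o` set up as in the example below, the maximum of `f` can be found by minimizing its negation (a minimal sketch; the automatic property mentioned above can be used instead):

```
// Minimizing -f is equivalent to maximizing f; the optimal point is
// the same, only the sign of the optimal value differs.
o.ObjectiveFunction = -f;
```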

```
// Variables. The string names are added for convenience when printing
// out the optimal point with Evaluation.ToString().
Variable x = new Variable("x");
Variable y = new Variable("y");

// Rosenbrock optimization test function.
Function f = Function.Sqr(1.0 - x) + 100.0 * Function.Sqr(y - Function.Sqr(x));

// Use the BFGS optimizer.
Optimizer o = new BfgsOptimizer();

// Specify variables and the objective function.
o.Variables.Add(x, y);
o.ObjectiveFunction = f;

// Start the optimizer from a random point.
Random r = new Random(1);
OptimizerResult or = o.Run(x | r.NextDouble(), y | r.NextDouble());
Console.WriteLine(or.OptimalEvaluation);
```

If the same problem is solved from multiple starting points and performance is critical, all initialization can be performed once and reused for each point with the Prepare method. This method returns a PreparedOptimizer object with its own Run method. Changes to the parameters of the problem made after calling Prepare are ignored.

```
// Prepare the optimizer.
PreparedOptimizer po = o.Prepare();

// Run the prepared optimizer from many different points.
for (int i = 0; i < 1000; i++)
{
    OptimizerResult or = po.Run(x | r.NextDouble(), y | r.NextDouble());
    Console.WriteLine(or.OptimalEvaluation);
}
```

Although the Optimizer base class makes no assumptions about constraints, it does have a property named VariableEqualityConstraints. This collection is used to tell the optimizer that some variables should be kept constant at given values. They're not really constraints; the variables are just eliminated from the problem before the optimizer is started. To make the C# code look natural from a mathematical point of view, an operator overload is provided for the == operator.

```
// Fix the first variable at 0.5, effectively eliminating it from the problem.
o.VariableEqualityConstraints.Add(x == 0.5);
```

The next abstract optimizer class is VariableConstrainedOptimizer, which derives from Optimizer and is the base class for all optimizers with simple constraints where variables are bounded by intervals. It has one additional property, VariableConstraints. Like before, constraints are specified using operator overloads; in this case the <= and >= operators, which return VariableConstraint objects. Since VariableEqualityConstraint, the object returned by the == operator, derives from VariableConstraint, it's also possible to add the first kind of simple equality constraint to this collection. The optimizer automatically determines which kind of constraint is used and puts it in the right place internally. In fact it does more: if two inequality constraints leave only one possible value, then that variable is automatically removed from the optimization. Similarly, if two constraints are mutually exclusive, an exception is thrown.
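As a sketch of this elimination behavior (assuming `o` is a VariableConstrainedOptimizer and `x` a Variable as in the examples below), adding two inequality constraints that pin a variable to a single value behaves just like an equality constraint:

```
// 0.5 <= x and x <= 0.5 leave 0.5 as the only feasible value, so x
// is eliminated from the problem, exactly as with x == 0.5.
o.VariableConstraints.Add(0.5 <= x, x <= 0.5);
```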

Unfortunately, constructs like -0.5 <= x <= 0.5 are not currently allowed, as it seems impossible to express this as clean, strongly typed C# code; instead add -0.5 <= x and x <= 0.5 separately. The optimizer identifies them as a single constraint with two finite bounds. Alternatively, the constraint can be specified explicitly with new VariableConstraint(x, -0.5, 0.5).

```
// Use the BFGS optimizer.
VariableConstrainedOptimizer o = new BfgsOptimizer();

// Restrict the first variable to this interval.
o.VariableConstraints.Add(-0.5 <= x, x <= 0.5);

// Make sure that the starting point is within the interval.
OptimizerResult or = o.Run(x | r.NextDouble() - 0.5, y | r.NextDouble());
Console.WriteLine(or.OptimalEvaluation);
```

The last abstract base class is ConstrainedOptimizer. It derives from VariableConstrainedOptimizer and is designed as a base class for general non-linear problems with non-linear constraints. The constraints are specified using the Constraints property, which is a collection of FunctionConstraint. Like before, VariableConstraint derives from FunctionConstraint, so any of the previous kinds of constraints can be added here for convenience; the optimizer automatically determines how to handle each of them. The non-linear constraints can be specified using the <=, >=, or == operators defined in the Function class.

```
// Use the Ipopt optimizer.
ConstrainedOptimizer o = new IpoptOptimizer();

// A non-linear constraint.
o.Constraints.Add(Function.Exp(x) >= y);
```

The Ipopt tutorial example can be represented in FuncLib using this code:

```
// Variables.
Variable x1 = new Variable("x1");
Variable x2 = new Variable("x2");
Variable x3 = new Variable("x3");
Variable x4 = new Variable("x4");

// Objective function and non-linear constraints.
Function f = x1 * x4 * (x1 + x2 + x3) + x3;
Function g1 = x1 * x2 * x3 * x4;
Function g2 = Function.Sqr(x1) + Function.Sqr(x2) + Function.Sqr(x3) + Function.Sqr(x4);

// Prepare the optimizer.
IpoptOptimizer o = new IpoptOptimizer();
o.Variables.Add(x1, x2, x3, x4);
o.ObjectiveFunction = f;
o.Constraints.Add(g1 >= 25.0);
o.Constraints.Add(g2 == 40.0);
o.Constraints.Add(1.0 <= x1, x1 <= 5.0);
o.Constraints.Add(1.0 <= x2, x2 <= 5.0);
o.Constraints.Add(1.0 <= x3, x3 <= 5.0);
o.Constraints.Add(1.0 <= x4, x4 <= 5.0);

// Verbose mode. Show Ipopt convergence.
o.PrintLevel = 5;

// Run optimization starting from (1, 5, 5, 1).
OptimizerResult or = o.Run(x1 | 1.0, x2 | 5.0, x3 | 5.0, x4 | 1.0);
Console.WriteLine(or.OptimalEvaluation);

// Output:
// x1 = 1, x2 = 4.7429996418092966, x3 = 3.8211499817883072, x4 = 1.3794082897556981
```

If performance is critical the functions can be compiled to IL code before calling the Ipopt optimizer. In this case the following code can be used:

```
// Replace the functions by their compiled equivalents
// (including first and second derivatives).
CompiledFunction[] h = Compiler.Compile(new Function[] { f, g1, g2 }, new Variable[] { x1, x2, x3, x4 }, 2);
f = h[0];
g1 = h[1];
g2 = h[2];
```

For repeated use, Prepare should be used. Note, however, that FuncLib with its IL code generator is designed to be efficient when the function complexity is large compared to the number of variables. If this is not the case, there may be a non-diminishing overhead associated with the abstraction in FuncLib, but in such cases performance may not matter as much. In other words, FuncLib is currently not efficient for problems with a large number of variables (say, more than 20) and a dense derivative structure. For real-world problems with around 20 variables and very complex functions, the overhead from the FuncLib abstraction is often insignificant.

Last edited Jul 17, 2011 at 3:32 PM by bakkedal, version 14