Using Solver Foundation's Unconstrained Nonlinear Solver

Hey, did you know that Solver Foundation can solve unconstrained nonlinear optimization problems? Yep, it's true. Few people know about it because 1) we haven't talked about it much and 2) it is not accessible from the SFS (which means you can't use OML or the Excel add-in to solve such problems). To use the solver you need to work with a solver object directly. The Samples directory contains a simple example, but since there have been a couple of requests I thought I would give a slightly more involved example.

The CompactQuasiNewtonSolver class solves unconstrained optimization problems: it finds the minimum (or maximum) of an unconstrained objective function f(x). Here x is an array of doubles, and f needs to be twice differentiable. The CQN solver uses the L-BFGS algorithm. As with other L-BFGS implementations, you need to provide a function that serves up the function value and gradient on demand, given the variables x. To do that, set the GradientAndValueAtPoint property to a delegate that fills in the gradient and returns the function value. You'll also need to specify a starting point.
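
To make the wiring concrete, here is a minimal sketch that minimizes the one-variable function f(x) = (x - 3)^2 using the same members as the full sample below. Note that the lambda assigned to GradientAndValueAtPoint and the parameterless CompactQuasiNewtonSolverParams constructor are my own shorthand, so double-check them against your version of Solver Foundation.

using System;
using Microsoft.SolverFoundation.Solvers;

class QuadraticExample {
  static void Main() {
    // One variable, starting from x = 0.
    CompactQuasiNewtonSolver solver = new CompactQuasiNewtonSolver(1);
    solver.SetStartingPoint(new double[] { 0.0 });

    // The delegate fills in the gradient array and returns the function value.
    // Here f(x) = (x - 3)^2 and f'(x) = 2 * (x - 3), so the minimum is at x = 3.
    solver.GradientAndValueAtPoint = (point, gradient) => {
      gradient[0] = 2.0 * (point[0] - 3.0);
      return (point[0] - 3.0) * (point[0] - 3.0);
    };
    solver.Minimize = true;

    solver.Solve(new CompactQuasiNewtonSolverParams());

    double[] solution = new double[1];
    solver.GetSolutionPoint(solution);
    Console.WriteLine("x = {0}, f(x) = {1}", solution[0], solver.SolutionValue);
  }
}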

The CompactQuasiNewtonSolver has a Solve method that takes a solver parameters object. The parameters object has a few interesting properties, including the solver tolerance and the maximum iteration count. On construction you can also pass in an "abort" delegate. This function is called periodically during the course of execution; if you return true, the solver stops and returns. In my sample below, I abuse it to print out the progress of the solver as it proceeds. Note that I would have liked to use solver.NumberOfIterations rather than keeping track of the number of calls myself. That's a bug, and it will get fixed in our 2.1 release.
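
For example, you can tighten the stopping criteria before calling Solve. This is only a sketch: I am assuming the parameters object exposes properties named Tolerance and IterationLimit, so verify the exact names against the primer or IntelliSense for your Solver Foundation version.

// Sketch only: Tolerance and IterationLimit are assumed property names on
// CompactQuasiNewtonSolverParams - check them against your installed version.
CompactQuasiNewtonSolverParams param = new CompactQuasiNewtonSolverParams();
param.Tolerance = 1e-8;      // stop once the tolerance difference drops below this
param.IterationLimit = 500;  // give up after this many iterations
// param is then handed to solver.Solve(param), exactly as in the full sample below.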

The function can have as many variables as you like. In my example I am minimizing a multidimensional variant of the Rosenbrock function, a standard test problem. You can adapt the code to your own objective function by changing the ValueAndGradient method. You can also learn more about the solver by consulting the Solver Programming Primer (page 24 or so).
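
For reference, the ValueAndGradient method in the sample evaluates the pairwise form of the generalized Rosenbrock function (which is why n has to be even); its minimum value of 0 is attained at x = (1, 1, ..., 1). In LaTeX notation, the value and gradient the code computes are:

f(x) = \sum_{j = 0, 2, 4, \ldots} \left[ (1 - x_j)^2 + 100 \, (x_{j+1} - x_j^2)^2 \right]

\frac{\partial f}{\partial x_{j+1}} = 200 \, (x_{j+1} - x_j^2), \qquad
\frac{\partial f}{\partial x_j} = -2 \left[ (1 - x_j) + 200 \, x_j \, (x_{j+1} - x_j^2) \right]

The t1 and t2 temporaries and the factor of 20 in the code are just this gradient rearranged to reuse common subexpressions.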

using System;
using Microsoft.SolverFoundation.Solvers;

namespace Microsoft.SolverFoundation.Samples {
  /// <summary> Minimizes the generalized Rosenbrock function, a standard test case
  /// in nonlinear optimization. You can read more about the Rosenbrock
  /// function here:
  ///  https://en.wikipedia.org/wiki/Rosenbrock_function
  /// </summary>
  public class Rosenbrock {
    public static void Main(string[] args) {

      int n = 4; // n must be even: ValueAndGradient processes the variables in pairs (see below).
      double[] x = new double[n];
      for (int i = 0; i < x.Length; i++) {
        x[i] = -1;
      }

      CompactQuasiNewtonSolver solver = new CompactQuasiNewtonSolver(n);
      int iterationCount = 0;
      CompactQuasiNewtonSolverParams param = new CompactQuasiNewtonSolverParams(() => {
        Console.WriteLine("{0} | {1:e6}", /*solver.NumberOfIterations*/ iterationCount++, solver.ToleranceDifference);
        return false;
      });
      
      solver.SetStartingPoint(x);
      solver.GradientAndValueAtPoint = (GetGradientAndValueAtPoint)ValueAndGradient;
      solver.Minimize = true;

      var solutionQuality = solver.Solve(param);

      Console.WriteLine();
      Console.WriteLine("Solution quality = " + solutionQuality);
      Console.WriteLine("f(x) = " + solver.SolutionValue);
      solver.GetSolutionPoint(x);
      for (int i = 0; i < x.Length; i++) {
        Console.WriteLine("x[{0}] = {1}", i, x[i]);
      }
    }

    /// <summary>Get the function value and gradient for the specified point.
    /// </summary>
    /// <param name="x">The point at which the value and gradient should be computed.</param>
    /// <param name="gradient">The gradient vector, the same size as x.</param>
    /// <returns>The function value.</returns>
    private static double ValueAndGradient(double[] x, double[] gradient) {
      // The gradient and value are both computed inside the method for efficiency.
      // It's also possible to refactor value and gradient calculation into submethods.
      double value = 0;
      for (int j = 0; j < x.Length; j += 2) {
        double t1 = 1.0 - x[j];
        double t2 = 10.0 * (x[j + 1] - x[j] * x[j]);
        gradient[j + 1] = 20.0 * t2;
        gradient[j] = -2.0 * (x[j] * gradient[j + 1] + t1);
        value += t1 * t1 + t2 * t2;
      }
      return value;
    }
  }
}

Comments

  • Anonymous
    July 31, 2010
    Thanks for this example. I do have a few questions of my own:
  1. Is it possible to solve constrained nonlinear problems with SFS?
  2. Is there a provision for cases where the derivative of f(x) is not available?