Solver Foundation 2.0 Preview: Second Order Conic Programming

A couple of weeks ago I wrote about one of our big features for Solver Foundation 2.0: stochastic programming. In this post I want to talk about a new solver for 2.0: an interior-point solver for Second Order Conic Programming (SOCP). Second Order Conic Programming is the problem of minimizing a linear function subject to constraints that require linear terms to lie inside a second-order cone. Also called conic quadratic programming, SOCP is important because it can be used to model many real-world robust optimization, engineering, and finance problems. You can even model certain types of chance constraints using SOCP.

An SOCP can contain conventional linear constraints Ax >= b as well as second-order cone constraints. Some solvers define SOCP in terms of conic variables rather than conic constraints. Solver Foundation’s interior point solver uses conic constraints rather than conic variables because (IMHO) it leads to a more flexible, easy-to-program solver API. A second-order cone is a pointed cone in which the norm of n - 1 variables t_1, ..., t_{n-1} is at most the remaining variable t_0 (so that a cross-section is a circle): t_0 >= ||(t_1, ..., t_{n-1})||, t_0 >= 0. A second-order cone in three dimensions looks like an ice cream cone. A second-order conic constraint is similar, except that each t_i can be a linear term that may combine several variables. The term that plays the role of t_0 is called the "primary row".
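
In symbols (just restating the paragraph above in my own notation, not notation taken from the Solver Foundation documentation), the cone is

\[
t_0 \;\ge\; \lVert (t_1, \ldots, t_{n-1}) \rVert_2, \qquad t_0 \ge 0,
\]

and a second-order conic constraint is obtained by letting each t_i be a linear term in the decision variables x:

\[
a_0^{\mathsf T} x + b_0 \;\ge\; \bigl\lVert \bigl( a_1^{\mathsf T} x + b_1,\ \ldots,\ a_{n-1}^{\mathsf T} x + b_{n-1} \bigr) \bigr\rVert_2,
\]

with the term a_0^T x + b_0 playing the role of the primary row.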

What this means is that if you know how to build linear programming problems using the ILinearModel API I described last time, then you pretty much know how to build SOCP problems. You build a row that represents the cone using AddRow, then call AddRow once for each term in the conic constraint. That's it.

Here's a short code sample: given an InteriorPointSolver, we pass in a list of matrices F and a corresponding list of vectors g. We want to find a vector x that minimizes the sum of ||F_k x + g_k|| over all pairs in the lists. The steps are as follows:

  • Create an InteriorPointSolver and the lists F and g.  
  • Create the decision variables x, and add the goal just as in my previous post.
  • Iterate through the matrices. We will create one conic inequality for each matrix. The primary row is an artificial variable t_k, which is an upper bound on the norm of F_k x + g_k. The other rows are simply the rows of F_k, with the bounds coming from g_k. (The resulting model is written out just below.)
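
Putting those steps together, the code below builds the standard epigraph reformulation of the sum of norms (my notation here, not anything taken from the API):

\[
\min_{x,\,t}\ \sum_k t_k \qquad \text{subject to} \qquad t_k \ge \lVert F_k x + g_k \rVert_2,\ \ t_k \ge 0 \ \ \text{for each } k,
\]

where each constraint t_k >= ||F_k x + g_k|| is exactly one second-order conic constraint whose primary row is t_k.
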
/// <summary>Create a simple sum-of-norms problem from F and g.</summary>
/// <remarks>Returns the variable vids so the solution vector x can be
/// retrieved after Solve().</remarks>
private static int[] BuildSumOfNorms(InteriorPointSolver solver, 
  List<double[][]> F, List<double[]> g) {

  int[] x = new int[F[0][0].Length];
  for (int i = 0; i < x.Length; i++) {
    solver.AddVariable("x" + i, out x[i]);
  }

  // Add the goal row: minimize the sum of the artificial variables t_k
  // (their coefficients are set inside the loop below).
  int goal;
  solver.AddRow("goal", out goal);
  solver.AddGoal(goal, 1, true);  // priority 1, minimize
  for (int k = 0; k < F.Count; k++) {
    int cone, t;
    // Add variable t_k, which is the upper bound on the kth norm.
    solver.AddVariable("t_" + k, out t);
    solver.SetCoefficient(goal, t, Rational.One);
    solver.SetBounds(t, Rational.Zero, Rational.PositiveInfinity);

    // Add a quadratic (second-order) cone; the conic rows below attach to it by vid.
    solver.AddRow("cone_" + k, SecondOrderConeType.Quadratic, out cone);
    solver.SetBounds(cone, 0, Rational.PositiveInfinity);

    // Add the primary row of the cone: its value is t_k, the bound on the norm.
    int conicRow1;
    solver.AddRow("cr1_" + k, cone, SecondOrderConeRowType.PrimaryConic, 
      out conicRow1);
    solver.SetCoefficient(conicRow1, t, Rational.One);
    solver.SetLowerBound(conicRow1, Rational.Zero);

    // Add the remaining conic rows: row i holds the dot product of F[k][i]
    // with x, and its lower bound -g[k][i] supplies the constant term g[k][i].
    for (int i = 0; i < F[k].Length; i++) {
      int conicRow2;
      solver.AddRow("cr2_" + k + "_" + i, cone, SecondOrderConeRowType.Conic, 
        out conicRow2);
      for (int j = 0; j < F[k][i].Length; j++) {
        if (F[k][i][j] != 0) {
          solver.SetCoefficient(conicRow2, x[j], F[k][i][j]);
        }
      }
      solver.SetLowerBound(conicRow2, -g[k][i]);
    }
  }

  return x;
}
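
To show how this hooks up end to end, here is a minimal usage sketch. The data values are made up for illustration, and the Solve/GetValue calls follow the same pattern as the ILinearModel walkthrough in my previous post; treat it as a sketch rather than a recipe.

using System;
using System.Collections.Generic;
using Microsoft.SolverFoundation.Solvers;

// Minimize ||F_0 x + g_0|| for a single made-up 2x2 matrix and 2-vector.
// (Analytically the minimizer is x = (1, -2), where the norm is 0.)
InteriorPointSolver solver = new InteriorPointSolver();

List<double[][]> F = new List<double[][]>();
F.Add(new double[][] {
  new double[] { 1.0, 0.0 },
  new double[] { 0.0, 1.0 }
});
List<double[]> g = new List<double[]>();
g.Add(new double[] { -1.0, 2.0 });

// BuildSumOfNorms is the method shown above.
int[] x = BuildSumOfNorms(solver, F, g);

// Solve and read the solution back, as in the earlier interior point post.
solver.Solve(new InteriorPointSolverParams());
for (int i = 0; i < x.Length; i++) {
  Console.WriteLine("x[{0}] = {1}", i, solver.GetValue(x[i]));
}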

Those are the basics. We will have SOCP analogues of the LinearModel APIs, as well as APIs that let you examine the details of the conic constraints. In Version 2.0, SOCP models can be created and solved only through the InteriorPointSolver API; there will be no SFS or OML support for this release.

Comments

  • Anonymous
    October 03, 2009
    I am a fan of Robust Optimization and I extensively use SOCP in my distributionally robust optimization models. Together with my colleague Joel Goh, I have developed ROME (Robust Optimization Made Easy), an RO modeling language for the Matlab platform. It is freely available for download at www.RobustOpt.com. I have not explored Solver Foundation 2.0 as yet... I have included some examples on inventory control and portfolio optimization using SOCP solvers.

  • Anonymous
    January 12, 2011
    Hi, this example of SOCP is very helpful to understand how to implement an SOCP problem in MSF. However, I still can’t understand why, if we set bounds on the variables x, the solver produces absurd results. Let me explain. If I define F and g as follows:

      List<double[][]> F = new List<double[][]>();
      F.Add(new double[][] {
        new double[] { 1, 2, 3 },
        new double[] { 4, 5, 6 }
      });
      F.Add(new double[][] {
        new double[] { -4, -3, -2 },
        new double[] { 1, 0, 1 }
      });
      List<double[]> g = new List<double[]>();
      g.Add(new double[] { 1, 1 });
      g.Add(new double[] { 3, 2 });

    the result of the optimization is { 0.06, 2.29, -2.06 } and the sum of norms is 0.636. So far, everything is normal. But I tried to tweak the code in order to become more familiar with SOCP programming, and I found something bizarre. If I set bounds on the variables x like this:

      int[] x = new int[F[0][0].Length];
      for (int i = 0; i < x.Length; i++) {
        solver.AddVariable("x" + i, out x[i]);
        solver.SetBounds(x[i], -5, 5);
      }

    logically, this should not, in any way, change the results, because every x is included in [-5, 5]. But in fact, I obtain something different:

      Result = Optimal
      Minimum sum of norms = 5.01976483323466
      x[0] = -4.99999997050398
      x[1] = -4.99999998890822
      x[2] = -4.99999999409569

    As you can see, the solver does not return the same results as before. Moreover, the “Minimum sum of norms” does not look correct: if you calculate ||F1·x + g1|| + ||F2·x + g2||, you get 128.1, not 5.0. Does anybody have an explanation for this phenomenon? Did I set the bounds of the variables x in a wrong manner? Finally (and this is my main concern), is there any reason for the solver not to calculate the sum of norms properly? Thanks!