
How to migrate from MIP to LSP?

Note

It is important to understand that LocalSolver is not a MIP solver. More precisely, LocalSolver is more than a MIP solver. Like a MIP solver, LocalSolver searches for feasible or optimal solutions, computes lower bounds and optimality proofs, and can prove the infeasibility of an optimization model. But LocalSolver achieves this not only on linear models but also on highly non-linear and combinatorial models, involving high-level expressions such as collection variables, arrays, or even external functions.

Since LocalSolver offers sum and product operators as well as arithmetic comparisons, any integer linear program can be written directly in the LocalSolver modeling language. However, such a model does not take advantage of the simple high-level operators of LocalSolver. A few simple rules are given here to help you reach the best possible performance with LocalSolver.

Let us consider the car sequencing problem detailed in our example tour. Here is the standard integer program for this problem, written in LocalSolver syntax, assuming that all variables have been defined beforehand:

// A MIP-like model for the car sequencing problem
for[c in 1..nbClasses]
  constraint sum[p in 1..nbPositions](cp[c][p]) == card[c];

for[p in 1..nbPositions]
  constraint sum[c in 1..nbClasses](cp[c][p]) == 1;

for[o in 1..nbOptions][p in 1..nbPositions]
  constraint op[o][p] >= sum[c in 1..nbClasses : options[c][o]](cp[c][p]);

for[o in 1..nbOptions][j in 1..nbPositions-Q[o]+1]
  constraint nbVehicles[o][j] == sum[k in 1..Q[o]](op[o][j+k-1]);

for[o in 1..nbOptions][j in 1..nbPositions-Q[o]+1] {
  constraint violations[o][j] >= nbVehicles[o][j] - P[o];
  constraint violations[o][j] >= 0;
}

constraint obj == sum[o in 1..nbOptions][p in 1..nbPositions-Q[o]+1](violations[o][p]);

minimize obj;

Decision variables and intermediate expressions

As explained in our Modeling principles, the most crucial modeling decision is the choice of the set of decision variables. Here, the set of cp[c][p] is a good set of decision variables, since the values of all other variables can be inferred from them. Intermediate variables can then be defined directly as expressions. For instance, the constraint nbVehicles[o][j] == sum[k in 1..Q[o]](op[o][j+k-1]) can be turned into the expression definition nbVehicles[o][j] <- sum[k in 1..Q[o]](op[o][j+k-1]).
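
In the car sequencing model above, this transformation reads as follows (a minimal sketch reusing the names of the MIP-like model, with op[o][p] assumed to be defined):

// Before: an auxiliary decision tied to the decisions by an equality constraint.
// constraint nbVehicles[o][j] == sum[k in 1..Q[o]](op[o][j+k-1]);

// After: an intermediate expression computed directly from op[o][p].
nbVehicles[o in 1..nbOptions][j in 1..nbPositions-Q[o]+1] <-
     sum[k in 1..Q[o]](op[o][j+k-1]);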

Using non-linear operators instead of linearizations

Now we can observe that some equations in this model are in fact linearizations of arithmetic or logical expressions. For instance, the constraint op[o][p] >= sum[c in 1..nbClasses : options[c][o]](cp[c][p]) merely states that op[o][p] equals 1 as soon as one of the given terms equals 1, which is more naturally expressed with an or: op[o][p] <- or[c in 1..nbClasses : options[c][o]](cp[c][p]).

Finally, the pair of constraints on violations[o][j] is just the linear way of defining a maximum, which is directly written in LocalSolver as violations[o][j] <- max(nbVehicles[o][j] - P[o], 0).

Similarly, all linearizations of a maximum, a condition, or a conjunction should be translated into their direct LocalSolver formulation with the operators max, iif, and and. Piecewise linear functions can be written directly as well: if your MIP contains a piecewise linear function (possibly with associated binary variables) making Y equal to f(X) such that on each interval [c[i-1], c[i]] we have Y = a[i] * X + b[i] with i in 1..3, then you can define Y directly with nested conditional expressions, as sketched below.
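
Laid out over several lines, with the breakpoints c[i] and coefficients a[i], b[i] taken from the MIP formulation, this sketch reads:

// Three-piece linear function of X, written as nested conditional expressions.
Y <- X < c[1] ? a[1]*X + b[1]
              : (X < c[2] ? a[2]*X + b[2] : a[3]*X + b[3]);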

After these transformations we obtain the following model:

function model() {
  cp[1..nbClasses][1..nbPositions] <- bool();

  for [c in 1..nbClasses]
    constraint sum[p in 1..nbPositions](cp[c][p]) == nbCars[c];

  for [p in 1..nbPositions]
    constraint sum[c in 1..nbClasses](cp[c][p]) == 1;

  op[o in 1..nbOptions][p in 1..nbPositions] <- or[c in 1..nbClasses : options[c][o]](cp[c][p]);

  nbCarsWindows[o in 1..nbOptions][p in 1..nbPositions-ratioDenoms[o]+1] <-
       sum[k in 1..ratioDenoms[o]](op[o][p+k-1]);

  nbViolationsWindows[o in 1..nbOptions][p in 1..nbPositions-ratioDenoms[o]+1] <-
       max(nbCarsWindows[o][p]-ratioNums[o], 0);

  obj <- sum[o in 1..nbOptions][p in 1..nbPositions-ratioDenoms[o]+1](nbViolationsWindows[o][p]);
  minimize obj;
}

Remove useless constraints

If your model contains valid inequalities, they can (and should) be removed. Since LocalSolver does not rely on a relaxation of the model, these inequalities would only burden the model.
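
For instance, in the car sequencing model above, an aggregate constraint like the following is implied by the one-car-per-position constraints; it may strengthen an LP relaxation, but in LocalSolver it should simply be dropped (a hypothetical illustration, not part of the original model):

// Implied by the per-position assignment constraints: remove it from the LocalSolver model.
constraint sum[c in 1..nbClasses][p in 1..nbPositions](cp[c][p]) == nbPositions;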

You must omit symmetry-breaking constraints as well: since LocalSolver does not rely on a tree-based search, breaking symmetries only makes it harder to move from one feasible solution to another. Typically, it could prevent LocalSolver from reaching a nearby improving solution.
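
As a hypothetical illustration (the names nbMachines and load are not part of the car sequencing example), an ordering constraint of the kind often added to a MIP over interchangeable resources should be left out of the LocalSolver model:

// Hypothetical symmetry-breaking constraint over interchangeable machines.
// Omit it: it only forbids solutions equivalent to allowed ones and hinders local moves.
for [m in 2..nbMachines]
  constraint load[m-1] >= load[m];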