Modeling principles

LocalSolver is able to efficiently explore the search space defined by your optimization model, searching for good solutions in terms of the objective function that you defined. This means that, as in any operations research approach, modeling the problem is a crucial step. Below we insist on the importance of choosing the appropriate decision variables, subject to the right set of constraints, with a well-defined objective function.

Distinguish decision variables from intermediate variables

The first step of the modeling task is to select the set of decision variables. In LocalSolver these decision variables can be binary (introduced by the bool keyword), integer (introduced by the int keyword) or floating-point (introduced by the float keyword). These variables represent the decisions to be taken. Once you have defined your decision variables, ask yourself whether some of them could be inferred from the others.
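As an illustration, the three kinds of decision variables can be declared as follows in an LSP model (the bounds shown here are arbitrary):

```
// a 0-1 decision, an integer decision in [0,100], and a float decision in [0,1]
x <- bool();
y <- int(0, 100);
z <- float(0.0, 1.0);
```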

For instance, consider the car sequencing problem:


// cp[c][p] = 1 if class c is at position p, and 0 otherwise
cp[1..nbClasses][1..nbPositions] <- bool();

// op[o][p] = 1 if option o appears at position p, and 0 otherwise
op[o in 1..nbOptions][p in 1..nbPositions] <- bool();

// constraints linking variables op[o][p] to cp[c][p]
for [o in 1..nbOptions][p in 1..nbPositions]
  constraint op[o][p] = or[c in 1..nbClasses : options[c][o]](cp[c][p]);

In the above model, the user introduced two families of decision variables, linked by a set of constraints. This model is valid in the sense that the solutions to this model are the solutions of the car sequencing problem. However, this model defines a poor search space because there are too many decision variables: knowing the values of the cp[c][p] variables is sufficient to define a solution, and all other variables can be inferred from them. Therefore the following model is much better, with op[o][p] introduced as intermediate expressions:

// 0-1 decisions:
// cp[c][p] = 1 if class c is at position p, and 0 otherwise
cp[1..nbClasses][1..nbPositions] <- bool();

// expressions:
// op[o][p] = 1 if option o appears at position p, and 0 otherwise
op[o in 1..nbOptions][p in 1..nbPositions] <- or[c in 1..nbClasses : options[c][o]](cp[c][p]);

A perfectionist reader may observe that nbPositions decision variables could still be removed by defining cp[1][p] <- 1 - sum[c in 2..nbClasses](cp[c][p]). Refrain from this excess of zeal: perfect is the enemy of the good. More precisely, think in terms of families of variables instead of individual variables.

This car sequencing example may seem completely trivial. A more subtle case is that of the steel mill slab design problem. In this problem, orders are assigned to slabs of different capacities, and the objective is to minimize the consumed capacity (the sizes of slabs holding at least one order). A simple model for this problem consists in defining enough slabs of different capacities and x[o][s] variables equal to 1 if order o is assigned to slab s. In this model, the use of a slab can be easily detected (intermediate expression) and used directly in the objective function:

// slab s is used if at least one order is assigned to this slab
use[s in 1..nbSlabs] <- or[o in 1..nbOrders](x[o][s]);

// objective function
minimize sum[s in 1..nbSlabs](use[s] * size[s]);

This model is correct and works quite well on CSPLib instances. However, an even more powerful model can be built if we infer the size of a slab from its content. It behaves as if, after each move, the content of each slab were moved to a slab with the smallest sufficient capacity. Note that Constraint Programming models for this problem use the same approach. Below are the key lines of this model; the full model is available in our example tour:

for [j in 1..nbOrders] {
  slabContent[j] <- sum[i in 1..nbOrders](orders[i] * x[i][j]);
  constraint slabContent[j] <= maxSize;
  // threshold detection variables
  t[j][1] <- slabContent[j] > 0;
  t[j][k in 2..nbSlabSizes] <- slabContent[j] > slabSizes[k-1];
}

/* ... */

obj <- sum[j in 1..nbOrders](slabSizes[1] * t[j][1]
  + sum[k in 2..nbSlabSizes]((slabSizes[k] - slabSizes[k-1]) * t[j][k]))
  - sumSizeOrders;
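The elided part of this model must at least declare the assignment decisions and state that each order goes to exactly one slab. A possible sketch (the variable names are ours and do not necessarily match the full model in the example tour):

```
// 0-1 decisions: x[i][j] = 1 if order i is assigned to slab j
x[i in 1..nbOrders][j in 1..nbOrders] <- bool();

// each order is assigned to exactly one slab
for [i in 1..nbOrders]
  constraint sum[j in 1..nbOrders](x[i][j]) = 1;

minimize obj;
```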

Distinguish constraints from first priority objectives

Once your set of decision variables is chosen, you need to choose the constraints, that is to say the boolean expressions which must be satisfied for a solution to be considered “feasible”. These expressions are prefixed with the constraint keyword. Remember that local search is appropriate for “optimization” problems: finding a feasible solution to your problem must be relatively easy, while finding a very good solution can be difficult. This means that some “constraints” of your problem are in fact high-priority objectives, while the term “constraint” should be reserved for structural properties of the problem.

Consider the car sequencing problem and its definition on CSPLib:

“A number of cars are to be produced; they are not identical, because different options are available as variants on the basic model. The assembly line has different stations which install the various options (air-conditioning, sun-roof, etc.). These stations have been designed to handle at most a certain percentage of the cars passing along the assembly line. (...) The cars must be arranged in a sequence so that the capacity of each station is never exceeded. For instance, if a particular station can only cope with at most half of the cars passing along the line, the sequence must be built so that at most 1 car in any 2 requires that option.”


Reading the above definition carefully, you may note that it is presented as a pure satisfaction problem. Now a practical question arises: what will the car manufacturer do if some day more than half of the cars require an option for which a constraint “at most 1 car in any 2” was set? Clearly an optimization system returning a laconic “No solution found” would be of no help. Beyond this practical problem, this example illustrates that some of the constraints define the structure of the problem while others (sometimes called “business constraints”) are properties that should be satisfied. We deliberately use the term “should” and not “must”, because we do not actually know whether all those properties can be satisfied.

Here the only structural constraint is that each position on the assembly line contains at most one vehicle. From there you may define several optimization models. For instance, you could consider that the station constraints (“at most P cars in any Q”) are structural and that the objective function to be minimized is the number of empty positions on the assembly line. In other words, you try to maximize the number of produced cars, subject to the assembly line constraints. For this car sequencing problem the opposite point of view is usually taken: you must produce all cars and you minimize a penalty measuring the violation of the station constraints. This measure is merely the sum of overcapacities over all windows:

// expressions: compute the number of violations in each window
nbViolationsWindows[o in 1..nbOptions][p in 1..nbPositions-ratioDenoms[o]+1]
  <- max(nbCarsWindows[o][p]-ratioNums[o], 0);

// objective: minimize the sum of violations for all options and all windows
obj <- sum[o in 1..nbOptions]
  [p in 1..nbPositions-ratioDenoms[o]+1](nbViolationsWindows[o][p]);

minimize obj;
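The expression nbCarsWindows used above counts, for each option o, the cars requiring that option in the window of length ratioDenoms[o] starting at position p. It can be derived from the op expressions introduced earlier; a possible sketch:

```
// number of cars requiring option o in the window starting at position p
nbCarsWindows[o in 1..nbOptions][p in 1..nbPositions-ratioDenoms[o]+1]
  <- sum[k in 0..ratioDenoms[o]-1](op[o][p+k]);
```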

As you can see, turning a constraint c into an objective does not necessarily amount to maximizing c. Indeed, since constraints often take the form of comparisons, you will often turn the satisfaction of a <= b into the minimization of max(0, a-b), or the satisfaction of a == b into the minimization of dist(a, b).
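For instance, a capacity constraint could be softened as follows (load and capacity are hypothetical expressions standing for the two sides of the comparison):

```
// hard version: the load must not exceed the capacity
// constraint load <= capacity;

// soft version: minimize the overcapacity instead
overCapacity <- max(0, load - capacity);
minimize overCapacity;
```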

Here a pure satisfaction model was turned into an optimization model, hence there is only one objective function. When your original problem is an optimization problem where a cost expression must be minimized, and you identify that some “constraints” should be modeled as high-priority objectives (minimize violations), you will have several objective functions to be optimized in lexicographic order:

minimize violations;
minimize cost;

In such a case it can be useful to exploit LocalSolver's ability to handle multiple time limits. Typically, if you have a total of 60 seconds to solve the problem, setting lsTimeLimit={50,10} means that 50 seconds will be devoted to the minimization of violations (ignoring the impact on the cost) while the remaining 10 seconds will take both objectives into account (lexicographically). Of course, if an optimal value of the first objective (here 0) is found after 1 second, LocalSolver will have 59 seconds left to optimize the second objective.

In summary, two excesses must be avoided when selecting your set of constraints:

  • adding too many constraints will lead to a model for which LocalSolver finds no solution (the status of the returned solution is “infeasible”), or for which moving from one feasible solution to another is very difficult, resulting in a very high rate of infeasible moves.
  • turning too many constraints into objectives will lead to a model without any structure, whereas one of the strengths of LocalSolver is to perform moves preserving feasibility and thus “making sense”.

Define your objective function

Thanks to LocalSolver operators, you can easily write your objective function, possibly with conditions (ternary conditional operator iif) and highly non-linear terms (product, max, square root, etc.).
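For example, a cost combining a condition with a non-linear term could be written as follows (quantity, fixedCost and unitCost are hypothetical expressions of your model):

```
// pay a fixed cost plus a square-root term only if something is produced
cost <- iif(quantity > 0, fixedCost + sqrt(quantity) * unitCost, 0);
minimize cost;
```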