I’m currently working on an optimization model and need some assistance with linearizing a specific conditional expression. The expression is defined as follows:
z_i = x_i - y_i, if x_i - y_i > 0; z_i = +\infty otherwise
Z = \min_i z_i
I’m wondering whether there’s a systematic way to linearize such constraints. I would really appreciate any advice or examples on how to tackle this problem.
If you are working with any of the linear solvers that have the newest AMPL interface (see the list in the Modeling guide for MP-based AMPL solvers), then you can write this pretty directly. Solvers don’t deal well with variables equal to +\infty, but you can substitute a reasonably large finite upper bound instead, for example:
set S;
var x {S};
var y {S};
var z {S};
var Z = min {i in S} z[i];   # MP interface reformulates min of variables automatically
subj to zdefn {i in S}:
   z[i] = if x[i] > y[i] then x[i] - y[i] else 100;   # 100 is a finite stand-in for +infinity
The discontinuity of the constraint function at x[i] - y[i] = 0 may make an exact linearization impossible, however, in which case you may get a slightly inexact solution. For example, some x[i] values may come out as 0.0001 even though a slightly better result could be achieved with a smaller (but still positive) value. Whether this is really a problem will depend on the objective and the other constraints of your model, so you may need to do some investigation and experimenting.
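For reference, the automatic reformulation the MP driver applies is roughly equivalent to a manual big-M model. The sketch below shows one standard way to write it yourself; the parameters M (an assumed valid bound on |x[i] - y[i]|), UB (the finite stand-in for +\infty), and eps (a small tolerance approximating the strict inequality x[i] - y[i] > 0) are illustrative names, not part of the original model:

set S;
param M > 0;       # assumed: |x[i] - y[i]| <= M at every feasible point
param UB = 100;    # finite surrogate for +infinity
param eps = 1e-4;  # tolerance for the strict inequality x[i] - y[i] > 0
var x {S};
var y {S};
var z {S};
var b {S} binary;  # b[i] = 1 when x[i] - y[i] > 0

subj to PosIfOn    {i in S}: x[i] - y[i] >= eps - (M + eps)*(1 - b[i]);
subj to NonposIfOff{i in S}: x[i] - y[i] <= M*b[i];
subj to ZLo {i in S}: z[i] >= x[i] - y[i] - (UB + M)*(1 - b[i]);
subj to ZHi {i in S}: z[i] <= x[i] - y[i] + (UB + M)*(1 - b[i]);
subj to ZBigLo {i in S}: z[i] >= UB - (UB + M)*b[i];
subj to ZBigHi {i in S}: z[i] <= UB + M*b[i];

When b[i] = 1 the first pair of z constraints forces z[i] = x[i] - y[i]; when b[i] = 0 the second pair forces z[i] = UB, and the two indicator constraints tie b[i] to the sign of x[i] - y[i] (up to the eps tolerance, which is where the discontinuity issue shows up). If Z = min_i z[i] appears in a context where Z is being maximized, the constraints Z <= z[i] for all i in S suffice; otherwise forcing exact equality needs another set of binaries.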
For advice on formulating your own linearized model, I recommend posting your question to Operations Research Stack Exchange, where there are many questions (and answers) on linearization issues.