Suggestions to improve callback performance in amplpy (amplpy-cplex)

Hi!
I've implemented a callback in amplpy to change the gap after a given number of explored nodes, and I noticed three things that I would like to discuss here:

  1. With the callback, parallel mode is not applied and only 1 thread is used to solve the problem, so the solve takes considerably longer.
  2. With the callback, the MIP search method changes to traditional branch and cut, while without the callback dynamic search is used (I have read that CPLEX disables dynamic search in the presence of control callbacks).
  3. The node count kept by my callback differs from the one displayed in the logfile; could you please explain why?

Finally, is there a way (if one exists) to use callbacks while preserving parallel mode? Any suggestions for improving my callback are also welcome.

My callback is as follows:

# Callback class (assumes the ampls API from amplpy_cplex, e.g. "import amplpy_cplex as ampls")
class MyCallback(ampls.GenericCallback):

    def __init__(self, stoprule):
        super(MyCallback, self).__init__()
        self._stoprule = stoprule      # dict with node thresholds and gap tolerances
        self._current = 0              # index of the current (nodes, gaptol) pair
        self._nMIPnodes = 0            # number of MIP nodes counted by this callback
        self._continueOpt = True       # set to True to re-optimize after interrupting

    def run(self):
        t = self.getAMPLWhere()
        if t == ampls.Where.MSG:
            print('>' + self.getMessage())
        elif t == ampls.Where.MIPNODE:
            self._nMIPnodes += 1
            print("New MIP node, count {}".format(self._nMIPnodes))
            if self._nMIPnodes >= self._stoprule['nodes'][self._current]:
                self._continueOpt = True
                return -1              # interrupt the solver
        elif t == ampls.Where.MIPSOL:
            print("MIP Solution = {}".format(self.getObj()))
        return 0

    def setCurrentGap(self):
        gaptolpct = 100 * self._stoprule['gaptol'][self._current]
        stopnodes = self._stoprule['nodes'][self._current]
        print("Increasing gap tolerance to "
              f"{gaptolpct:.2f}% after {stopnodes:.1f} nodes")
        ampls_model.setAMPLsParameter(ampls.SolverParams.DBL_MIPGap,
                                      self._stoprule['gaptol'][self._current])
        self._current += 1

# Solve using callbacks (solver here would be "cplex")
ampls_model = ampl.exportModel(solver, ["return_mipgap=5", "mipstartvalue=3",
                                        "mipstartalg=2", "mipdisplay=2"])
# Stopping rule: after 'nodes'[i] explored nodes, relax the gap to 'gaptol'[i]
stopdict = {'nodes': (100, 200, 300),
            'gaptol': (.001, .02, .3)}
callback = MyCallback(stopdict)
ampls_model.setCallback(callback)

# Invoke solver: re-optimize with the relaxed gap each time the callback interrupts
while callback._continueOpt:
    callback._continueOpt = False
    ampls_model.optimize()
    if callback._continueOpt:
        callback.setCurrentGap()

Overview of results without the callback:

MIP search method: dynamic search.
Parallel mode: deterministic, using up to 8 threads.

690 391 16850.0251 423 16859.1290 16819.9129 379322 0.23%

Total (root+branch&cut) = 3713.24 sec. (1033774.39 ticks)
CPLEX 20.1.0.0: optimal integer solution within mipgap or absmipgap; objective 16859.12903

Overview of results with the callback:

MIP search method: traditional branch-and-cut.
Parallel mode: none, using 1 thread.

New MIP node, count 99
48 48 16827.1844 467 16882.0053 16819.8279 277235 0.37% x133190 U 48 45 24

New MIP node, count 100

Flow cuts applied: 461
Mixed integer rounding cuts applied: 1461
Root node processing (before b&c):
Total (root+branch&cut) = 5092.52 sec. (4336608.06 ticks)

Increasing gap tolerance to 0.10% after 100.0 nodes

Thanks in advance


Hi,

It's great to see you using this.
Everything you say in points 1 and 2 regarding CPLEX is correct: amplpy_cplex was implemented using the "old style" callbacks, as opposed to the "new style" (generic) ones, and as such there is currently no way to preserve parallel mode. I am experimenting with a reimplementation using the new-style callbacks (see "Generic callbacks - IBM Documentation"), but, because of the multi-threading support involved, it will need a bit of work.

On 3, can you please provide me with some more details?

Regarding the suggestion: you can try using the underlying CPLEX native object pointers (via CPLEXModel.getCPXENV() and CPLEXModel.getCPXLP()) and then use the CPLEX C API wrappers (you can issue dir(amplpy_cplex) to get the full list, and you can find reference material here: Callable Library - IBM Documentation). That said, it is not an avenue I recommend unless you have used the CPLEX C API before; I hope to make progress on the reimplementation using the new callbacks soon, although that will mean forgoing some existing functionality.
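
For illustration only, a minimal sketch of that approach (not the route I recommend, as noted above) could look like the following. It assumes the amplpy_cplex wrappers mirror the Callable Library names, for example CPXgetnodecnt, which returns the node count of the last MIP optimization; check dir(amplpy_cplex) for the names actually exposed by your version.

# Hypothetical sketch: query CPLEX directly through the native pointers
import amplpy_cplex as cpx            # assumed module alias for the C API wrappers

env = ampls_model.getCPXENV()         # native CPLEX environment pointer
lp = ampls_model.getCPXLP()           # native CPLEX problem pointer

ampls_model.optimize()
# Assumes the wrapper keeps the C signature CPXgetnodecnt(env, lp)
nodes = cpx.CPXgetnodecnt(env, lp)
print("Nodes explored in the last optimization:", nodes)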

Kind regards!

Hi,
Thanks a lot for your reply. Regarding point 3, I would basically like to know whether there is a way to query the number of explored nodes directly in ampls (Python), because my code uses a simple node-counter routine that is apparently not consistent with the node counter in the logfile, as you can see below. In particular, there are mismatches at the root node and after the solution process is stopped to update the gap.

Callback:

def run(self):
    t = self.getAMPLWhere()
    if t == ampls.Where.MSG:
        print('>' + self.getMessage())
    elif t == ampls.Where.MIPNODE:
        self._nMIPnodes += 1
        print("New MIP node, count {}".format(self._nMIPnodes))
        if self._nMIPnodes >= self._stoprule['nodes'][self._current]:
            self._continueOpt = True
            return -1
    elif t == ampls.Where.MIPSOL:
        print("MIP Solution = {}".format(self.getObj()))
    return 0

Logfile:

>    Nodes    |    Current Node    |     Objective Bounds      |     Work

> Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

>

>H    0     0                    18481.246339 16839.9094  8.88%     - 1297s

>     0     0 16839.9094    0  461 18481.2463 16839.9094  8.88%     - 1297s

MIP Solution = 18481.246318540754
>H    0     0                    18481.246319 16839.9094  8.88%     - 1423s

New MIP node, count 1
>     0     0 16840.4141    0  467 18481.2463 16840.4141  8.88%     - 1436s

MIP Solution = 18463.630287709202
>H    0     0                    18463.630288 16840.4141  8.79%     - 1571s

New MIP node, count 2
>     0     0 16840.4777    0  467 18463.6303 16840.4777  8.79%     - 1601s

New MIP node, count 3
New MIP node, count 4
New MIP node, count 5
>     0     2 16840.4777    0  467 18463.6303 16840.4777  8.79%     - 2290s

New MIP node, count 6
>     1     4 16841.8523    1  553 18463.6303 16841.7839  8.78%  1441 2328s

New MIP node, count 7
New MIP node, count 8
>     3     8 16842.8076    2  551 18463.6303 16841.8742  8.78%  1292 2374s
.........
.........