Hello esteemed AMPL team, I am currently experiencing problems with long runs.
For example, one run takes about 1 minute, so 50 separate runs should take about an hour. But to keep the output organized, I run all 50 together, and that takes 5 hours or more, with frequent lags in the middle where I have to click the INTERRUPT option at the top in order to continue. If I accidentally click it more than once, the run is interrupted and I have to start over from the beginning by entering “include bl-4.run;”.
What is the reason for this? Is my code written in the wrong order, making it run slower? How can I modify my code so that it runs as fast as possible after I type “include run.run;”, and then writes all the cases to the corresponding tables?
I want to study the results for many different parameter values in the model, and I need the results of every run, so this speed problem is very important to me. I don’t know how to solve it, which is very frustrating. Attached is the main code of my run file.

Note: A global optimal solution is required, as the model is non-convex. Running a single solve, I found the Octeract, LGO, and Couenne solvers slower. I don’t have access to BARON; between BARON and LINDO Global, which solver is the most efficient? Would changing the solver to BARON make a qualitative leap in run efficiency? run.run (2.3 KB)

One additional note: After clicking the INTERRUPT button when I thought the run was lagging, I realized that I was getting a locally optimal solution instead of a globally optimal one. I need the global optimum, so I can’t click the interrupt button; but if I don’t click it, the console screen doesn’t update. Perhaps it is still solving for the global optimum, but it takes far too long, many times longer than running the program on its own.

1. Many solves are successful and take around one minute. You would like to find a way to make these solves faster.

2. Some solves are unsuccessful, and run for a very long time without results. When a solve takes very long, you want to stop it and go on to the next solve.

To deal with 2, you can add this command after your option solver command:

option lindoglobal_options "maxtime 300";

Then if a solve does not find a solution after 300 seconds, it will be ended and your run will go on to the next solve. (Of course, instead of 300, you can choose any time limit that works for you.) After all of the runs are finished, you can tell by looking at the result messages which runs found optimal solutions and which runs “timed out”.
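The result messages can also be checked programmatically in the run file, so that timed-out runs are flagged as they happen. A minimal sketch, assuming a hypothetical parameter p, a table of trial values pval, and an objective Total_Cost (your names will differ):

option solver lindoglobal;
option lindoglobal_options "maxtime 300";
for {k in 1..50} {
   let p := pval[k];    # hypothetical parameter sweep
   solve;
   # built-in solve_result is "solved" for an optimum, "limit" when maxtime is hit
   if solve_result = "solved" then {
      display k, Total_Cost;
   } else {
      printf "run %d ended with result %s\n", k, solve_result;
   }
}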

To investigate how runs could be sped up, whether they take one minute or much longer, you should do a few of the solves with this statement instead of the one above:
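A hedged sketch, assuming the LINDO Global driver accepts an output-level keyword such as outlev (keyword names vary by driver version; running lindoglobal -= at a command prompt lists the options your copy accepts):

option lindoglobal_options "maxtime 300 outlev=1";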

That will produce a very detailed listing of the solver’s progress, which should help explain why each solve takes as long as it does. You can post a few of these listings to this forum to get advice.

Here are some other observations:

You should review your model formulation to see whether you can increase some lower bounds or decrease some upper bounds. Tightening the bounds like this sometimes gives a significant reduction in solver times.
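For example, with a hypothetical variable declaration, tightening the declared bounds shrinks the box that the global solver must search:

# before: var Rate {PROCESSES} >= 0, <= 1e6;
# after, if the model data justifies the tighter range:
var Rate {PROCESSES} >= 0.01, <= 500;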

There could also be some ways to speed up the execution of run.run, though it looks to me like it is written efficiently already. If most of the time is being spent in solves, then you will not be able to gain much by speeding the execution of the AMPL commands in run.run.

If you stop a global solver before it finishes, in general it will return a solution that is not optimal — although it is possible that it returns a solution that is globally optimal but that has not yet been proved to be optimal. Due to the way that global solvers work, they do not return locally optimal solutions.

The only reliable way to determine whether BARON is a better choice than Lindo Global is to run some tests, like you did with Octeract, LGO, and Couenne.