Hi, I build optimization models and solve them with these solvers, but there is no randomness in my optimization problem. I set up two different optimization problems that are used separately. I noticed that every time I rerun Python (close my computer, open it again, and run), they produce different objectives, but if I rerun within the same Python session they give the same solution, i.e. no randomness within that session. I can't understand it.

Hi,

Gurobi and HiGHS are MIP solvers, so they can solve mixed-integer linear problems, which are convex, so you should expect them to find the optimal solution (and the optimal value is unique).

You can also run the `expand;` command in AMPL to see the model. Is it the same model in both problems?

If you are changing the data and still getting the same solution, it might be because that data does not affect the solution. We could probably help more if you gave some more detail about how you load data into your optimization problem.

I used Gurobi for quadratic optimization and HiGHS for linear optimization: Gurobi for mean-variance portfolio optimization and HiGHS for mean-CVaR optimization. I used the same data for both. The only issue is that when I close my computer and open Jupyter again, the solvers give me different objective values, so the results also differ, like:

```
HiGHS 1.6.0: optimal solution; objective -0.07161774363
162 simplex iterations
0 barrier iterations
```

My return: 0.0015850270437580668

```
HiGHS 1.6.0: optimal solution; objective -0.07161778293
162 simplex iterations
0 barrier iterations
```

My return: 0.00158591030001598

For HiGHS, can I control the randomness? My data is in matrix format: a pandas DataFrame of stock values.

For example, I read that Gurobi is deterministic, but I still have a similar issue with it.

I guess it is related to the tolerance value or some internal randomness. I read the HiGHS documentation, but there is no random-seed type of option.
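For what it's worth, a difference in the 7th-8th decimal, like the two objectives above, does not require solver randomness: floating-point sums depend on accumulation order, so if the model's coefficients are assembled in a different order between sessions, the instance itself changes slightly. A minimal illustration of order-dependent summation:

```python
# The same numbers summed in two different orders: rounding at
# each step makes the two results differ.
vals = [0.1, 0.2, 0.3, 1e16, -1e16]

forward = sum(vals)             # small terms first; they are absorbed by 1e16
backward = sum(reversed(vals))  # big terms cancel first; small terms survive

print(forward, backward)        # the two sums are not equal
```

This is an extreme example, but the same mechanism produces last-digit differences in solver objectives when coefficients are computed in a different order.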

Hi,

There could be many reasons for this difference.

I would bet that when you rerun the solver, different values are being sent to or retrieved from the solver. You can write the .nl file that is being sent to the solver and check whether it is the same in the two situations.

`write gmodel;`

generates a file called “model.nl”, so you would need to write two different .nl files. If you are using amplpy, put something like `model.eval("write gf1;")` and `model.eval("write gf2;")` before your two solve statements to generate the two files.
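Once the two files exist, comparing them is a one-liner per file (a sketch; it assumes the files are named `f1.nl` and `f2.nl`, following the same naming rule as `write gmodel;` producing `model.nl`):

```python
import hashlib
import os

def file_digest(path: str) -> str:
    # SHA-256 of the raw file bytes; identical digests mean the
    # exact same problem instance was sent to the solver.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

if os.path.exists("f1.nl") and os.path.exists("f2.nl"):
    if file_digest("f1.nl") == file_digest("f2.nl"):
        print("identical instances sent to the solver")
    else:
        print("the instances differ: data (or model) changed between solves")
```

If the digests match but the objectives still differ, the difference was introduced on the solver side rather than in the model or data.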

Are you reusing the same ampl object for the two problems?

Same AMPL object. If I rerun within the same Jupyter session, it gives me the same solution every time; the only difference appears when I close my computer for a week and then run the notebook again, and the optimal value shows this difference.

Hi,

Were you able to reproduce the issue? Time should not affect the Jupyter notebook, the solvers, or AMPL itself, so there should be a way to reproduce the issue without waiting a week.

I could take a look at your script if you want (support@ampl.com, marcos@ampl.com), but it seems that your data could be changing, or there is some kind of small precision issue when rerunning the problem with a different solver.