I deleted u, then I got the same solution, so I guess you added u as the expected return. I want to construct a mean-variance model as a multi-period dynamic portfolio, rebalancing the weights every week.
#min_var
min_var = AMPL()
min_var.eval(
r"“”
set A ordered;
param risk_aversion;
param risk_free;
param Sigma{A, A};
param mu{A};
param R_min;
param R_max;
param Var_min;
param Var_max;
param lb default 0;
param ub default 1;
var w{A} >= lb <= ub;
maximize risk_adjusted_return:
risk_aversion*((sum {i in A} mu[i] * w[i]- R_min)/(R_max-R_min))+
(1-risk_aversion)*((sum {i in A, j in A} w[i] * Sigma[i, j] * w[j]- Var_min)/(Var_max - Var_min));
s.t. portfolio_weights:
sum {i in A} w[i] = 1;
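# note: w is declared only over A, so the terms w[i,t] and w[i,t+1] in the next constraint will not compile as written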
s.t. portfolio_return_carried_over :
sum {i in A} mu[i] * w[i,t+1]=(sum {i in A} mu[i] * w[i])+(sum {i in A} mu[i] * w[i,t] - sum {i in A} w[i])*risk_free;
"""
)
min_var.set["A"] = stocks_in_sample
min_var.param["mu"] = mu
min_var.param["risk_aversion"] = risk_aversion
min_var.param["Sigma"] = Sigma
min_var.param["R_min"]=R_min
min_var.param["R_max"]=R_max
min_var.param["Var_min"]=Var_min
min_var.param["Var_max"]=Var_max
min_var.param["risk_free"]=mu["IRX"]
min_var.option["solver"] = "gurobi"
min_var.solve()
How should I write
s.t. portfolio_return_carried_over :
sum {i in A} mu[i] * w[i]=(sum {i in A} mu[i] * w[i])+(sum {i in A} mu[i] * w[i] - sum {i in A} w[i])*risk_free;
since in the original equation I have t, but without t the equation comes out like this?
In iteration i, do you want to use the weights from the previous iteration i-1 somewhere in the model?
If that is the case, then you can use the following:
param prev_w{A} default 0;
and assign to it the values from the previous optimal solution with:
ampl.param["prev_w"] = previous_optimal_w
Then in your model you can just use prev_w[i] wherever you need it.
NameError                                 Traceback (most recent call last)
Cell In[106], line 144
    142 min_var.param["Var_max"]=Var_max
    143 min_var.param["risk_free"]=mu["IRX"]
--> 144 min_var.param["prev_w"] = previous_optimal_w
    145 # Calculate and set values for max_return_diff and max_variance_diff
    146 min_var.option["solver"] = "gurobi"
NameError: name 'previous_optimal_w' is not defined
previous_optimal_w was just a placeholder for you to replace with whatever holds the previous solution.
You can do as follows:
from pypfopt import expected_returns, risk_models
from datetime import datetime, timedelta
from amplpy import AMPL
import yfinance as yf
import numpy as np
import pandas as pd
tickers = [
"HD", "MCD", "NKE", "KO", "PG", "SYY", "WMT", # Consumer Staples
"CVX", "XOM", # Energy
"AXP", "JPM", # Financials
"JNJ", "MRK", "PFE", "WBA", # Health Care
"BA", "CAT", "MMM", # Industrials
]
end_date = datetime.now().date()
start_date = end_date - timedelta(days=365)
ohlc = yf.download(tickers, start=start_date, end=end_date)
prices = ohlc["Adj Close"].dropna(how="all")
print(len(prices))
tau = 240
prev_w = {} # start with nothing
for t in range(tau, len(prices)):
    in_sample_data = prices[t - tau : t]
    ampl = AMPL()
    ampl.eval(
        r"""
        set A ordered;
        param Sigma{A, A};
        param lb default 0;
        param ub default 1;
        param u{A};
        param iteration default 0;
        param prev_w{A} default 0;
        var w{A} >= lb <= ub;
        minimize portfolio_variance:
            sum {i in A, j in A} w[i] * Sigma[i, j] * w[j];
        s.t. portfolio_weights:
            sum {i in A} w[i] = 1;
        s.t. rebalance_limit{if iteration > 0}:
            sum {i in A} abs(w[i]-prev_w[i]) <= 0.5;
        """
    )
    ampl.set["A"] = tickers
    ampl.param["Sigma"] = risk_models.risk_matrix(in_sample_data, method="sample_cov")
    ampl.param["iteration"] = t - tau  # iteration number
    ampl.param["prev_w"] = prev_w  # load previous w
    ampl.option["solver"] = "gurobi"
    ampl.solve()
    print("optimal variance:", ampl.get_value("portfolio_variance"))
    ampl.get_data("w").to_pandas().plot.barh()
    prev_w = ampl.get_data("w").to_dict()  # store the optimal w in prev_w for the next iteration
You can run it at Google Colab
I am using the previous w just to impose a limit on how much the solution is allowed to change from the previous iteration as follows:
s.t. rebalance_limit{if iteration > 0}:
sum {i in A} abs(w[i]-prev_w[i]) <= 0.5;
In your case you may want to do something different such as including rebalancing costs that can be limited in a constraint like the one above or used in the objective as a penalizing factor.
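For example, the same idea expressed as a penalty in the objective, as a minimal sketch (the coefficient rebalance_cost is an assumption for illustration, not something in your model):

param rebalance_cost default 0.001;  # assumed proportional transaction cost
minimize variance_plus_costs:
    sum {i in A, j in A} w[i] * Sigma[i, j] * w[j]
    + (if iteration > 0 then rebalance_cost * sum {i in A} abs(w[i] - prev_w[i]) else 0);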
Your code works in Colab, but I still get

NameError                                 Traceback (most recent call last)
Cell In[110], line 142
    140 min_var.param["Sigma"] = risk_models.risk_matrix(stocks_in_sample, method="sample_cov")
    141 min_var.param["iteration"] = t-tau  # iteration number
--> 142 min_var.param["prev_w"] = previous_optimal_w
    143 min_var.param["R_min"]=R_min
    144 min_var.param["R_max"]=R_max
NameError: name 'previous_optimal_w' is not defined
I use
s.t. portfolio_return_carried_over:
sum {i in A} mu[i] * w[i]=(sum {i in A} mu[i] * prev_w[i] )+(sum {i in A} mu[i] * prev_w[i] - sum {i in A} prev_w[i] )*risk_free;
"""
)
min_var.set["A"] = stocks_in_sample
min_var.param["mu"] = mu
min_var.param["risk_aversion"] = risk_aversion
min_var.param["Sigma"] = risk_models.risk_matrix(stocks_in_sample, method="sample_cov")
min_var.param["iteration"] = t-tau # iteration number
min_var.param["prev_w"] = previous_optimal_w
min_var.param["R_min"]=R_min
min_var.param["R_max"]=R_max
min_var.param["Var_min"]=Var_min
min_var.param["Var_max"]=Var_max
min_var.param["risk_free"]=mu["IRX"]
min_var.option["solver"] = "gurobi"
min_var.solve()
min_var.get_data("w").to_pandas().plot.barh()
previous_optimal_w = min_var.get_data("w").to_dict()  # store the optimal w for the next iteration
In my code, instead of previous_optimal_w, I use a Python dictionary named prev_w to hold the previous optimal w.
s.t. portfolio_weights:
sum {i in A} w[i] = 1;
s.t. portfolio_return_carried_over:
sum {i in A} mu[i] * w[i]=(sum {i in A} mu[i] * prev_w[i] )+(sum {i in A} mu[i] * prev_w[i] - sum {i in A} prev_w[i] )*risk_free;
"""
)
min_var.set["A"] = stocks_in_sample
min_var.param["mu"] = mu
min_var.param["risk_aversion"] = risk_aversion
min_var.param["Sigma"] = risk_models.risk_matrix(stocks_in_sample, method="sample_cov")
min_var.param["iteration"] = t-tau # iteration number
min_var.param["prev_w"] = prev_w
min_var.param["R_min"]=R_min
min_var.param["R_max"]=R_max
min_var.param["Var_min"]=Var_min
min_var.param["Var_max"]=Var_max
min_var.param["risk_free"]=mu["IRX"]
min_var.option["solver"] = "gurobi"
min_var.solve()
min_var.get_data("w").to_pandas().plot.barh()
prev_w = min_var.get_data("w").to_dict()
Warning:
presolve, constraint portfolio_weights:
all variables eliminated, but lower bound = 1 > 0
That means that the problem is infeasible. In your model you have the constraint:
s.t. portfolio_return_carried_over:
sum {i in A} mu[i] * w[i]=(sum {i in A} mu[i] * prev_w[i] )+(sum {i in A} mu[i] * prev_w[i] - sum {i in A} prev_w[i] )*risk_free;
but you are not excluding it from the first iteration like in the example in Colab. If prev_w is 0 for everything, this constraint is equivalent to sum {i in A} mu[i] * w[i] = 0, while portfolio_weights requires the weights to sum to 1.
You can exclude that constraint from the first iteration with:
param iteration default 0;
s.t. portfolio_return_carried_over{if iteration > 1}:
sum {i in A} mu[i] * w[i]=(sum {i in A} mu[i] * prev_w[i] )+(sum {i in A} mu[i] * prev_w[i] - sum {i in A} prev_w[i] )*risk_free;
and in your code you need to set ampl.param["iteration"]
to the iteration number.
You can use min_var.get_data("w").to_pandas()
to get a Pandas dataframe.
Here is what I want:
w_flattened = min_var.get_data("w").to_pandas().values.flatten()
dot_product_result = np.dot(mu_out, w_flattened)
.flatten() converts all the values in the DataFrame into a 1D NumPy array
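That works as long as the entries of mu_out and the rows returned by get_data("w") are in the same order. A slightly safer sketch (assuming mu_out is a pandas Series indexed by ticker, which may not match your setup) aligns the two by ticker before taking the dot product:

w_series = min_var.get_data("w").to_pandas().iloc[:, 0]  # weights as a Series indexed by ticker
dot_product_result = float(mu_out.reindex(w_series.index).dot(w_series))  # align by ticker, then dot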
maximize risk_adjusted_return:
risk_aversion*((sum {i in A} mu[i] * w[i]- R_min)/(R_max-R_min))+
(1-risk_aversion)*((sum {i in A, j in A} w[i] * Sigma[i, j] * w[j]- Var_min)/(Var_max - Var_min));
s.t. portfolio_weights:
sum {i in A} w[i] = 1;
s.t. portfolio_return_carried_over {if iteration > 1}:
sum {i in A} mu[i] * w[i]=(sum {i in A} mu[i] * prev_w[i] )+(sum {i in A} mu[i] * prev_w[i] - sum {i in A} prev_w[i] )*risk_free;
"""
)
min_var.set["A"] = stocks_in_sample
min_var.param["mu"] = mu
min_var.param["risk_aversion"] = risk_aversion
min_var.param["Sigma"] = risk_models.risk_matrix(stocks_in_sample, method="sample_cov")
min_var.param["iteration"] = t-tau # iteration number
min_var.param["prev_w"] = prev_w
min_var.param["R_min"]=R_min
min_var.param["R_max"]=R_max
min_var.param["Var_min"]=Var_min
min_var.param["Var_max"]=Var_max
min_var.param["risk_free"]=mu["IRX"]
min_var.option["solver"] = "gurobi"
min_var.solve()
then I still get
Warning:
presolve, constraint portfolio_weights:
all variables eliminated, but lower bound = 1 > 0
The constraint
sum {i in A} mu[i] * w[i]=(sum {i in A} mu[i] * prev_w[i] )+(sum {i in A} mu[i] * prev_w[i] - sum {i in A} prev_w[i] )*risk_free
is equivalent to
sum {i in A} mu[i] * w[i] =
(sum {i in A} mu[i] * prev_w[i])
+ (sum {i in A} mu[i] * prev_w[i])*risk_free
- (sum {i in A} prev_w[i])*risk_free
Which is equivalent to
sum {i in A} mu[i] * w[i] =
(sum {i in A} mu[i] * prev_w[i]) * (1+risk_free)
- (sum {i in A} prev_w[i])*risk_free
Is this what you want to model? Also note that sum {i in A} w[i] = 1 enforces full allocation to the assets in A, so there is nothing left for risk_free. You may want to change it, for instance to sum {i in A} w[i] <= 1, so that the model does not have to allocate everything to those assets, or, if risk-free assets are included in your asset set, split them out explicitly.
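If you do want an explicit risk-free position, one possible sketch (the variable w_rf and the modified budget constraint are assumptions, not part of your current model):

var w_rf >= 0 <= 1;  # explicit weight on the risk-free asset
s.t. portfolio_weights: sum {i in A} w[i] + w_rf = 1;  # budget including the risk-free position
# the expected portfolio return then becomes sum {i in A} mu[i]*w[i] + risk_free*w_rf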
s.t. portfolio_return_carried_over {if iteration > 1}:
sum {i in A} mu[i] * w[i]=(sum {i in A} e[i] * prev_w[i] )+(sum {i in A} mu[i] * prev_w[i] - sum {i in A} prev_w[i] )*risk_free;
mu is the mean return and e is the rate of return, my mistake, apologies. Then I got the following error:
e has the form:
I reconsidered this question. My aim is rebalancing each week, and my equation has a time parameter. In your answer for variance over time periods you also divide the data and write t in Sigma; I want t in Sigma, mu, and the weights. I also give the data via a for loop.
In MAD, time goes over daily returns. I just want my time interval to go over a fixed number of iterations, such as 26, not daily. I will try to insert t into all the equations. If I solve a single period each week, it is not a multi-period model.
Your code:
start_date = end_date - timedelta(weeks=26)
ohlc = yf.download(tickers, start=start_date, end=end_date)
prices = ohlc["Adj Close"].dropna(how="all")
n_slices = 26
display(prices)
slices = np.array_split(prices, n_slices)
dfs = []
for i, slice_df in enumerate(slices):
    df = risk_models.risk_matrix(slice_df, method="sample_cov").stack().to_frame()
    df.reset_index(inplace=True)  # Turn the index into regular data columns
    df.columns = ["Stock1", "Stock2", "S"]  # Adjust column names
    df["Time"] = i  # Add new column with the index of the slice
    dfs.append(df)
My code:
variance optimization with time.zip (23.8 KB)
Here I want just variance minimization with a time parameter. AMPL gives many errors that I could not fix. I just changed
for i, slice_df in enumerate(stocks_in_sample.values):  # stocks_in_sample.values is an array form like your slices
and ampl.set["Time"] = range(tau), with tau = 32.
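For reference, a minimal sketch of what a time-indexed variance minimization could look like, together with one way to load the per-slice covariances built above. This is an assumed formulation for illustration, not the model from the attached zip; the set Time, the turnover limit, and the dict-based loading are placeholders to adapt.

set A ordered;
set Time ordered;
param Sigma{Time, A, A};
var w{A, Time} >= 0 <= 1;
minimize total_variance:
    sum {t in Time, i in A, j in A} w[i,t] * Sigma[t,i,j] * w[j,t];
s.t. budget{t in Time}:
    sum {i in A} w[i,t] = 1;
s.t. turnover{t in Time: ord(t) > 1}:  # optional limit on week-to-week rebalancing
    sum {i in A} abs(w[i,t] - w[i,prev(t)]) <= 0.5;

On the Python side, reusing the dfs list built from the slices (this assumes the column names Stock1, Stock2, S, Time from the code above):

S_long = pd.concat(dfs)  # one long frame with columns Stock1, Stock2, S, Time
ampl.set["A"] = tickers
ampl.set["Time"] = list(range(n_slices))
ampl.param["Sigma"] = {
    (int(row.Time), row.Stock1, row.Stock2): row.S  # key order matches the {Time, A, A} indexing
    for row in S_long.itertuples(index=False)
}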