Matlab – Optimization and Integration
Paul Schrimpf
January 14, 2009
This lecture focuses on two ubiquitous numerical techniques:
1. Optimization and equation solving
   - Agents maximize utility / profits
   - Estimation
2. Integration
   - Expectations of the future or over unobserved variables
Optimization
Want to solve a minimization problem:

    min_x f(x)

Two basic approaches:
1. Heuristic methods search over x in some systematic way
2. Model-based approaches use an easily minimized approximation to f() to guide their search

First, x ∈ R for intuition; then, x ∈ R^n.
Section Search
Form bracket x1 < x2 < x3 such that f(x2) < f(x1) and f(x2) < f(x3)
Try new x ∈ (x1, x3), update bracket
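The bracket update can be sketched as a golden-section search (an assumption: the slide does not fix the section ratio, and the function name sectionSearch is illustrative, not from the lecture code):

```matlab
% Golden-section search sketch: keep a bracket [x1, x3] that is known to
% contain a minimum, and shrink it by comparing two interior points.
function xmin = sectionSearch(f, x1, x3, tol)
  phi = (sqrt(5) - 1)/2;            % golden ratio factor, ~0.618
  x2 = x3 - phi*(x3 - x1);          % interior trial points
  x4 = x1 + phi*(x3 - x1);
  while (x3 - x1) > tol
    if f(x2) < f(x4)                % minimum lies in [x1, x4]
      x3 = x4;
    else                            % minimum lies in [x2, x3]
      x1 = x2;
    end
    x2 = x3 - phi*(x3 - x1);
    x4 = x1 + phi*(x3 - x1);
  end
  xmin = (x1 + x3)/2;
end
```

Each iteration shrinks the bracket by a constant factor, which is why this method converges linearly.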
Quadratic Interpolation
More general interpolation methods possible, e.g. Matlab uses both quadratic and cubic interpolation for line search
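One quadratic-interpolation step can be sketched as follows: fit a parabola through the three bracketing points and move to its vertex (the helper name parabolicStep is illustrative, not from the lecture code):

```matlab
% One step of successive parabolic interpolation: fit a parabola through
% (x1,f1), (x2,f2), (x3,f3) and return the location of its vertex.
function xnew = parabolicStep(f, x1, x2, x3)
  f1 = f(x1); f2 = f(x2); f3 = f(x3);
  num = (x2 - x1)^2 * (f2 - f3) - (x2 - x3)^2 * (f2 - f1);
  den = (x2 - x1)   * (f2 - f3) - (x2 - x3)   * (f2 - f1);
  xnew = x2 - 0.5 * num / den;      % vertex of the interpolating parabola
end
```

When f is itself quadratic, a single step lands exactly on the minimizer.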
Newton-Raphson and Quasi-Newton
Newton: use f'(x) and f''(x) to construct a parabola:

    x_{n+1} = x_n - f'(x_n) / f''(x_n)

Quasi-Newton: approximate f''(x_n) with ( f'(x_n) - f'(x_{n-1}) ) / ( x_n - x_{n-1} )
BHHH: approximate f''(x) with Σ_{i=1}^N f_i'(x) f_i'(x)'
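A minimal sketch of both updates on a quadratic test function (fp and fpp, standing in for f' and f'', are illustrative names, not from the lecture code):

```matlab
% Newton and secant-style quasi-Newton iterations for min f(x) = (x-3)^2.
fp  = @(x) 2*(x - 3);               % f'(x)
fpp = @(x) 2;                       % f''(x)

% Newton: x_{n+1} = x_n - f'(x_n)/f''(x_n)
x = 10;
for n = 1:20
  x = x - fp(x)/fpp(x);
end

% Quasi-Newton: replace f''(x_n) with a finite difference of f'
xold = 10; xq = 9;
for n = 1:50
  h = (fp(xq) - fp(xold)) / (xq - xold);   % secant approximation to f''
  xnew = xq - fp(xq)/h;
  xold = xq; xq = xnew;
  if abs(xq - xold) < 1e-12, break; end    % stop once the step is tiny
end
```

Because the test function is quadratic, Newton converges in one step; the secant update needs the two starting points but also reaches the minimizer quickly.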
Rates of Convergence
Newton’s method converges quadratically, i.e. in a neighborhood of the solution,

    lim_{n→∞} ||x_{n+1} - x*|| / ||x_n - x*||^2 = C

Parabolic interpolation and quasi-Newton methods also achieve better than linear rates of convergence, but (usually) less than quadratic, i.e.

    lim_{n→∞} ||x_{n+1} - x*|| / ||x_n - x*||^r = C

for some r ∈ (1, 2]
Can achieve faster than quadratic convergence by using more information
Usually, happy with any rate better than 1
Trust Regions
Problem: For a function that is not globally convex, quadratic interpolation and Newton methods might prescribe an upward step and can fail to converge
Solution: Combine them with a sectional search, or more generally, a trust region
- Region R where we “trust” our quadratic approximation, f̃(x) ≈ f(x)
- x_{n+1} = arg min_{x ∈ R} f̃_n(x)
- Shrink or expand R based on how much better f(x_{n+1}) is than f(x_n)
Brent’s method combines quadratic interpolation with sectional search
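A minimal 1-D sketch of the trust-region logic above, assuming a simple double-or-halve radius update (the constants and test objective are illustrative, not Brent's actual rules):

```matlab
% Trust-region sketch on a non-convex function: take the Newton step from
% the quadratic model, but never move farther than the trust radius R.
f = @(x) x.^4 - 2*x.^2 + x;        % not globally convex
g = @(x) 4*x.^3 - 4*x + 1;         % f'
h = @(x) 12*x.^2 - 4;              % f''
x = 2; R = 1;
for n = 1:100
  step = -g(x)/max(h(x), 1e-8);    % Newton step (guarded if f'' <= 0)
  step = max(min(step, R), -R);    % stay inside the trust region
  if f(x + step) < f(x)            % actual improvement: accept and expand R
    x = x + step; R = 2*R;
  else                             % model misled us: shrink R and retry
    R = R/2;
  end
end
```

The accept/reject test is what keeps the method from taking the upward steps that plain Newton can prescribe where f'' < 0.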
Matlab Implementation
fminbnd() uses Brent’s method
No uni-dimensional implementation of any type of Newton method, but could use multi-dimensional versions
We used fminbnd() in lecture 1:
optimset('fminbnd') % this returns the default options for fminbnd

% change some options
opt = optimset('Display','iter', ... % 'off','final','notify'
               'MaxFunEvals',1000,'MaxIter',1000, ...
               'TolX',1e-8);
% maximize the expected value
[c ev] = fminbnd(@(c) obj(c,x,parm), cLo, cHi, opt);
For all optimization functions, can also set options through a graphical interface by using optimtool()
Multi-Dimensional Optimization with Quadratic Approximation
Basic idea is the same:
- Construct a quadratic approximation to the objective
- Minimize the approximation over some region
Complicated by difficulty of constructing the region:
- Cannot “bracket” a minimum in R^n; would need an n-1 dimensional manifold
Two approaches:
1. Use an n dimensional trust region
2. Break the problem into a sequence of smaller-dimensional minimizations
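The multi-dimensional quadratic-approximation approach is what fminunc implements (it requires the Optimization Toolbox); a sketch on the classic Rosenbrock function, which is an illustrative choice rather than an example from the lecture:

```matlab
% Minimize the Rosenbrock "banana" function, a standard non-convex test
% problem, using fminunc's quadratic-approximation-based algorithm.
banana = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
opt = optimset('Display','off','TolFun',1e-10,'TolX',1e-10);
[xmin, fval] = fminunc(banana, [-1.2; 1], opt);   % minimum is at [1; 1]
```

Without a user-supplied gradient, fminunc falls back on finite differences and a quasi-Newton Hessian approximation, exactly in the spirit of the updates above.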
Directional...