
G-2011-83

Approximation of Dynamic Programs



Under some standard market assumptions, evaluating a derivative amounts to computing the discounted expected value of its future cash flows. In that context, the evaluation of a financial option can be written as a stochastic Dynamic Program, where the state variable corresponds to the observable characteristics of the underlying assets. The discrete-stage Dynamic Programming (DP) algorithm over a finite horizon requires evaluating the value function at each decision stage for all possible states, by comparing the expected values of the available actions over all possible state transitions. Approximation procedures are used to reduce the computational burden of the DP algorithm.
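
As an illustration of this recursion (a minimal sketch, not taken from the paper), the code below prices a Bermudan put by backward induction on a Cox-Ross-Rubinstein binomial lattice, comparing the immediate exercise value with the discounted expected continuation value at each decision stage. All parameter values and the exercise schedule are illustrative assumptions.

import numpy as np

def bermudan_put_crr(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, exercise_every=10):
    # Cox-Ross-Rubinstein lattice: a discrete Markov chain for the asset price
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1.0 / u                            # down factor
    disc = np.exp(-r * dt)                 # one-period discount factor
    p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral probability of an up move

    # Value at maturity: the put payoff at every terminal node
    j = np.arange(n_steps + 1)
    V = np.maximum(K - S0 * u**j * d**(n_steps - j), 0.0)

    # Backward induction over decision stages
    for t in range(n_steps - 1, -1, -1):
        j = np.arange(t + 1)
        S = S0 * u**j * d**(t - j)
        continuation = disc * (p * V[1:t + 2] + (1.0 - p) * V[:t + 1])
        if t > 0 and t % exercise_every == 0:
            # Bermudan feature: early exercise allowed only on a subset of dates
            V = np.maximum(K - S, continuation)
        else:
            V = continuation
    return float(V[0])

print(bermudan_put_crr())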

This paper presents some basic approximation approaches used in DP algorithms for the evaluation of financial options. These are presented in the simple setting of a Bermudan put option, and references to specific applications are provided. Topics covered include Markov chain approximation, finite element and spectral interpolation of the value function, and a hybrid interpolation algorithm that achieves spectral convergence while avoiding localization errors and numerical instability.
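
As a generic illustration of spectral interpolation of a value function (a sketch under stated assumptions, not the paper's hybrid algorithm), the snippet below builds a Chebyshev interpolant of a smooth stand-in for a continuation value on a bounded price interval; the node count, interval, and test function are assumptions made for the example.

import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_interpolant(f, a, b, degree=16):
    # Chebyshev points of the first kind on [-1, 1], mapped to the state interval [a, b]
    k = np.arange(degree + 1)
    x = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * x
    # With degree + 1 nodes, the least-squares Chebyshev fit is an exact interpolant
    coeffs = C.chebfit(x, f(nodes), degree)
    # Return a callable that maps states back to [-1, 1] and evaluates the series
    return lambda s: C.chebval(2.0 * (np.asarray(s) - a) / (b - a) - 1.0, coeffs)

# Smooth stand-in for a continuation value as a function of the asset price
cont = chebyshev_interpolant(lambda s: 20.0 * np.exp(-s / 50.0), 50.0, 150.0)
print(cont([80.0, 100.0, 120.0]))

For smooth value functions, this kind of polynomial interpolation on Chebyshev nodes converges much faster than piecewise (finite element) interpolation on a uniform grid, which is the property referred to as spectral convergence.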

18 pages

Publication

Duan J.C., Härdle W. & Gentle J.E. (eds.), Handbook of Computational Finance, Springer, 633–649, 2012