October 4 (Tuesday), 18:30, Room 615, IITP
Denis Belomestny (Duisburg-Essen University, HSE)
Approximative Dynamic Programming and Deep Learning
Modeling of optimal control is one of the most challenging areas in applied stochastics, since typical real-world control problems, for example dynamic optimization problems in finance, are too complex to be treated analytically. In this talk we will discuss a general approach applicable to discrete-time controlled Markov processes. The idea is to simulate a set of trajectories and to apply the Bellman optimality principle, combined with functional optimization and fast regression-based methods that approximate conditional expectations without nested simulation. We will discuss why and how deep learning can be used to accurately approximate the dynamic optimization problem and demonstrate the applicability of the approach on an example from the finance industry.
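The regression-based backward induction mentioned in the abstract can be illustrated by a minimal sketch in the spirit of the Longstaff–Schwartz method: simulate trajectories forward, then walk the Bellman recursion backward, replacing the conditional expectation of the continuation value with a least-squares regression on the simulated states. The example below prices an American put under geometric Brownian motion; the model, all parameters, and the polynomial basis are illustrative assumptions, not details from the talk.

```python
import numpy as np

def regression_mc_stopping(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                           T=1.0, steps=50, paths=20000, seed=0):
    """Sketch of regression-based backward induction (Longstaff-Schwartz style).

    Prices an American put by simulating GBM paths and approximating the
    continuation value E[V_{t+1} | S_t] with a polynomial regression,
    avoiding nested simulation. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    dt = T / steps
    # Forward pass: simulate a set of trajectories of the underlying.
    z = rng.standard_normal((paths, steps))
    logret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(logret, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])

    payoff = lambda s: np.maximum(K - s, 0.0)  # American put payoff
    disc = np.exp(-r * dt)
    value = payoff(S[:, -1])  # Bellman recursion starts at maturity

    # Backward pass: at each step, regress the discounted future value on
    # the current state to approximate the conditional expectation.
    for t in range(steps - 1, 0, -1):
        value *= disc
        itm = payoff(S[:, t]) > 0          # regress on in-the-money paths
        x = S[itm, t]
        A = np.vander(x, 4)                # cubic polynomial basis
        coef, *_ = np.linalg.lstsq(A, value[itm], rcond=None)
        cont = A @ coef                    # estimated continuation value
        exercise = payoff(x) > cont        # Bellman optimality: max(stop, continue)
        value[itm] = np.where(exercise, payoff(x), value[itm])

    return disc * value.mean()
```

In the approach discussed in the talk, the linear regression above is where a deep neural network can be substituted as a more expressive function class for approximating the conditional expectation.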
03.10.2016