## Convergence of Langevin-Simulated Annealing algorithms with multiplicative noise

Pierre Bras (Sorbonne Université)

The classical Langevin equation is $dY_t = -\nabla V(Y_t)\,dt + \sigma\, dW_t$, where $V$ is the function to be minimized, $\sigma > 0$ is a constant, and $W_t$ is a Brownian motion. It arises in optimization problems from machine learning and mathematical finance. It corresponds to a gradient descent perturbed by white noise; the added noise helps the process escape local minima, which trap the classical gradient descent.
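As a rough illustration, the equation above can be simulated with a standard Euler-Maruyama discretization; the potential, step size, and noise level below are illustrative choices, not taken from the talk.

```python
import numpy as np

def langevin(grad_V, y0, sigma=0.1, step=0.01, n_steps=5000, rng=None):
    """Euler-Maruyama scheme for dY_t = -grad V(Y_t) dt + sigma dW_t.

    Each step is a gradient-descent step plus a Gaussian increment
    of variance sigma^2 * step (the discretized Brownian motion).
    """
    rng = rng or np.random.default_rng(0)
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        y = y - step * grad_V(y) + sigma * np.sqrt(step) * rng.normal(size=y.shape)
    return y

# Toy example: V(y) = |y|^2 / 2, so grad V(y) = y and argmin V = {0}.
y_final = langevin(lambda y: y, y0=np.array([2.0]))
```

With constant $\sigma$ the iterates fluctuate around the minimizer rather than converge to it, which is what motivates the decreasing noise schedule discussed next.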

In this talk we focus on the case where $\sigma$ is not constant but a matrix depending on the position, $\sigma(Y_t)$. This setting is widely used by practitioners, yet it has little theoretical backing. As in simulated annealing algorithms, we slowly decrease the white noise by multiplying it by a coefficient $a(t)$ that converges to $0$. We then show the convergence of $Y_t$ to $\operatorname{argmin}(V)$ in the $L^1$-Wasserstein distance.
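The annealed scheme with multiplicative noise can be sketched as follows; the particular schedule $a(t) = 1/\log(e + t)$ and the matrix field $\sigma(\cdot)$ below are illustrative stand-ins, the talk only requires that $a(t)$ decrease slowly to $0$.

```python
import numpy as np

def annealed_langevin(grad_V, sigma, y0, step=0.01, n_steps=20000, rng=None):
    """Simulate dY_t = -grad V(Y_t) dt + a(t) sigma(Y_t) dW_t with a(t) -> 0.

    sigma(y) is a position-dependent matrix (multiplicative noise);
    a(t) is a slowly decreasing annealing coefficient.
    """
    rng = rng or np.random.default_rng(1)
    y = np.asarray(y0, dtype=float)
    t = 0.0
    for _ in range(n_steps):
        a_t = 1.0 / np.log(np.e + t)          # illustrative schedule, a(t) -> 0
        dW = np.sqrt(step) * rng.normal(size=y.shape)
        y = y - step * grad_V(y) + a_t * (sigma(y) @ dW)
        t += step
    return y

# Quadratic potential on R^2 with argmin at the origin, and a
# hypothetical position-dependent diagonal noise matrix.
grad_V = lambda y: y
sigma = lambda y: np.diag(1.0 + 0.5 * np.tanh(y))
y_final = annealed_langevin(grad_V, sigma, y0=np.array([3.0, -2.0]))
```

Because $a(t) \to 0$, the stationary fluctuations shrink over time and the trajectory concentrates near $\operatorname{argmin}(V)$, in line with the Wasserstein convergence result stated above.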

Speaker: Pierre Bras (Sorbonne Université)
Osaka University, Mathematics and Data Science Seminar, Finance and Insurance Seminar Series, No. 125
Thursday, December 9, 2021, 17:00-18:00
Online seminar via Zoom
Admission is free, but please register in advance via the following link: https://docs.google.com/forms/d/e/1FAIpQLSdC0YTB6vgUDxQ5gYxSiajizRHhMO3rmBC8LzArWOG9V4Cdlw/viewform?usp=sf_link
A participation URL will be sent to the registered email address.
For inquiries, please see the "Contact" page of this website.