Speaker: Michal Nauman
In this talk, we will discuss policy gradients estimated with many action samples. We will investigate decompositions of policy gradient variance, and measure the variance reduction stemming from increasing the number of state and action samples used in estimation. Finally, we will compare various strategies for simulating additional samples using neural networks.
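To make the variance-reduction effect concrete, below is a minimal, illustrative sketch (not the speaker's implementation): a score-function (REINFORCE-style) gradient estimate for a one-dimensional Gaussian policy on a toy bandit, averaged over K action samples. The reward function and policy parameterization are assumptions made purely for illustration.

```python
# Minimal sketch: variance of a score-function gradient estimator
# as the number of action samples K grows. Toy setup, assumed for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)

def reward(a):
    # Assumed toy reward, peaked at a = 2.0.
    return -(a - 2.0) ** 2

def grad_estimate(mu, sigma, k, rng):
    # Sample K actions from the Gaussian policy pi(a) = N(mu, sigma^2)
    # and average the per-sample score-function gradients w.r.t. mu:
    #   grad_mu log pi(a) = (a - mu) / sigma^2
    a = rng.normal(mu, sigma, size=k)
    score = (a - mu) / sigma ** 2
    return np.mean(reward(a) * score)

mu, sigma = 0.0, 1.0
for k in (1, 16, 256):
    # Empirical variance over repeated trials shrinks roughly as 1/K
    # when the K actions are sampled independently.
    estimates = [grad_estimate(mu, sigma, k, rng) for _ in range(2000)]
    print(f"K={k:4d}  mean={np.mean(estimates):+.3f}  var={np.var(estimates):.3f}")
```

Running the loop shows the estimator's mean staying roughly constant while its empirical variance drops as K increases, which is the effect the talk quantifies in the full policy-gradient setting.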