This paper tackles the challenge of estimating a continuous-time human motion field from an event stream. Existing Human Mesh Recovery (HMR) methods are predominantly frame-based and therefore suffer from aliasing and inaccuracies caused by limited temporal resolution and motion blur. Previous state-of-the-art event-based methods avoid these issues but rely on computationally expensive optimization over a fixed number of poses at a high frame rate, which becomes impractical as the temporal resolution increases. In contrast, we introduce the first method to replace traditional discrete-time predictions with a continuous human motion field: a recurrent feed-forward neural network predicts, directly from events, a time-implicit function in the latent space of plausible movements that supports parallel pose queries at arbitrary temporal resolutions.
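To make the idea of a time-implicit motion field concrete, the following is a minimal sketch, not the paper's actual architecture: a recurrent encoder summarizes event features into a latent motion code, and a decoder conditioned on that code plus a continuous timestamp returns a pose, so arbitrarily many timestamps can be evaluated in one parallel batch. All names (`EventMotionField`), the choice of a GRU, and the dimensions (e.g. a 72-dimensional SMPL-style pose vector) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EventMotionField(nn.Module):
    """Hypothetical sketch: recurrent encoder + time-implicit pose decoder."""

    def __init__(self, event_feat_dim=64, latent_dim=128, pose_dim=72):
        super().__init__()
        # Recurrent encoder: consumes per-window event features sequentially.
        self.encoder = nn.GRU(event_feat_dim, latent_dim, batch_first=True)
        # Implicit decoder: maps (latent code, continuous time t) to a pose.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, pose_dim),  # e.g. SMPL pose parameters (assumed)
        )

    def forward(self, event_feats, query_times):
        # event_feats: (B, T, event_feat_dim) features from event windows
        # query_times: (B, Q) continuous timestamps, e.g. normalized to [0, 1]
        _, h = self.encoder(event_feats)            # h: (1, B, latent_dim)
        z = h[-1]                                   # latent motion code (B, latent_dim)
        B, Q = query_times.shape
        z_rep = z.unsqueeze(1).expand(B, Q, -1)     # share latent across all queries
        inp = torch.cat([z_rep, query_times.unsqueeze(-1)], dim=-1)
        return self.decoder(inp)                    # (B, Q, pose_dim): one pose per query

model = EventMotionField()
feats = torch.randn(2, 10, 64)    # dummy event-window features
times = torch.rand(2, 100)        # 100 arbitrary query timestamps per sequence
poses = model(feats, times)       # all queries decoded in parallel
print(poses.shape)                # torch.Size([2, 100, 72])
```

Because the decoder takes time as a continuous input rather than indexing a fixed grid of frames, the temporal resolution of the output is chosen at query time, and all queries are independent given the latent code, which is what permits parallel evaluation.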