Deciphering interpretable latent regularity or structure from high-dimensional time series data is a challenging problem in the artificial intelligence community. Many studies and theories in computational neuroscience posit that high-dimensional neural recordings are noisy observations of some underlying, low-dimensional, time-varying signal of interest. Thus, robust and powerful statistical methods are needed to identify such latent dynamics, which can then provide insight into the latent patterns governing neural activity both spatially and temporally. These methods can further advance machine learning research in three ways: 1) avoiding the curse of dimensionality in multivariate time series; 2) removing the multicollinearity that undermines the independence of variables in machine learning models; and 3) saving time and storage space by using low-dimensional data for prediction. Real-world time-series domains such as neuroscience, finance, and healthcare can use these methods to obtain low-dimensional representations of their datasets while preserving as much information as possible.
Extracting Low-Dimensional Data from Noisy High-Dimensional Neural Recordings
A large body of literature examines how to extract concise, structured, and insightful dynamic portraits from noisy high-dimensional neural recordings. In particular, many dimensionality reduction methods have been widely adopted to extract low-dimensional, smooth, time-evolving latent trajectories. However, simple state transition structures, linear embedding assumptions, or inflexible inference networks impede the accurate recovery of these dynamic portraits.
At an oral presentation today at the 2019 Conference on Uncertainty in Artificial Intelligence (UAI), we are presenting a novel latent dynamic model that captures nonlinear, non-Markovian dynamics with both long- and short-term temporal dependencies via recurrent neural networks, and tackles complex nonlinear embeddings via non-parametric Gaussian processes. Our method, called Gaussian Process Recurrent Neural Networks (GP-RNN), is illustrated in Figure 1. The low-dimensional latent state (blue nodes) evolves with an RNN structure (yellow nodes) as its prior, and propagates to the observation space (green nodes) via a Gaussian process.
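To make the generative structure concrete, here is a minimal NumPy sketch of sampling from a model of this shape: an RNN-style prior evolves a low-dimensional latent trajectory, and a Gaussian process maps each latent state to high-dimensional observations. This is an illustrative assumption, not the paper's implementation; the tanh update, RBF kernel, and noise scales are placeholders chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel between two sets of latent states."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def sample_gprnn(T=50, latent_dim=2, obs_dim=10):
    """Sample one trajectory: RNN prior over latents, GP map to observations."""
    W_h = rng.normal(scale=0.5, size=(latent_dim, latent_dim))  # recurrent weights
    z = np.zeros((T, latent_dim))
    h = np.zeros(latent_dim)
    for t in range(T):
        h = np.tanh(W_h @ h + 0.1 * rng.normal(size=latent_dim))  # RNN-style transition
        z[t] = h                                                  # latent state at time t
    # Non-parametric embedding: each observed dimension is a GP draw
    # evaluated at the latent trajectory z_1..z_T.
    K = rbf_kernel(z, z) + 1e-6 * np.eye(T)       # jitter for numerical stability
    L = np.linalg.cholesky(K)
    f = L @ rng.normal(size=(T, obs_dim))         # GP function values
    y = f + 0.05 * rng.normal(size=(T, obs_dim))  # Gaussian observation noise
    return z, y
```

For Poisson observations, the last step would instead pass `f` through a link function (e.g., `np.exp`) and sample spike counts from `rng.poisson`, which is exactly the non-conjugate case that complicates inference.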
Because the model and its inference become intractable when the observations are Poisson-distributed (the latent function of the Gaussian process cannot be integrated out, since the prior is non-conjugate for Poisson data), we also provide a powerful inference network built on bi-directional long short-term memory (bi-LSTM) networks, which encodes both past and future information into the posterior distributions (see Figure 2). In the experiments detailed in the full paper, we show that our model outperforms other state-of-the-art methods at reconstructing insightful latent dynamics from both simulated and experimental neural datasets with either Gaussian or Poisson observations, especially in low-sample scenarios.
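The idea of encoding past and future context can be sketched with a simple bidirectional recurrent encoder. The following NumPy toy replaces the paper's bi-LSTM with plain tanh RNN cells and random weights purely to show the information flow: a forward pass summarizes observations up to time t, a backward pass summarizes observations from t onward, and the concatenated states parameterize a Gaussian posterior over each latent state. All layer sizes and weight scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_pass(y, W, U):
    """One directional pass of a simple tanh RNN over observations y (T x obs_dim)."""
    T = y.shape[0]
    H = W.shape[0]
    h = np.zeros(H)
    out = np.zeros((T, H))
    for t in range(T):
        h = np.tanh(W @ h + U @ y[t])
        out[t] = h
    return out

def bidirectional_posterior(y, hidden=8, latent_dim=2):
    """Encode past (forward) and future (backward) context at each step,
    then map the concatenated states to a Gaussian posterior over z_t."""
    obs_dim = y.shape[1]
    Wf = rng.normal(scale=0.3, size=(hidden, hidden))
    Uf = rng.normal(scale=0.3, size=(hidden, obs_dim))
    Wb = rng.normal(scale=0.3, size=(hidden, hidden))
    Ub = rng.normal(scale=0.3, size=(hidden, obs_dim))
    fwd = rnn_pass(y, Wf, Uf)              # summarizes y_1..y_t
    bwd = rnn_pass(y[::-1], Wb, Ub)[::-1]  # summarizes y_t..y_T
    ctx = np.concatenate([fwd, bwd], axis=1)
    W_mu = rng.normal(scale=0.3, size=(latent_dim, 2 * hidden))
    W_sig = rng.normal(scale=0.3, size=(latent_dim, 2 * hidden))
    mu = ctx @ W_mu.T                      # posterior mean of z_t
    sigma = np.exp(0.5 * ctx @ W_sig.T)    # positive std. dev. via log-variance
    return mu, sigma
```

In a real variational setup these weights would be trained to maximize an evidence lower bound; the point here is only that the posterior at each time step conditions on the entire observation sequence, not just its past.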
Gaussian Process Recurrent Neural Networks (GP-RNN) recover more structured latent trajectories with better quantitative performance than other state-of-the-art methods. Beyond the visual cortex dataset tested in the paper, the proposed model can also potentially be applied to analyzing the neural dynamics of the primary motor cortex, prefrontal cortex (PFC), or posterior parietal cortex (PPC), which play significant roles in cognition (e.g., evidence integration, short-term memory, and spatial reasoning).
The model can also be applied to other domains such as finance and healthcare, to extract low-dimensional underlying latent states from complicated time series. The latent representations extracted from these datasets are universal and can be used effectively and efficiently for diverse machine learning tasks such as regression (for which we achieve state-of-the-art performance) and classification. We invite you to read the full text of our research paper. Our code and additional materials are available at https://github.com/sheqi/GP-RNN_UAI2019. You can follow us on @IntelAI and @IntelAIResearch for more on Intel AI tools and technologies.
All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation.