Graphical models are structured probabilistic models that describe a probability distribution using a graph, where the graph expresses the dependencies among the random variables. Each node represents a random variable and each edge represents a direct dependence. These direct interactions imply indirect interactions, but only the direct interactions need to be modeled explicitly.
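As an illustrative sketch of this idea, consider a toy chain a → b → c over binary variables (the chain and its probability tables are assumptions for illustration, not from the text): only the direct dependencies are stored, yet the full joint distribution follows by multiplying them.

```python
# Toy directed chain a -> b -> c over binary variables.
# Only the direct dependencies (local tables) are stored explicitly.
p_a = {0: 0.6, 1: 0.4}                       # P(a)
p_b_given_a = {0: {0: 0.7, 1: 0.3},          # P(b | a)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},          # P(c | b)
               1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """P(a, b, c) = P(a) * P(b | a) * P(c | b)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# The indirect dependence of c on a is implied, never stored:
# summing the joint over all 8 configurations still gives 1.
total = sum(joint(a, b, c)
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

Note that the indirect interaction between a and c emerges from the product; no table for P(c | a) is ever needed.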
Graphical models can be divided into two categories: Directed Graphical Models (or Bayesian Networks) and Undirected Graphical Models (or Markov Networks). A directed edge implies a parent-child dependence between random variables, while an undirected edge implies a symmetric, associative dependence; undirected models are typically parameterized through an energy function, giving rise to energy-based models.
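A minimal sketch of the energy-based parameterization follows, using the standard RBM energy E(v, h) = −bᵀv − cᵀh − vᵀWh (the tiny layer sizes and random weights here are assumptions for illustration): each joint configuration is assigned an energy, and lower energy corresponds to higher unnormalized probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 3, 2                    # toy sizes, for illustration only
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)                       # visible biases
c = np.zeros(n_hidden)                        # hidden biases

def energy(v, h):
    """RBM energy: E(v, h) = -b.v - c.h - v.W.h."""
    return -b @ v - c @ h - v @ W @ h

def unnormalized_prob(v, h):
    """p~(v, h) = exp(-E(v, h)); the true probability also needs
    the partition function Z, which sums exp(-E) over all states."""
    return np.exp(-energy(v, h))

v = np.array([1.0, 0.0, 1.0])
h = np.array([0.0, 1.0])
score = unnormalized_prob(v, h)
```

Computing the partition function is intractable for large models, which is why training and inference in energy-based models rely on approximations rather than exact normalization.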
The graphical models research community is large and has developed many different models, training algorithms, and inference algorithms. Deep learning practitioners tend to use very different model structures, learning algorithms, and inference procedures than those commonly used by the rest of the graphical models research community. Restricted Boltzmann Machines (RBMs), Deep Belief Networks (DBNs), and autoencoders are a few examples.
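As one concrete example of the inference style used with RBMs (the layer sizes and random weights below are illustrative assumptions), the bipartite structure makes block Gibbs sampling simple: all hidden units are conditionally independent given the visible units, and vice versa, so each layer can be sampled in one vectorized step.

```python
import numpy as np

rng = np.random.default_rng(42)
n_visible, n_hidden = 4, 3                     # toy sizes, for illustration only
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)                        # visible biases
c = np.zeros(n_hidden)                         # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block Gibbs step: sample h given v, then v given h."""
    p_h = sigmoid(c + v @ W)                   # P(h_j = 1 | v), all j at once
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(b + W @ h)                   # P(v_i = 1 | h), all i at once
    v_new = (rng.random(n_visible) < p_v).astype(float)
    return v_new, h

v0 = np.ones(n_visible)
v1, h1 = gibbs_step(v0)
```

Chaining such steps yields the samples used by contrastive-divergence-style training; the same update would require one unit at a time in a general (non-bipartite) Markov network.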
Such models have many applications in AI research, including encoding unlabeled data, feature learning, dimensionality reduction, and change detection, as well as medical imaging tasks such as MRI segmentation and brain tumor detection.