
Graphical and Deep Learning Models

Course: Postgraduate
Semester: Electives
Subject Code: MA873

Syllabus

Graphical Models: Basic graph concepts; Bayesian networks; conditional independence; Markov networks; Inference: variable elimination, belief propagation, max-product, junction trees, loopy belief propagation, expectation propagation, sampling; structure learning; learning with missing data.
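For illustration of one of the listed inference topics, the following is a minimal sketch of variable elimination on a toy Bayesian network A -> B -> C with binary variables. All factor names and probabilities are invented for this example and are not part of the course material.

```python
# Variable elimination on a toy chain A -> B -> C (illustrative sketch only).
from itertools import product

# Factors stored as ({assignment tuple: probability}, [variable names]).
p_a = ({(0,): 0.6, (1,): 0.4}, ["A"])
p_b_given_a = ({(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}, ["A", "B"])
p_c_given_b = ({(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}, ["B", "C"])

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    f_tab, f_vars = f
    g_tab, g_vars = g
    vars_ = f_vars + [v for v in g_vars if v not in f_vars]
    tab = {}
    for assign in product([0, 1], repeat=len(vars_)):
        env = dict(zip(vars_, assign))
        fv = f_tab[tuple(env[v] for v in f_vars)]
        gv = g_tab[tuple(env[v] for v in g_vars)]
        tab[assign] = fv * gv
    return tab, vars_

def sum_out(f, var):
    """Marginalise one variable out of a factor."""
    tab, vars_ = f
    idx = vars_.index(var)
    new_tab = {}
    for assign, val in tab.items():
        key = assign[:idx] + assign[idx + 1:]
        new_tab[key] = new_tab.get(key, 0.0) + val
    return new_tab, vars_[:idx] + vars_[idx + 1:]

# Eliminate A, then B, leaving the marginal P(C).
f_b = sum_out(multiply(p_a, p_b_given_a), "A")    # P(B)
p_c = sum_out(multiply(f_b, p_c_given_b), "B")    # P(C)
print(p_c[0])   # e.g. {(0,): 0.65, (1,): 0.35}
```

The elimination order (A before B) matters only for efficiency, not correctness; junction trees and belief propagation generalise this idea to richer graph structures.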

Deep Learning: recurrent networks; probabilistic neural nets; Boltzmann machines; RBMs; sigmoid belief nets; CNNs; autoencoders; deep reinforcement learning; generative adversarial networks; structured deep learning; applications.
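As a concrete illustration of one listed topic, below is a minimal sketch of a binary restricted Boltzmann machine (RBM) trained with one step of contrastive divergence (CD-1) on toy data. The data, sizes, and learning rate are invented for illustration; this is not the course's reference implementation.

```python
# CD-1 training of a tiny binary RBM on random toy data (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible biases
b_h = np.zeros(n_hidden)           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary training data (invented for illustration).
data = rng.integers(0, 2, size=(32, n_visible)).astype(float)

for epoch in range(100):
    for v0 in data:
        # Positive phase: sample hidden units given the data vector.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase: one Gibbs step back to the visibles and up again.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # CD-1 updates: data statistics minus reconstruction statistics.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)
```

Running more Gibbs steps per update (CD-k) gives a better approximation to the true gradient of the log-likelihood at higher computational cost.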

Text Books

Same as References


References

  1. Koller, D. and Friedman, N., Probabilistic Graphical Models: Principles and Techniques, The MIT Press (2009).
  2. Barber, D., Bayesian Reasoning and Machine Learning, Cambridge University Press (2012).
  3. Bishop, C. M., Pattern Recognition and Machine Learning, Springer (2006).
  4. Hastie, T., Tibshirani, R., and Friedman, J., The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer (2002).
  5. Murphy, K. P., Machine Learning: A Probabilistic Perspective, The MIT Press (2012).
  6. Goodfellow, I., Bengio, Y., and Courville, A., Deep Learning, The MIT Press (2016).

Course Outcomes (COs):
CO1: Develop a comprehensive understanding of the fundamentals of graphical and deep learning models.

CO2: Understand the key concepts, architectures, and principles underlying both graphical models and deep learning.

CO3: Learn the mathematical and statistical concepts that form the basis of graphical and deep learning models.