Description
Machine Learning: A Bayesian and Optimization Perspective, Second Edition, gives a unifying perspective on machine learning by covering both the probabilistic approach, built around Bayesian inference, and the deterministic approach, based on optimization techniques. The book builds from the basic classical methods to recent trends, making it suitable for different courses, including pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. In addition, sections cover major machine learning methods developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science.
Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth and supported by examples and problems, making the book an invaluable resource for both students and researchers seeking to understand and apply machine learning concepts.
This updated edition includes many more simple examples on basic theory, a complete rewrite of the chapter on Neural Networks and Deep Learning, and expanded treatment of Bayesian learning, including nonparametric Bayesian learning.
Key Features
Presents the physical reasoning, mathematical modeling and algorithmic implementation of each method
Updates on the latest trends, including sparsity, convex analysis and optimization, online distributed algorithms, learning in reproducing kernel Hilbert (RKH) spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling
Provides case studies on a variety of topics, including protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, and more
Readership
Researchers and graduate students in electronic engineering, mechanical engineering, computer science, applied mathematics, statistics, and medical imaging
Table of Contents
1. Introduction
2. Probability and Stochastic Processes
3. Learning in Parametric Modeling: Basic Concepts and Directions
4. Mean-Square Error Linear Estimation
5. Stochastic Gradient Descent: The LMS Algorithm and Its Family
6. The Least-Squares Family
7. Classification: A Tour of the Classics
8. Parameter Learning: A Convex Analytic Path
9. Sparsity-Aware Learning: Concepts and Theoretical Foundations
10. Sparsity-Aware Learning: Algorithms and Applications
11. Learning in Reproducing Kernel Hilbert Spaces
12. Bayesian Learning: Inference and the EM Algorithm
13. Bayesian Learning: Approximate Inference and Nonparametric Models
14. Monte Carlo Methods
15. Probabilistic Graphical Models: Part 1
16. Probabilistic Graphical Models: Part 2
17. Particle Filtering
18. Neural Networks and Deep Learning
19. Dimensionality Reduction and Latent Variables Modeling
About the Author
Sergios Theodoridis is Professor of Signal Processing and Machine Learning in the Department of Informatics and Telecommunications of the University of Athens.
He is the co-author of the bestselling book Pattern Recognition and of Introduction to Pattern Recognition: A MATLAB Approach.
He serves as Editor-in-Chief of the IEEE Transactions on Signal Processing and as co-Editor-in-Chief, with Rama Chellappa, of the Academic Press Library in Signal Processing.
He has received a number of awards, including the 2014 IEEE Signal Processing Magazine Best Paper Award, the 2009 IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award, the 2014 IEEE Signal Processing Society Education Award, and the EURASIP 2014 Meritorious Service Award. He has served as a Distinguished Lecturer for the IEEE Signal Processing Society and the IEEE Circuits and Systems Society, and he is a Fellow of EURASIP and a Fellow of the IEEE.
Affiliations and Expertise
Department of Informatics and Telecommunications, University of Athens, Greece