- Numerical optimization and data science (7 weeks)
- Linear systems: iterative methods (e.g., Gauss-Seidel; a sketch follows this outline), the matrix spectrum (eigenvalues and eigenvectors), matrix factorization (e.g., SVD), and overdetermined systems (least squares). Interpolation and curve fitting (splines; optional)
- Fundamentals of numerical optimization for machine learning. Gradient descent, stochastic gradient descent, and conjugate gradient (a gradient-descent sketch follows this outline). Metaheuristics: search spaces, neighborhoods, sampling of the search space (exploration vs. exploitation), meta-modeling
- Introduction to machine learning (7 weeks)
- Overview of models and challenges in machine learning and data science: geometric vs. probabilistic approaches; predictive models vs. inference models
- Probabilistic methods: Bayesian classification (see the naive-Bayes sketch below)
- Geometric methods: k-NN, decision trees (see the k-NN sketch below)
- Margin-based methods, kernel methods
- Introduction to neural networks
- Introduction to ensemble methods
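
For the linear-systems topic in the first block, here is a minimal Python/NumPy sketch of the Gauss-Seidel iteration; the function name, tolerance, and test system are illustrative assumptions, not course materials:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b iteratively; converges, e.g., when A is
    strictly diagonally dominant."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the freshest values of x for components already updated
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Toy strictly diagonally dominant system
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b))  # agrees with np.linalg.solve(A, b)
```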
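For the optimization topic, a minimal sketch of plain gradient descent with a fixed learning rate; the names and the example objective are illustrative assumptions:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=10_000):
    """Plain gradient descent with a fixed step size (learning rate)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop near a stationary point
            break
        x = x - lr * g
    return x

# Minimize f(x, y) = (x - 1)^2 + 2y^2, whose gradient is (2(x - 1), 4y)
grad_f = lambda v: np.array([2.0 * (v[0] - 1.0), 4.0 * v[1]])
print(gradient_descent(grad_f, x0=[5.0, 5.0]))  # approaches (1, 0)
```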
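For the Bayesian-classification topic, a sketch of Gaussian naive Bayes, one common instance of Bayesian classification; helper names, the smoothing constant, and the toy data are assumptions for illustration:

```python
import numpy as np

def gaussian_nb_fit(X, y):
    """Estimate per-class priors, feature means, and feature variances."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # prior P(y = c)
                     Xc.mean(axis=0),         # per-feature means
                     Xc.var(axis=0) + 1e-9)   # per-feature variances (smoothed)
    return params

def gaussian_nb_predict(params, x):
    """Pick the class maximizing log P(y=c) + sum_j log N(x_j; mu_j, var_j)."""
    def log_post(c):
        prior, mu, var = params[c]
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                            + (x - mu) ** 2 / var)
    return max(params, key=log_post)

# Toy data: class 0 near the origin, class 1 near (3, 3)
X = np.array([[0.1, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
print(gaussian_nb_predict(gaussian_nb_fit(X, y), np.array([2.8, 3.1])))  # expected: 1
```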
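For the geometric-methods topic, a sketch of k-nearest-neighbor classification by majority vote under Euclidean distance; the function name and toy data are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D data: class 0 near the origin, class 1 near (3, 3)
X = np.array([[0.0, 0.2], [0.3, 0.1], [3.0, 3.1], [2.8, 3.2]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([2.9, 3.0])))  # expected: 1
```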
Bibliography
- Eldén, Lars (2007). Matrix Methods in Data Mining and Pattern Recognition (Fundamentals of Algorithms), SIAM.
- Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization, 2nd Edition, Springer.
- Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning, Springer.
- Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning, MIT Press.
- James, Gareth; Witten, Daniela; Hastie, Trevor; Tibshirani, Robert (2023). An Introduction to Statistical Learning, Springer.
Support Sessions
2 hours a week with a teaching assistant.
Grading
Partial exams in each block (25%), projects in each block (25%), homework assignments (50%).