Upon successful completion of the course, the students should:
- Understand the basics and limitations of classical estimation, detection and regression methods, as motivation for modern data-driven machine learning and deep learning methods.
- Understand the different machine learning methods and when deep learning approaches are the most appropriate.
- Understand the underlying concepts and properties of learning with Deep Neural Networks (DNNs), including stochastic gradient methods, back-propagation equations, different training methods, the concepts of trainable parameters and hyper-parameters, activation functions, losses, regularizers and optimizers, and understand which choices are appropriate for a given problem.
- Be able to both generate data sets and work with online public data sets using standard Python tools.
- Be able to train and evaluate the performance of common DNN architectures, such as deep feedforward, deep convolutional and deep recurrent neural networks (especially GRUs and LSTMs), analysing the impact of different design choices.
- Know how to formulate and apply the Deep Learning framework to solve several practical problems in different application domains.
In this course, we cover the mathematical and algorithmic foundations of deep neural networks (DNNs) and develop skills in designing and training them for different problems, placing deep learning in the context of Electrical Engineering. Deep learning has had a significant impact on many applications, such as internet search, speech recognition, face recognition, computer vision, wireless communications, sensor networks and self-driving cars.
The course covers the following main topics:
- Review of estimation and detection methods based on statistical models/descriptions, regression and classification, motivation for data-driven machine learning approaches.
- Types of machine learning problems and approaches, statistical learning, motivation for deep neural networks, how deep learning relates to the Electrical Engineering curriculum.
- Optimization and training in deep learning: the LMS algorithm, stochastic gradient descent with mini-batches, loss functions and regularizers, activation functions, back-propagation, trainable parameters and hyper-parameters, universal approximation, vanishing gradients, momentum and optimizers (e.g. AdaGrad, RMSProp, Adam), dropout, batch normalization, hyper-parameter optimization.
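Several of these ingredients (trainable parameters, an activation function, a loss, back-propagation and mini-batch stochastic gradient descent) can be illustrated together in a few lines of NumPy. The following is a minimal sketch, not course material: the toy data, network sizes, learning rate and number of epochs are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: noisy samples of y = sin(x) (illustrative only)
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# Trainable parameters of a one-hidden-layer network
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)

lr, batch = 0.05, 32
for epoch in range(200):
    idx = rng.permutation(len(X))          # reshuffle into mini-batches
    for start in range(0, len(X), batch):
        sl = idx[start:start + batch]
        xb, yb = X[sl], y[sl]
        # Forward pass: tanh activation, mean-squared-error loss
        h = np.tanh(xb @ W1 + b1)
        pred = h @ W2 + b2
        # Back-propagation of the loss gradient through both layers
        g_pred = 2.0 * (pred - yb) / len(xb)
        g_W2 = h.T @ g_pred
        g_b2 = g_pred.sum(0)
        g_h = g_pred @ W2.T
        g_z = g_h * (1.0 - h ** 2)          # derivative of tanh
        g_W1 = xb.T @ g_z
        g_b1 = g_z.sum(0)
        # Plain SGD update (momentum, AdaGrad, Adam etc. modify this step)
        W1 -= lr * g_W1; b1 -= lr * g_b1
        W2 -= lr * g_W2; b2 -= lr * g_b2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

The learning rate, batch size and hidden width are hyper-parameters in the sense used above: they are chosen by the designer, not learned by gradient descent.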
- Deep learning design flow: data collection, contamination, cleaning and augmentation; synthetic data generation; designing features and dimensionality reduction (e.g. PCA, LDA); best practices for designing the overall data processing flow.
- Convolutional deep neural networks: 1D and 2D CNNs, pooling and kernel selection, architecture design, complexity reduction methods, applications.
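The core operation behind this topic, a convolutional layer followed by pooling, can be sketched for the 1D case in plain NumPy. This is an illustrative toy, not course material: the signal, the edge-detecting kernel and the pooling size are all made up for the example.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1D cross-correlation, the operation used in CNN layers."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def max_pool1d(x, size):
    """Non-overlapping max-pooling, discarding any remainder."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Illustrative binary signal and a kernel that responds to rising edges
signal = np.array([0., 0., 1., 1., 1., 0., 0., 1., 0., 0.])
kernel = np.array([-1., 1.])

feature_map = np.maximum(conv1d(signal, kernel), 0.)  # ReLU activation
pooled = max_pool1d(feature_map, 3)
print(pooled)  # one entry per pooled window, marking detected edges
```

Pooling shrinks the feature map while keeping the strongest responses, which is one of the complexity reduction mechanisms mentioned above.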
- Recurrent deep neural networks: the concept of RNNs, general gating and filtering concepts, gated recurrent units (GRUs), Long Short-Term Memory (LSTM) networks, backpropagation through time (BPTT), recurrent networks for non-linear filtering.
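The gating idea common to GRUs and LSTMs can be sketched with a single GRU time step in NumPy, following the standard GRU equations. The weight shapes, random parameters and sequence length below are illustrative only.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, params):
    """One GRU time step: gates decide how much of the old state to keep."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # gated interpolation

rng = np.random.default_rng(0)
d_in, d_hid = 3, 4                                  # illustrative sizes
params = [rng.standard_normal(s) * 0.1
          for s in [(d_in, d_hid), (d_hid, d_hid), (d_hid,)] * 3]

# Unroll the cell over a short random sequence, as BPTT would in training
h = np.zeros(d_hid)
for x in rng.standard_normal((5, d_in)):
    h = gru_step(x, h, params)
print(h.shape)
```

Because the new state is a gated interpolation between the previous state and a bounded candidate, the gates let gradients flow through time more easily, which is how these architectures mitigate the vanishing-gradient problem.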
- Other topics (time permitting): auto-encoders and generative models, Generative Adversarial Networks (GANs), Transformers, BERT, basics of deep reinforcement learning.
- Computing material: working with Python, training feed-forward DNNs using NumPy, training CNNs and RNNs using Keras, other deep learning computing frameworks (time permitting).
Faculty of Engineering and Science