Track 1: 12 months at UoB

Structure:

| Semester | Location | Duration | Months | Core Courses | Electives | Project | IITM Credits | UoB Credits |
|---|---|---|---|---|---|---|---|---|
| Semester 1 – Taught courses (Theory) | IITM | Late July to November | 4.2 | 4 | – | – | 42 | 55 |
| Industrial placement | IITM | December to February | 3 | – | – | Yes | 25 | 33 |
| Semester 2 – Taught courses (Theory for practice) | UoB | February to June | 4.5 | 1 | 2 | – | 46 | 60 |
| Summer / Semester 3 | UoB | June to August | 3 | – | – | Yes | 46 | 60 |
| Semester 4 | UoB | September to January | 4.5 | – | 2 | Yes | 46 | 60 |
| Total | | | 19-20 | | | | 205 | 268 |

Notes:

Semester 2: Students pick 2 electives.

Semester 4: Students pick 2 electives.

| Semester | Offered at | Type | Course Number | Course Name | IITM Credits | UoB Credits |
|---|---|---|---|---|---|---|
| 1 | IITM | Core | EE5708 | Data Analytics Laboratory | 6 | |
| 1 | IITM | Core | CH5019 | Mathematical Foundations of Data Science | 12 | |
| 1 | IITM | Core | MA5910 | Data Structures and Algorithms | 12 | |
| 1 | IITM | Core | ID5055 | Foundations of Machine Learning (replaces MS5600 – Introduction to Data Analytics) | 12 | |
| | | | | Total | 42 | 55 |
| Winter Project | IITM | Project 1 | – | Industrial Placement | 25 | |
| | | | | Total | 25 | 33 |
| 2 | UoB | Core | 06-32260 | Current Topics in Artificial Intelligence and Machine Learning | | 20 |
| 2 | UoB | Elective | 06-37812 | Natural Language Processing (Extended) | | 20 |
| 2 | UoB | Elective | 06-35376 | Evolutionary Computation (Extended) | | 20 |
| 2 | UoB | Elective | 06-30241 | Computer Vision and Imaging (Extended) | | 20 |
| 2 | UoB | Elective | 06-32254 | Visualisation | | 20 |
| | | | | Total | 46 | 60 |
| 3 | UoB | Project 2 | TBC | Full-time Research Project supervision – IIT Madras & UoB | | 60 |
| | | | | Total | 46 | 60 |
| 4 | UoB | Project 3 | TBC | Research Project with collaborative supervision between IIT Madras & UoB (writing up – including research paper) | | 20 |
| 4 | UoB | Elective | 06-38968 | Intelligent Data Analysis (Extended) | | 20 |
| 4 | UoB | Elective | 06-32212 | Neural Computation (Extended) | | 20 |
| 4 | UoB | Elective | TBC | Machine Learning (Extended) | | 20 |
| | | | | Total | 46 | 60 |
| | | | | Program Total | 205 | 268 |


Track 2: End Program at IIT Madras (Return in Summer ’25)

Structure:

| Semester | Location | Duration | Months | Core Courses | Electives | Project | IITM Credits | UoB Credits |
|---|---|---|---|---|---|---|---|---|
| Semester 1 – Taught courses (Theory) | IITM | Late July to November | 4.2 | 4 | – | – | 42 | 55 |
| Industrial placement | IITM | December to February | 3 | – | – | Yes | 25 | 33 |
| Semester 2 – Taught courses (Theory for practice) | UoB | February to June | 4.5 | 1 | 2 | – | 46 | 60 |
| Summer Project + Semester 3 + Extended Project | IITM | June to January | 7.5 | – | 3 | Yes | 94 | 122 |
| Total | | | 19-20 | | | | 207 | 270 |

Notes:

Semester 2 (Feb–Jun): Students pick 2 electives (2 × 20 credits) at UoB.

Semester 3 (Jul–Nov): Students pick 3 electives from the basket at IITM.*

*Three electives will be offered; not all courses may be available for any given batch.

*Students opting for the Computer Vision and Imaging elective at UoB (Sem 2) are not eligible for the elective CS6350 – Computer Vision at IITM (in the final term).

| Semester | Offered at | Type | Course Number | Course Name | IITM Credits | UoB Credits |
|---|---|---|---|---|---|---|
| 1 | IITM | Core | EE5708 | Data Analytics Laboratory | 6 | |
| 1 | IITM | Core | CH5019 | Mathematical Foundations of Data Science | 12 | |
| 1 | IITM | Core | MA5910 | Data Structures and Algorithms | 12 | |
| 1 | IITM | Core | ID5055 | Foundations of Machine Learning (replaces MS5600 – Introduction to Data Analytics) | 12 | |
| | | | | Total | 42 | 55 |
| Winter Project | IITM | Project 1 | – | Industrial Placement | 25 | |
| | | | | Total | 25 | 33 |
| 2 | UoB | Core | 06-32260 | Current Topics in Artificial Intelligence and Machine Learning | | 20 |
| 2 | UoB | Elective | 06-37812 | Natural Language Processing (Extended) | | 20 |
| 2 | UoB | Elective | 06-35376 | Evolutionary Computation (Extended) | | 20 |
| 2 | UoB | Elective | 06-30241 | Computer Vision and Imaging (Extended) | | 20 |
| 2 | UoB | Elective | 06-32254 | Visualisation | | 20 |
| | | | | Total | 46 | 60 |
| Summer, Sem 3 and Extended Project | IITM | Project 2 (Jun–Aug) | TBC | Full-time Research Project with collaborative supervision between IIT Madras and UoB (Edgbaston) | 46 | |
| Summer, Sem 3 and Extended Project | IITM | Project 3 (Sep–Jan) | TBC | Research Project with collaborative supervision between IIT Madras & UoB (writing up – including research paper) | 15 | |
| | | | | Total | 94 | 122 |
| | | | | Program Total | 207 | 270 |

Credit Conversion:

Formula:

IITM to UoB: UoB credits = (IITM credits × 13 weeks) / 10
e.g., 9 IITM credits: 9 × 13 / 10 = 11.7 ≈ 12 UoB credits

UoB to IITM: IITM credits = (UoB credits × 10) / 13 weeks
e.g., 20 UoB credits: 20 × 10 / 13 ≈ 15.4 ≈ 15 IITM credits
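
A minimal Python sketch of the same conversion (the helper names are illustrative, and rounding to the nearest whole credit is assumed from the worked examples above):

```python
def iitm_to_uob(iitm_credits: float) -> int:
    """Convert IITM credits to UoB credits: (credits * 13 weeks) / 10."""
    return round(iitm_credits * 13 / 10)

def uob_to_iitm(uob_credits: float) -> int:
    """Convert UoB credits to IITM credits: (credits * 10) / 13 weeks."""
    return round(uob_credits * 10 / 13)

print(iitm_to_uob(9))   # 9 * 13 / 10 = 11.7 -> 12
print(uob_to_iitm(20))  # 20 * 10 / 13 = 15.4 -> 15
```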


Curriculum Details:

Semester 1

1. CH5019 - Mathematical Foundations of Data Science

Description: The course introduces students to the fundamental mathematical concepts required for a program in data science.

Course Content:
1. Basics of Data Science: Introduction; Typology of problems; Importance of linear algebra, statistics and optimization from a data science perspective; Structured thinking for solving data science problems.
2. Linear Algebra: Matrices and their properties (determinants, traces, rank, nullity, etc.); Eigenvalues and eigenvectors; Matrix factorizations; Inner products; Distance measures; Projections; Notion of hyperplanes; Half-planes.
3. Probability, Statistics and Random Processes: Probability theory and axioms; Random variables; Probability distributions and density functions (univariate and multivariate); Expectations and moments; Covariance and correlation; Statistics and sampling distributions; Hypothesis testing of means, proportions, variances and correlations; Confidence (statistical) intervals; Correlation functions; White-noise process.
4. Optimization: Unconstrained optimization; Necessary and sufficiency conditions for optima; Gradient descent methods; Constrained optimization, KKT conditions; Introduction to non-gradient techniques; Introduction to least squares optimization; Optimization view of machine learning.
5. Introduction to Data Science Methods: Linear regression as an exemplar function approximation problem; Linear classification problems.
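
To make the "optimization view of machine learning" in item 4 concrete, here is a minimal NumPy sketch (the synthetic data and step size are illustrative, not course material) that fits the linear regression of item 5 by gradient descent on the least-squares objective:

```python
import numpy as np

# Fit y = X w by gradient descent on the mean-squared-error objective.
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=(100, 2))]  # design matrix with intercept column
w_true = np.array([1.0, 2.0, -3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(3)
for _ in range(2000):
    grad = (2 / len(y)) * X.T @ (X @ w - y)    # gradient of the mean squared error
    w -= 0.1 * grad                            # gradient descent step

print(np.round(w, 2))                          # close to w_true
print(np.linalg.lstsq(X, y, rcond=None)[0])    # closed-form least-squares check
```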

Text Books:
1. G. Strang (2016). Introduction to Linear Algebra, Fifth Edition. Wellesley-Cambridge Press, USA.
2. Bendat, J. S. and A. G. Piersol (2010). Random Data: Analysis and Measurement Procedures, 4th Edition. John Wiley & Sons, Inc., NY, USA.
3. Montgomery, D. C. and G. C. Runger (2011). Applied Statistics and Probability for Engineers, 5th Edition. John Wiley & Sons, Inc., NY, USA.
4. David G. Luenberger (1969). Optimization by Vector Space Methods. John Wiley & Sons, NY.

Reference Books: 1. Cathy O’Neil and Rachel Schutt (2013). Doing Data Science, O’Reilly Media
Prerequisite:

2. MA5910 - Data Structures and Algorithms

Description: The objective of this course is to expose students to the basics of data structures, their combinatorial aspects, and the design of algorithms for solving problems arising in scientific computing.

Course Content:
Preliminaries: Growth of functions, recurrence relations, generating functions, solution of difference equations, Master's theorem (without proof).
Sorting and Order Statistics: Bubblesort, mergesort, heapsort, quicksort, sorting in linear time, median and order statistics.
Elementary Data Structures: Stacks, queues, linked lists, implementing pointers, rooted trees, direct-address tables, hash tables, open addressing, perfect hashing, binary search trees, red-black trees, dynamic programming, optimal binary search trees, greedy algorithms.
Graph Algorithms: Breadth-first search, depth-first search, topological sort, minimum spanning trees, Kruskal's and Prim's algorithms, shortest paths, Bellman-Ford algorithm, Dijkstra's algorithm, Floyd-Warshall algorithm, Johnson's algorithm, maximum flow, Ford-Fulkerson method, maximum bipartite matching.
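
As a flavour of the graph-algorithms unit, a minimal heap-based Dijkstra sketch in Python (the example graph and its adjacency-list representation are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a non-negatively weighted graph
    given as {node: [(neighbour, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```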

Text Books: T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein, Introduction to Algorithms, 3rd Edn, PHI, New Delhi, 2009.

Reference Books: 1. Alfred V. Aho, Jeffrey D. Ullman, John E. Hopcroft, Data Structures and Algorithms, Addison-Wesley, 1983. 2. M.A. Weiss, Data Structures and Algorithm Analysis in C++, 3rd Edn, Pearson/Addison-Wesley, 2006. 3. A.M. Tenenbaum, Y. Langsam, and M.J. Augenstein, Data Structures Using C, PHI, New Delhi, 2009.
Prerequisite: NIL

3. EE5708 - Data Analytics Laboratory

Description: This course will introduce the students to practical aspects of data analytics. The course will start with a basic introduction to various python toolkits followed by using these toolkits for developing various supervised and unsupervised machine learning algorithms.

Course Content:
Introduction to various Python toolkits: NumPy for handling arrays and matrices; SciPy for scientific computing; Matplotlib for data visualization; Pandas for data manipulation; scikit-learn for machine learning.
Linear models for regression: Ordinary least squares; ridge regression (l2 regularization); lasso (l1 regularization); elastic net (l2-l1 regularization).
Linear classification: Naive Bayes; Linear Discriminant Analysis (LDA); logistic regression; linear Support Vector Machine (SVM); l2- and l1-regularized versions of these algorithms.
Non-linear algorithms: Kernel SVM, random forest, gradient boosting, neural network.
Unsupervised learning: Dimensionality reduction techniques such as Principal Component Analysis (PCA); clustering techniques such as k-means clustering and agglomerative clustering.
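
A minimal sketch of the toolkit workflow described above, assuming NumPy and scikit-learn are installed; the synthetic dataset and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic regression data: 5 features, 2 of which are irrelevant.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, 0.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)   # l2-regularized linear regression
print(r2_score(y_test, model.predict(X_test)))   # close to 1.0 on this easy data
```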

Text Books: Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani, An Introduction to Statistical Learning: with Applications in R, Springer, 2013.

Reference Books: 1. Sarah Guido, Andreas C. Müller, Introduction to Machine Learning with Python, O'Reilly Media, Inc., 2016. 2. Trevor Hastie, Robert Tibshirani and Jerome H. Friedman, The Elements of Statistical Learning, Second Edition, Springer Series in Statistics, 2009. 3. Edouard Duchesnay, Tommy Löfstedt, Statistics and Machine Learning in Python, https://duchesnay.github.io/pystatsml/
Prerequisite: CH5019 or equivalent

4. ID5055 - Foundations of Machine Learning

Objectives: The objective of this course is to introduce the fundamentals of machine learning techniques and their applications to different problems.

Course Content:

Unsupervised Learning (3 weeks)

1. Representation Learning - PCA
2. Estimation - Review of MLE, Bayesian estimation
3. Clustering - K-Means (see the sketch after this outline), Hierarchical Clustering, Spectral Clustering

Supervised Learning (9 weeks)

• Regression (2 weeks)
  • Linear Regression, Ordinary Least Squares, PCR
  • Non-linear regression (basis functions)
  • Ridge Regression, LASSO
• Binary Classification (4 weeks)
  • K-Nearest Neighbors
  • Decision Trees, CART
  • Bias-Variance Dichotomy; Model Validation: Cross-validation
  • Bayesian Decision Theory
  • Generative vs Discriminative Modeling for classification
    • Generative: Naive Bayes, Gaussian Discriminant Analysis, Hidden Markov Model
    • Discriminative: Logistic Regression
• Advanced Methods for Classification (3 weeks)
  • Support Vector Machines - Kernels
  • Ensemble Methods:
    • Bagging - Random Forest
    • Boosting - AdaBoost/GBDT/XGBoost
  • Artificial Neural Networks
  • Multi-Class Classification - one vs all, one vs one
• Sequential Decision Making (1 week)
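
The clustering sketch referenced above: a plain-NumPy K-Means (Lloyd's algorithm) on illustrative synthetic data, not course material:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain NumPy K-Means: alternate assignment and centre-update steps."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]   # random initial centres
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centres):
            break                                        # converged
        centres = new
    return centres, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centres, labels = kmeans(X, k=2)
print(np.round(centres, 1))   # roughly [0, 0] and [3, 3]
```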

Text Books: 1. Hastie, Trevor, et al. The Elements of Statistical Learning, 2nd Edition. New York: Springer, 2009. 2. James, Gareth, et al. An Introduction to Statistical Learning. New York: Springer, 2013.

Reference Books: Online tutorials/materials. 1. Richard O. Duda, Peter E. Hart and David G. Stork. Pattern Classification. John Wiley, 2001. 2. Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Prerequisite: None

Semester 2

1. Current Topics in Data Science:

Module 06-32260
Semester 2
Outline:
Data Science is a rapidly advancing field of study. This module will consist of a series of seminar-style short courses that will cover recent advances in the subject. Each topic will be introduced by a staff member and students will then develop their knowledge via directed reading and present their findings to the class. Each topic will be accompanied by weekly quizzes that will test students’ understanding.
Learning Outcomes:
On successful completion of this module, the student should be able to:
Demonstrate an understanding and appreciation of recent advances in data science
Make effective oral and written presentations
Engage effectively in discussions about recent research in data science
Taught with:
06-32257 - Current Topics in Artificial Intelligence and Machine Learning
Cannot be taken with:
06-32257 - Current Topics in Artificial Intelligence and Machine Learning
Assessment:
Main Assessments: Continuous assessment (100%)
Supplementary Assessments: Continuous assessment (100%) over the Summer period

2. Visualisation:

Module 06-32254 (2021)
Semester 2
Outline:
Visualising data effectively is important for both presentation and also for gaining insight and intuition into its structure. This can be challenging when the number of data points is large, and especially when it is high dimensional. In this module students will study techniques for visualising complex datasets, including best practice for visual display, dimensionality reduction techniques, and tools for visualisation.
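
As a small illustration of the dimensionality-reduction side of the module (illustrative, not module material), projecting 64-dimensional digit images to 2-D with PCA and plotting them:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()                        # 8x8 images, i.e. 64-dimensional points
coords = PCA(n_components=2).fit_transform(digits.data)

plt.scatter(coords[:, 0], coords[:, 1], c=digits.target, cmap="tab10", s=8)
plt.colorbar(label="digit class")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()                                    # classes form visible clusters in 2-D
```
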
Learning Outcomes:
On successful completion of this module, the student should be able to:
Understand, explain, and apply techniques for visualising high-dimensional data
Effectively use a range of tools for data visualisation
Present effective visualisations of complex datasets using established best practice.
Assessment:
Main Assessments: 1.5 hour examination (80%) and continuous assessment (20%)
Supplementary Assessments: 1.5 hour examination (100%)

3. Computer Vision and Imaging (Extended)

06-30241
Description
Vision is one of the major senses that enables humans to act (and interact) in ever-changing environments, and imaging is the means by which we record visual information in a form suitable for computational processing. Together, imaging and computer vision play an important role in a wide range of intelligent systems, from advanced microscopy techniques to autonomous vehicles. This module will focus on the fundamental computational principles that enable an array of picture elements, acquired by one of a multitude of imaging technologies, to be converted into structural and semantic entities necessary to understand the content of images and to accomplish various perceptual tasks. We will study the problems of image formation, low level image processing, object recognition, categorisation, segmentation, registration, stereo vision, motion analysis, tracking and active vision. The lectures will be accompanied by a series of exercises in which these computational models will be designed, implemented and tested in real-world scenarios.

Learning Outcomes

By the end of the module students should be able to:
• Understand the main computer vision and imaging methods and computational models
• Design, implement and test computer vision and imaging algorithms
• Know how to synthesise combinations of imaging and vision techniques to solve real-world problems
• Demonstrate the capacity to independently study, understand, and critically evaluate advanced materials or research articles in the subject areas covered by this module

Assessment:
Examination (50%), Continuous Assessment (50%)

Reassessment:
Examination (100%)

4. Natural Language Processing (Extended)

06-37812
Description

Natural Language Processing enables computers to understand and reason about human languages such as English, and has resulted in many exciting technologies such as conversational assistants, machine translation and (intelligent) internet search. This module provides the theoretical foundations of NLP as well as applied techniques for extracting and reasoning about information from text.

The module explores three major themes:
• Computational models of human cognition such as memory, attention and psycholinguistics
• Symbolic AI methods for processing language such as automated reasoning, planning, parsing of grammar, and conversational systems
• Statistical models of language, including the use of machine learning to infer structure and meaning
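
As a toy illustration of the statistical-models theme (illustrative, not module material), a bigram language model estimated by counting:

```python
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Maximum-likelihood bigram probabilities: P(w2 | w1) = count(w1, w2) / count(w1)
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])
prob = {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

print(prob[("the", "cat")])  # 0.25 -- "the" is followed once each by cat/mat/dog/rug
```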

Learning Outcomes:
By the end of the module students should be able to:
Demonstrate an understanding of the major topics in Natural Language Processing
Understand the role of machine learning techniques in widening the coverage of NLP systems
Demonstrate an ability to apply knowledge-based and statistical techniques to real-world NLP problems
Demonstrate the capacity to independently study, understand, and critically evaluate advanced materials or research articles in the subject areas covered by this module

Assessment Methods & Exceptions

Assessment:
2 hr Examination (80%), Continuous Assessment (20%)

Reassessment:
2 hr Examination (100%)

5. Evolutionary Computation (Extended)

06-35376
Description
Evolutionary algorithms (EAs) are a class of optimisation techniques drawing inspiration from principles of biological evolution. They typically involve a population of candidate solutions from which the better solutions are selected, recombined, and mutated to form a new population of candidate solutions. This continues until an acceptable solution is found. Evolutionary algorithms are popular in applications where no problem-specific method is available, or when gradient-based methods fail. They are suitable for a wide range of challenging problem domains, including dynamic and noisy optimisation problems, constrained optimisation problems, and multi-objective optimisation problems. EAs are used in a wide range of disciplines, including optimisation, engineering design, machine learning, financial technology (“fintech”), and artificial life. In this module, we will study the fundamental principles of evolutionary computation, a range of different EAs and their applications, and a selection of advanced topics which may include time-complexity analysis, neuro-evolution, co-evolution, model-based EAs, and modern multi-objective EAs. The students will also read selected recent research articles on evolutionary computation.
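
The select-recombine-mutate loop described above can be stripped down to a minimal (1+1) EA, sketched here on the standard OneMax toy problem (problem size and budget are illustrative):

```python
import random

def one_plus_one_ea(n=50, max_evals=10_000, seed=1):
    """Minimal (1+1) EA on OneMax: flip each bit with probability 1/n,
    keep the offspring if it is no worse than the parent."""
    random.seed(seed)
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_evals):
        child = [1 - b if random.random() < 1 / n else b for b in parent]
        if sum(child) >= sum(parent):   # selection: fitness = number of ones
            parent = child
        if sum(parent) == n:            # global optimum: the all-ones string
            break
    return parent

print(sum(one_plus_one_ea()))  # 50 once the optimum is reached
```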

Learning Outcomes:
By the end of the module students should be able to:
Describe and apply the principles of evolutionary computation
Explain and compare different evolutionary algorithms
Design and adapt evolutionary algorithms for non-trivial problems
Demonstrate an awareness of the current literature in this area

Assessment Methods & Exceptions
Assessment:
Examination (50%), Continuous Assessment (50%)

Reassessment:
Examination (100%)

Semester 4 (Track 1)

Electives at UoB

1. Neural Computation (Extended)

06-32212
Description

This module introduces the basic concepts and techniques of neural computation, and its relation to automated learning in computing machines more generally. It covers the main types of formal neuron and their relation to neurobiology, showing how to construct large neural networks and study their learning and generalization abilities in the context of practical applications. It also provides practical experience of designing and implementing a neural network for a real-world application.
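
A minimal illustration (not module material) of a network learning a task a single formal neuron cannot: a two-layer NumPy network trained by gradient descent on XOR. All sizes and the learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # hidden layer of 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # output probability
    dp = p - y                            # cross-entropy gradient w.r.t. the logit
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)       # backpropagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad               # gradient descent update

print(np.round(p.ravel(), 2))  # typically close to [0, 1, 1, 0]
```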

Learning Outcomes
By the end of the module students should be able to:

• Understand the relationship between real brains and simple artificial neural network models
• Describe and explain some of the principal architectures and learning algorithms of neural computation
• Explain the learning and generalisation aspects of neural computation
• Demonstrate an understanding of the benefits and limitations of neural-based learning techniques in the context of other state-of-the-art methods of automated learning
• Apply neural computation algorithms to specific technical and scientific problems

Assessment Methods & Exceptions
Assessment:
Examination (80%), Continuous Assessment (20%)
Reassessment:
Examination (100%)

2. Machine Learning (Extended)

Module details TBC (see the Semester 4 table above).

3. Intelligent Data Analysis (Extended)

06-38968
Description
The 'information revolution' has generated large amounts of data, but valuable information is often hidden and hence unusable. In addition, the data may come in many different forms, e.g. high-dimensional data collections, stream and time-series data, textual documents, images, and large-scale graphs representing communication in social networks or protein-to-protein interactions. This module will introduce a range of techniques in the fields of pattern analysis, data analytics and data mining that aim to extract hidden patterns and useful information in order to understand such challenging data.

Learning Outcomes
By the end of the module students should be able to:
Demonstrate knowledge and understanding of core ideas of pattern analysis, data analytics and data mining
Demonstrate understanding of broader issues of generalisation in intelligent data analysis
Demonstrate the ability to apply the main approaches to unseen examples
Demonstrate the capacity to independently study, understand, and critically evaluate advanced materials or research articles in the subject areas covered by this module

Assessment Methods & Exceptions
Assessment:
Examination (80%), Continuous Assessment (20%)
Reassessment:
Examination (100%)

Semester 3 and 4 (Track 2)

Electives at IIT Madras

1. EE5179 - Deep Learning for Imaging
12 Credits

Description: Deep learning has shown immense promise in solving many computer vision problems such as object/scene recognition, object detection, face recognition, depth from a single image, and so on. Recently, deep learning has also shown significant promise in solving many image processing problems such as image denoising, deblurring and super-resolution. This course concentrates on deep architectures that have shown promise in solving computer vision and image processing problems. We will cover topics such as Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs) and deep generative models, and will also look at recent papers on applications of DL to vision and image processing.

Course Content:
1. Basic Neural Networks: Perceptron; multi-layer perceptron; backpropagation; stochastic gradient descent; universal approximation theorem; applications in imaging such as denoising.
2. Convolutional Neural Networks (CNN): CNN architecture (convolutional layer, pooling layer, ReLU layer, fully connected layer, loss layer); regularization methods such as dropout; fine-tuning; understanding and visualizing CNNs; applications of CNNs in imaging such as object/scene recognition.
3. Autoencoders: Autoencoder; denoising autoencoder; sparse autoencoder; variational autoencoder; applications in imaging such as SegNet and image generation.
4. Recurrent Neural Networks (RNN): Basic RNN; Long Short-Term Memory (LSTM) and GRUs; encoder-decoder models; applications in imaging such as activity recognition and image captioning.
5. Deep Generative Models: Restricted Boltzmann machine; deep Boltzmann machine; Recurrent Image Density Estimators (RIDE); PixelRNN and PixelCNN; Plug-and-Play generative networks.
6. Generative Adversarial Networks (GAN): GAN; deep convolutional GAN; conditional GAN; applications.
7. Deep Learning for Image Processing and Computational Imaging: Denoising; deblurring; super-resolution; color filter array design.
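
A minimal PyTorch sketch of the CNN building blocks in unit 2 (layer sizes, input shape and class count are illustrative):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Conv -> ReLU -> pool -> fully connected, the basic CNN pipeline."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                   # ReLU layer
            nn.MaxPool2d(2),                             # pooling layer: 28x28 -> 14x14
        )
        self.classifier = nn.Linear(16 * 14 * 14, num_classes)  # fully connected layer

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

x = torch.randn(8, 1, 28, 28)   # a batch of 28x28 grayscale images
print(TinyCNN()(x).shape)       # torch.Size([8, 10]) -- one score per class
```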

Text Books: Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville, MIT Press, 2016: http://www.deeplearningbook.org/

Reference Books: 1. Stanford CS231n: Convolutional Neural Networks for Visual Recognition, http://cs231n.stanford.edu/ 2. Neural Networks and Deep Learning by Michael Nielsen: http://neuralnetworksanddeeplearning.com/ 3. Online course on Neural Networks by Hugo Larochelle: http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html 4. Pattern Recognition and Machine Learning by C.M. Bishop.

Prerequisite: 1. Knowledge of machine learning. 2. Familiarity with image processing/computer vision/computational photography. 3. Basic knowledge of Python programming.

2. CS6730 - Probabilistic Graphical Models
12 Credits

Description: The course provides a comprehensive introduction to probabilistic graphical models. At the end of the course the student should be able to model problems using graphical models, design inference algorithms, and learn the structure of the graphical model from data.

Course Content:
Review: Fundamentals of Probability Theory - Views of Probability, Random Variables and Joint Distributions, Conditional Probability, Conditional Independence, Expectation and Variance, Probability Distributions - Conjugate Priors, Introduction to Exponential Family; Fundamentals of Graph Theory - Paths, Cliques, Subgraphs, Cycles and Loops.
Graphical Models: Introduction - Directed Models (Bayesian Networks), Undirected Models (Markov Random Fields), Dynamic Models (Hidden Markov Models & Kalman Filters) and Factor Graphs; Conditional Independence (Bayes Ball Theorem and D-separation), Markov Blanket, Factorization (Hammersley-Clifford Theorem), Equivalence (I-Maps & Perfect Maps); Factor Graphs - Representation, Relation to Bayesian Networks and Markov Random Fields.
Inference in Graphical Models: Exact Inference - Variable Elimination, Elimination Orderings, Relation to Dynamic Programming, Dealing with Evidence, Forward-Backward Algorithm, Viterbi Algorithm; Junction Tree Algorithm; Belief Propagation (Sum-Product); Approximate Inference - Variational Methods (Mean Field, Kikuchi & Bethe Approximation), Expectation Propagation, Gaussian Belief Propagation; MAP Inference - Max-Product, Graph Cuts, Linear Programming Relaxations to MAP (Tree-Reweighted Belief Propagation, MPLP); Sampling - Markov Chain Monte Carlo, Metropolis-Hastings, Gibbs (Collapsing & Blocking), Particle Filtering.
Learning in Graphical Models: Parameter Estimation - Expectation Maximization, Maximum Likelihood Estimation, Maximum Entropy, Pseudolikelihood, Bayesian Estimation, Conditional Likelihood, Structured Prediction; Learning with Approximate Inference; Learning with Latent Variables; Structure Learning, Structure Search, L1 Priors.
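
Before variable elimination, the simplest exact inference is enumeration. A toy sketch on a two-node Bayesian network, Rain -> WetGrass (the probability values are illustrative):

```python
# P(Rain = True) and P(WetGrass = True | Rain) for a two-node network.
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}

# Posterior by Bayes' rule:
# P(Rain=T | Wet=T) = P(W|R) P(R) / sum over r of P(W|r) P(r)
joint = {r: p_wet_given_rain[r] * (p_rain if r else 1 - p_rain)
         for r in (True, False)}
posterior = joint[True] / sum(joint.values())
print(round(posterior, 3))  # 0.18 / (0.18 + 0.08) ≈ 0.692
```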

Text Books: Koller, D. and Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.

Reference Books: 1. Jensen, F. V. and Nielsen, T. D. (2002). Bayesian Networks and Decision Graphs. Information Science and Statistics. Springer, 2nd edition. 2. Kevin P. Murphy (2013). Machine Learning: A Probabilistic Perspective. 4th printing. MIT Press. 3. Barber, D. (2011). Bayesian Reasoning and Machine Learning. Cambridge University Press, 1st edition. 4. Bishop, C. M. (2011). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2nd printing. 5. Wainwright, M. and Jordan, M. (2008). Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends in Machine Learning, 1:1–305.

Prerequisite: CS5011 OR CS4011 OR CS6690 OR equivalent

3. CS6910 - Fundamentals of Deep Learning
12 Credits

Description: Neural networks have made a tremendous impact on various AI fields in the recent past. In this course, we study the basics of neural networks and their variants such as convolutional neural networks and recurrent neural networks. We study the different ways in which they can be used to solve problems in various domains such as computer vision, speech and NLP, and also look at the latest results and trends in the field.

Course Content:
Overview: The classification task and motivation for NNs to solve such tasks.
Network Organization: Biological neurons, idea of computational units, activation functions, multi-layer perceptrons, convolutional neural networks, convolution and pooling, higher-level representations, feature visualization.
Training Algorithms: Loss functions, optimization, stochastic gradient descent, back-propagation, initialization, regularization, update rules, ensembles, data augmentation, transfer learning, dropout, batch normalization.
Advanced Architectures: Recurrent neural networks (RNN, LSTM, GRU, CTC), residual networks, etc. Generative models: Restricted Boltzmann Machines (RBMs), MCMC and Gibbs sampling, variational auto-encoders, generative adversarial networks.
Applications: Applications to various problems in different AI fields such as computer vision, NLP and speech. A subset of the following topics will be covered: image classification, object detection, image segmentation, semantic segmentation, instance segmentation, stereo matching, optical flow, style transfer, PixelRNN, human pose estimation, contour detection, shape classification, 3D object detection and classification, video analysis, summarization, labeling, language modeling, image captioning, visual question answering, attention, neural machine translation, document question answering, encoder-decoder models, text summarization, and other recent applications from NLP, speech and computer vision.
Other Latest Ideas and Trends: Adversarial examples, network compaction, unsupervised learning, transfer learning, etc.

Text Books: None

Reference Books: 1. Deep Learning, an MIT Press book, Ian Goodfellow, Yoshua Bengio and Aaron Courville. 2. Information Theory, Inference, and Learning Algorithms (Ch. 5), David MacKay. 3. Latest research papers from various Computer Vision, Natural Language Processing, Speech and Information Retrieval conferences.

Prerequisite:

4. CS6700 - Reinforcement Learning
9/12 Credits

Course Content:
The Reinforcement Learning Problem: Evaluative feedback, non-associative learning, rewards and returns, Markov decision processes, value functions, optimality and approximation.
Dynamic Programming: Value iteration, policy iteration, asynchronous DP, generalized policy iteration.
Monte Carlo Methods: Policy evaluation, rollouts, on-policy and off-policy learning, importance sampling.
Temporal Difference Learning: TD prediction, optimality of TD(0), SARSA, Q-learning, R-learning, games and afterstates.
Eligibility Traces: n-step TD prediction, TD(lambda), forward and backward views, Q(lambda), SARSA(lambda), replacing traces and accumulating traces.
Function Approximation: Value prediction, gradient descent methods, linear function approximation, ANN-based function approximation, lazy learning, instability issues.
Policy Gradient Methods: Non-associative learning - REINFORCE algorithm, exact gradient methods, estimating gradients, approximate policy gradient algorithms, actor-critic methods.
Planning and Learning: Model-based learning and planning, prioritized sweeping, Dyna, heuristic search, trajectory sampling, E3 algorithm.
Hierarchical RL: MAXQ framework, Options framework, HAM framework, airport algorithm, hierarchical policy gradient.
Case Studies: Elevator dispatching, Samuel's checkers player, TD-Gammon, Acrobot, helicopter piloting.
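
A minimal tabular Q-learning sketch on a toy chain environment (the environment, exploration policy and hyperparameters are all illustrative, not course material):

```python
import random

# States 0..4 on a chain; actions move left/right; reward 1 only at the right end.
n_states, actions = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma = 0.5, 0.9
random.seed(0)

for _ in range(200):                              # episodes
    s = 0
    while s != n_states - 1:
        a = random.choice(actions)                # random behaviour policy (Q-learning is off-policy)
        s2 = min(max(s + a, 0), n_states - 1)     # clamp to the chain
        r = 1.0 if s2 == n_states - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # TD update
        s = s2

# Greedy state values rise toward the goal: about [0.73, 0.81, 0.9, 1.0, 0.0]
print([round(max(Q[(s, a)] for a in actions), 2) for s in range(n_states)])
```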

Text Books: NIL

Reference Books: NIL

Prerequisite:

5. CS6350 - Computer Vision
12 Credits

Description: Computer vision focuses on the development of algorithms and techniques to analyze and interpret the visible world around us. This requires understanding of fundamental concepts related to multi-dimensional signal processing, feature extraction, pattern analysis, visual geometric modeling, stochastic optimization, etc. Knowledge of these concepts is necessary in this field to explore and contribute to research and further developments in the field of computer vision. Applications range from Biome

Course Content:
• Digital Image Formation and Low-Level Processing: Overview and state of the art, fundamentals of image formation; transformations: orthogonal, Euclidean, affine, projective, etc.; Fourier transform, convolution and filtering, image enhancement, restoration, histogram processing.
• Depth Estimation and Multi-Camera Views: Perspective, binocular stereopsis: camera and epipolar geometry; homography, rectification, DLT, RANSAC, 3-D reconstruction framework; auto-calibration.
• Feature Extraction: Edges - Canny, LoG, DoG; line detectors (Hough transform); corners - Harris and Hessian affine; orientation histogram, SIFT, SURF, HOG, GLOH; scale-space analysis - image pyramids and Gaussian derivative filters, Gabor filters and DWT.
• Image Segmentation: Region growing, edge-based approaches to segmentation, graph-cut, mean-shift, MRFs, texture segmentation; object detection.
• Object Recognition: Structural, model-based, appearance- and shape-based methods; probabilistic paradigms; discriminative part-based models; BOW, ISM, learning methods.
• Pattern Analysis: Clustering: k-means, k-medoids, mixture of Gaussians; classification: discriminant function, supervised, unsupervised, semi-supervised; classifiers: Bayes, KNN, ANN models; dimensionality reduction: PCA, LDA, ICA; non-parametric methods.
• Motion Analysis: Background subtraction and modeling, optical flow, KLT, spatio-temporal analysis, dynamic stereo; motion parameter estimation.
• Shape from X: Light at surfaces; Phong model; reflectance map; albedo estimation; photometric stereo; use of surface smoothness constraint; shape from texture, color, motion and edges.
• Miscellaneous Applications: CBIR, CBVR, activity recognition, computational photography, biometrics, stitching and document processing; modern trends - super-resolution, GPU, augmented reality, cognitive models, fusion and SR&CS.
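
As a taste of the low-level processing and feature-extraction units, a minimal Sobel edge-detection sketch using convolution (assumes NumPy and SciPy; the test image is synthetic):

```python
import numpy as np
from scipy.signal import convolve2d

# A bright square on a dark background as a simple test image.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
gx = convolve2d(image, sobel_x, mode="same")    # horizontal intensity gradient
gy = convolve2d(image, sobel_x.T, mode="same")  # vertical intensity gradient
edges = np.hypot(gx, gy)                        # gradient magnitude

print(edges.max())        # strong response along the square's border, ~0 elsewhere
```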

Text Books: 1. Richard Szeliski, Computer Vision: Algorithms and Applications, Springer-Verlag Limited, London, 2011. 2. Computer Vision: A Modern Approach, D. A. Forsyth, J. Ponce, Pearson Education, 2003.

Reference Books: 1. Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Second Edition, Cambridge University Press, March 2004. 2. K. Fukunaga, Introduction to Statistical Pattern Recognition, Second Edition, Academic Press, Morgan Kaufmann, 1990. 3. R.C. Gonzalez and R.E. Woods, Digital Image Processing, Addison-Wesley, 1992. 4. IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE T-PAMI). 5. International Journal on Computer Vision (IJCV).

Prerequisite: NIL