
Artificial Intelligence (AI) & Deep Learning Course

Learn AI concepts and practical applications in the Certification Programme in AI and Deep Learning. Get set for a career as an AI expert.

On-campus training: 80 Hours

On-Campus Classes

  • 20th April
  • Timings: 8:00 PM - 10:00 PM

AI & Deep Learning


Total Duration

2.5 months


Prerequisites

  • Computer Skills
  • Basic Mathematical Knowledge
  • Basic Data Science Concepts

AI Training Programme Overview

The Artificial Intelligence certification course has a teaching duration of 80 hours and has been designed for professionals with an aptitude for statistics and a background in a programming language such as Python or R. The Artificial Intelligence (AI) and Deep Learning training helps students build AI applications, understand Neural Network architectures, structure algorithms for new AI machines and minimize errors through advanced optimization techniques. GPUs and TPUs on cloud platforms such as Google Colab will be used to run Google AI algorithms, alongside running Neural Network algorithms on on-premise GPU machines.

Course Details

AI Training Learning Outcomes

  • Be able to build AI systems using Deep Learning algorithms
  • Be able to run all the variants of Neural Network Machine Learning algorithms
  • Be able to deal with unstructured data such as images, videos and text
  • Be able to implement Deep Learning solutions and Image Processing applications using Convolutional Neural Networks
  • Be able to analyse sequence data and perform Text Analytics and Natural Language Processing (NLP) using Recurrent Neural Networks
  • Be able to build practical AI-driven games using Reinforcement Learning and Q-Learning
  • Be able to effectively use Python libraries such as Keras, TensorFlow and OpenCV, which are used in solving AI and Deep Learning problems
  • Learn about the applications of Graphical Processing Units (GPUs) and Tensor Processing Units (TPUs) in running Deep Learning algorithms

Artificial Intelligence Training Modules

This course will be your first stepping stone towards Artificial Intelligence and Deep Learning. In this module, you will be introduced to the analytics programming languages: R, a statistical programming language, and Python, a general-purpose programming language. These are the most popular tools currently employed to churn data and derive meaningful insights.

 
  • All About 360DigiTMG & Innodatatics Inc., USA
  • Dos and Don'ts as a Participant
  • Introduction to Artificial intelligence and Deep learning
  • Course Outline, Road Map and Takeaways from the Course
  • Cross-Industry Standard Process for Data Mining
  • Artificial Intelligence and Deep Learning Applications

Different packages can be used to build Deep Learning and Artificial Intelligence models, such as TensorFlow, Keras, OpenCV and PyTorch. You will learn about these packages and their applications in detail.

 

The TensorFlow and Keras libraries can be used to build Machine Learning and Deep Learning models. OpenCV is used for image processing, and PyTorch, with its dynamic computation graphs, is especially useful when you do not know in advance how much memory a Neural Network model will require.

 
  • Introduction to Deep Learning libraries – Torch, Theano, Caffe, TensorFlow, Keras, OpenCV and PyTorch
  • Deep dive into TensorFlow, Keras, OpenCV and PyTorch
  • Introduction to Anaconda, R for Windows, RStudio and Spyder
  • Environment Setup and Installation Methods of Multiple Packages
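
To give a feel for how these libraries are used, here is a minimal sketch (not taken from the course material) of a small network defined and compiled in Keras; the layer sizes and input width are arbitrary illustrations.

```python
# A minimal, illustrative Keras model definition; layer sizes are arbitrary.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(20,)),                 # 20 illustrative input features
    layers.Dense(64, activation="relu"),      # hidden layer
    layers.Dense(1, activation="sigmoid"),    # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```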

Understand the types of Machine Learning algorithms. Learn about the project life cycle and gain a detailed understanding of each step involved in it. The CRISP-DM process is generally applied to Data Analytics/AI projects; learn about CRISP-DM and the stages of the project life cycle in depth.

 

You will also learn about different types of data, Data Collection, Data Preparation, Data Cleansing, Feature Engineering, EDA, Data Mining and various Error Functions, and understand the techniques and algorithms for handling imbalanced data.

 
  • Introduction to Machine Learning
  • Machine Learning and its types - Supervised Learning, Unsupervised Learning, Reinforcement Learning, Semi-supervised Learning, Active Learning, Transfer Learning, Structured Prediction
  • Understand Business Problem – Business Objective & Business Constraints
  • Data Collection - Surveys and Design of Experiments
  • Data Types, namely Continuous, Discrete, Categorical, Count, Qualitative and Quantitative, and their identification and application
  • Further classification of data in terms of Nominal, Ordinal, Interval & Ratio types
  • Balanced versus Imbalanced datasets
  • Cross-Sectional versus Time Series versus Panel / Longitudinal Data
  • Batch Processing versus Real-Time Processing
  • Structured versus Unstructured vs Semi-Structured Data
  • Big versus Not-Big Data
  • Data Cleaning / Preparation - Outlier Analysis, Missing Values Imputation Techniques, Transformations, Normalization / Standardization, Discretization
  • Sampling Techniques for Handling Balanced versus Imbalanced Datasets
  • Measures of Central Tendency & Dispersion
    • Population Parameters and Sample Statistics
    • Mean/Average, Median, Mode
    • Variance, Standard Deviation, Range
  • Various Graphical Techniques to Understand Data
    • Bar Plot
    • Histogram
    • Boxplot
    • Scatter Plot
  • Feature Engineering - Feature Extraction & Feature Selection
  • Error Functions - Mean Error, Mean Absolute Deviation, Mean Squared Error, Mean Percentage Error, Root Mean Squared Error, Mean Absolute Percentage Error, Cross Table, Confusion Matrix, Binary Cross Entropy & Categorical Cross-Entropy
  • High-Level Strategy in Handling Machine Learning Projects
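
As a small illustration of a few of the error functions listed above, the sketch below computes them with NumPy on made-up numbers.

```python
# Illustrative computation of several error functions with made-up values.
import numpy as np

actual = np.array([3.0, 5.0, 2.5, 7.0])
predicted = np.array([2.5, 5.0, 3.0, 8.0])

errors = actual - predicted
mean_error = errors.mean()                      # Mean Error
mad = np.abs(errors).mean()                     # Mean Absolute Deviation
mse = (errors ** 2).mean()                      # Mean Squared Error
rmse = np.sqrt(mse)                             # Root Mean Squared Error
mape = np.abs(errors / actual).mean() * 100     # Mean Absolute Percentage Error

print(mean_error, mad, mse, rmse, mape)
```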

Learn to maximize or minimize the error rate using Calculus. Learn to find the best-fit line using the linear least-squares method. Understand the gradient method for finding the minimum value of a function when a closed-form solution is not available or not easily obtained.

 

Under Linear Algebra, you will learn about sets, functions, scalars, vectors, matrices, tensors, basic operations and different matrix operations. Under Probability, you will learn about the Uniform, Normal and Binomial distributions, Discrete Random Variables, Cumulative Distribution Functions and Continuous Random Variables.

 
  • Optimizations - Applications
  • Foundations - Slope, Derivatives & Tangent
  • Derivatives in Optimization: Maxima & Minima - First Derivative Test, Second Derivative Test, Partial Derivatives, Cross Partial Derivatives, Saddle Point, Determinants, Minor and Cofactor
  • Gradient Descent Method / Optimization - Minima, Maxima & Learning Rate
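
A minimal sketch of the Gradient Descent method on a simple function whose minimum is known, f(x) = (x - 3)^2, showing how the derivative and the learning rate drive the updates; the starting point and learning rate are arbitrary choices.

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum (x = 3) is known.
def f_prime(x):
    return 2 * (x - 3)          # derivative of (x - 3)^2

x = 10.0                        # arbitrary starting point
learning_rate = 0.1
for step in range(100):
    x = x - learning_rate * f_prime(x)   # move against the gradient

print(round(x, 4))              # converges close to 3.0
```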

You will gain a high-level understanding of the human brain, the importance of multiple layers in a Neural Network, layer-wise extraction of features, and the composition of data in Deep Learning using images, speech and text.

 

You will briefly understand feature extraction from images using SIFT/HOG, speech recognition and feature extraction using MFCC, and NLP feature extraction using syntactic parse trees.

 

You will be introduced to the neuron, which combines weighted inputs, a threshold value and an output, and will understand the importance of weights, bias, summation and activation functions.

 
  • Human Brain – Introduction to neuron
  • Compositionality in Data – Images, Speech & text
  • Mathematical Notations
  • Introduction to ANN
  • Components of ANN - Neuron, Weights, Activation function, Integration function, Bias and Output
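
The sketch below illustrates a single artificial neuron: an integration function (weighted sum plus bias) followed by a sigmoid activation. All numbers are made up for illustration.

```python
# A single artificial neuron: weighted sum + bias, then a sigmoid activation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, 0.3, 0.2])
weights = np.array([0.4, 0.7, -0.2])
bias = 0.1

z = np.dot(weights, inputs) + bias   # integration function
output = sigmoid(z)                  # activation function
print(output)
```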

Learn about single-layered Perceptrons and Rosenblatt's perceptron rule for updating weights and bias. You will understand the importance of the learning rate and error. Walk through a toy example to understand the perceptron algorithm. Learn about the quadratic and spherical summation functions, and about weight-updating methods - the Widrow-Hoff learning rule and Rosenblatt's Perceptron.

 
  • Introduction to Perceptron
  • Introduction to Multi-Layered Perceptron
  • Activation functions – Identity Function, Step Function, Ramp Function, Sigmoid Function, Tanh Function, ReLU, ELU, Leaky ReLU & Maxout
  • Back Propagation Demo
  • Network Topology – Key characteristics and Number of layers
  • Weights Calculation in Back Propagation
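
The toy sketch below (values are illustrative, not from the course material) applies Rosenblatt's perceptron learning rule to the AND problem, showing how the learning rate, error, weights and bias interact.

```python
# Rosenblatt's perceptron rule on the AND problem; values are illustrative.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])          # AND targets

w = np.zeros(2)
b = 0.0
lr = 0.1                            # learning rate

for epoch in range(10):
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(w, xi) + b > 0 else 0   # step activation
        error = target - prediction
        w += lr * error * xi        # weight update
        b += lr * error             # bias update

print(w, b)
```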

Understand the difference between a Perceptron and an MLP (ANN). Learn about the error surface, the challenges related to gradient descent and the practical issues related to Deep Learning. You will learn to implement MLPs in Python and Keras on the MNIST dataset (multi-class problem), the IMDB dataset (binary classification problem), the Reuters dataset (single-label multi-class classification problem) and the Boston Housing dataset (regression problem).

 
  • Error Surface – Learning Rate & Random Weight Initialization
  • Local Minima issues in Gradient Descent Learning
  • Is DL a Holy Grail?
  • Practical Implementation of MLP/ANN in Python – MNIST, IMDB, Reuters & Boston Housing
  • Segregation of data set: Train, Test & Validation
  • Data Representation in Graphs using Matplotlib
  • Deep Learning Challenges – Gradient Primer, Activation Function, Error Function, Vanishing Gradient, Error Surface challenges, Learning Rate challenges, Decay Parameter, Gradient Descent Algorithmic Approaches, Momentum, Nesterov Momentum, Adam, Adagrad, Adadelta & RMSprop
  • Deep Learning Practical Issues – Avoid Overfitting, DropOut, DropConnect, Noise, Data Augmentation, Parameter Choices, Weights Initialization
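
A condensed sketch of the MNIST exercise described above: an MLP in Keras with a Dropout layer and a train/validation split. The hyperparameters are illustrative rather than the course's exact settings.

```python
# An illustrative MLP on the MNIST multi-class problem in Keras.
from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.2),                       # practical issue: avoid overfitting
    layers.Dense(10, activation="softmax"),    # 10 digit classes
])
model.compile(optimizer="rmsprop",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.2)
model.evaluate(x_test, y_test)
```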

Convolutional Neural Networks are the class of Deep Learning networks most commonly applied to images. You will learn about the ImageNet challenge, get an overview of ImageNet-winning architectures, and study the applications of CNNs and the problems MLPs face with huge datasets.

 

You will understand the convolution of filters over images, the basic structure of a convnet, details of the Convolution layer, Pooling layer and Fully Connected layer, a case study of AlexNet, and a few of the practical issues of CNNs.

 
  • ImageNet Challenge – Winning Architectures, Difficult Vision Problems & Hierarchical Approach
  • Parameter Explosion Problem with MLPs
  • Convolution Networks
  • Convolution Layers with Filters
  • Pooling Layer
  • Case Study: AlexNet
  • Practical Issues – Weight decay, Drop Connect, Data Manipulation Techniques & Batch Normalization
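
The sketch below shows the Convolution, Pooling and Fully Connected structure discussed in this module, written in Keras; the filter counts and kernel sizes are illustrative.

```python
# An illustrative small CNN: convolution -> pooling -> fully connected layers.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # convolution layer with filters
    layers.MaxPooling2D((2, 2)),                    # pooling layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),            # fully connected layer
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```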

You will learn image processing techniques: noise reduction using moving-average methods, different types of filters (such as smoothing the image by averaging and the Gaussian filter) and the disadvantages of correlation filters. You will also learn about boundary effects, template matching, detecting the rate of change in intensity, different types of noise, and image sampling and interpolation techniques.

 

You will also learn about colors and intensity, affine transformation, projective transformation, embossing, erosion & dilation, vignette, histogram equalization, HAAR cascade for object detection, SIFT, SURF, FAST, BRIEF and seam carving.

 
  • Introduction to Vision
  • Importance of Image Processing
  • Image Processing Challenges – Interclass Variation, ViewPoint Variation, Illumination, Background Clutter, Occlusion & Number of Large Categories
  • Introduction to Image – Image Transformation, Image Processing Operations & Simple Point Operations
  • Noise Reduction – Moving Average & 2D Moving Average
  • Image Filtering – Linear & Gaussian Filtering
  • Disadvantage of Correlation Filter
  • Introduction to Convolution
  • Boundary Effects – Zero, Wrap, Clamp & Mirror
  • Image Sharpening
  • Template Matching
  • Edge Detection – Image filtering, Origin of Edges, Edges in images as Functions, Sobel Edge Detector
  • Effect of Noise
  • Laplacian Filter
  • Smoothing with Gaussian
  • LOG Filter – Blob Detection
  • Noise Reduction – Salt & Pepper Noise using a Gaussian Filter
  • Nonlinear Filters
  • Bilateral Filters
  • Canny Edge Detector
  • Non Maximum Suppression
  • Hysteresis Thresholding
  • Image Sampling & Interpolation – Image Sub Sampling, Image Aliasing, Nyquist Limit, Wagon Wheel Effect, Down Sampling with Gaussian Filter, Image Pyramid, Image Up Sampling
  • Image Interpolation – Nearest Neighbour Interpolation, Linear Interpolation, Bilinear Interpolation & Cubic Interpolation
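
A short OpenCV sketch of two of the operations listed above, Gaussian smoothing and the Canny edge detector (which internally applies non-maximum suppression and hysteresis thresholding); the file name sample.jpg is a hypothetical placeholder.

```python
# Gaussian smoothing followed by Canny edge detection with OpenCV.
import cv2

image = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder file name

blurred = cv2.GaussianBlur(image, (5, 5), sigmaX=1.0)      # smoothing with Gaussian
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # Canny edge detector

cv2.imwrite("sample_edges.jpg", edges)
```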

Understand language models for next-word prediction, spell check, mobile auto-correct, speech recognition and machine translation. You will learn the disadvantages of traditional models and MLPs, and gain a deep understanding of the architecture of RNNs, the RNN language model, backpropagation through time, and the types of RNNs - one to one, one to many, many to one and many to many - with examples of each type.

 
  • Introduction to Adversaries
  • Language Models – Next Word Prediction, Spell Checkers, Mobile Auto Correction, Speech Recognition & Machine Translation
  • Traditional Language model
  • Disadvantages of MLP’s
  • Introduction to State & RNN cell
  • Introduction to RNN
  • RNN language Models
  • Back Propagation Through time
  • RNN Loss Computation
  • Types of RNN – One to One, One to Many, Many to One, Many to Many
  • Introduction to the CNN and RNN
  • Combining CNN and RNN for Image Captioning
  • Architecture of CNN and RNN for Image Captioning
  • Bidirectional RNN
  • Deep Bidirectional RNN
  • Disadvantages of RNN
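
A minimal Keras sketch of a many-to-one RNN setup: an Embedding layer feeding a SimpleRNN cell that ends in a single prediction. The vocabulary size and layer widths are illustrative assumptions.

```python
# A many-to-one RNN: embedded token sequence -> recurrent state -> one output.
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000                            # illustrative vocabulary size

model = keras.Sequential([
    layers.Embedding(vocab_size, 32),         # token ids -> dense vectors
    layers.SimpleRNN(32),                     # the RNN cell carries the hidden state
    layers.Dense(1, activation="sigmoid"),    # many-to-one output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```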

You will learn to build an object detection model using Fast R-CNN with bounding boxes and understand why Fast R-CNN is a better choice for object detection. You will also learn about instance segmentation problems, which can be addressed using Mask R-CNN.

 
  • R-CNN
  • Fast R-CNN
  • Faster R-CNN
  • Mask R-CNN
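
As a hedged illustration (the course may use a different implementation), the sketch below runs inference with a pre-trained Mask R-CNN from torchvision and reads back the predicted boxes and masks.

```python
# Inference with a pre-trained Mask R-CNN from torchvision (an assumption,
# used here only to illustrate the boxes/labels/scores/masks output format).
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # pre-trained on COCO
model.eval()

image = torch.rand(3, 480, 640)                   # stand-in for a real image in [0, 1]
with torch.no_grad():
    outputs = model([image])                      # list of dicts: boxes, labels, scores, masks

print(outputs[0]["boxes"].shape, outputs[0]["masks"].shape)
```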

Understand and implement Long Short-Term Memory (LSTM), which keeps information intact unless the input makes it forget. You will also learn the components of the LSTM - the cell state, forget gate, input gate and output gate - along with the steps used to process information. Learn the differences between RNN and LSTM, Deep RNN and Deep LSTM, and the related terminology. You will apply LSTMs to build models for prediction.

 
  • Introduction to LSTM – Architecture
  • Importance of Cell State, Input Gate, Output Gate, Forget Gate, Sigmoid and Tanh
  • Mathematical Calculations to Process Data in LSTM
  • RNN vs LSTM
  • Deep RNN vs Deep LSTM
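
A minimal Keras sketch of an LSTM used for sequence prediction, as discussed above; the input shape of 10 timesteps with 1 feature is illustrative.

```python
# An LSTM for simple sequence prediction; shapes are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(10, 1)),   # 10 timesteps, 1 feature per step
    layers.LSTM(32),              # cell state plus input/forget/output gates
    layers.Dense(1),              # predict the next value
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```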

The Gated Recurrent Unit (GRU), a variant of the LSTM, addresses the long-term dependency problem in RNNs. You will learn the components of the GRU and the steps it uses to process information.

 
  • Introduction to GRU
  • Architecture & Gates
  • Update Gate, Reset Gate, Current Memory Content
  • Final Memory at current timestep
  • Applications of GRUs
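
In Keras the GRU is used in the same way as the LSTM layer; the tiny sketch below mirrors the LSTM example above with a GRU, whose update and reset gates operate internally.

```python
# Swapping the LSTM layer for a GRU; same illustrative shapes as before.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(10, 1)),
    layers.GRU(32),        # update gate, reset gate and current memory content
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```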

You will learn about the components of Autoencoders, the steps used to train autoencoders to generate spatial vectors, the types of autoencoders and the generation of data using Variational Autoencoders. You will also understand the architecture of the Restricted Boltzmann Machine (RBM) and the process involved in it.

 
  • Autoencoders
    • Intuition
    • Comparison with other Encoders (MP3 and JPEG)
    • Implementation in Keras
  • Deep AutoEncoders
    • Intuition
    • Implementing DAE in Keras
  • Convolutional Auto encoders
    • Intuition
    • Implementation in Keras
  • Variational Autoencoders
    • Intuition
    • Implementation in Keras
  • Introduction to Restricted Boltzmann Machines - Energy Function, Schematic implementation, Implementation in TensorFlow
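
A compact sketch of a dense autoencoder in Keras: an encoder compressing the input to a latent vector and a decoder reconstructing it. The sizes are illustrative and the training data is omitted.

```python
# A dense autoencoder: encoder -> latent vector -> decoder (reconstruction).
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)       # encoder -> latent vector
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # decoder -> reconstruction

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()
# autoencoder.fit(x_train, x_train, ...)  # trained to reproduce its own input
```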

You will learn the difference between CNNs and DBNs, the architecture of Deep Belief Networks, how greedy learning algorithms are used to train them, and the applications of DBNs.

 
  • Introduction to DBN
  • Architecture of DBN
  • Applications of DBN
  • DBN in Real World
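
As a hedged illustration of the greedy, layer-wise idea behind Deep Belief Networks, the sketch below stacks two of scikit-learn's BernoulliRBM models (an assumption used only for illustration), training each on the previous layer's output; the data is a random placeholder and the sizes are illustrative.

```python
# Greedy layer-wise pretraining illustrated with two stacked BernoulliRBMs.
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.random.rand(200, 64)            # placeholder data in [0, 1]

rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)
hidden1 = rbm1.fit_transform(X)        # train the first layer greedily

rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)
hidden2 = rbm2.fit_transform(hidden1)  # train the second layer on the first layer's output

print(hidden2.shape)                   # (200, 16)
```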

Understand the generation of data using GANs, the architecture of a GAN - the generator and the discriminator - loss calculation and backpropagation, and the advantages and disadvantages of GANs.

 
  • Generative Adversarial Networks (GANS)
  • Data Analysis and Pre-Processing
  • Building Model
  • Model Inputs and Hyperparameters
  • Model losses
  • Implementation of GANS
  • Defining the Generator and Discriminator
  • Generator Samples from Training
  • Model Optimizer
  • Discriminator and Generator Losses
  • Sampling from the Generator
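
A hedged sketch of the two halves of a simple GAN in Keras: a generator mapping random noise to a flattened 28x28 image and a discriminator scoring real versus fake. The architectures are illustrative and the training loop is omitted.

```python
# Defining an illustrative generator and discriminator for a simple GAN.
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100

generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="tanh"),      # generator sample (flattened image)
])

discriminator = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # probability the input is real
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```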

You will learn to use SRGAN, which uses a GAN to produce high-resolution images from low-resolution images, and will understand its generator and discriminator.

 
  • Introduction to SRGAN
  • Network Architecture - Generator, Discriminator
  • Loss Function - Discriminator Loss & Generator Loss
  • Implementation of SRGAN in Keras
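
A hedged sketch of an SRGAN-style generator loss: a pixel-wise content term plus a small adversarial term. The 1e-3 weighting is an illustrative choice, not necessarily the course's.

```python
# An SRGAN-style generator loss: content loss + weighted adversarial loss.
import tensorflow as tf

mse = tf.keras.losses.MeanSquaredError()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(hr_images, sr_images, fake_logits, adv_weight=1e-3):
    content_loss = mse(hr_images, sr_images)                        # pixel-wise content loss
    adversarial_loss = bce(tf.ones_like(fake_logits), fake_logits)  # fool the discriminator
    return content_loss + adv_weight * adversarial_loss
```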

You will learn Q-Learning, a type of Reinforcement Learning: exploiting by building up a Q-table, exploring by randomly selecting actions, and the steps involved in an agent learning a task by itself.

 
  • Reinforcement Learning
  • Deep Reinforcement Learning vs Atari Games
  • Maximizing Future Rewards
  • Policy versus Values Learning
  • Balancing Exploration With Exploitation
  • Experience Replay, or the Value of Experience
  • Q-Learning and Deep Q-Network as a Q-Function
  • Improving and Moving Beyond DQN
  • Keras Deep Q-Network
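
The sketch below (table sizes and hyperparameters are illustrative assumptions, not course values) shows the core of tabular Q-Learning: a Q-table, an epsilon-greedy choice between exploring and exploiting, and the update rule that propagates future rewards. The environment loop itself is omitted.

```python
# Tabular Q-Learning with epsilon-greedy exploration; environment loop omitted.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))        # the Q-table
alpha, gamma, epsilon = 0.1, 0.99, 0.1     # learning rate, discount factor, exploration rate

def choose_action(state):
    if np.random.rand() < epsilon:         # explore: pick a random action
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))        # exploit: pick the best known action

def q_update(state, action, reward, next_state):
    best_next = np.max(Q[next_state])      # value of the best action in the next state
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])
```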

Learn to build speech-to-text and text-to-speech models. You will understand the steps to extract structured data from speech and convert it into text, and later to convert unstructured text data into speech.

 
  • Speech Recognition Pipeline
  • Phonemes
  • Pre-Processing
  • Acoustic Model
  • Deep Learning Models
  • Decoding
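
As a hedged sketch of the pre-processing step of this pipeline, the snippet below extracts MFCC features with librosa (an assumption, since librosa is not listed among the course tools); speech.wav is a hypothetical placeholder file.

```python
# MFCC feature extraction as the pre-processing step of a speech pipeline.
import librosa

audio, sample_rate = librosa.load("speech.wav", sr=16000)        # placeholder file
mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)  # 13 MFCC coefficients
print(mfcc.shape)   # (13, number_of_frames) - the input to the acoustic model
```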

Learn to build a chatbot using generative models and retrieval models. We will use RASA Open Source and LSTMs to build chatbots.

 
  • Introduction to Chatbot
  • NLP Implementation in Chatbot
  • Integrating and implementing Neural Networks Chatbot
  • Generative Chatbot Development
  • Building a Retrieval Based Chatbot
  • Deploying Chatbot in Various Platforms
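
A toy sketch of a retrieval-based chatbot: the user query is matched to the most similar stored question using TF-IDF and cosine similarity from scikit-learn. The question/answer pairs are made-up examples.

```python
# A tiny retrieval-based chatbot using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = ["what is the course duration",
             "what are the prerequisites",
             "do you provide recordings"]
answers = ["The course runs for 2.5 months (80 hours).",
           "Computer skills, basic mathematics and basic data science concepts.",
           "Yes, every session is recorded and shared on the LMS."]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def reply(user_query):
    query_vector = vectorizer.transform([user_query])
    scores = cosine_similarity(query_vector, question_vectors)
    return answers[scores.argmax()]            # return the best-matching answer

print(reply("how long is the course"))
```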


Alexa, an AI personal assistant that holds 70% of the smart speaker market, is expected to add $10 billion in revenue by 2021.

Block Your Time


80 hours

Classroom Sessions


100 hours

Assignments & e-Learning


100 hours

Live Projects

Who Should Sign Up?

  • IT Engineers
  • Data and Analytics Manager
  • Business Analysts
  • Data Engineers
  • Banking and Finance Analysts
  • Marketing Managers
  • Supply Chain Professionals
  • HR Managers
  • Math, Science and Commerce Graduates

Tools Covered

Keras, PyTorch, OpenCV, R, RStudio, Jupyter, Spyder

 

Register for a free orientation

Limited seats available.

Book now to avoid disappointment.


Artificial Intelligence Course Panel of Coaches


Bharani Kumar Depuru

  • Areas of expertise: Data analytics, Digital Transformation, Industrial Revolution 4.0
  • 14+ years of professional experience
  • Trained over 2,500 professionals from eight countries
  • Corporate clients include Hewlett Packard Enterprise, Computer Science Corporation, Akamai, IBS Software, Litmus7, Personiv, Ebreeze, Alshaya, Synchrony Financials, Deloitte
  • Professional certifications - PMP, PMI-ACP, PMI-RMP from Project Management Institute, Lean Six Sigma Master Black Belt, Tableau Certified Associate, Certified Scrum Practitioner, AgilePM (DSDM Atern)
  • Alumnus of Indian Institute of Technology, Hyderabad and Indian School of Business
 

Sharat Chandra Kumar

  • Areas of expertise: Data sciences, Machine learning, Business intelligence and Data visualisation
  • Trained over 1,500 professionals across 12 countries
  • Worked as a Data scientist for 14+ years across several industry domains
  • Professional certifications: Lean Six Sigma Green and Black Belt, Information Technology Infrastructure Library
  • Experienced in Big Data Hadoop, Spark, NoSQL, NewSQL, MongoDB, R, RStudio, Python, Tableau, Cognos
  • Corporate clients include DuPont, All-Scripts, Girnarsoft (College-dekho, Car-dekho) and many more
 

Nitin Mishra

  • Areas of expertise: Data sciences, Machine learning, Business intelligence and Data visualisation
  • 20+ years of industry experience in data science and business intelligence
  • Trained professionals from Fortune 500 companies and students at prestigious colleges
  • Experienced in Cognos, Tableau, Big Data, NoSQL, NewSQL
  • Corporate clients include Time Inc., Hewlett Packard Enterprise, Dell, Metric Fox (Champions Group), TCS and many more
 

Certificate

Win recognition for your AI skills with the Certification Programme in AI and Deep Learning. Stand out in this emerging yet competitive field with our certification.

Recommended Programmes

Data Science using Python and R Programming

Know More
 

Big Data Using Hadoop & Spark

Know More
 

Artificial Intelligence & Deep Learning

Know More
 

FAQs for Artificial Intelligence Certification

Yes, each session is recorded, the videos are placed on our LMS, and you receive lifetime access to the LMS.

While we also offer industry-specific training in Data Science, Machine Learning and AI, our regular programme covers all industries to accommodate all participants, who come from varied backgrounds.

Yes, all the widely used Neural Networks are explained in detail. If a new Neural Network algorithm is introduced in the industry, we cover it as well in the ongoing webinars.

We run CNNs on the cloud, so you need not worry about your laptop configuration. That said, a minimum of 32 GB RAM and an Nvidia 2080 Ti GPU is currently preferred for local work.

 

The Deep Learning book is the most preferred from a theoretical perspective. From a programming and practical-application perspective, however, the Deep Learning training by 360DigiTMG should suffice. We also provide a few Neural Network tutorials for you to watch before attending the regular classes.

Ecosystem Partners

Student Voices

4.7

(3152 Reviews)
