
Microsoft NNI - AutoML End-to-End Implementation

  • June 23, 2023

Meet the Author: Mr. Bharani Kumar

Bharani Kumar Depuru is a well-known IT personality from Hyderabad. He is the Founder and Director of Innodatatics Pvt Ltd and 360DigiTMG. An alumnus of IIT and ISB with more than 17 years of experience, he has held prominent positions at IT leaders such as HSBC, ITC Infotech, Infosys, and Deloitte. He is a prominent IT consultant specializing in Industrial Revolution 4.0 implementation, Data Analytics practice setup, Artificial Intelligence, Big Data Analytics, Industrial IoT, Business Intelligence, and Business Management. Bharani Kumar is also the chief trainer at 360DigiTMG, with more than ten years of training experience, and has been making the IT transition journey easy for his students. 360DigiTMG is at the forefront of delivering quality education, bridging the gap between academia and industry.


NNI (Neural Network Intelligence) is a lightweight but powerful toolkit to help users automate:

  • Hyperparameter Tuning
  • Neural Architecture Search
  • Model Compression
  • Feature Engineering

Hyperparameter Optimization Overview

Auto Hyperparameter Optimization (HPO), or auto-tuning, is one of the key features of NNI.

Introduction to HPO:

In Machine Learning, a hyperparameter is a parameter whose value is used to control the learning process, and HPO is the problem of selecting a set of optimal hyperparameters for a learning algorithm.

The code snippet below sketches a naive HPO process.

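This is a minimal illustration, not NNI code: it enumerates every hyperparameter combination in a fixed order and keeps the best one. train_and_evaluate is a hypothetical stand-in for the user's training-and-validation code.

import itertools

def train_and_evaluate(params):
    # Hypothetical stand-in: a real implementation would train a model with
    # these hyperparameters and return a validation score.
    return -abs(params['learning_rate'] - 0.01) + params['batch_size'] * 0.001

search_space = {
    'learning_rate': [0.1, 0.01, 0.001],
    'batch_size': [16, 32, 64],
}

best_params, best_score = None, float('-inf')
# Try every combination in a fixed order (naive grid search).
for lr, bs in itertools.product(search_space['learning_rate'],
                                search_space['batch_size']):
    params = {'learning_rate': lr, 'batch_size': bs}
    score = train_and_evaluate(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, best_score)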

Tuning Algorithms

NNI offers tuners to speed up the process of finding the best set of hyperparameters.

The order in which hyperparameter sets are evaluated is determined by a tuner, or tuning algorithm. An effective tuner can estimate where the best hyperparameters lie based on the results of previous hyperparameter sets, and discover them in fewer attempts.

The naive approach above ignores prior results and evaluates every possible combination of hyperparameters in a fixed order. This is the grid search tuning algorithm, which is simple to use.

NNI offers out-of-the-box support for a variety of popular tuners. These include naive algorithms like grid search and random search, Bayesian-based methods like TPE and SMAC, and RL-based algorithms like PPO.


Training Platforms

If you have no interest in distributed training platforms, NNI HPO can be used on your current machine just like any other Python library.

Additionally, when you need to harness more computing resources, NNI offers built-in integration with training platforms, from simple on-premise servers to scalable commercial clouds.

NNI enables you to write your model code once while evaluating hyperparameter sets on local workstations, SSH servers, Kubernetes-based clusters, the AzureML service, and many more platforms.
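As a rough sketch (assuming NNI 2.x's Python Experiment API; main.py is a hypothetical trial script that reports its result via nni.report_final_result), switching platforms is mostly a matter of changing the training service name:

from nni.experiment import Experiment

# 'local' can be swapped for other training services such as 'remote' or
# 'aml'; platform-specific settings are then configured on experiment.config.
experiment = Experiment('local')
experiment.config.trial_command = 'python main.py'  # hypothetical trial script
experiment.config.trial_code_directory = '.'
experiment.config.search_space = {
    'lr': {'_type': 'loguniform', '_value': [0.0001, 0.1]}
}
experiment.config.tuner.name = 'TPE'
experiment.config.max_trial_number = 20
experiment.config.trial_concurrency = 2
experiment.run(8080)  # starts the experiment and serves the web UI on port 8080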

Search Space

Overview

In NNI, the tuner samples hyperparameters according to the search space.

To define a search space, users should specify each variable's name, its sampling strategy, and the strategy's parameters.

An example of a search space definition in JSON format is as follows:

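(This example is representative, in the style of NNI's documentation; the variable names are illustrative placeholders.)

{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
    "hidden_size": {"_type": "choice", "_value": [128, 512, 1024]},
    "batch_size": {"_type": "choice", "_value": [16, 32]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}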

Types of Sampling Strategies

All supported sampling strategies and their parameters are listed here:

choice

{"_type": "choice", "_value": options}

The variable's value is one of the options. Here, options should be provided as a list of numbers or a list of strings. Using arbitrary objects as members of this list (such as sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behavior. An option can also be a nested sub-search space, which takes effect only when the corresponding element is chosen; the variables in such a sub-search space can be seen as conditional on that choice. If an element in the options list is a dict, it is a sub-search space, and to use NNI's built-in tuners you must include a _name key in that dict, which helps identify which element is chosen. A simple illustration of a nested search space definition follows.
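As an illustrative sketch of such a nested definition (the names are hypothetical), a layer can either be empty or a convolution whose kernel size is itself a tunable sub-variable:

{
    "layer0": {
        "_type": "choice",
        "_value": [
            {"_name": "empty"},
            {
                "_name": "conv",
                "kernel_size": {"_type": "choice", "_value": [1, 2, 3, 5]}
            }
        ]
    }
}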

randint

{"_type": "randint", "_value": [lower, upper]}

Selects a random integer between lower (inclusive) and upper (exclusive). Note: different tuners may interpret randint differently. Some (e.g., TPE, GridSearch) treat the integers from lower to upper as unordered values, while others (e.g., SMAC) respect the ordering. If you need all tuners to respect the ordering, use uniform with q=1.

uniform

{"_type": "uniform", "_value": [low, high]}

The variable value is uniformly sampled between low and high. When optimizing, this variable is constrained to a two-sided interval.

quniform

{"_type": "quniform", "_value": [low, high, q]}

The variable value is computed as clip(round(uniform(low, high) / q) * q, low, high), where the clip operation constrains the generated value within the bounds. For example, with _value specified as [0, 15, 2.5], the possible values are [0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0]; with _value specified as [2, 15, 5], the possible values are [2, 5, 10, 15]. This is suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you need to uniformly choose an integer from a range [low, high], write _value as [low, high, 1].

loguniform

{"_type": "loguniform", "_value": [low, high]}

The variable value is drawn from the range [low, high] according to a loguniform distribution, exp(uniform(log(low), log(high))), so that the logarithm of the returned value is uniformly distributed. When optimizing, this variable is constrained to be positive.

qloguniform

_type": "qloguniform", "_value": [low, high, q]}

The variable value is computed as clip(round(loguniform(low, high) / q) * q, low, high), where the clip operation constrains the generated value within the bounds. This is suitable for a discrete variable with respect to which the objective is "smooth" and becomes smoother as the value grows, but which should be bounded both above and below.
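To make these formulas concrete, here is an illustrative Python sketch of the sampling rules above (not NNI's actual implementation):

import math
import random

def clip(value, low, high):
    return min(max(value, low), high)

def quniform(low, high, q):
    # clip(round(uniform(low, high) / q) * q, low, high)
    return clip(round(random.uniform(low, high) / q) * q, low, high)

def loguniform(low, high):
    # exp(uniform(log(low), log(high)))
    return math.exp(random.uniform(math.log(low), math.log(high)))

def qloguniform(low, high, q):
    # clip(round(loguniform(low, high) / q) * q, low, high)
    return clip(round(loguniform(low, high) / q) * q, low, high)

# quniform with _value [0, 15, 2.5] only ever yields
# {0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0}
print(sorted({quniform(0, 15, 2.5) for _ in range(1000)}))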

Tuner

The tuner decides which hyperparameter sets will be evaluated. It is the most critical part of NNI HPO.

A tuner will work with the following pseudocode:

space = get_search_space()
history = []
while not experiment_end:
    hp = suggest_hyperparameter_set(space, history)
    result = run_trial(hp)
    history.append((hp, result))

NNI offers out-of-the-box support for several well-known tuning algorithms. These should be enough to cover the majority of common machine learning scenarios.

Common Usage

All built-in tuners share a similar usage pattern.

To use a built-in tuner, you must provide a standard search space and specify the tuner's name and parameters in the experiment configuration. Some tuners, such as SMAC and DNGO, require extra dependencies to be installed separately. To find out what parameters each tuner accepts and whether it has extra dependencies, consult its reference page.

As a general example, the random tuner can be configured as follows:

config.search_space = {
    'x': {'_type': 'uniform', '_value': [0, 1]},
    'y': {'_type': 'choice', '_value': ['a', 'b', 'c']}
}
config.tuner.name = 'random'
config.tuner.class_args = {'seed': 0}

Level Pruner

This is a basic pruner; some papers call it magnitude pruning or fine-grained pruning. It masks the smallest-magnitude weights in each specified layer according to a sparsity ratio configured in the config list.
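As a minimal sketch (assuming NNI 2.x's PyTorch pruning API; import paths have changed across NNI versions), applying a Level Pruner might look like this:

import torch
from nni.compression.pytorch.pruning import LevelPruner

# A toy model; any torch.nn.Module works.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)

# Mask the 50% smallest-magnitude weights in every Linear layer.
config_list = [{'sparsity': 0.5, 'op_types': ['Linear']}]
pruner = LevelPruner(model, config_list)
masked_model, masks = pruner.compress()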

Neural Architecture Search

Automatic neural architecture search (NAS) plays an increasingly important role in finding better models. Recent work has demonstrated the feasibility of automatic NAS and produced models that outperform manually designed and tuned ones. NASNet, ENAS, DARTS, Network Morphism, and Evolution are some notable examples, and new approaches keep appearing.

At a high level, applying neural architecture search to a specific task typically requires designing a search space, choosing a search strategy, and estimating performance. The three components interact in the following loop (from the well-known NAS survey):

[Figure: the NAS loop, in which the search strategy samples candidate architectures from the search space and the performance estimation strategy returns a performance estimate for each candidate.]
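As a toy, runnable rendition of this loop (all names are hypothetical stand-ins, with random search as the search strategy and a dummy scorer in place of real training):

import random

# Hypothetical two-knob architecture search space.
search_space = {'num_layers': [2, 4, 8], 'width': [64, 128, 256]}

def estimate_performance(arch):
    # Dummy estimator standing in for short training plus validation.
    return arch['num_layers'] * arch['width'] + random.gauss(0, 50)

best_arch, best_score = None, float('-inf')
for _ in range(20):  # search budget
    # Search strategy: sample a candidate architecture from the search space.
    arch = {name: random.choice(options) for name, options in search_space.items()}
    score = estimate_performance(arch)
    if score > best_score:  # keep the best candidate found so far
        best_arch, best_score = arch, score

print(best_arch, best_score)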

Use Cases and Solutions

In contrast to the tutorials and examples in the rest of the documentation, which demonstrate how to use individual features, this section offers end-to-end scenarios and use cases to help users better understand how NNI can benefit them. NNI can be applied in a wide variety of circumstances. We also encourage community members to contribute their AutoML best practices, particularly their experience with NNI.

Automatic Model Tuning

NNI can be applied to a variety of model tuning tasks. Cutting-edge model search techniques, such as EfficientNet, are straightforward to implement on NNI, and NNI can also be used to fine-tune popular models, such as recommendation models.
