
K-Nearest Neighbor

  • July 15, 2023

Meet the Author: Mr. Bharani Kumar

Bharani Kumar Depuru is a well-known IT personality from Hyderabad. He is the Founder and Director of Innodatatics Pvt Ltd and 360DigiTMG. An IIT and ISB alumnus with more than 18 years of experience, he has held prominent positions at IT majors such as HSBC, ITC Infotech, Infosys, and Deloitte. He is a prominent IT consultant specializing in Industrial Revolution 4.0 implementation, Data Analytics practice setup, Artificial Intelligence, Big Data Analytics, Industrial IoT, Business Intelligence, and Business Management. Bharani Kumar is also the chief trainer at 360DigiTMG, with more than ten years of training experience, and has been making the IT transition journey easy for his students. 360DigiTMG is at the forefront of delivering quality education, thereby bridging the gap between academia and industry.


KNN is also known as:

  • On-demand or Lazy Learning
  • Memory-Based Reasoning
  • Example-Based Reasoning
  • Instance-Based Learning
  • Rote Learning
  • Case-Based Reasoning

KNN is effective in both situations: when the output Y is continuous (regression) and when it is discrete (classification).

KNN is based on measuring the distance between data points. Any distance metric qualifies, including the Euclidean distance covered in earlier sections.
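To make the distance step concrete, here is a minimal sketch (assuming NumPy, with toy values invented for this example) that computes the Euclidean distance from a query point to every training point:

```python
import numpy as np

def euclidean_distances(X_train, query):
    """Distance from one query point to every row of X_train."""
    # square root of the sum of squared feature differences, row by row
    return np.sqrt(((X_train - query) ** 2).sum(axis=1))

# toy data: three training points with two features each
X_train = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 1.0]])
query = np.array([2.0, 2.0])

print(euclidean_distances(X_train, query))  # [1.0, 2.236..., 3.162...]
```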


A better variant of KNN assigns larger weights to neighbours that are closer to the query point, so nearby points influence the prediction more than distant ones.
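In scikit-learn, this weighted variant corresponds to the `weights='distance'` option of `KNeighborsClassifier` (and `KNeighborsRegressor`); the sketch below uses a made-up toy dataset purely for illustration:

```python
from sklearn.neighbors import KNeighborsClassifier

# toy dataset: two features, binary class labels (illustrative values only)
X = [[1, 2], [2, 3], [3, 3], [6, 7], [7, 8], [8, 8]]
y = [0, 0, 0, 1, 1, 1]

# weights='distance' gives closer neighbours a larger say in the vote
knn = KNeighborsClassifier(n_neighbors=3, weights="distance")
knn.fit(X, y)

print(knn.predict([[2, 2], [7, 7]]))  # expected: [0 1]
```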

When predicting a continuous output, the average of the neighbours' output values is used; when predicting a categorical output, the majority vote of the neighbours' output values is used.
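The sketch below shows both prediction rules from scratch, assuming NumPy arrays and toy values invented for this example; `Counter.most_common` performs the majority vote and `mean` the averaging:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3, task="classification"):
    """Predict for one query point: majority vote or mean of the k nearest neighbours."""
    distances = np.sqrt(((X_train - query) ** 2).sum(axis=1))
    nearest = np.argsort(distances)[:k]        # indices of the k closest training points
    neighbour_outputs = y_train[nearest]
    if task == "classification":
        return Counter(neighbour_outputs).most_common(1)[0][0]  # majority count
    return neighbour_outputs.mean()                              # average for continuous Y

X_train = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.0]])
y_class = np.array([0, 0, 1, 1])
y_cont = np.array([10.0, 12.0, 50.0, 55.0])

print(knn_predict(X_train, y_class, np.array([1.2, 1.5]), k=3))                    # 0
print(knn_predict(X_train, y_cont, np.array([1.2, 1.5]), k=3, task="regression"))  # 24.0
```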


The choice of the "K" value is crucial because it controls the bias-variance tradeoff:

Low 'K' values are sensitive to outliers and noise in the training data.

High 'K' values may pull in data points from other categories and smooth over genuine class boundaries.
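One common (though not the only) way to choose 'K' is to score a range of candidate values with cross-validation and keep the best one; the sketch below uses scikit-learn's `cross_val_score` on the Iris dataset purely as an illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# try a range of odd K values (odd K helps avoid ties in a binary vote)
scores = {}
for k in range(1, 22, 2):
    knn = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(knn, X, y, cv=5).mean()

best_k = max(scores, key=scores.get)
print(f"best K = {best_k}, cross-validated accuracy = {scores[best_k]:.3f}")
```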


Pros (Advantages) and Cons (Disadvantages)

Strengths:

  • Does not depend on the underlying data distribution
  • Training is very fast because no model is built (lazy learning)

Weaknesses:

  • No model is produced, so no interesting relationship between the output and inputs is learnt
  • Memory requirement is large because the training data must be kept in memory for distance calculations
  • Testing (prediction) is slower in comparison to other models
  • Categorical inputs require additional processing (encoding)
  • Suffers from the curse of dimensionality
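Because distances are sensitive to feature scale and categorical inputs must be encoded numerically first, KNN is often wrapped in a preprocessing pipeline. A minimal scikit-learn sketch, with column names and values invented for this example, might look like this:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# made-up example frame: one numeric and one categorical feature
df = pd.DataFrame({
    "age": [25, 32, 47, 51, 23, 46],
    "city": ["Hyderabad", "Bangalore", "Hyderabad", "Bangalore", "Hyderabad", "Bangalore"],
    "bought": [0, 0, 1, 1, 0, 1],
})

preprocess = ColumnTransformer([
    ("scale", StandardScaler(), ["age"]),    # put numeric features on one scale
    ("encode", OneHotEncoder(), ["city"]),   # turn categories into numeric columns
])

model = Pipeline([("prep", preprocess), ("knn", KNeighborsClassifier(n_neighbors=3))])
model.fit(df[["age", "city"]], df["bought"])
print(model.predict(df[["age", "city"]].head(2)))
```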
