Lasso & Ridge Regression Interview Questions & Answers in 2024
Table of Contents
- Which of the following is a disadvantage of non-parametric machine learning algorithms?
- Which of the following is a true statement for regression methods in the case of feature selection?
- In Ridge regression, as the regularization parameter increases, do the regression coefficients decrease?
- What is the output of the following?
- Is it true that the L1 term in Lasso has the following purposes: performing feature selection, compensating for overfitting, and smoothing?
- Which regularization is used to reduce the overfitting problem?
- To check the linear relationship of dependent and independent continuous variables, which of the following plots is best suited?
- The cost function is altered by adding a penalty equivalent to the square of the magnitude of the coefficients
- What are the assumptions of linear regression?
- Which of the following functions of the coefficients is added as the penalty term to the loss function in Lasso regression?
- What type of penalty is used on regression weights in Ridge regression?
- If two variables, x and y, have a very strong linear relationship, then
- Which of the following is used for evaluating regression models?
- Lasso Regression uses which norm?
- Ridge Regression uses which norm?
- In Ridge regression, a hyperparameter called “________” is used that controls the weighting of the penalty to the loss function.
- The scikit-learn Python machine learning library provides an implementation of the Ridge Regression algorithm via the Ridge class. Confusingly, the lambda term can be configured via the “________” argument when defining the class.
- Ridge regression can shrink the slope close to zero (but not exactly zero), while Lasso regression can shrink the slope to exactly zero.
- Ridge regression takes _________ value of variables.
- Lasso regression takes _________ value of variables.
- The effect of the alpha value on both Ridge and Lasso regression is the same in terms of value increase and decrease.
- In both Lasso and Ridge regression, as the alpha value increases, the slope of the regression line reduces and becomes horizontal.
- Consider the following statements:
- To do Ridge and Lasso Regression in R, we will use the _________ library.
- With Lasso Regression, regarding the influence of the hyperparameter lambda: as lambda tends to zero, the solution approaches _________.
- With Lasso Regression, regarding the influence of the hyperparameter lambda: as lambda tends to infinity, the solution approaches _________.
- When compared with Lasso regression, in which cases does Ridge regression work well?
- Suppose we fit “Lasso Regression” to a data set which has 100 features (X1, X2, …, X100). Now, we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct?
-
Which of the following is a disadvantage of non-parametric machine learning algorithms?
- a) Capable of fitting a large number of functional forms (flexibility)
- b) Very fast to learn (speed)
- c) More of a risk to overfit the training data (overfitting)
- d) They do not require much training data
Answer - c) More of a risk to overfit the training data (overfitting)
-
Which of the following is a true statement for regression methods in the case of feature selection?
- a) Ridge regression uses subset selection of features
- b) Lasso regression uses subset selection of features
- c) Both use subset selection of features
- d) None of the above
Answer - b) Lasso regression uses subset selection of features
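As a quick illustration of why option (b) holds, here is a minimal sketch (the synthetic data and the alpha value are assumptions for the example, not part of the question): the L1 penalty in Lasso drives some coefficients to exactly zero, which amounts to selecting a subset of the features.
# Minimal sketch: Lasso zeroes out some coefficients, acting as built-in feature selection.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic data where only 5 of the 20 features are actually informative.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=2.0).fit(X, y)
print("Non-zero coefficients:", np.sum(lasso.coef_ != 0))  # typically close to 5 of the 20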
-
In Ridge regression, as the regularization parameter increases, do the regression coefficients decrease?
- a) True
- b) False
Answer - a) True
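A minimal sketch of this behaviour (synthetic data and arbitrary alpha values, assumed for the example): as the regularization parameter grows, the magnitude of the Ridge coefficients shrinks toward zero.
# Minimal sketch: Ridge coefficients shrink as the regularization parameter grows.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=100, n_features=10, noise=5.0, random_state=0)

for alpha in [0.01, 1.0, 100.0, 10000.0]:
    ridge = Ridge(alpha=alpha).fit(X, y)
    print(alpha, np.abs(ridge.coef_).mean())  # mean |coefficient| decreases as alpha grows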
-
What is the output of the following?
print(foo(1, 2))
def foo(a, b):
    return a + b + 1
- a) 2
- b) 3
- c) 6
- d) Error
Answer - d) Error
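The call fails with a NameError because foo is used before it is defined. For reference, a corrected ordering (my own rewrite, not one of the answer options) works as expected:
def foo(a, b):
    return a + b + 1

print(foo(1, 2))  # prints 4 once the function is defined before it is called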
-
Is it true that the L1 term in Lasso has the following purposes: performing feature selection, compensating for overfitting, and smoothing?
- a) True
- b) False
Answer - b) False
-
Which regularization is used to reduce the overfitting problem?
- a) L1
- b) L2
- c) Both
- d) None of the above
Answer - c) Both
-
To check the linear relationship of dependent and independent continuous variables, which of the following plots is best suited?
- a) Scatter plot
- b) Bar chart
- c) Histograms
- d) All of the above
Answer - a) Scatter plot
-
Statement 1: The cost function is altered by adding a penalty equivalent to the square of the magnitude of the coefficients.
Statement 2: Ridge and Lasso regression are some of the simple techniques to reduce model complexity and prevent the overfitting which may result from simple linear regression.
- a) Statement 1 is true and statement 2 is false
- b) Statement 1 is false and statement 2 is true
- c) Both statements (1 & 2) are true
- d) Both statements (1 & 2) are wrong
Answer - c) Both statements (1 & 2) are true
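Written out in standard notation (added here for reference, not part of the original question), Statement 1 corresponds to the Ridge cost function:
J_{\text{ridge}}(\beta) = \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2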
-
What are the assumptions of linear regression?
- a) The relationship between the output and the inputs should be linear, i.e., the model should be linear in its parameters
- b) The input variables should not be linearly related to each other; if multiple input variables are linearly related, that is called collinearity
- c) The errors should be independent of one another; any relation between errors is called autocorrelation
- d) All of the above
Answer - d) All of the above
-
Which of the following functions of the coefficients is added as the penalty term to the loss function in Lasso regression?
- a) Squared magnitude
- b) Absolute value of magnitude
- c) Number of non-zero entries
- d) None of the above
Answer - b) Absolute value of magnitude
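In the same notation as above (again added only for reference), the Lasso cost function penalizes the absolute value of the coefficients:
J_{\text{lasso}}(\beta) = \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert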
-
What type of penalty is used on regression weights in Ridge regression?
- a) L0
- b) L1
- c) L2
- d) None of the above
Answer - c) L2
-
If two variables, x and y, have a very strong linear relationship, then
- a) There is evidence that x causes a change in y
- b) There is evidence that y causes a change in x
- c) There might not be any causal relationship between x and y
- d) None of these alternatives is correct
Answer - c) There might not be any causal relationship between x and y
-
Which of the following is used for evaluating regression models?
- a) Adjusted R Squared, R Squared
- b) RMSE / MSE / MAE
- c) Only (a) is true
- d) Both (a) and (b) are true
Answer - d) Both (a) and (b) are true
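A minimal sketch of computing these metrics with scikit-learn (the example values are made up for illustration):
# Minimal sketch: common regression evaluation metrics via scikit-learn.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

y_true = np.array([3.0, 5.0, 7.5, 10.0])   # assumed example values
y_pred = np.array([2.8, 5.3, 7.0, 10.4])

r2 = r2_score(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
print("R squared:", r2)
print("MSE:", mse)
print("RMSE:", np.sqrt(mse))
print("MAE:", mean_absolute_error(y_true, y_pred))
# Adjusted R squared is not built in; it can be derived as
# 1 - (1 - r2) * (n - 1) / (n - p - 1) for n samples and p predictors.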
-
Lasso Regression uses which norm?
- a) L1.
- b) L2.
- c) L1 & L2 both.
- d) None of the above.
Answer - a) L1
-
Ridge Regression uses which norm?
- a) L1.
- b) L2.
- c) L1 & L2 both.
- d) None of the above.
Answer - b) L2
-
In Ridge regression, a hyperparameter called “_____________” is used that controls the weighting of the penalty to the loss function.
- a) Alpha.
- b) Gamma.
- c) Lambda.
- d) None of the above.
Answer - a) Alpha
-
The scikit-learn Python machine learning library provides an implementation of the Ridge Regression algorithm via the Ridge class. Confusingly, the lambda term can be configured via the “___________________” argument when defining the class.
- a) Lambda.
- b) Gamma.
- c) Beta.
- d) Alpha.
Answer - d) Alpha
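A minimal sketch of this naming quirk (X_train and y_train are assumed placeholders, not defined here):
# Minimal sketch: in scikit-learn the lambda term is passed via the "alpha" argument.
from sklearn.linear_model import Ridge

model = Ridge(alpha=1.0)          # alpha here plays the role of lambda
# model.fit(X_train, y_train)     # X_train / y_train are assumed placeholders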
-
Ridge regression can shrink the slope close to zero (but not exactly zero), while Lasso regression can shrink the slope to exactly zero.
- a) Both statements are True about Ridge and Lasso.
- b) Both statements are False about Ridge and Lasso.
- c) True statement about Ridge but not about Lasso.
- d) True statement about Lasso but not about Ridge.
Answer - a) Both statements are True about Ridge and Lasso
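A minimal sketch of the contrast (synthetic data and arbitrary alpha values, assumed for the example): Lasso sets some coefficients to exactly zero, while Ridge only shrinks them toward zero.
# Minimal sketch: Lasso produces exact zeros, Ridge only shrinks toward zero.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

X, y = make_regression(n_samples=150, n_features=15, n_informative=4,
                       noise=5.0, random_state=1)

ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=5.0).fit(X, y)

print("Ridge coefficients exactly zero:", np.sum(ridge.coef_ == 0))  # usually none
print("Lasso coefficients exactly zero:", np.sum(lasso.coef_ == 0))  # usually several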
-
Ridge regression takes ________________ value of variables.
- a) Squared value of variables.
- b) Absolute value of variables.
- c) Cube value of variables.
- d) Root value of variables.
Answer - a) Squared value of variables
-
Lasso regression takes ________________ value of variables.
- a) Squared value of variables.
- b) Absolute value of variables.
- c) Cube value of variables.
- d) Root value of variables.
Answer - b) Absolute value of variables
-
The effect of the alpha value on both Ridge and Lasso regression is the same in terms of value increase and decrease.
- a) True.
- b) False.
- c) The alpha value is fixed for Ridge regression.
- d) None of the above.
Answer - a) True
-
In both Lasso and Ridge regression, as the alpha value increases, the slope of the regression line reduces and becomes horizontal.
- a) The slope is fixed whatever the value may be.
- b) False.
- c) True.
- d) None of the above.
Answer - c) True
-
Consider the following statements:
I. Lasso Regression helps to reduce overfitting and it is particularly useful for feature selection.
II. Lasso regression can be useful if we have several independent variables that are useless.
- a) Statement ( I ) is true and statement ( II ) is false.
- b) Statement ( I ) is false and statement ( II ) is true.
- c) Both Statement ( I ) & ( II ) are wrong.
- d) Both Statements ( I ) & ( II ) are true.
Answer - d) Both Statements ( I ) & ( II ) are true
-
To do Ridge and Lasso Regression in R, we will use the _________ library.
- a) ggplot2.
- b) glmnet.
- c) caret.
- d) dplyr.
Answer - b) glmnet
-
With Lasso Regression, regarding the influence of the hyperparameter lambda: as lambda tends to zero, the solution approaches _________________.
- a) Zero.
- b) One.
- c) Linear regression.
- d) Infinity.
Answer - c) Linear regression
-
With Lasso Regression, regarding the influence of the hyperparameter lambda: as lambda tends to infinity, the solution approaches _________________.
- a) Zero.
- b) One.
- c) Global mean.
- d) Infinity.
Answer - c) Global mean
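A minimal sketch covering this question and the previous one (synthetic data and extreme alpha values, assumed for the example): with alpha near zero Lasso essentially reproduces ordinary linear regression, and with a huge alpha every coefficient is driven to zero, so the prediction collapses to the mean of y.
# Minimal sketch: the two extremes of the Lasso regularization parameter.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression

X, y = make_regression(n_samples=100, n_features=5, noise=1.0, random_state=0)

ols = LinearRegression().fit(X, y)
tiny_alpha = Lasso(alpha=1e-6, max_iter=100000).fit(X, y)
huge_alpha = Lasso(alpha=1e6).fit(X, y)

print(np.allclose(ols.coef_, tiny_alpha.coef_, atol=1e-2))   # ~ the ordinary least squares solution
print(huge_alpha.coef_)                                      # all zeros
print(np.isclose(huge_alpha.intercept_, y.mean()))           # prediction reduces to the mean of y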
-
When compared with Lasso regression, in which cases does Ridge regression work well?
- a) If we have more features.
- b) If we have fewer features.
- c) If features have high correlation.
- d) If features have low correlation.
Answer - b) If we have fewer features, c) If features have high correlation
-
Suppose we fit “Lasso Regression” to a data set which has 100 features (X1, X2, …, X100). Now, we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct?
- a) It is more likely for X1 to be excluded from the model
- b) It is more likely for X1 to be included in the model.
- c) Can’t say.
- d) None of these.
Answer - b) It is more likely for X1 to be included in the model
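The intuition, with a sketch on synthetic data and an assumed alpha (not part of the original question): after multiplying X1 by 10, its coefficient needs only a tenth of its previous magnitude to have the same effect on the predictions, so it pays a smaller L1 penalty and is more likely to survive the shrinkage.
# Minimal sketch: rescaling a feature by 10 makes Lasso more likely to keep it.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=10, noise=20.0, random_state=0)

alpha = 50.0                          # assumed, deliberately aggressive shrinkage
coef_before = Lasso(alpha=alpha).fit(X, y).coef_[0]

X_scaled = X.copy()
X_scaled[:, 0] *= 10                  # rescale feature X1 by a factor of 10
coef_after = Lasso(alpha=alpha).fit(X_scaled, y).coef_[0]

print("X1 coefficient before rescaling:", coef_before)  # may well be shrunk to zero
print("X1 coefficient after rescaling: ", coef_after)   # more likely to stay non-zero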