
Machine Learning MCQs with Answers - Page 5

Dear candidates, you will find MCQ questions on Machine Learning here. Learn these questions and prepare yourself for upcoming examinations and interviews. You can check the right answer to any question by clicking on any option or by clicking the View Answer button.


Q. What is the approach of the basic algorithm for decision tree induction?

(A) greedy
(B) top down
(C) procedural
(D) step by step
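
For intuition (not part of the quiz): Hunt's algorithm and its descendants (ID3, C4.5, CART) grow the tree top-down and make a greedy, locally optimal choice of split at each node, never revisiting it. A minimal sketch of that greedy step, with made-up sample values:

def gini(labels):
    # Gini impurity of a list of class labels
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    # Greedy step of top-down induction: try each candidate threshold once,
    # keep the one with the lowest weighted impurity, and never backtrack
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

print(best_split([1, 2, 8, 9], ["a", "a", "b", "b"]))  # -> (2, 0.0)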


Q. Can we extract knowledge without applying feature selection?

(A) Yes
(B) No
(C) ---
(D) ---


Q. Suppose there are 25 base classifiers and each classifier has an error rate of e = 0.35. Suppose you are using averaging as the ensemble technique. What is the probability that the ensemble of the above 25 classifiers will make a wrong prediction? Note: all classifiers are independent of each other.

(A) 0.05
(B) 0.06
(C) 0.07
(D) 0.09
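
The value can be checked directly: with independent base classifiers and majority voting, the ensemble errs when 13 or more of the 25 classifiers are wrong, which is a binomial tail probability. A quick sketch:

from math import comb

n, e = 25, 0.35
# Probability that 13 or more of the 25 independent classifiers err
ensemble_error = sum(comb(n, i) * e**i * (1 - e)**(n - i) for i in range(13, n + 1))
print(round(ensemble_error, 2))  # -> 0.06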


Q. When the number of classes is large, the Gini index is not a good choice.

(A) TRUE
(B) FALSE
(C) ---
(D) ---
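
As a reminder, the Gini index of a node with class proportions p_i is 1 - sum(p_i^2); with many evenly represented classes it crowds toward its upper bound, which makes candidate splits harder to distinguish. A small sketch:

def gini_index(proportions):
    # Gini impurity: 1 minus the sum of squared class proportions
    return 1 - sum(p * p for p in proportions)

print(gini_index([0.5, 0.5]))  # 2 classes, evenly split -> 0.5
print(gini_index([0.1] * 10))  # 10 classes, evenly split -> 0.9, near the bound of 1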


Q. Data used to build a data mining model.

(A) training data
(B) validation data
(C) test data
(D) hidden data


Q. This technique associates a conditional probability value with each data instance.

(A) linear regression
(B) logistic regression
(C) simple regression
(D) multiple linear regression
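
Among these, logistic regression is the technique that outputs a conditional probability: it passes a linear score through the sigmoid to give P(y = 1 | x) for each instance. A minimal sketch with made-up weights, purely for illustration:

import numpy as np

def predict_proba(x, w, b):
    # Sigmoid of the linear score gives the conditional probability P(y = 1 | x)
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights, bias, and instance
print(predict_proba(np.array([1.2, -0.7]), np.array([0.8, 0.5]), -0.1))  # ~0.62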


Q. Computers are best at learning

(A) facts.
(B) concepts.
(C) procedures.
(D) principles.


Q. Why is feature scaling done before applying the K-Means algorithm?

(A) In distance calculation it will give the same weight to all features
(B) You always get the same clusters whether or not you use feature scaling
(C) In Manhattan distance it is an important step, but in Euclidean distance it is not
(D) None of these
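
Feature scaling matters here because K-Means relies on distances, and without scaling a feature with a large numeric range dominates the calculation. A small sketch with made-up feature ranges:

import numpy as np

# Two features on very different scales: income (currency units) and age (years)
a = np.array([50_000.0, 25.0])
b = np.array([52_000.0, 60.0])

# Unscaled Euclidean distance: the income axis swamps the age difference
print(np.linalg.norm(a - b))  # ~2000

# Min-max scaling (assumed feature ranges) puts both features on [0, 1],
# so each contributes comparably to the distance K-Means uses
mins = np.array([20_000.0, 18.0])
ranges = np.array([80_000.0, 62.0])
a_s, b_s = (a - mins) / ranges, (b - mins) / ranges
print(np.linalg.norm(a_s - b_s))  # ~0.57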
