Neural Networks Multiple Choice Questions and Answers for freshers on “Learning – 2”.
1. The correlation learning law is a special case of which learning law?
A. Hebb learning law
B. Perceptron learning law
C. Delta learning law
D. LMS learning law
Answer: A
Clarification: The correlation law is obtained from the Hebb law by replacing the actual output si with the target output bi.
2. Correlation learning law is what type of learning?
A. supervised
B. unsupervised
C. either supervised or unsupervised
D. both supervised and unsupervised
Answer: A
Clarification: It is supervised, since the weight update depends on the target output.
3. The correlation learning law can be represented by which equation?
A. ∆wij = µ si aj
B. ∆wij = µ(bi – si) aj
C. ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at activation xi
D. ∆wij = µ bi aj
Answer: D
Clarification: The correlation learning law uses the target output bi in place of the actual output, giving ∆wij = µ bi aj.
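As a minimal NumPy sketch of this difference (the learning rate mu and the vectors a, b, s below are illustrative assumptions, not values from the question set), the Hebb and correlation updates differ only in which output signal multiplies the input activation:

import numpy as np

mu = 0.1                        # learning rate (assumed value)
a = np.array([1.0, 0.5, -0.3])  # input activations aj
b = np.array([1.0, -1.0])       # target outputs bi
s = np.array([0.6, -0.2])       # actual outputs si

dW_hebb = mu * np.outer(s, a)   # Hebb law:        ∆wij = µ si aj
dW_corr = mu * np.outer(b, a)   # correlation law: ∆wij = µ bi aj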
4. What is the other name for the instar learning law?
A. loser take it all
B. winner take it all
C. winner give it all
D. loser give it all
Answer: B
Clarification: Only the weights of the unit that gives the maximum output are adjusted; that unit "takes it all".
5. The instar learning law can be represented by which equation?
A. ∆wij = µ si aj
B. ∆wij = µ(bi – si) aj
C. ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at activation xi
D. ∆wk = µ(a – wk), where k is the unit with maximum output
Answer: D
Clarification: This follows from the basic definition of the instar learning law: the weight vector of the winning unit k is moved toward the input a.
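A minimal NumPy sketch of the instar (winner take it all) update, assuming a weight matrix W whose rows are the weight vectors of the competing units (all names and values here are illustrative):

import numpy as np

mu = 0.1
a = np.array([0.8, 0.1, 0.5])   # input vector
rng = np.random.default_rng(0)
W = rng.random((4, 3))          # 4 competing units, 3 inputs each

k = np.argmax(W @ a)            # unit k with maximum output wins
W[k] += mu * (a - W[k])         # ∆wk = µ(a – wk): only the winner moves toward a

Repeated over many inputs, each winning unit's weight vector drifts toward the cluster of inputs it responds to, which is why instar needs no target output.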
6. Is instar a case of supervised learning?
A. yes
B. no
Answer: B
Clarification: Since the weight adjustment does not depend on a target output, instar is unsupervised learning.
7. The outstar learning law can be represented by which equation?
A. ∆wjk = µ(bj – wjk), where the kth unit is the only active unit in the input layer
B. ∆wij = µ(bi – si) aj
C. ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at activation xi
D. ∆wij = µ si aj
Answer: A
Clarification: This follows from the basic definition of the outstar learning law: the weights fanning out of the active input unit are moved toward the target outputs.
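A minimal NumPy sketch of the outstar update, assuming the kth input unit is the only active one, so that only the fan-out weights from unit k move toward the target vector b (names and values are illustrative):

import numpy as np

mu = 0.1
b = np.array([1.0, 0.0, -1.0])   # target output vector bj
rng = np.random.default_rng(0)
W = rng.random((3, 5))           # W[j, k]: weight from input unit k to output unit j
k = 2                            # index of the single active input unit

W[:, k] += mu * (b - W[:, k])    # ∆wjk = µ(bj – wjk) for the active unit k only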
8. Is outstar a case of supervised learning?
A. yes
B. no
Answer: A
Clarification: Since the weight adjustment depends on the target output, outstar is supervised learning.
9. Which of the following pairs of learning laws belong to the same category of learning?
A. hebbian, perceptron
B. perceptron, delta
C. hebbian, widrow-hoff
D. instar, outstar
Answer: B
Clarification: Both the perceptron and delta learning laws belong to supervised learning.
10. In Hebbian learning, how are the initial weights set?
A. random
B. near to zero
C. near to target value
Answer: B
Clarification: The Hebb law builds the weights up as a sum of correlations between inputs and outputs; for the final weights to reflect this sum, the initial weight values must be small (near zero).
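A minimal NumPy sketch of why near-zero initial weights matter under the Hebb law (the training pairs and learning rate are illustrative assumptions, with each observed output s treated as given): the final weights equal the initial weights plus µ times the accumulated input-output correlations, so they reflect those correlations only when the starting weights are near zero.

import numpy as np

mu = 0.1
pairs = [(np.array([1.0, 0.0]),  1.0),   # (input a, observed output s)
         (np.array([0.0, 1.0]), -1.0)]

rng = np.random.default_rng(0)
w = 0.01 * rng.standard_normal(2)        # small initial weights, near zero

for a, s in pairs:
    w += mu * s * a                      # Hebb law: ∆wj = µ s aj

# w is now approximately µ times the sum of input-output correlations,
# since the near-zero initial weights contribute almost nothing.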