250+ MCQs on Learning – 2 and Answers

Neural Networks Multiple Choice Questions and Answers for freshers on “Learning – 2”.

1. Correlation learning law is special case of?
A. Hebb learning law
B. Perceptron learning law
C. Delta learning law
D. LMS learning law
Answer: A
Clarification: In the correlation law, the actual output si in Hebb's law is replaced by the target output bi.

2. Correlation learning law is what type of learning?
A. supervised
B. unsupervised
C. either supervised or unsupervised
D. both supervised and unsupervised
Answer: A
Clarification: Supervised, since it depends on the target output.

3. Correlation learning law can be represented by equation?
A. ∆wij= µ(si) aj
B. ∆wij= µ(bi – si) aj
C. ∆wij= µ(bi – si) aj f′(xi), where f′(xi) is the derivative of the activation at xi
D. ∆wij= µ bi aj
Answer: D
Clarification: The correlation learning law depends on the target output (bi).
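
Since the two laws differ only in which output term multiplies the input, the relationship can be sketched in a few lines of Python (the function names and values are illustrative, not from any particular library):

```python
# Hebb's law uses the actual output s_i; the correlation law replaces it
# with the target output b_i. Both updates share the form mu * (output) * a_j.

def hebb_update(mu, s_i, a_j):
    # Hebb's law: delta_w = mu * s_i * a_j
    return mu * s_i * a_j

def correlation_update(mu, b_i, a_j):
    # Correlation law: delta_w = mu * b_i * a_j
    return mu * b_i * a_j

dw_hebb = hebb_update(0.1, 0.8, 1.0)
dw_corr = correlation_update(0.1, 1.0, 1.0)
```

Substituting bi for si is the whole difference, which is why the correlation law is a (supervised) special case of Hebbian learning.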

4. What is the other name for the instar learning law?
A. loser take it all
B. winner take it all
C. winner give it all
D. loser give it all
Answer: B
Clarification: The weight is adjusted only for the unit that gives the maximum output.

5. The instar learning law can be represented by equation?
A. ∆wij= µ(si) aj
B. ∆wij= µ(bi – si) aj
C. ∆wij= µ(bi – si) aj f′(xi), where f′(xi) is the derivative of the activation at xi
D. ∆wk= µ (a-wk), unit k with maximum output is identified
Answer: D
Clarification: Follows from basic definition of instar learning law.
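
The winner-take-all update can be sketched in plain Python; the function name and data layout below are illustrative assumptions:

```python
# Instar law sketch: find the unit k with maximum output w_k . a, then move
# only that unit's weight vector toward the input: delta_w_k = mu * (a - w_k).

def instar_update(weights, a, mu):
    outputs = [sum(w_i * a_i for w_i, a_i in zip(w, a)) for w in weights]
    k = outputs.index(max(outputs))       # winner: the unit with maximum output
    weights[k] = [w_i + mu * (a_i - w_i)  # only the winner's weights adjust
                  for w_i, a_i in zip(weights[k], a)]
    return k, weights

k, W = instar_update([[1.0, 0.0], [0.0, 1.0]], [0.9, 0.1], 0.5)
# unit 0 wins (output 0.9 vs 0.1), and its weights move halfway toward a
```
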

6. Is instar a case of supervised learning?
A. yes
B. no
Answer: B
Clarification: Since the weight adjustment doesn't depend on the target output, it is unsupervised learning.

7. The outstar learning law can be represented by the equation?
A. ∆wjk= µ(bj – wjk), where the kth unit is the only active unit in the input layer
B. ∆wij= µ(bi – si) aj
C. ∆wij= µ(bi – si) aj f′(xi), where f′(xi) is the derivative of the activation at xi
D. ∆wij= µ(si) aj
Answer: A
Clarification: Follows from basic definition of outstar learning law.
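
The outstar update can be sketched in plain Python as well (the names below are illustrative assumptions):

```python
# Outstar law sketch: the kth input unit is the only active one, and its
# outgoing weights move toward the desired output pattern b:
# delta_w_jk = mu * (b_j - w_jk).

def outstar_update(w_col, b, mu):
    return [w + mu * (b_j - w) for w, b_j in zip(w_col, b)]

w_col = outstar_update([0.0, 0.0], [1.0, 0.5], 0.5)
# weights move halfway toward the target pattern: [0.5, 0.25]
```

Because the update is driven by the desired output b, outstar is supervised, unlike instar.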

8. Is outstar a case of supervised learning?
A. yes
B. no
Answer: A
Clarification: Since the weight adjustment depends on the target output, it is supervised learning.

9. Which of the following learning laws belongs to same category of learning?
A. hebbian, perceptron
B. perceptron, delta
C. hebbian, widrow-hoff
D. instar, outstar
Answer: B
Clarification: Both belong to the supervised type of learning.

10. In Hebbian learning, initial weights are set?
A. random
B. near to zero
C. near to target value
D. large values
Answer: B
Clarification: Hebb's law leads to a sum of correlations between input and output; to achieve this, the initial weight values must be small.
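
The role of small initial weights can be seen in a toy sketch: starting from (near) zero, repeated Hebbian updates leave the weight equal to µ times the accumulated input-output correlation (the training pairs below are made up for illustration):

```python
# With the weight initialized near zero, each Hebbian step adds mu * s * a,
# so the final weight is mu times the sum of input-output correlations.

def hebb_train(pairs, mu, w0=0.0):
    w = w0
    for a, s in pairs:
        w += mu * s * a
    return w

w = hebb_train([(1.0, 1.0), (1.0, -1.0), (1.0, 1.0)], 0.1)
# w = 0.1 * (1 - 1 + 1) = 0.1: the correlations, scaled by mu
```
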


250+ MCQs on Backpropagation Algorithm and Answers

Neural Networks Multiple Choice Questions on “Backpropagation Algorithm”.

1. What is the objective of backpropagation algorithm?
A. to develop learning algorithm for multilayer feedforward neural network
B. to develop learning algorithm for single layer feedforward neural network
C. to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly
D. none of the mentioned
Answer: C
Clarification: The objective of the backpropagation algorithm is to develop a learning algorithm for a multilayer feedforward neural network, so that the network can be trained to capture the mapping implicitly.

2. The backpropagation law is also known as the generalized delta rule. Is it true?
A. yes
B. no
Answer: A
Clarification: Because it fulfils the basic condition of delta rule.

3. What is true regarding backpropagation rule?
A. it is also called generalized delta rule
B. error in output is propagated backwards only to determine weight updates
C. there is no feedback of signal at any stage
D. all of the mentioned
Answer: D
Clarification: All these statements define the backpropagation algorithm.

4. Is there feedback in the final stage of the backpropagation algorithm?
A. yes
B. no
Answer: B
Clarification: No feedback is involved at any stage as it is a feedforward neural network.

5. What is true regarding backpropagation rule?
A. it is a feedback neural network
B. actual output is determined by computing the outputs of units for each hidden layer
C. hidden layers output is not all important, they are only meant for supporting input and output layers
D. none of the mentioned
Answer: B
Clarification: In backpropagation rule, actual output is determined by computing the outputs of units for each hidden layer.

6. What is meant by generalized in statement “backpropagation is a generalized delta rule” ?
A. because delta rule can be extended to hidden layer units
B. because delta is applied to only input and output layers, thus making it more simple and generalized
C. it has no significance
D. none of the mentioned
Answer: A
Clarification: The term generalized is used because delta rule could be extended to hidden layer units.
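
A minimal sketch of this generalization, assuming sigmoid units, one hidden layer, and a single output (illustrative shapes and names, not a production implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, target, W1, W2, mu):
    # forward pass through one hidden layer
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    # plain delta rule at the output unit
    delta_out = (target - y) * y * (1.0 - y)
    # delta rule "generalized" to the hidden units via the chain rule
    delta_hid = [delta_out * W2[j] * h[j] * (1.0 - h[j]) for j in range(len(h))]
    # weight updates proportional to the deltas
    W2 = [W2[j] + mu * delta_out * h[j] for j in range(len(h))]
    W1 = [[W1[j][i] + mu * delta_hid[j] * x[i] for i in range(len(x))]
          for j in range(len(W1))]
    return y, W1, W2
```

Each hidden unit's delta is the output delta propagated back through that unit's outgoing weight and scaled by its own activation derivative; extending the delta computation to hidden units in this way is exactly what "generalized" refers to.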

7. What are general limitations of back propagation rule?
A. local minima problem
B. slow convergence
C. scaling
D. all of the mentioned
Answer: D
Clarification: All of these are limitations of the backpropagation algorithm in general.

8. What are the general tasks that are performed with backpropagation algorithm?
A. pattern mapping
B. function approximation
C. prediction
D. all of the mentioned
Answer: D
Clarification: All of these tasks can be performed with the backpropagation algorithm in general.

9. Is backpropagation learning based on gradient descent along the error surface?
A. yes
B. no
C. cannot be said
D. it depends on gradient descent but not error surface
Answer: A
Clarification: The weight adjustment is proportional to the negative gradient of the error with respect to the weight.

10. How can learning process be stopped in backpropagation rule?
A. there is convergence involved
B. no heuristic criteria exist
C. on basis of average gradient value
D. none of the mentioned
Answer: C
Clarification: If the average gradient value falls below a preset threshold, the process may be stopped.
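
Both points, descent along the negative error gradient and stopping once the gradient falls below a preset threshold, can be shown on a toy one-dimensional error surface E(w) = 0.5 * (w - 2)^2 (the surface, learning rate, and threshold are all illustrative):

```python
def descend_until(w, mu, threshold):
    # For E(w) = 0.5 * (w - 2)**2 the gradient is dE/dw = (w - 2).
    # Each step moves w along the negative gradient; training stops once
    # the gradient magnitude falls below the preset threshold.
    steps = 0
    while abs(w - 2.0) >= threshold:
        w -= mu * (w - 2.0)
        steps += 1
    return w, steps

w_final, n_steps = descend_until(0.0, 0.1, 1e-3)
# w_final ends within the threshold of the error minimum at w = 2
```
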

250+ MCQs on Neural Networks ART and Answers

Neural Networks Multiple Choice Questions on “ART”.

1. An auto-associative network is?
A. a neural network that contains feedback
B. a neural network that contains loops
C. a neural network with no loops
D. none of the mentioned

Answer: A
Clarification: An auto-associative network contains feedback.

2. What is true about sigmoidal neurons?
A. can accept any vectors of real numbers as input
B. outputs a real number between 0 and 1
C. they are the most common type of neurons
D. all of the mentioned

Answer: D
Clarification: All these statements together define sigmoidal neurons.

3. The bidirectional associative memory is similar in principle to?
A. hebb learning model
B. Boltzmann model
C. Papert model
D. none of the mentioned

Answer: D
Clarification: The bidirectional associative memory is similar in principle to Hopfield model.

4. What does ART stand for?
A. Automatic resonance theory
B. Artificial resonance theory
C. Adaptive resonance theory
D. None of the mentioned

Answer: C
Clarification: ART stands for Adaptive Resonance Theory.

5. What is the purpose of ART?
A. take care of approximation in a network
B. take care of update of weights
C. take care of pattern storage
D. none of the mentioned

Answer: D
Clarification: Adaptive resonance theory takes care of the stability-plasticity dilemma.

6. What type of learning is involved in ART?
A. supervised
B. unsupervised
C. supervised and unsupervised
D. none of the mentioned

Answer: B
Clarification: ART uses unsupervised learning.

7. What type of inputs does ART-1 receive?
A. bipolar
B. binary
C. both bipolar and binary
D. none of the mentioned

Answer: B
Clarification: ART-1 receives only binary inputs.

8. A greater value of the vigilance parameter ‘p’ leads to?
A. small clusters
B. bigger clusters
C. no change
D. none of the mentioned

Answer: A
Clarification: With higher vigilance the matching test is stricter, so fewer input samples are associated with the same neuron, giving smaller clusters.
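
The effect of vigilance on cluster size can be sketched with an ART-1 style match test on binary vectors (the ratio test below is a simplified illustration, not a full ART implementation):

```python
def vigilance_accepts(prototype, x, rho):
    # ART-1 style match test: accept input x into the prototype's cluster
    # only if |prototype AND x| / |x| >= rho (the vigilance parameter).
    overlap = sum(p & xi for p, xi in zip(prototype, x))
    norm = sum(x)
    return norm > 0 and overlap / norm >= rho

proto = [1, 1, 0, 0]
x = [1, 0, 1, 0]                           # shares 1 of its 2 active bits
loose = vigilance_accepts(proto, x, 0.5)   # low vigilance: accepted
strict = vigilance_accepts(proto, x, 0.9)  # high vigilance: rejected
```

With high vigilance, inputs are rejected from existing clusters more often and spawn new ones, so each cluster ends up smaller.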

9. ART is made to tackle?
A. stability problem
B. hard problems
C. storage problems
D. none of the mentioned

Answer: D
Clarification: ART is made to tackle the stability-plasticity dilemma.

10. What does the vigilance parameter in ART determine?
A. number of possible outputs
B. number of desired outputs
C. number of acceptable inputs
D. none of the mentioned

Answer: D
Clarification: The vigilance parameter in ART determines the tolerance of the matching process.

250+ MCQs on Neural Networks Dynamics and Answers

Neural Networks Multiple Choice Questions on “Dynamics”.

1. The weight state, i.e. the set of weight values, is determined by what kind of dynamics?
A. synaptic dynamics
B. neural level dynamics
C. can be either synaptic or neural dynamics
D. none of the mentioned

Answer: A
Clarification: Weights are determined by synaptic dynamics, which governs how the connection strengths evolve.

2. Which is faster neural level dynamics or synaptic dynamics?
A. neural level
B. synaptic
C. both equal
D. insufficient information

Answer: A
Clarification: Neural-level dynamics depends on input fluctuations, which take place every few milliseconds.

3. Do weights change during activation dynamics?
A. yes
B. no

Answer: B
Clarification: During activation dynamics, synaptic weights don’t change significantly and hence are assumed to be constant.

4. Activation dynamics is referred as?
A. short term memory
B. long term memory
C. either short or long term
D. both short & long term

Answer: A
Clarification: It depends on the input pattern, and the input changes from moment to moment; hence short-term memory.

5. Synaptic dynamics is referred as?
A. short term memory
B. long term memory
C. either short or long term
D. both short & long term

Answer: B
Clarification: Synaptic weights don’t change for a given set of training inputs; hence long-term memory.

6. What is classification?
A. deciding what features to use in a pattern recognition problem
B. deciding what class an input pattern belongs to
C. deciding what type of neural network to use
D. none of the mentioned

Answer: B
Clarification: Follows from basic definition of classification.

7. What is generalization?
A. the ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the test set.
B. the ability of a pattern recognition system to approximate the desired output values for pattern vectors which are not in the training set.
C. can be either way
D. none of the mentioned

Answer: B
Clarification: Follows from basic definition of generalization.

8. What are models in neural networks?
A. mathematical representation of our understanding
B. representation of biological neural networks
C. both way
D. none of the mentioned

Answer: C
Clarification: Models should be close to our biological neural systems, so that machines too can achieve high efficiency.

9. What kind of dynamics leads to learning laws?
A. synaptic
B. neural
C. activation
D. both synaptic & neural

Answer: A
Clarification: Since weights depend on synaptic dynamics, learning laws follow from it.

10. Changing inputs affects what kind of dynamics directly?
A. synaptic
B. neural
C. activation
D. both synaptic & neural

Answer: C
Clarification: Activation dynamics depends on input pattern, hence any change in input pattern will affect activation dynamics of neural networks.

250+ MCQs on Analysis of Pattern Storage and Answers

Neural Networks Multiple Choice Questions on “Analysis Of Pattern Storage”.

1. Which is the simplest pattern recognition task in a feedback network?
A. heteroassociation
B. autoassociation
C. can be hetero or autoassociation, depends on situation
D. none of the mentioned
Answer: B
Clarification: Autoassociation is the simplest pattern recognition task.

2. In a linear autoassociative network, if the input is noisy then the output will be noisy?
A. yes
B. no
Answer: A
Clarification: A linear autoassociative network gives out exactly what is given to it as input.

3. Does linear autoassociative network have any practical use?
A. yes
B. no
Answer: B
Clarification: Since a noisy input yields an equally noisy output, the network has no practical use.
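
The limitation can be shown in a short sketch: taking the trained weight matrix to be the identity (an illustrative simplification), linear recall reproduces the noisy input exactly:

```python
# Linear autoassociative recall: output = W . a. With W effectively the
# identity, whatever noise is present in the input appears at the output.

def linear_recall(W, a):
    return [sum(W[i][j] * a[j] for j in range(len(a))) for i in range(len(W))]

I = [[1.0, 0.0], [0.0, 1.0]]
noisy = [0.9, 0.1]            # noisy version of the stored pattern [1, 0]
out = linear_recall(I, noisy)
# out == noisy: the noise passes straight through, hence no practical use
```
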

4. What can be done by using a non-linear output function for each processing unit in a feedback network?
A. pattern classification
B. recall
C. pattern storage
D. all of the mentioned
Answer: C
Clarification: By using a non-linear output function for each processing unit, a feedback network can be used for pattern storage.

5. At what points in the energy landscape are the stable states, which can be used to store input patterns, reached?
A. mean of peaks and valleys
B. maxima
C. minima
D. none of the mentioned
Answer: C
Clarification: Energy minima correspond to stable states that can be used to store input patterns.
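
The idea that stored patterns sit at energy minima can be illustrated with a Hopfield-style energy function and Hebbian weights (a toy sketch with one stored pattern, not a full model):

```python
# Store one bipolar pattern with Hebbian weights w_ij = s_i * s_j (i != j),
# then compare the network energy at the stored pattern and at another state.

def energy(w, state):
    # E = -0.5 * sum_ij w_ij * s_i * s_j
    n = len(state)
    return -0.5 * sum(w[i][j] * state[i] * state[j]
                      for i in range(n) for j in range(n))

pattern = [1, -1, 1, -1]
n = len(pattern)
w = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

e_stored = energy(w, pattern)      # the stored pattern's energy
e_other = energy(w, [1, 1, 1, 1])  # an arbitrary state sits higher
```

The stored pattern lies at a lower energy, i.e. at a stable state of the landscape.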

6. The number of patterns that can be stored in a given network depends on?
A. number of units
B. strength of connecting links
C. both number of units and strength of connecting links
D. none of the mentioned
Answer: C
Clarification: The number of patterns that can be stored in a given network depends on the number of units and the strength of the connecting links.

7. What happens when the number of available energy minima is less than the number of patterns to be stored?
A. pattern storage is not possible in that case
B. pattern storage can be easily done
C. pattern storage problem becomes hard problem for the network
D. none of the mentioned
Answer: C
Clarification: The pattern storage problem becomes a hard problem when the number of energy minima, i.e. stable states, is less than the number of patterns.

8. What happens when the number of available energy minima is more than the number of patterns to be stored?
A. no effect
B. pattern storage is not possible in that case
C. error in recall
D. none of the mentioned
Answer: C
Clarification: Due to additional false minima, there is error in recall.

9. How can the hard problem be solved?
A. by providing additional units in a feedback network
B. nothing can be done
C. by removing units in hidden layer
D. none of the mentioned
Answer: A
Clarification: Hard problem can be solved by providing additional units in a feedback network.

10. Why is there an error in recall when the number of energy minima is more than the required number of patterns to be stored?
A. due to noise
B. due to additional false maxima
C. due to additional false minima
D. none of the mentioned
Answer: C
Clarification: Due to additional false minima, there is error in recall.

250+ MCQs on Applications of Neural Networks – 1 and Answers

Neural Networks Multiple Choice Questions on “Applications Of Neural Networks – 1”.

1. Which of these applications of robots can be made with a single layer feedforward network?
A. wall climbing
B. rotating arm and legs
C. gesture control
D. wall following
Answer: D
Clarification: Wall following is a simple task and doesn’t require any feedback.

2. Which is the most direct application of neural networks?
A. vector quantization
B. pattern mapping
C. pattern classification
D. control applications
Answer: C
Clarification: It is the most direct application, and multilayer feedforward networks became popular because of it.

3. What are the pros of neural networks over conventional computers?
A. they have the ability to learn by examples
B. they have real time high computational rates
C. they have more tolerance
D. all of the mentioned
Answer: D
Clarification: Because of their parallel structure, they have higher computational rates than conventional computers, so all are true.

4. What is true about single layer associative neural networks?
A. performs pattern recognition
B. can find the parity of a picture
C. can determine whether two or more shapes in a picture are connected or not
D. none of the mentioned
Answer: A
Clarification: It can only perform pattern recognition; the rest is not true for a single layer network.

5. Which of the following is false?
A. neural networks are artificial copy of the human brain
B. neural networks have high computational rates than conventional computers
C. neural networks learn by examples
D. none of the mentioned
Answer: D
Clarification: All statements are true for a neural network.

6. For what purpose is the Hamming network suitable?
A. classification
B. association
C. pattern storage
D. none of the mentioned
Answer: A
Clarification: The Hamming network performs template matching between stored templates and inputs.

7. What happens in the upper subnet of the Hamming network?
A. classification
B. storage
C. output
D. none of the mentioned
Answer: D
Clarification: In the upper subnet, competitive interaction among units takes place.

8. The competition in the upper subnet of the Hamming network continues till?
A. only one unit remains negative
B. all units are destroyed
C. output of only one unit remains positive
D. none of the mentioned
Answer: C
Clarification: The competition in the upper subnet of the Hamming network continues till the output of only one unit remains positive.

9. What is the activation value of the winner unit indicative of?
A. the greater the degradation, the more the activation value of the winning unit
B. the greater the degradation, the less the activation value of the winning unit
C. the greater the degradation, the more the activation value of the other units
D. the greater the degradation, the less the activation value of the other units
Answer: B
Clarification: Simply, the greater the degradation of the input, the less the activation value of the winning unit.

10. What is the matching score at the first layer of the recognition Hamming network indicative of?
A. dissimilarity of input pattern with patterns stored
B. noise immunity
C. similarity of input pattern with patterns stored
D. none of the mentioned
Answer: C
Clarification: The matching score is simply indicative of the similarity of the input pattern with the stored patterns.
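
The two subnets can be sketched together: the first layer computes a matching score against each stored template, and the competition leaves only the best-matching unit active (the code below illustrates the outcome of the competition, not MAXNET's iterative dynamics):

```python
# Lower subnet: matching score = number of bits where template and input agree.
# Upper subnet: competition; the unit with the highest score wins.

def matching_scores(templates, x):
    return [sum(1 for t_i, x_i in zip(t, x) if t_i == x_i) for t in templates]

def winner(templates, x):
    scores = matching_scores(templates, x)
    return scores.index(max(scores))

templates = [[1, 1, -1, -1], [-1, -1, 1, 1]]
x = [1, 1, 1, -1]         # closer to the first template
k = winner(templates, x)  # unit 0 wins with score 3 (vs 1)
```
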