250+ MCQs on Topology and Answers

Neural Networks Multiple Choice Questions on “Topology”.

1. In a neural network, how can connections among the units be achieved?
A. interlayer
B. intralayer
C. both interlayer and intralayer
D. either interlayer or intralayer

Answer: C
Clarification: Connections can be made from one unit to another across layers (interlayer) and among the units within a layer (intralayer).

2. How can connections across the layers & among the units within a layer be organised in standard topologies?
A. in feedforward manner
B. in feedback manner
C. both feedforward & feedback
D. either feedforward or feedback

Answer: D
Clarification: Connections across the layers in standard topologies can be organised in a feedforward manner or in a feedback manner, but not both.

3. What is an instar topology?
A. when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent
B. when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)
C. can be either way
D. none of the mentioned

Answer: A
Clarification: Restatement of basic definition of instar.

4. What is an outstar topology?
A. when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent
B. when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (which comprises the input vector)
C. can be either way
D. none of the mentioned

Answer: B
Clarification: Restatement of basic definition of outstar.

5. The operation of instar can be viewed as?
A. content addressing the memory
B. memory addressing the content
C. either content addressing or memory addressing
D. both content & memory addressing

Answer: A
Clarification: Because in an instar, when input is given to layer F1, the jth (say) unit of the other layer F2 is activated to the maximum extent; the input content thus addresses the memory.

6. The operation of outstar can be viewed as?
A. content addressing the memory
B. memory addressing the content
C. either content addressing or memory addressing
D. both content & memory addressing

Answer: B
Clarification: Because in an outstar, the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1; the memory (the activated unit) thus addresses the content.
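
To make the two definitions concrete, here is a minimal NumPy sketch of both operations; the layer sizes, weights, and input pattern are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layers: F1 has 4 units, F2 has 3 units.
# W[j] is the weight vector of the jth F2 unit for its connections with F1.
W = rng.normal(size=(3, 4))

# Instar (content addressing the memory): the input pattern on F1 activates
# to the maximum extent the F2 unit whose weight vector best matches it.
a = np.array([1.0, 0.0, 1.0, 0.0])   # activity pattern on F1
j = int(np.argmax(W @ a))            # most strongly activated F2 unit
print("instar: winning F2 unit =", j)

# Outstar (memory addressing the content): activating the jth F2 unit reads
# out its weight vector as the recalled activity pattern on F1.
print("outstar: recalled F1 pattern =", W[j])
```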

7. If two layers coincide & weights are symmetric (wij = wji), then what is that structure called?
A. instar
B. outstar
C. autoassociative memory
D. heteroassociative memory

Answer: C
Clarification: In an autoassociative memory each unit is connected to every other unit & to itself.
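
A minimal sketch of such an autoassociative memory, assuming bipolar patterns stored by Hebbian outer products (which automatically gives symmetric weights, wij = wji); the patterns are invented for illustration:

```python
import numpy as np

def store(patterns):
    """Build the symmetric weight matrix (wij = wji) by Hebbian outer
    products; every unit is connected to every other unit and to itself."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)          # outer product of a pattern with itself
    return W

def recall(W, x, steps=5):
    """Repeatedly update the state until a stored pattern is retrieved."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store(np.array([p1, p2]))
noisy = p1.copy()
noisy[0] = -1                         # corrupt one bit
print(recall(W, noisy))               # recovers p1
```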

8. Heteroassociative memory can be an example of which type of network?
A. group of instars
B. group of outstars
C. either group of instars or outstars
D. both groups of instars & outstars

Answer: C
Clarification: Depending upon the direction of flow, the memory can be of either type.

9. What is STM in a neural network?
A. short topology memory
B. stimulated topology memory
C. short term memory
D. none of the mentioned

Answer: C
Clarification: Full form of STM.

10. What does STM correspond to?
A. activation state of network
B. encoded pattern information in synaptic weights
C. either way
D. both way

Answer: A
Clarification: Short-term memory (STM) refers to the capacity-limited retention of information over a brief period of time; hence it corresponds to the activation state of the network.

11. What does LTM correspond to?
A. activation state of network
B. encoded pattern information in synaptic weights
C. either way
D. both way

Answer: B
Clarification: Long-term memory (LTM) is the encoding and retention of an effectively unlimited amount of information for a much longer period of time; hence it corresponds to the pattern information encoded in the synaptic weights.

250+ TOP MCQs on Neural Networks – Pattern Mapping and Answers

Neural Networks Multiple Choice Questions on “Pattern Mapping”.

1. Can all hard problems be handled by a multilayer feedforward neural network, with nonlinear units?
A. yes
B. no
Answer: A
Clarification: A multilayer feedforward network with nonlinear units (a multilayer perceptron) can, in principle, handle any hard problem.

2. What is a mapping problem?
A. when no restrictions such as linear separability are placed on the set of input-output pattern pairs
B. when there may be restrictions such as linear separability placed on the input-output patterns
C. when there are restriction but other than linear separability
D. none of the mentioned
Answer: A
Clarification: It is a more general case of the classification problem.

3. Can the mapping problem be a more general case of the pattern classification problem?
A. yes
B. no
Answer: A
Clarification: Since no restrictions such as linear separability are placed on the set of input-output pattern pairs, the mapping problem becomes a more general case of the pattern classification problem.

4. What is the objective of the pattern mapping problem?
A. to capture weights for a link
B. to capture inputs
C. to capture feedbacks
D. to capture implied function
Answer: D
Clarification: The objective of the pattern mapping problem is to capture the implied function.

5. To provide generalization capability to a network, what should be done?
A. all units should be linear
B. all units should be non – linear
C. all units except those in the input layer should be non-linear
D. none of the mentioned
Answer: C
Clarification: To provide generalization capability to a network, all units except those in the input layer should be non-linear.

6. What is the objective of the pattern mapping problem?
A. to capture implied function
B. to capture system characteristics from observed data
C. both the implied function and the system characteristics
D. none of the mentioned
Answer: C
Clarification: The implied function is all about the system characteristics, so capturing one amounts to capturing the other.

7. Does an approximate system produce a strictly interpolated output?
A. yes
B. no
Answer: B
Clarification: An approximate system does not produce a strictly interpolated output; it generalizes from the observed data.

8. The nature of the mapping problem decides?
A. number of units in second layer
B. number of units in third layer
C. overall number of units in hidden layers
D. none of the mentioned
Answer: C
Clarification: The nature of the mapping problem decides the overall number of units in the hidden layers.

9. How is the hard learning problem solved?
A. using nonlinear differentiable output function for output layers
B. using nonlinear differentiable output function for hidden layers
C. using nonlinear differentiable output function for output and hidden layers
D. it cannot be solved
Answer: C
Clarification: The hard learning problem is solved by using a nonlinear differentiable output function for the units in the output and hidden layers.
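
As a rough sketch of this arrangement, the network below uses the logistic sigmoid, a nonlinear differentiable output function, in both the hidden and output layers; the layer sizes and random weights are illustrative only:

```python
import numpy as np

def sigmoid(x):
    """Nonlinear and differentiable: its derivative is s(x) * (1 - s(x)),
    which is what gradient-based (backpropagation) training relies on."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 2))          # input layer (2 units) -> hidden (3)
W2 = rng.normal(size=(1, 3))          # hidden (3 units) -> output (1)

def forward(a):
    h = sigmoid(W1 @ a)               # nonlinear differentiable hidden layer
    return sigmoid(W2 @ h)            # nonlinear differentiable output layer

print(forward(np.array([0.5, -1.0])))
```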

10. The number of units in hidden layers depends on?
A. the number of inputs
B. the number of outputs
C. both the number of inputs and outputs
D. the overall characteristics of the mapping problem
Answer: D
Clarification: The number of units in hidden layers depends on the overall characteristics of the mapping problem.

250+ MCQs on Associative Memories and Answers

Neural Networks Multiple Choice Questions on “Associative Memories”.

1. What are the tasks that cannot be realised or recognised by simple networks?
A. handwritten characters
B. speech sequences
C. image sequences
D. all of the mentioned
Answer: D
Clarification: All of these are complex recognition tasks.

2. Can data be stored directly in associative memory?
A. yes
B. no
Answer: B
Clarification: Data cannot be stored directly in associative memory; information is held implicitly in the connection weights.

3. If the weight matrix stores the given patterns, then the network becomes?
A. autoassociative memory
B. heteroassociative memory
C. multidirectional associative memory
D. temporal associative memory
Answer: A
Clarification: If the weight matrix stores the given patterns, then the network becomes an autoassociative memory.

4. If the weight matrix stores association between a pair of patterns, then network becomes?
A. autoassociative memory
B. heteroassociative memory
C. multidirectional associative memory
D. temporal associative memory
Answer: B
Clarification: If the weight matrix stores the association between a pair of patterns, then the network becomes a heteroassociative memory.

5. If the weight matrix stores multiple associations among several patterns, then network becomes?
A. autoassociative memory
B. heteroassociative memory
C. multidirectional associative memory
D. temporal associative memory
Answer: C
Clarification: If the weight matrix stores multiple associations among several patterns, then the network becomes a multidirectional associative memory.

6. If the weight matrix stores association between adjacent pairs of patterns, then network becomes?
A. autoassociative memory
B. heteroassociative memory
C. multidirectional associative memory
D. temporal associative memory
Answer: D
Clarification: If the weight matrix stores associations between adjacent pairs of patterns in a sequence, then the network becomes a temporal associative memory.

7. Heteroassociative memory is also known as?
A. unidirectional memory
B. bidirectional memory
C. multidirectional associative memory
D. temporal associative memory
Answer: B
Clarification: Heteroassociative memory is also known as bidirectional memory.

8. What are some of the desirable characteristics of associative memories?
A. ability to store large number of patterns
B. fault tolerance
C. ability to recall even when the input pattern is noisy
D. all of the mentioned
Answer: D
Clarification: All of these are desirable characteristics of associative memories.

9. What is the objective of BAM?
A. to store pattern pairs
B. to recall pattern pairs
C. to store a set of pattern pairs so that either member of a pair can be recalled by giving the other as input
D. none of the mentioned
Answer: C
Clarification: The objective of BAM, i.e. Bidirectional Associative Memory, is to store a set of pattern pairs so that either member of a pair can be recalled by presenting the other as input.
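
A minimal sketch of BAM recall under this definition, assuming bipolar pattern pairs stored by correlation (Hebbian) encoding; the particular pairs are invented for illustration:

```python
import numpy as np

# Two bipolar pattern pairs (a, b) to store.
A = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
B = np.array([[1, -1, 1], [-1, 1, 1]])

# Correlation encoding of the associations: W = sum over pairs of a b^T.
W = sum(np.outer(a, b) for a, b in zip(A, B))

def recall_b(a, steps=5):
    """Present an 'a' pattern and bounce activity between the two layers
    (forward through W, back through W transposed) until it stabilises."""
    b = np.where(a @ W >= 0, 1, -1)
    for _ in range(steps):
        a = np.where(W @ b >= 0, 1, -1)
        b = np.where(a @ W >= 0, 1, -1)
    return b

print(recall_b(A[0]))                 # recalls B[0] from its paired input
```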

10. BAM is a special case of MAM, is that true?
A. yes
B. no
Answer: A
Clarification: BAM, i.e. Bidirectional Associative Memory, is a special case of MAM, i.e. Multidirectional Associative Memory.

250+ MCQs on Learning – 1 and Answers

Neural Networks Multiple Choice Questions on “Learning – 1”.

1. On what parameters can the change in the weight vector depend?
A. learning parameters
B. input vector
C. learning signal
D. all of the mentioned
Answer: D
Clarification: The change in the weight vector corresponding to the jth input at time (t+1) depends on all of these parameters.

2. If the change in weight vector is represented by ∆wij, what does it mean?
A. describes the change in the weight vector for the ith processing unit, taking the jth input into account
B. describes the change in the weight vector for the jth processing unit, taking the ith input into account
C. describes the change in the weight vectors for both the jth & ith processing units
D. none of the mentioned
Answer: A
Clarification: ∆wij = µ f(wiᵀa) aj, where a is the input vector.

3. What is the learning signal in the equation ∆wij = µ f(wiᵀa) aj?
A. µ
B. wiᵀa
C. aj
D. f(wiᵀa)
Answer: D
Clarification: f(wiᵀa) is the nonlinear function of the unit’s activation, i.e. the output of the network, and it serves as the learning signal.
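
A vectorised sketch of this general law, with the learning signal supplied as a function f; the layer sizes, learning rate µ, and the choice of tanh for f are illustrative assumptions:

```python
import numpy as np

def update(W, a, f, mu=0.1):
    """General learning law: delta_wij = mu * f(wi^T a) * aj.
    Row i of W is the weight vector wi of the ith processing unit."""
    s = f(W @ a)                      # learning signal f(wi^T a) per unit
    return W + mu * np.outer(s, a)    # rank-one update, one row per unit

rng = np.random.default_rng(2)
W = rng.normal(size=(2, 3))           # 2 units, 3 inputs (illustrative)
a = np.array([1.0, 0.0, -1.0])        # input vector
W = update(W, a, f=np.tanh)           # e.g. tanh as the nonlinear signal
print(W)
```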

4. State whether Hebb’s law is of supervised or unsupervised type?
A. supervised
B. unsupervised
C. either supervised or unsupervised
D. can be both supervised & unsupervised
Answer: B
Clarification: No desired output is required for its implementation.

5. Hebb’s law can be represented by which equation?
A. ∆wij = µ f(wiᵀa) aj
B. ∆wij = µ si aj, where si is the output signal of the ith unit
C. both way
D. none of the mentioned
Answer: C
Clarification: si = f(wiᵀa) in Hebb’s law, so the two forms are equivalent.

6. State which of the following statements hold for the perceptron learning law?
A. it is supervised type of learning law
B. it requires desired output for each input
C. ∆wij = µ(bi – si) aj
D. all of the mentioned
Answer: D
Clarification: All statements follow from ∆wij = µ(bi – si) aj, where bi is the target output; hence it is supervised learning.
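
A sketch of a single update under the perceptron learning law, assuming a bipolar signum output function; the weights and data are invented for illustration:

```python
import numpy as np

def perceptron_step(W, a, b, mu=0.1):
    """Perceptron law: delta_wij = mu * (bi - si) * aj, si = sgn(wi^T a)."""
    s = np.where(W @ a >= 0, 1.0, -1.0)   # actual (nonlinear) output si
    return W + mu * np.outer(b - s, a)    # no change where output is correct

rng = np.random.default_rng(3)
W = rng.normal(size=(1, 3))
a = np.array([1.0, -1.0, 1.0])            # input (could include a bias term)
b = np.array([1.0])                       # desired output bi from a teacher
W = perceptron_step(W, a, b)              # supervised: needs b for each input
print(W)
```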

7. Delta learning is of unsupervised type?
A. yes
B. no
Answer: B
Clarification: Change in weight is based on the error between the desired & the actual output values for a given input.

8. The Widrow & Hoff learning law is a special case of?
A. hebb learning law
B. perceptron learning law
C. delta learning law
D. none of the mentioned
Answer: C
Clarification: The output function in this law is assumed to be linear; everything else is the same as in the delta learning law.

9. What is the other name of the Widrow & Hoff learning law?
A. Hebb
B. LMS
C. MMS
D. None of the mentioned
Answer: B
Clarification: LMS stands for least mean square. The change in weight is made proportional to the negative gradient of the error, which takes a simple form owing to the linearity of the output function.
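
A sketch of the LMS update, i.e. the delta law with a linear output si = wiᵀa; the data and learning rate are illustrative:

```python
import numpy as np

def lms_step(W, a, b, mu=0.05):
    """Widrow & Hoff (LMS): delta_wij = mu * (bi - wi^T a) * aj,
    i.e. a step along the negative gradient of the squared error."""
    s = W @ a                             # linear output
    return W + mu * np.outer(b - s, a)

rng = np.random.default_rng(4)
W = rng.normal(size=(1, 3))
a = np.array([0.5, -1.0, 1.0])
b = np.array([0.2])                       # desired output
for _ in range(100):
    W = lms_step(W, a, b)                 # repeated steps shrink the error
print(W @ a)                              # close to the target 0.2
```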

10. Which of the following equations represents the perceptron learning law?
A. ∆wij = µ si aj
B. ∆wij = µ(bi – si) aj
C. ∆wij = µ(bi – si) f′(xi) aj, where f′(xi) is the derivative of the output function at the activation xi
D. ∆wij = µ(bi – wiᵀa) aj
Answer: B
Clarification: The perceptron learning law is a supervised learning law with a nonlinear output function.

250+ MCQs on Pattern Recognition and Answers

Neural Networks Multiple Choice Questions on “Pattern Recognition”.

1. From the given input-output pairs, a pattern recognition model should capture the characteristics of the system?
A. true
B. false
Answer: A
Clarification: From the given input-output pairs, a pattern recognition model should be able to capture the characteristics of the system & hence should be designed accordingly.

2. Let a(l), b(l) represent input-output pairs, where “l” varies over the natural numbers. If a(l) = b(l)?
A. problem is heteroassociation
B. problem is autoassociation
C. can be either auto or heteroassociation
D. none of the mentioned
Answer: B
Clarification: When a(l) = b(l), the problem is classified as autoassociation.

3. Let a(l), b(l) represent input-output pairs, where “l” varies over the natural numbers. If a(l) ≠ b(l)?
A. problem is heteroassociation
B. problem is autoassociation
C. can be either auto or heteroassociation
D. none of the mentioned
Answer: A
Clarification: When a(l) & b(l) are distinct, the problem is classified as heteroassociation.

4. The recalled output in a pattern association problem depends on?
A. nature of input-output
B. design of network
C. both input & design
D. none of the mentioned
Answer: C
Clarification: The recalled output in a pattern association problem depends on both the input & the design of the network.

5. If a(l) gives output b(l) & a’ = a(l) + m, where m is a small quantity, & if a’ gives output b(l), then?
A. network exhibits accretive behaviour
B. network exhibits interpolative behaviour
C. exhibits both accretive & interpolative behaviour
D. none of the mentioned
Answer: A
Clarification: This follows from the basic definition of accretive behaviour in neural networks: a slightly perturbed input is mapped back to the stored output.

6. If a(l) gives output b(l) & a’ = a(l) + m, where m is a small quantity, & if a’ gives output b(l) + n, then?
A. network exhibits accretive behaviour
B. network exhibits interpolative behaviour
C. exhibits both accretive & interpolative behaviour
D. none of the mentioned
Answer: B
Clarification: This follows from the basic definition of interpolative behaviour in neural networks: a small change in the input produces a correspondingly small change in the output.

7. Can a system be both interpolative & accretive at the same time?
A. yes
B. no
Answer: B
Clarification: A system cannot exhibit both behaviours at the same time, since they are based on different approaches & algorithms.

8. Which 3 of the following are the basic types of neural nets that form the basic functional units?
i) feedforward ii) loop iii) recurrent iv) feedback v) combination of feedforward & feedback
A. i, ii, iii
B. i, ii, iv
C. i, iv, v
D. i, iii, v
Answer: C
Clarification: Feedforward, feedback, and combination (feedforward & feedback) networks form the basic functional units of neural nets.

9. Feedback networks are used for autoassociation & pattern storage?
A. yes
B. no
Answer: A
Clarification: Feedback networks are typically used for autoassociation & pattern storage.

10. Feedforward networks are also used for autoassociation & pattern storage?
A. yes
B. no
Answer: B
Clarification: Feedforward networks are used for pattern mapping.

250+ MCQs on Multi Layer Feedforward Neural Network and Answers

Neural Networks Multiple Choice Questions on “Multi Layer Feedforward Neural Network”.

1. What is the use of MLFFNN?
A. to realize structure of MLP
B. to solve pattern classification problem
C. to solve pattern mapping problem
D. to realize an approximation to an MLP
Answer: D
Clarification: MLFFNN stands for multilayer feedforward network and MLP stands for multilayer perceptron.

2. What is the advantage of basis function networks over multilayer feedforward neural networks?
A. training of basis function is faster than MLFFNN
B. training of basis function is slower than MLFFNN
C. storing in basis function is faster than MLFFNN
D. none of the mentioned
Answer: A
Clarification: The main advantage of basis function networks is that their training is faster than that of an MLFFNN.

3. Why is the training of basis function networks faster than MLFFNN?
A. because they are developed specifically for pattern approximation
B. because they are developed specifically for pattern classification
C. because they are developed specifically for pattern approximation or classification
D. none of the mentioned
Answer: C
Clarification: Training of basis function networks is faster than MLFFNN because they are developed specifically for pattern approximation or classification.

4. Pattern recall takes more time for?
A. MLFNN
B. Basis function
C. Equal for both MLFNN and basis function
D. None of the mentioned
Answer: B
Clarification: The first layer of a basis function network involves computing the basis function values for the input, so pattern recall takes more time.

5. In which type of networks is training completely avoided?
A. GRNN
B. PNN
C. GRNN and PNN
D. None of the mentioned
Answer: C
Clarification: In GRNN and PNN networks training is completely avoided.
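
As a sketch of why no training is needed, a GRNN-style prediction can be computed directly from the stored training samples as a kernel-weighted average; the data and bandwidth sigma below are invented for illustration:

```python
import numpy as np

def grnn_predict(x, X, y, sigma=0.5):
    """GRNN output: Gaussian-kernel-weighted average of the stored targets.
    The training set (X, y) is simply memorised; nothing is fitted."""
    d2 = np.sum((X - x) ** 2, axis=1)     # squared distance to each sample
    w = np.exp(-d2 / (2.0 * sigma**2))    # kernel weight of each sample
    return float(np.dot(w, y) / np.sum(w))

X = np.array([[0.0], [1.0], [2.0]])       # stored training inputs
y = np.array([0.0, 1.0, 4.0])             # stored training targets
print(grnn_predict(np.array([1.5]), X, y))  # smooth estimate between targets
```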

6. What does GRNN do?
A. function approximation task
B. pattern classification task
C. function approximation and pattern classification task
D. none of the mentioned
Answer: A
Clarification: GRNN stands for Generalized Regression Neural Network, which performs a function approximation task.

7. What does PNN do?
A. function approximation task
B. pattern classification task
C. function approximation and pattern classification task
D. none of the mentioned
Answer: B
Clarification: PNN stands for Probabilistic Neural Network, which performs a pattern classification task.

8. The CPN provides a practical approach for implementing?
A. pattern approximation
B. pattern classification
C. pattern mapping
D. pattern clustering
Answer: C
Clarification: CPN, i.e. the counterpropagation network, provides a practical approach for implementing pattern mapping.

9. What does a basic counterpropagation network consist of?
A. a feedforward network only
B. a feedforward network with hidden layer
C. two feedforward networks with a hidden layer
D. none of the mentioned
Answer: C
Clarification: A counterpropagation network consists of two feedforward networks with a common hidden layer.

10. How does the name counterpropagation signify its architecture?
A. its ability to learn inverse mapping functions
B. its ability to learn forward mapping functions
C. its ability to learn forward and inverse mapping functions
D. none of the mentioned
Answer: C
Clarification: A counterpropagation network has the ability to learn both forward and inverse mapping functions.