300+ TOP Multi Core Architectures and Programming MCQs & Answers

Multi Core Architectures and Programming Multiple Choice Questions

chapter: Multi-core Processors

1. A collection of lines that connects several devices is called ______________
A. bus

B. peripheral connection wires

C. Both a and b

D. internal wires

Answer: A.bus

2. The PC (program counter) is also called ____________
A. instruction pointer

B. memory pointer

C. data counter

D. file pointer

Answer: A.instruction pointer

3. Which MIMD systems are best scalable with respect to the number of processors?
A. Distributed memory computers

B. ccNUMA systems

C. nccNUMA systems

D. Symmetric multiprocessors

Answer: A.Distributed memory computers

4. Cache coherence: For which shared (virtual) memory systems is the snooping protocol suited?
A. Crossbar connected systems

B. Systems with hypercube network

C. Systems with butterfly network

D. Bus based systems

Answer: D.Bus based systems

5. The idea of cache memory is based ______
A. on the property of locality of reference

B. on the heuristic 90-10 rule

C. on the fact that references generally tend to cluster

D. all of the above

Answer: A.on the property of locality of reference

6. When the number of switch ports is equal to or larger than the number of devices, this simple network is referred to as ______________
A. Crossbar

B. Crossbar switch

C. Switching

D. Both a and b

Answer: D.Both a and b

7. A remote node is a node that has a copy of a ______________
A. Home block

B. Guest block

C. Remote block

D. Cache block

Answer: D.Cache block

8. A pipeline is like _______________
A. an automobile assembly line

B. house pipeline

C. both a and b

D. a gas line

Answer: A.an automobile assembly line

9. Which cache miss does not occur in case of a fully associative cache?
A. Conflict miss

B. Capacity miss

C. Compulsory miss

D. Cold start miss

Answer: A.Conflict miss

10. Bus switches are present in ____________
A. bus window technique

B. crossbar switching

C. linked input/output

D. shared bus

Answer: B.crossbar switching

11. Systems that do not have parallel processing capabilities are ______________
A. SISD

B. MIMD

C. SIMD

D. MISD

Answer: A.SISD

12. Parallel programs: Which speedup could be achieved, according to Amdahl's law, for an infinite number of processors if 5% of a program is sequential and the remaining part is ideally parallel?
A. 10

B. 20

C. 30

D. 40

Answer: B.20
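
(Worked check: Amdahl's law gives speedup S = 1 / (s + (1 - s)/p) for sequential fraction s and p processors; as p grows without bound, S approaches 1/s = 1/0.05 = 20.)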

13. SIMD represents an organization that ______________
A. Includes many processing units under the supervision of a common control unit

B. vector supercomputer and MIMD systems

C. logic behind pipelining an instruction as observe

D. receive an instruction from the controlling unit

Answer: A.Includes many processing units under the supervision of a common control unit

14. Cache memory works on the principle of ____________
A. communication links

B. Locality of reference

C. Bisection bandwidth

D. average access time

Answer: B.Locality of reference

15. In a shared bus architecture, the processor(s) required to perform a bus cycle, for fetching data or instructions, is ________________
A. One Processor

B. Two Processor

C. Multi-Processor

D. None of the above

Answer: A.One Processor

16. The alternative to a snooping-based coherence protocol is called a ____________
A. Write invalidate protocol

B. Snooping protocol

C. Directory protocol

D. Write update protocol

Answer: C.Directory protocol

17. If no node has a copy of a cache block, the block's state is known as ______
A. Cached

B. Un-cached

C. Shared data

D. Valid data

Answer: B.Un-cached

18. The requesting node is sent the requested data from memory, and the requestor is made the only sharing node. This transaction is known as a ________.
A. Read miss

B. Write miss

C. Invalidate

D. Fetch

Answer: A.Read miss

19. A processor fetching or decoding a different instruction during the execution of another instruction is called ______.
A. Direct interconnects

B. Indirect interconnects

C. Pipe-lining

D. Uniform Memory Access

Answer: C.Pipe-lining

20. All nodes in each dimension form a linear array in the __________.
A. Star topology

B. Ring topology

C. Connect topology

D. Mesh topology

Answer: D.Mesh topology

21. The concept of pipelining is most effective in improving performance if the tasks being performed in different stages:
A. require different amount of time

B. require about the same amount of time

C. require different amount of time with time difference between any two tasks being same

D. require different amount with time difference between any two tasks being different

Answer: B.require about the same amount of time

22. The expression ‘delayed load’ is used in the context of
A. processor-printer communication

B. memory-monitor communication

C. pipelining

D. none of the above

Answer: C.pipelining

23. During the execution of the instructions, a copy of the instructions is placed in the ______.
A. Register

B. RAM

C. System heap

D. Cache

Answer: D.Cache

chapter: Parallel Program Challenges

24. The producer-consumer problem can be solved using _____________
A. semaphores

B. event counters

C. monitors

D. All of the above

Answer: D.All of the above
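
For reference, a minimal sketch of the semaphore-based bounded-buffer solution (the buffer size, item count, and single producer/consumer pair are illustrative assumptions):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                    /* illustrative buffer size */

int buffer[N];
int in = 0, out = 0;
sem_t empty, full, mutex;      /* free slots, filled slots, buffer access */

void *producer(void *arg) {
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);      /* wait for a free slot */
        sem_wait(&mutex);      /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);       /* signal a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);       /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);      /* signal a free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);    /* N free slots initially */
    sem_init(&full, 0, 0);     /* no filled slots initially */
    sem_init(&mutex, 0, 1);    /* binary semaphore for mutual exclusion */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

(Compile with gcc -pthread; the same structure can also be expressed with monitors or event counters.)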

25. A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which access takes place is called:
A. data consistency

B. race condition

C. aging

D. starvation

Answer: B.race condition

26. The segment of code in which the process may change common variables, update tables, or write into files is known as:
A. program

B. critical section

C. non – critical section

D. synchronizing

Answer: B.critical section

27. All deadlocks involve conflicting needs for __________
A. Resources

B. Users

C. Computers

D. Programs

Answer: A.Resources

28. ___________ are used for signaling among processes and can be readily used to enforce a mutual exclusion discipline.
A. Semaphores

B. Messages

C. Monitors

D. Addressing

Answer: A.Semaphores

29. To avoid deadlock ____________
A. there must be a fixed number of resources to allocate

B. resource allocation must be done only once

C. all deadlocked processes must be aborted

D. inversion technique can be used

Answer: A.there must be a fixed number of resources to allocate

30. A minimum of _____ variable(s) is/are required to be shared between processes to solve the critical section problem.
A. one

B. two

C. three

D. four

Answer: B.two

31. Spinlocks are intended to provide __________ only.
A. Mutual Exclusion

B. Bounded Waiting

C. Aging

D. Progress

Answer: A.Mutual Exclusion

32. To ensure difficulties do not arise in the readers-writers problem, _______ are given exclusive access to the shared object.
A. readers

B. writers

C. readers and writers

D. none of the above

Answer: B.writers

33. If a process is executing in its critical section, then no other processes can be executing in their critical sections. This condition is called ___________.
A. Out-of-order execution

B. Hardware prefetching

C. Software prefetching

D. mutual exclusion

Answer: D.mutual exclusion

34. A semaphore is a shared integer variable ____________.
A. lightweight process

B. that cannot drop below zero

C. program counter

D. stack space

Answer: B.that cannot drop below zero

35. A critical section is a program segment ______________.
A. where shared resources are accessed

B. single thread of execution

C. improves concurrency in multi-core system

D. Lower resource consumption

Answer: A.where shared resources are accessed

36. A counting semaphore was initialized to 10. Then 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. The resulting value of the semaphore is ___________
A. 4

B. 6

C. 9

D. 8

Answer: D.8
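
(Each P (wait) decrements the semaphore and each V (signal) increments it: 10 - 6 + 4 = 8.)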

37. A system has 3 processes sharing 4 resources. If each process needs a maximum of 2 units, then _____________
A. Better system utilization

B. deadlock can never occur

C. Responsiveness

D. Faster execution

Answer: B.deadlock can never occur

38. _____________ refers to the ability of multiple processes (or threads) to share code, resources, or data in such a way that only one process has access to the shared object at a time.
A. Readers_writer locks

B. Barriers

C. Semaphores

D. Mutual Exclusion

Answer: D.Mutual Exclusion

39. ____________ is the ability of multiple processes to co-ordinate their activities by exchange of information.
A. Deadlock

B. Synchronization

C. Mutual Exclusion

D. Cache

Answer: B.Synchronization

40. When paths have an unbounded number of allowed nonminimal hops from packet sources, the situation is referred to as __________.
A. Livelock

B. Deadlock

C. Synchronization

D. Mutual Exclusion

Answer: A.Livelock

41. Let S and Q be two semaphores initialized to 1, where processes P0 and P1 execute the statements wait(S); wait(Q); ...; signal(S); signal(Q); and wait(Q); wait(S); ...; signal(Q); signal(S); respectively. The above situation depicts a _________.
A. Livelock

B. Critical Section

C. Deadlock

D. Mutual Exclusion

Answer: C.Deadlock
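
A minimal sketch of this lock-ordering deadlock, using POSIX semaphores and two threads in place of the two processes (the thread structure is an illustrative assumption):

#include <pthread.h>
#include <semaphore.h>

sem_t S, Q;                     /* both initialized to 1 */

void *p0(void *arg) {
    sem_wait(&S);               /* P0 acquires S ... */
    sem_wait(&Q);               /* ... then waits for Q, which P1 may hold */
    /* critical section */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&Q);               /* P1 acquires Q ... */
    sem_wait(&S);               /* ... then waits for S: circular wait => deadlock */
    /* critical section */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);     /* may never return on a deadlocking interleaving */
    pthread_join(t1, NULL);
    return 0;
}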

42. Which of the following conditions must be satisfied to solve the critical section problem?
A. Mutual Exclusion

B. Progress

C. Bounded Waiting

D. All of the mentioned

Answer: D.All of the mentioned

43. Mutual exclusion implies that ____________.
A. if a process is executing in its critical section, then no other process must be executing in their critical sections

B. if a process is executing in its critical section, then other processes must be executing in their critical sections

C. if a process is executing in its critical section, then all the resources of the system must be blocked until it finishes execution

D. none of the mentioned

Answer: A.if a process is executing in its critical section, then no other process must be executing in their critical sections

44. Bounded waiting implies that there exists a bound on the number of times a process is allowed to enter its critical section ____________.
A. after a process has made a request to enter its critical section and before the request is granted

B. when another process is in its critical section

C. before a process has made a request to enter its critical section

D. none of the mentioned

Answer: A.after a process has made a request to enter its critical section and before the request is granted

45. What are the two atomic operations permissible on semaphores?
A. Wait

B. Stop

C. Hold

D. none of the mentioned

Answer: A.Wait

46. What are Spinlocks?
A. CPU cycles wasting locks over critical sections of programs

B. Locks that avoid time wastage in context switches

C. Locks that work better on multiprocessor systems

D. All of the mentioned

Answer: D.All of the mentioned

47. What is the main disadvantage of spinlocks?
A. they are not sufficient for many process

B. they require busy waiting

C. they are unreliable sometimes

D. they are too complex for programmers

Answer: B.they require busy waiting
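
A minimal sketch of why spinlocks busy-wait, using a C11 atomic flag (the function names are illustrative):

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* busy waiting: the thread keeps retrying until the flag is released,
       burning CPU cycles but avoiding a context switch */
    while (atomic_flag_test_and_set(&lock_flag))
        ;                       /* spin */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);
}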

48. The signal operation of the semaphore basically works on the basic _______ system call.
A. continue()

B. wakeup()

C. getup()

D. start()

Answer: B.wakeup()

49. If the semaphore value is negative ____________.
A. its magnitude is the number of processes waiting on that semaphore

B. it is invalid

C. no operation can be further performed on it until the signal operation is performed on it

D. none of the mentioned

Answer: A.its magnitude is the number of processes waiting on that semaphore

chapter: Shared Memory Programming with OpenMP

50. Which directive must precede the directive: #pragma omp sections (not necessarily immediately)?
A. #pragma omp section

B. #pragma omp parallel

C. None

D. #pragma omp master

Answer: B.#pragma omp parallel

51. When compiling an OpenMP program with gcc, what flag must be included?
A. -fopenmp

B. #pragma omp parallel

C. –o hello

D. ./openmp

Answer: A.-fopenmp
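
For example, assuming a source file named hello.c (the file name is illustrative), the program is compiled and run with:

gcc -fopenmp hello.c -o hello
./hello

Without -fopenmp, gcc ignores the OpenMP pragmas, and calls to omp_* library functions fail to link.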

52. Within a parallel region, variables are by default ________.
A. Private

B. Local

C. Loco

D. Shared

Answer: D.Shared

53. A ______________ construct by itself creates a “single program multiple data” program, i.e., each thread executes the same code.
A. Parallel

B. Section

C. Single

D. Master

Answer: A.Parallel

54. _______________ specifies that the iterations of the loop must be executed as they would be in a serial program.
A. Nowait

B. Ordered

C. Collapse

D. for loops

Answer: B.Ordered

55. ___________________ initializes each private copy with the corresponding value from the master thread.
A. Firstprivate

B. lastprivate

C. nowait

D. Private (OpenMP) and reduction.

Answer: A.Firstprivate
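
A minimal sketch of firstprivate (the variable name and value are illustrative):

#include <stdio.h>
#include <omp.h>

int main(void) {
    int x = 42;                           /* set by the master thread */
    #pragma omp parallel firstprivate(x)
    {
        /* each thread gets its own copy of x, initialized to 42 */
        printf("thread %d starts with x = %d\n", omp_get_thread_num(), x);
        x += omp_get_thread_num();        /* modifies only this thread's copy */
    }
    printf("after the region, the master's x is still %d\n", x);
    return 0;
}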

56. The __________________ of a parallel region extends the lexical extent by the code of functions that are called (directly or indirectly) from within the parallel region.
A. Lexical extent

B. Static extent

C. Dynamic extent

D. None of the above

Answer: C.Dynamic extent

57. The ______________ specifies that the iterations of the for loop should be executed in parallel by multiple threads.
A. Sections construct

B. for pragma

C. Single construct

D. Parallel for construct

Answer: B.for pragma

58. The _______________ function returns the number of threads that are currently active in the parallel region.
A. omp_get_num_procs ( )

B. omp_get_num_threads ( )

C. omp_get_thread_num ( )

D. omp_set_num_threads ( )

Answer: B.omp_get_num_threads ( )
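
A minimal sketch contrasting the call inside and outside a parallel region:

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* outside any parallel region the team has a single thread */
    printf("outside: %d thread(s)\n", omp_get_num_threads());
    #pragma omp parallel
    {
        #pragma omp single
        printf("inside: %d thread(s)\n", omp_get_num_threads());
    }
    return 0;
}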

59. The size of the initial chunksize is _____________.
A. total_no_of_iterations / max_threads

B. total_no_of_remaining_iterations / max_threads

C. total_no_of_iterations / No_threads

D. total_no_of_remaining_iterations / No_threads

Answer: A.total_no_of_iterations / max_threads

60. A ____________ in OpenMP is just some text that modifies a directive.
A. data environment

B. clause

C. task

D. Master thread

Answer: B.clause

61. In OpenMP, the collection of threads executing the parallel block (the original thread and the new threads) is called a ____________
A. team

B. executable code

C. implicit task

D. parallel constructs

Answer: A.team

62. When a thread reaches a _____________ directive, it creates a team of threads and becomes the master of the team.
A. Synchronization

B. Parallel

C. Critical

D. Single

Answer: B.Parallel

63. Use the _________ library function to determine if nested parallel regions are enabled.
A. Omp_target()

B. Omp_declare target()

C. Omp_target data()

D. omp_get_nested()

Answer: D.omp_get_nested()

64. The ____________ directive ensures that a specific memory location is updated atomically, rather than exposing it to the possibility of multiple, simultaneous writing threads.
A. Parallel

B. For

C. atomic

D. Sections

Answer: C.atomic
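
A minimal sketch of the atomic directive protecting a shared counter (the loop bound is illustrative):

#include <stdio.h>

int main(void) {
    int counter = 0;
    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        #pragma omp atomic      /* the increment is performed atomically */
        counter++;
    }
    printf("counter = %d\n", counter);   /* reliably 100000 */
    return 0;
}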

65. A ___________ construct must be enclosed within a parallel region in order for the directive to execute in parallel.
A. Parallel sections

B. Critical

C. Single

D. work-sharing

Answer: D.work-sharing

66. ____________ is a form of parallelization across multiple processors in parallel computing environments.
A. Work-Sharing Constructs

B. Data parallelism

C. Functional Parallelism

D. Handling loops

Answer: B.Data parallelism

67. In OpenMP, assigning iterations to threads is called ________________
A. scheduling

B. Static

C. Dynamic

D. Guided

Answer: A.scheduling

68. The ____________ is implemented more efficiently than a general parallel region containing possibly several loops.
A. Sections

B. Parallel Do/For

C. Parallel sections

D. Critical

Answer: B.Parallel Do/For

69. _______________ causes no synchronization overhead and can maintain data locality when data fits in cache.
A. Guided

B. Auto

C. Runtime

D. Static

Answer: D.Static

70. How does the difference between the logical view and the reality of parallel architectures affect parallelization?
A. Performance

B. Latency

C. Bandwidth

D. Accuracy

Answer: A.Performance

71. How many assembly instructions does the following C statement take? global_count += 5;
A. 4 instructions

B. 3 instructions

C. 5 instructions

D. 2 instructions

Answer: A.4 instructions

chapter: Distributed Memory Programming

72. MPI specifies the functionality of _________________ communication routines.
A. High-level

B. Low-level

C. Intermediate-level

D. Expert-level

Answer: A.High-level

73. _________________ generates log files of MPI calls.
A. mpicxx

B. mpilog

C. mpitrace

D. mpianim

Answer: B.mpilog

74. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________.
A. Scatter

B. Gather

C. Broadcast

D. Allgather

Answer: C.Broadcast

75. __________________ is a nonnegative integer that the destination can use to selectively screen messages.
A. Dest

B. Type

C. Address

D. length

Answer: B.Type

76. The routine ________________ combines data from all processes, by adding them in this case, and returns the result to a single process.
A. MPI_Reduce

B. MPI_Bcast

C. MPI_Finalize

D. MPI_Comm_size

Answer: A.MPI_Reduce
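
A minimal sketch of MPI_Reduce summing one integer per process onto rank 0 (the local values are illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, local, global = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    local = rank + 1;                 /* illustrative per-process value */
    /* combine the local values by addition; only rank 0 receives the sum */
    MPI_Reduce(&local, &global, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %d\n", global);
    MPI_Finalize();
    return 0;
}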

77. The easiest way to create communicators with new groups is with _____________.
A. MPI_Comm_rank

B. MPI_Comm_create

C. MPI_Comm_split

D. MPI_Comm_group

Answer: C.MPI_Comm_split

78. _______________ is an object that holds information about the received message, including, for example, its actual count.
A. buff

B. count

C. tag

D. status

Answer: D.status

79. The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.
A. Reduce-scatter

B. Reduce (to-one)

C. Allreduce

D. None of the above

Answer: A.Reduce-scatter

80. __________________ is the principal alternative to shared-memory parallel programming.
A. Multiple passing

B. Message passing

C. Message programming

D. None of the above

Answer: B.Message passing

81. ________________ may complete even if fewer than count elements have been received.
A. MPI_Recv

B. MPI_Send

C. MPI_Get_count

D. MPI_Any_Source

Answer: A.MPI_Recv

82. A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler.
A. wrapper script

B. communication functions

C. wrapper simplifies

D. type definitions

Answer: A.wrapper script

83. ________________ returns in its second argument the number of processes in the communicator.
A. MPI_Init

B. MPI_Comm_size

C. MPI_Finalize

D. MPI_Comm_rank

Answer: B.MPI_Comm_size

84. _____________ always blocks until a matching message has been received.
A. MPI_TAG

B. MPI_SOURCE

C. MPI_Recv

D. MPI_ERROR

Answer: C.MPI_Recv
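
A minimal point-to-point sketch: MPI_Recv on rank 1 blocks until the matching send from rank 0 arrives (run with at least two processes; the message content is illustrative):

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    char msg[32];
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        strcpy(msg, "hello");
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocks here until the matching message has been received */
        MPI_Recv(msg, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received: %s\n", msg);
    }
    MPI_Finalize();
    return 0;
}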

85. Communication functions that involve all the processes in a communicator are called ___________
A. MPI_Get_count

B. collective communications

C. buffer the message

D. nonovertaking

Answer: B.collective communications

86. MPI_Send and MPI_Recv are called _____________ communications.
A. Collective Communication

B. Tree-Structured Communication

C. point-to-point

D. Collective Computation

Answer: C.point-to-point

87. The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________.
A. butterfly

B. broadcast

C. Data Movement

D. Synchronization

Answer: A.butterfly

88. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a _________.
A. broadcast

B. reductions

C. Scatter

D. Gather

Answer: A.broadcast

89. In MPI, a ______________ can be used to represent any collection of data items in memory by storing both the types of the items and their relative locations in memory.
A. Allgather

B. derived datatype

C. displacement

D. beginning

Answer: B.derived datatype

90. MPI provides a function, ____________, that returns the number of seconds that have elapsed since some time in the past.
A. MPI_Wtime

B. MPI_Barrier

C. MPI_Scatter

D. MPI_Comm

Answer: A.MPI_Wtime
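
A minimal timing sketch with MPI_Wtime (the timed work is elided):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    double start = MPI_Wtime();       /* seconds since some time in the past */
    /* ... code to be timed ... */
    double elapsed = MPI_Wtime() - start;
    printf("elapsed = %f seconds\n", elapsed);
    MPI_Finalize();
    return 0;
}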

91. Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________.
A. weakly scalable

B. strongly scalable

C. send_buf

D. recv_buf

Answer: B.strongly scalable

92. The idea that parallelism can be used to increase the (parallel) size of the problem is applicable in ___________________.
A. Amdahl’s Law

B. Gustafson-Barsis’s Law

C. Newton’s Law

D. Pascal’s Law

Answer: B.Gustafson-Barsis’s Law

93. Synchronization is one of the common issues in parallel programming. The issues related to synchronization include the following, EXCEPT:
A. Deadlock

B. Livelock

C. Fairness

D. Correctness

Answer: D.Correctness

94. Considering whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed memory programming.
A. Splitting the problem

B. Speeding up computations

C. Speeding up communication

D. Speeding up hardware

Answer: B.Speeding up computations

95. Which of the following is the BEST description of the Message Passing Interface (MPI)?
A. A specification of a shared memory library

B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other

C. Only communicators and not groups are accessible to the programmer only by a “handle”

D. A communicator is an ordered set of processes

Answer: B.MPI uses objects called communicators and groups to define which collection of processes may communicate with each other

chapter: Parallel Program Development

96. An n-body solver is a ___________ that finds the solution to an n-body problem by simulating the behaviour of the particles
A. Program

B. Particle

C. Programmer

D. All of the above

Answer: A.Program

97. The set of NP-complete problems is often denoted by ____________
A. NP-C

B. NP-C or NPC

C. NPC

D. None of the above

Answer: B.NP-C or NPC

98. Pthreads has a nonblocking version of pthread_mutex_lock called __________
A. pthread_mutex_lock

B. pthread_mutex_trylock

C. pthread_mutex_acquirelock

D. pthread_mutex_releaselock

Answer: B.pthread_mutex_trylock
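
A minimal sketch of the nonblocking acquire (the function and mutex names are illustrative):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void try_work(void) {
    /* returns 0 on success, or EBUSY immediately if the mutex is already held */
    if (pthread_mutex_trylock(&m) == 0) {
        /* ... got the lock: touch the shared state ... */
        pthread_mutex_unlock(&m);
    } else {
        /* lock busy: do other useful work instead of blocking */
    }
}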

99. What are the algorithms for identifying which subtrees we assign to the processes or threads? __________
A. breadth-first search

B. depth-first search

C. depth-first search and breadth-first search

D. None of the above

Answer: C.depth-first search and breadth-first search

100. What are the scoping clauses in OpenMP? _________
A. Shared Variables & Private Variables

B. Shared Variables

C. Private Variables

D. None of the above

Answer: A.Shared Variables & Private Variables

101. The function My_avail_tour_count can simply return the ________
A. Size of the process’ stack

B. Sub tree rooted at the partial tour

C. Cut-off length

D. None of the above

Answer: A.Size of the process’ stack

102. MPI provides a function, ________, for packing data into a buffer of contiguous memory.
A. MPI_Pack

B. MPI_UnPack

C. MPI_Pack Count

D. MPI_Packed

Answer: A.MPI_Pack
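
A minimal sketch packing an int and a double into contiguous memory for a single send (the buffer size and destination rank are illustrative assumptions):

#include <mpi.h>

void pack_and_send(int n, double x, MPI_Comm comm) {
    char contig_buf[100];
    int position = 0;                 /* MPI_Pack advances this offset */
    MPI_Pack(&n, 1, MPI_INT, contig_buf, 100, &position, comm);
    MPI_Pack(&x, 1, MPI_DOUBLE, contig_buf, 100, &position, comm);
    /* one send transmits both packed items */
    MPI_Send(contig_buf, position, MPI_PACKED, 1, 0, comm);   /* dest rank 1 assumed */
}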

103. Two MPI_Irecv calls are made specifying different buffers and tags, but the same sender and request location. How can one determine that the buffer specified in the first call has valid data?
A. Call MPI_Probe

B. Call MPI_Testany with the same request listed twice

C. Call MPI_Wait twice with the same request

D. Look at the data in the buffer and try to determine whether it is

Answer: C.Call MPI_Wait twice with the same request

104. Which of the following statements is not true?
A. MPI_Isend and MPI_Irecv are non-blocking message passing routines of MPI

B. MPI_Issend and MPI_Ibsend are non-blocking message passing routines of MPI

C. MPI_Send and MPI_Recv are non-blocking message passing routines of MPI

D. MPI_Ssend and MPI_Bsend are blocking message passing routines of MPI

Answer: C.MPI_Send and MPI_Recv are non-blocking message passing routines of MPI

105. Which of the following is not valid with reference to the Message Passing Interface (MPI)?
A. MPI can run on any hardware platform

B. The programming model is a distributed memory model

C. All parallelism is implicit

D. MPI_Comm_size returns the total number of MPI processes in the specified communicator

Answer: C.All parallelism is implicit

106. An _____________ is a program that finds the solution to an n-body problem by simulating the behavior of the particles.
A. Two N-Body Solvers

B. n-body solver

C. n-body problem

D. Newton's second law

Answer: B.n-body solver

107. For the reduced n-body solver, a ________________ will best distribute the workload in the computation of the forces.
A. cyclic distribution

B. velocity of each particle

C. universal gravitation

D. gravitational constant

Answer: A.cyclic distribution

108. Parallelizing the two n-body solvers using _______________ is very similar to parallelizing them using OpenMP.
A. thread's rank

B. function Loopschedule

C. Pthreads

D. loop variable

Answer: C.Pthreads

109. The run-times of the serial solvers differed from the single-process MPI solvers by ______________.
A. More than 1%

B. less than 1%

C. Equal to 1%

D. Greater than 1%

Answer: B.less than 1%

110. Each node of the tree has an _________________, that is, the cost of the partial tour.
A. Euler's method

B. associated cost

C. three-dimensional problems

D. fast function

Answer: B.associated cost

111. Using _____________ we can systematically visit each node of the tree that could possibly lead to a least-cost solution.
A. depth-first search

B. Foster's methodology

C. reduced algorithm

D. breadth first search

Answer: A.depth-first search

112. After copying the newly created stack into our private stack, we set the new_stack variable to _____________.
A. Infinite

B. Zero

C. NULL

D. None of the above

Answer: C.NULL

113. The ____________________ is a pointer to a block of memory allocated by the user program, and buffersize is its size in bytes.
A. tour data

B. node tasks

C. actual computation

D. buffer argument

Answer: D.buffer argument

114. A _____________ function is called by Fulfill_request.
A. descendants

B. Splitstack

C. dynamic mapping scheme

D. ancestors

Answer: B.Splitstack

115. The cost of stack splitting in the MPI implementation is quite high; in addition to the cost of the communication, the packing and unpacking is very ________________.
A. global least cost

B. time-consuming

C. expensive tours

D. shared stack

Answer: B.time-consuming

116. _____________ begins by checking on the number of tours that the process has in its stack.
A. Terminated

B. Send rejects

C. Receive rejects

D. Empty

Answer: A.Terminated

117. The ____________ is the distributed-memory version of the OpenMP busy-wait loop.
A. For loop

B. while(1) loop

C. Do while loop

D. Empty

Answer: B.while(1) loop

118. We set _______________sent to false and continue in the loop.
A. work_request

B. My_avail_tour_count

C. Fulfill_request

D. Split_stack packs

Answer: A.work_request

119. ________________ takes the data in data_to_be_packed and packs it into contig_buf.
A. MPI Unpack

B. MPI_Pack

C. MPI_Datatype

D. MPI_Comm

Answer: B.MPI_Pack

120. The _______________ function, when executed by a process other than 0, sends its energy to process 0.
A. Out of work

B. No_work_left

C. zero-length message

D. request for work

Answer: A.Out of work
