250+ TOP MCQs on Quicksort using Random Sampling and Answers

Data Structures & Algorithms Multiple Choice Questions on “Quicksort using Random Sampling”.

1. Quick sort uses which of the following algorithm design techniques to implement sorting?
a) backtracking
b) greedy algorithm
c) divide and conquer
d) dynamic programming
Answer: c
Clarification: Quick sort uses the divide and conquer technique to sort a given array. It divides the array into two parts about the pivot and then applies quick sort recursively to both parts.

2. What is a randomized quick sort?
a) quick sort with random partitions
b) quick sort with random choice of pivot
c) quick sort with random output
d) quick sort with random input

Answer: b
Clarification: Randomized quick sort chooses a random element as the pivot. This is done to avoid the worst case of quick sort, which occurs when the input array is already sorted.

3. What is the purpose of using randomized quick sort over standard quick sort?
a) so as to avoid worst case time complexity
b) so as to avoid worst case space complexity
c) to improve accuracy of output
d) to improve average case time complexity
Answer: a
Clarification: Randomized quick sort helps in avoiding the worst case time complexity of O(n²), which occurs when the input array is already sorted. The average case and best case time complexities remain unaltered.

4. What is the auxiliary space complexity of randomized quick sort?
a) O(1)
b) O(n)
c) O(log n)
d) O(n log n)

Answer: c
Clarification: The auxiliary space complexity of randomized quick sort is O(log n), which is used for the call stack formed due to recursion. Note that algorithms with O(log n) auxiliary space are still commonly regarded as in-place, since the extra space grows very slowly with the input size.

5. What is the average time complexity of randomized quick sort?
a) O(n log n)
b) O(n²)
c) O(n² log n)
d) O(n log n²)

Answer: a
Clarification: The average case time complexity of randomized quick sort is the same as that of standard quick sort, since randomization only helps in preventing the worst case. It is equal to O(n log n).

6. Quick sort uses which of the following method to implement sorting?
a) merging
b) partitioning
c) selection
d) exchanging

Answer: b
Clarification: Quick sort partitions the input array about the pivot in order to sort it. Thus its method of sorting is called partitioning.

7. Randomized quick sort is an in place sort.
a) true
b) false

Answer: a
Clarification: An in-place algorithm requires only constant or very little auxiliary space. Quick sort qualifies as an in-place sorting algorithm as it has a very low auxiliary space requirement of O(log n).

8. Randomized quick sort is a stable sort.
a) true
b) false

Answer: b
Clarification: Randomized quick sort, like standard quick sort, is not a stable sorting algorithm. This is because elements with equal values are not guaranteed to appear in the same relative order in the sorted output.

9. What is the best case time complexity of randomized quick sort?
a) O(log n)
b) O(n log n)
c) O(n²)
d) O(n² log n)

Answer: b
Clarification: The best case occurs when the pivot partitions the array into two equal halves at every step. This gives the recurrence T(n) = 2T(n/2) + n, which solves to O(n log n).

10. Which of the following is incorrect about randomized quicksort?
a) it has the same time complexity as standard quick sort
b) it has the same space complexity as standard quick sort
c) it is an in-place sorting algorithm
d) it cannot have a time complexity of O(n²) in any case

Answer: d
Clarification: Randomized quick sort prevents the worst case complexity of O(n²) in most cases, but in some rare cases the time complexity can still become O(n²). The probability of such a case is, however, very low.

11. Which of the following functions correctly chooses a random index as the pivot?
a)

void partition_random(int arr[], int low, int high) 
{     
    srand(time(NULL)); 
    int random = low + rand() % (high - low); 
    swap(arr[random], arr[high]); 
}

b)

void partition_random(int arr[], int low, int high) 
{    
    srand(time(NULL)); 
    int random = high + rand() % (high - low); 
    swap(arr[random], arr[high]); 
}

c)

void partition_random(int arr[], int low, int high) 
{     
    srand(1); 
    int random = low + rand() % (high - low); 
    swap(arr[random], arr[high]); 
}

d)

void partition_random(int arr[], int low, int high) 
{     
    srand(time(NULL)); 
    int random = low + rand() % (high - low-1); 
    swap(arr[low], arr[high]); 
}


Answer: a
Clarification: To seed the random number generator differently on every run we call srand(time(NULL)). After generating the random index, the element at that index is swapped with the element at the last index, which then serves as the pivot for the usual partition step.
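
For reference, here is a minimal sketch (not part of the original question) showing how such a partition_random step combines with a standard Lomuto partition to give randomized quick sort; the function names quick_sort_random and partition are illustrative assumptions:

#include <cstdlib>
#include <ctime>
#include <utility>
using namespace std;

// Standard Lomuto partition: places the pivot arr[high] in its
// final sorted position and returns that position.
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++)
        if (arr[j] < pivot)
            swap(arr[++i], arr[j]);
    swap(arr[i + 1], arr[high]);
    return i + 1;
}

// Randomized quick sort: move a randomly chosen element to the last
// position so that it becomes the pivot, then partition as usual.
// Call srand(time(NULL)) once before the first call.
void quick_sort_random(int arr[], int low, int high)
{
    if (low >= high)
        return;
    int random = low + rand() % (high - low + 1); // random index in [low, high]
    swap(arr[random], arr[high]);
    int p = partition(arr, low, high);
    quick_sort_random(arr, low, p - 1);
    quick_sort_random(arr, p + 1, high);
}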

 
 

12. What is the worst case time complexity of randomized quicksort?
a) O(n)
b) O(n log n)
c) O(n²)
d) O(n² log n)

Answer: c
Clarification: Randomized quicksort prevents the worst case complexity of O(n²) in most cases, but in some rare cases the time complexity can still become O(n²). The probability of such a case is, however, very low.

250+ TOP MCQs on Bogosort and Answers

Data Structures & Algorithms Multiple Choice Questions on “Bogosort”.

1. Which of the following is not an alternative name of bogosort?
a) stupid sort
b) permutation sort
c) donkey sort
d) monkey sort
Answer: c
Clarification: Bogosort is also known by names such as stupid sort, monkey sort, permutation sort, slow sort and shotgun sort. These names reflect how inefficient the algorithm is.

2. Bogosort works by __________
a) generating random permutations of its input
b) partitioning the array
c) dividing the value of input elements
d) generating permutations according to the value of first element of array

Answer: a
Clarification: Bogosort algorithm successively generates permutations of its input. This process is repeated until the sorted version of the array is found.

3. What is the auxiliary space requirement of bogosort?
a) O(n)
b) O(1)
c) O(log n)
d) O(n log n)

Answer: b
Clarification: The bogosort algorithm does not require any extra space for sorting the input array. Thus its auxiliary space requirement is O(1).

4. What is the best case time complexity of bogosort?
a) O(n²)
b) O(n)
c) O(n log n)
d) O(1)

Answer: b
Clarification: The best case of bogosort occurs when the input array is already sorted. In that case we only need to check whether all the elements are in order, which can be done in O(n) time.

5. What is the worst case time complexity of bogosort?
a) O(n2)
b) O(n*n!)
c) O(infinity)
d) O(n log n)

Answer: c
Clarification: There is no upper bound on the worst case of this algorithm. It can take an arbitrarily large amount of time if the array has many elements, so the worst case of this algorithm is taken as O(infinity).

6. Which of the following sorting algorithms is not stable?
a) insertion sort
b) bubble sort
c) merge sort
d) bogosort

Answer: d
Clarification: Out of the given algorithms only bogosort is not stable. This is because it creates permutations of the input array in order to obtain the sorted version. So there is no guarantee that the sorted version obtained by such a method gives a stable output.

7. Which of the following is an in-place sorting algorithm?
a) Merge sort
b) Bogosort
c) Radix sort
d) Counting sort
Answer: b
Clarification: Out of the given algorithms only bogosort is an in-place sorting algorithm, because it does not require any extra space for sorting the input array.

8. Sleep sort should be preferred over bogosort as it has better time complexity.
a) true
b) false
Answer: b
Clarification: If we sort an array using sleep sort then there is no guarantee that the output is correctly sorted. So even though sleep sort has a better time complexity than bogosort, it cannot be preferred because of its inaccuracy.

9. What is the average case time complexity of bogosort?
a) O(n²)
b) O(n*n!)
c) O(infinity)
d) O(n log n)

Answer: b
Clarification: To calculate the average we first count the possible permutations of an array of size n, which is n!. Each permutation also needs to be checked for sortedness, which takes another n time. Thus the overall time complexity becomes O(n*n!).

10. Which of the following code correctly implements bogosort algorithm?
a)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] < a[n-1]) 
            return false; 
    return true; 
} 
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( !isSorted(a, n) ) 
        shuffle(a, n); 
}

b)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] < a[n-1]) 
            return true; 
    return false; 
} 
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( !isSorted(a, n) ) 
        shuffle(a, n); 
}

c)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] > a[n-1]) 
            return true; 
    return false; 
}
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( !isSorted(a, n) ) 
        shuffle(a, n); 
}

d)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] < a[n-1]) 
            return false; 
    return true; 
} 
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( isSorted(a, n) ) 
        shuffle(a, n); 
}

Answer: a
Clarification: To implement bogosort algorithm we need to shuffle the input array until we get the sorted array. So we first check whether the array is sorted using function isSorted(). If it is not, then we shuffle it using function shuffle(). This process is repeated until we get a sorted array.
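
A self-contained, runnable variant of the same idea with a small driver is sketched below (the isSorted loop here uses --n > 0 so that the first pair a[1], a[0] is also compared; this is an assumption added for completeness, not part of the question):

#include <iostream>
#include <cstdlib>
#include <ctime>
#include <utility>
using namespace std;

// Returns true when a[0..n-1] is in non-decreasing order.
bool isSorted(int a[], int n)
{
    while (--n > 0)
        if (a[n] < a[n - 1])
            return false;
    return true;
}

// Randomly shuffles the array (a simple shuffle is enough for bogosort).
void shuffle(int a[], int n)
{
    for (int i = 0; i < n; i++)
        swap(a[i], a[rand() % n]);
}

// Keep shuffling until a sorted permutation appears.
void bogosort(int a[], int n)
{
    while (!isSorted(a, n))
        shuffle(a, n);
}

int main()
{
    srand(time(NULL));
    int a[] = {3, 1, 4, 1, 5};
    int n = sizeof(a) / sizeof(a[0]);
    bogosort(a, n);
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";   // prints: 1 1 3 4 5
    return 0;
}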

 
 


250+ TOP MCQs on Rabin-Karp Algorithm and Answers

Data Structures & Algorithms Multiple Choice Questions on “Rabin-Karp Algorithm”.

1. What is a Rabin and Karp Algorithm?
a) String Matching Algorithm
b) Shortest Path Algorithm
c) Minimum spanning tree Algorithm
d) Approximation Algorithm

Answer: a
Clarification: The string matching algorithm proposed by Rabin and Karp generalizes to other algorithms and to two-dimensional pattern matching problems.

2. What is the pre-processing time of Rabin and Karp Algorithm?
a) Theta(m²)
b) Theta(mlogn)
c) Theta(m)
d) Big-Oh(n)
Answer: c
Clarification: The for loop in the pre-processing algorithm runs m (the length of the pattern) times. Hence the pre-processing time is Theta(m).

3. Rabin Karp Algorithm makes use of elementary number theoretic notions.
a) True
b) False

Answer: a
Clarification: The Rabin-Karp algorithm makes use of elementary number-theoretic notions such as the equivalence of two numbers modulo a third number.

4. What is the basic formula applied in Rabin Karp Algorithm to get the computation time as Theta(m)?
a) Halving rule
b) Horner’s rule
c) Summation lemma
d) Cancellation lemma

Answer: b
Clarification: The pattern can be evaluated in time Theta(m) using Horner’s rule:
p = P[m] + 10(P[m-1] + 10(P[m-2] +…+ 10(P[2]+10P[1])…)).
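
As an illustration, a small C++ helper (hypothetical, using 0-indexed digits rather than the 1-indexed P[1..m] above) that applies Horner's rule with radix 10 and modulus q:

// Computes (P[0]*10^(m-1) + P[1]*10^(m-2) + ... + P[m-1]) mod q
// in Theta(m) time using Horner's rule.
int patternHash(const int P[], int m, int q)
{
    int p = 0;
    for (int i = 0; i < m; i++)
        p = (10 * p + P[i]) % q;  // one multiplication and one addition per digit
    return p;
}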

5. What is the worst case running time of Rabin Karp Algorithm?
a) Theta(n)
b) Theta(n-m)
c) Theta((n-m+1)m)
d) Theta(nlogm)

Answer: c
Clarification: The worst case running time of the Rabin-Karp algorithm is Theta((n-m+1)m). We write Theta(n-m+1) instead of Theta(n-m) because there are n-m+1 possible shifts of the pattern over the text.

6. Rabin- Karp algorithm can be used for discovering plagiarism in a sentence.
a) True
b) False

Answer: a
Clarification: Since Rabin-Karp algorithm is a pattern detecting algorithm in a text or string, it can be used for detecting plagiarism in a sentence.

7. If n is the length of the text (T) and m is the length of the pattern (P), identify the correct pre-processing algorithm (where q is a suitable modulus used to reduce the complexity).
p=0; t0=0;
a)

for i=1 to n
do t0=(dt0 + P[i])mod q 
p=(dp+T[i])mod q

b)

for i=1 to n
do p=(dp + P[i])mod q 
t0=(dt0+T[i])mod q

c)

for i=1 to m
do t0=(dp + P[i])mod q 
p=(dt0+T[i])mod q

d)

for i=1 to m
do p=(dp + P[i])mod q 
t0=(dt0+T[i])mod q

Answer: d
Clarification: The pre-processing algorithm runs m (the length of the pattern) times. It computes p as the value of P[1..m] mod q and t0 as the value of T[1..m] mod q.

 
 

8. If n is the length of the text (T) and m is the length of the pattern (P), identify the correct matching algorithm.
a)

for s=0 to n
		do if p=t0	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

b)

for s=0 to n-m
		do if p=ts	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

c)

 for s=0 to m
		do if p=ts	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

d)

 for s=0 to n-m
		do if p!=ts	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

Answer: b
Clarification: The matching loop runs over the n-m+1 possible shifts (s = 0 to n-m). For each shift whose hash value matches, the Rabin-Karp algorithm explicitly compares the pattern with the text window and prints the shift when they are equal.
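
Putting the pre-processing and matching steps together, here is a hedged C++ sketch of the whole algorithm over decimal-digit arrays (the function name rabinKarp, the radix d = 10 and the 0-indexed arrays are illustrative assumptions):

#include <iostream>
using namespace std;

// Searches for pattern P[0..m-1] in text T[0..n-1] (both arrays of decimal
// digits) using modulus q, and prints every shift at which the pattern occurs.
void rabinKarp(const int T[], int n, const int P[], int m, int q)
{
    const int d = 10;
    int h = 1;                              // h = d^(m-1) mod q
    for (int i = 0; i < m - 1; i++)
        h = (h * d) % q;

    int p = 0, t = 0;                       // pre-processing: hashes of pattern and first window
    for (int i = 0; i < m; i++)
    {
        p = (d * p + P[i]) % q;
        t = (d * t + T[i]) % q;
    }

    for (int s = 0; s + m <= n; s++)        // n-m+1 possible shifts
    {
        if (p == t)                         // hash hit: verify to rule out a spurious hit
        {
            bool match = true;
            for (int i = 0; i < m; i++)
                if (P[i] != T[s + i]) { match = false; break; }
            if (match)
                cout << "Pattern occurs with shift " << s << "\n";
        }
        if (s + m < n)                      // rolling hash: drop T[s], append T[s+m]
            t = ((t - T[s] * h % q + q) % q * d + T[s + m]) % q;
    }
}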

 
 

9. What happens when the modulo value(q) is taken large?
a) Complexity increases
b) Spurious hits occur frequently
c) Cost of extra checking is low
d) Matching time increases
Answer: c
Clarification: If the modulo value(q) is large enough then the spurious hits occur infrequently enough that the cost of extra checking is low.

10. Given a pattern of length-5 window, find the suitable modulo value.

a) 13
b) 14
c) 12
d) 11
Answer: a
Clarification: The modulus q is typically chosen as a prime number large enough that spurious hits are rare while the computed hash values stay small; 13 is the prime used in the classic textbook example for a length-5 decimal window.

11. Given a pattern of length- 5 window, find the valid match in the given text.

Pattern: 2 1 9 3 6 
Modulus: 21
Index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
Text:   9 2 7 2 1 8 3 0 5 7 1   2   1   2   1   9   3   6   2   3   9  7

a) 11-16
b) 3-8
c) 13-18
d) 15-20

Answer: c
Clarification: The pattern 2 1 9 3 6 occurs in the text in positions 13 to 18. The pattern's hash value is computed as 21936 mod 21 = 12; the same hash values are computed for each possible position of a length-5 window of the text and compared against it.

12. Given a pattern of length- 5 window, find the spurious hit in the given text string.

Pattern: 3 1 4 1 5 
Modulus: 13
Index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Text:   2 3 5 9 0 2 3 1 4 1 5   2   6   7   3   9   9   2   1   3   9

a) 6-10
b) 12-16
c) 3-7
d) 13-17

Answer: d
Clarification: The substring in the range 13-17, 6 7 3 9 9, produces the same hash value as the pattern (67399 mod 13 = 7 = 31415 mod 13). However, its digits do not match the pattern, so it is a spurious hit.

13. If the expected number of valid shifts is small and the modulus is larger than the length of the pattern, what is the matching time of the Rabin-Karp algorithm?
a) Theta(m)
b) Big-Oh(n+m)
c) Theta(n-m)
d) Big-Oh(n)

Answer: b
Clarification: When the number of valid shifts v is O(1) and q >= m, the matching time O(n) + O(m(v + n/q)) simplifies to O(n + m).

14. What is the basic principle in Rabin Karp algorithm?
a) Hashing
b) Sorting
c) Augmenting
d) Dynamic Programming

Answer: a
Clarification: The basic principle employed in the Rabin-Karp algorithm is hashing. Every substring of the text having the pattern's length is converted to a hash value and compared with the hash value of the pattern.

15. Who created the Rabin Karp Algorithm?
a) Joseph Rabin and Michael Karp
b) Michael Rabin and Joseph Karp
c) Richard Karp and Michael Rabin
d) Michael Karp and Richard Rabin

Answer: c
Clarification: Rabin Karp algorithm was invented by Richard Karp and Michael Rabin in the year 1987 for searching a pattern in the given string.


250+ TOP MCQs on Non-recursive Depth First Search and Answers

Data Structures & Algorithms Multiple Choice Questions on “Non-recursive Depth First Search”.

1. Which of the following data structures is used to implement DFS?
a) linked list
b) tree
c) stack
d) queue

Answer: c
Clarification: Stack is used in the standard implementation of depth first search. It is used to store the elements which are to be explored.

2. Which of the following traversal in a binary tree is similar to depth first traversal?
a) level order
b) post order
c) pre order
d) in order

Answer: c
Clarification: In DFS we keep exploring as far as possible along each branch before backtracking, terminating when all nodes have been visited. So it is similar to pre order traversal in a binary tree.

3. Which of the following represents the correct pseudo code for the non recursive DFS algorithm?
a)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.push(v)
  while St is not empty
    v = St.pop()
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(a) //a being the adjacent vertex

b)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.pop()
  while St is not empty
    v = St.push(v)
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(a) //a being the adjacent vertex

c)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.push(v)
  while St is not empty
    v = St.pop()
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(v)

d)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.pop(v)
  while St is not empty
    v = St.pop()
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(a) //a being the adjacent vertex

Answer: a
Clarification: In the iterative approach we first push the source node into the stack. If the node has not been visited then it is printed and marked as visited. Then the unvisited adjacent nodes are added to the stack. Then the same procedure is repeated for each node of the stack.

4. What will be the time complexity of the iterative depth first traversal code? (V = no. of vertices, E = no. of edges)
a) O(V+E)
b) O(V)
c) O(E)
d) O(V*E)

Answer: a
Clarification: Traversing the full graph requires visiting every vertex and every edge, so the worst case time complexity is O(V+E). The time complexities of iterative and recursive DFS are the same.

5. Which of the following functions correctly represents iterative DFS?
a)

void DFS(int s) 
{     
    vector<bool> discovered(V, true); 
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        s = st.top(); 
        st.pop();         
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = true; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (!discovered[*i]) 
                st.push(*i); 
    } 
}

b)

void DFS(int s) 
{     
    vector<bool> discovered(V, false);     
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        s = st.top(); 
        st.pop();        
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = true; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (!discovered[*i]) 
                st.push(*i); 
    } 
}

c)

void DFS(int s) 
{     
    vector<bool> discovered(V, false);     
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        st.pop(); 
        s = st.top();         
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = true; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (!discovered[*i]) 
                st.push(*i); 
    } 
}

d)

void DFS(int s) 
{     
    vector<bool> discovered(V, false);     
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        s = st.top(); 
        st.pop();         
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = false; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (discovered[*i]) 
                st.push(*i); 
    } 
}


Answer: b
Clarification: In the correct version we first push the source node into the stack. If the node has not been visited then it is printed and marked as visited. Then the unvisited adjacent nodes are added to the stack. Then the same procedure is repeated for each node of the stack.
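
For completeness, a hedged usage sketch built around the correct version in option (b); the globals V and adjacent and the sample graph are assumptions added for illustration:

#include <iostream>
#include <vector>
#include <list>
#include <stack>
using namespace std;

int V;                          // number of vertices
vector<list<int>> adjacent;     // adjacency lists

// Iterative DFS, as in option (b) above.
void DFS(int s)
{
    vector<bool> discovered(V, false);
    stack<int> st;
    st.push(s);
    while (!st.empty())
    {
        s = st.top();
        st.pop();
        if (!discovered[s])
        {
            cout << s << " ";
            discovered[s] = true;
        }
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i)
            if (!discovered[*i])
                st.push(*i);
    }
}

int main()
{
    V = 5;
    adjacent.resize(V);
    // a small directed graph: 0->1, 0->2, 1->3, 2->4
    adjacent[0].push_back(1);
    adjacent[0].push_back(2);
    adjacent[1].push_back(3);
    adjacent[2].push_back(4);
    DFS(0);                     // possible output: 0 2 4 1 3
    return 0;
}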

6. What is the space complexity of standard DFS? (V: no. of vertices, E: no. of edges)
a) O(V+E)
b) O(V)
c) O(E)
d) O(V*E)

Answer: b
Clarification: In the worst case the space complexity of DFS will be O(V) in the case when all the vertices are stored in stack. This space complexity is excluding the space required to store the graph.

7. Which of the following data structures is used to implement BFS?
a) linked list
b) tree
c) stack
d) queue

Answer: d
Clarification: A queue is used in the standard implementation of breadth first search. It is used to store the vertices that have been discovered but not yet explored.

8. Choose the incorrect statement about DFS and BFS from the following.
a) BFS is equivalent to level order traversal in trees
b) DFS is equivalent to post order traversal in trees
c) DFS and BFS code has the same time complexity
d) BFS is implemented using queue

Answer: b
Clarification: DFS is equivalent to pre order traversal in trees, not post order traversal. It is so because in DFS we keep on exploring as far as possible along each branch before backtracking. So it should be equivalent to pre order traversal.

250+ TOP MCQs on Bipartite Graph and Answers

Data Structures & Algorithms Multiple Choice Questions on “Bipartite Graph”.

1. Given that G is a bipartite graph and the bipartitions of this graph are U and V respectively, what is the relation between them?
a) Number of vertices in U = Number of vertices in V
b) Sum of degrees of vertices in U = Sum of degrees of vertices in V
c) Number of vertices in U > Number of vertices in V
d) Nothing can be said

Answer: b
Clarification: We can prove this by induction on the number of edges. With a single edge, the sum of degrees in U is 1 and so is the sum of degrees in V. Assume the statement holds for n-1 edges and add one more edge. Since the new edge contributes exactly one to the degree sum of both U and V, the statement holds for all n edges; in fact, both sums are equal to the number of edges.

2. A k-regular bipartite graph is the one in which degree of each vertices is k for all the vertices in the graph. Given that the bipartitions of this graph are U and V respectively. What is the relation between them?
a) Number of vertices in U=Number of vertices in V
b) Number of vertices in U not equal to number of vertices in V
c) Number of vertices in U always greater than the number of vertices in V
d) Nothing can be said

Answer: a
Clarification: We know that in a bipartite graph the sum of degrees of vertices in U equals the sum of degrees of vertices in V. Since the graph is a k-regular bipartite graph, this gives k*(number of vertices in U) = k*(number of vertices in V), so the two sets contain the same number of vertices.

3. There are four students in a class namely A, B, C and D. A tells that a triangle is a bipartite graph. B tells pentagon is a bipartite graph. C tells square is a bipartite graph. D tells heptagon is a bipartite graph. Who among the following is correct?
a) A
b) B
c) C
d) D

Answer: c
Clarification: We can prove it in the following way. Let ‘1’ be a vertex in the bipartite set X and let ‘2’ be a vertex in the bipartite set Y. Then the bipartite set X contains all odd-numbered vertices and the bipartite set Y contains all even-numbered vertices. Now consider a graph with an odd cycle (a triangle). There exist edges from ‘1’ to ‘2’, ‘2’ to ‘3’ and ‘3’ to ‘1’. The last edge (‘3’ to ‘1’) joins two vertices of the set X itself, showing that graphs with odd cycles are not bipartite. Among the given shapes only the square (an even cycle) is bipartite.

4. A complete bipartite graph is one in which each vertex in set X has an edge with every vertex in set Y. Let n be the total number of vertices. For the maximum number of edges, the total number of vertices that should be present in set X is?
a) n
b) n/2
c) n/4
d) data insufficient

Answer: b
Clarification: We can prove this by calculus. Let x be the number of vertices in set X, so set Y has n-x vertices. We have to maximize x*(n-x), which is largest when x = n/2, as written out below.
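
The maximization step in symbols (with f(x) denoting the number of edges):

f(x) = x(n - x) = nx - x^2, \qquad f'(x) = n - 2x = 0 \;\Rightarrow\; x = \frac{n}{2}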

5. When is a graph said to be bipartite?
a) If it can be divided into two independent sets A and B such that each edge connects a vertex from A to B
b) If the graph is connected and it has odd number of vertices
c) If the graph is disconnected
d) If the graph has at least n/2 vertices whose degree is greater than n/2

Answer: a
Clarification: A graph is said to be bipartite if it can be divided into two independent sets A and B such that each edge connects a vertex from A to B.

6. Are trees bipartite?
a) Yes
b) No
c) Yes if it has even number of vertices
d) No if it has odd number of vertices

Answer: a
Clarification: The condition needed is that there should be no odd cycle. A tree has no cycles at all, hence it is bipartite.

7. A graph has 20 vertices. The maximum number of edges it can have is? (Given it is bipartite)
a) 100
b) 140
c) 80
d) 20

Answer: a
Clarification: Let the bipartition X have x vertices; then Y will have 20-x vertices. We need to maximize x*(20-x), which is maximized when x = 10, giving 10*10 = 100 edges.

8. Given that a graph contains no odd cycle. Is it enough to tell that it is bipartite?
a) Yes
b) No

Answer: a
Clarification: A graph is bipartite if and only if it contains no odd cycle, so the absence of odd cycles is enough. This holds even for disconnected graphs, since each connected component can be 2-colored independently.

9. Can there exist a graph which is both eulerian and is bipartite?
a) Yes
b) No
c) Yes if it has even number of edges
d) Nothing can be said

Answer: a
Clarification: A graph is Eulerian if it has a closed trail that visits every edge exactly once. Taking the example of a square (the cycle on four vertices), which is both Eulerian and bipartite, the answer is yes.

10. A graph is found to be 2 colorable. What can be said about that graph?
a) The given graph is eulerian
b) The given graph is bipartite
c) The given graph is hamiltonian
d) The given graph is planar

Answer: b
Clarification: A graph is said to be properly colored if no two vertices connected by an edge have the same color. 2-colorable means that this can be achieved with just 2 colors, which is possible exactly when the graph is bipartite.
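
A hedged C++ sketch of this check (the function name isBipartite and the adjacency-list representation are illustrative assumptions) that tests 2-colorability by BFS coloring:

#include <vector>
#include <queue>
using namespace std;

// Returns true if the undirected graph given by its adjacency lists can be
// properly colored with 2 colors, i.e. if it is bipartite.
bool isBipartite(const vector<vector<int>>& adj)
{
    int n = adj.size();
    vector<int> color(n, -1);                 // -1 = uncolored, otherwise 0 or 1
    for (int start = 0; start < n; start++)   // handle every connected component
    {
        if (color[start] != -1)
            continue;
        color[start] = 0;
        queue<int> q;
        q.push(start);
        while (!q.empty())
        {
            int u = q.front(); q.pop();
            for (int v : adj[u])
            {
                if (color[v] == -1)           // give the neighbour the opposite color
                {
                    color[v] = 1 - color[u];
                    q.push(v);
                }
                else if (color[v] == color[u])
                    return false;             // edge inside one color class: odd cycle
            }
        }
    }
    return true;
}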

250+ TOP MCQs on Recursive Selection Sort and Answers

Data Structures & Algorithms Multiple Choice Questions on “Recursive Selection Sort”.

1. Which of the following sorting algorithms has a best case time complexity of O(n²)?
a) bubble sort
b) selection sort
c) insertion sort
d) stupid sort

Answer: b
Clarification: Selection sort is not an adaptive sorting algorithm. It searches for the index of the minimum element in each iteration even if the given array is already sorted. Thus its best case time complexity is O(n²).

2. Which of the following is the biggest advantage of selection sort?
a) it has low time complexity
b) it has low space complexity
c) it is easy to implement
d) it requires only n swaps under any condition

Answer: d
Clarification: Selection sort works by finding the minimum remaining element in each iteration and swapping it into the current index. So it takes at most n swaps under any condition, which is useful when memory write operations are expensive.

3. What will be the recurrence relation of the code of recursive selection sort?
a) T(n) = 2T(n/2) + n
b) T(n) = 2T(n/2) + c
c) T(n) = T(n-1) + n
d) T(n) = T(n-1) + c

Answer: c
Clarification: The function that finds the index of the minimum element takes n time. The recursive call is then made on one less element than in the previous call, so the overall recurrence relation becomes T(n) = T(n-1) + n.
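
Unrolling the recurrence shows where the quadratic bound comes from:

T(n) = T(n-1) + n = T(1) + \sum_{k=2}^{n} k = T(1) + \frac{n(n+1)}{2} - 1 = O(n^2)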

4. Which of the following sorting algorithm is NOT stable?
a) Selection sort
b) Brick sort
c) Bubble sort
d) Merge sort

Answer: a
Clarification: Out of the given options selection sort is the only algorithm which is not stable. It is because the order of identical elements in sorted output may be different from input array.

5. What will be the best case time complexity of recursive selection sort?
a) O(n)
b) O(n²)
c) O(log n)
d) O(n log n)

Answer: b
Clarification: Selection sort finds the index of the minimum element in each iteration even if the given array is already sorted. Thus its best case time complexity is still O(n²).

6. Recursive selection sort is a comparison based sort.
a) true
b) false

Answer: a
Clarification: In selection sort we need to compare elements in order to find the minimum element in each iteration. So we can say that it uses comparisons in order to sort the array. Thus it qualifies as a comparison based sort.

7. What is the average case time complexity of recursive selection sort?
a) O(n)
b) O(n log n)
c) O(n²)
d) O(log n)

Answer: c
Clarification: The overall recurrence relation of recursive selection sort is T(n) = T(n-1) + n, which solves to O(n²). It is the same in the best, average and worst cases.

8. What is the bidirectional variant of selection sort?
a) cocktail sort
b) bogo sort
c) gnome sort
d) bubble sort

Answer: a
Clarification: A bidirectional variant of selection sort is called cocktail sort. It is an algorithm which finds both the minimum and maximum values in the array in every pass, which reduces the number of passes over the array by a factor of 2.
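
A minimal sketch of this bidirectional idea (sometimes called double selection sort; the function name and details below are illustrative assumptions, not taken from the question):

#include <utility>
using namespace std;

// In every pass, find both the minimum and the maximum of the unsorted
// range a[left..right] and move them to the two ends of that range.
void doubleSelectionSort(int a[], int n)
{
    for (int left = 0, right = n - 1; left < right; left++, right--)
    {
        int minIdx = left, maxIdx = left;
        for (int i = left; i <= right; i++)
        {
            if (a[i] < a[minIdx]) minIdx = i;
            if (a[i] > a[maxIdx]) maxIdx = i;
        }
        swap(a[left], a[minIdx]);   // place the minimum at the left end
        if (maxIdx == left)         // the maximum may just have been moved
            maxIdx = minIdx;
        swap(a[right], a[maxIdx]);  // place the maximum at the right end
    }
}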

9. Choose correct C++ code for recursive selection sort from the following.
a)

#include <iostream> 
#include <algorithm> 
using namespace std; 
int minIndex(int a[], int i, int j) 
{ 
	if (i == 0) 
		return i; 	
	int k = minIndex(a, i + 1, j); 	
	return (a[i] < a[k])? i : k; 
} 
void recursiveSelectionSort(int a[], int n, int index = 0) 
{ 
 
	if (index == n) 
	return; 	 
	int x = minIndex(a, index, n-1); 	 
	if (x == index) 
	{	    
	    	swap(a[x], a[index]);
	} 
	recursiveSelectionSort(a, n, index + 1); 
}  
int main() 
{ 
	int arr[] = {5,3,2,4,1}; 
	int n = sizeof(arr)/sizeof(arr[0]);  
	recursiveSelectionSort(arr, n); 	
	return 0; 
}

b)

#include <iostream> 
#include <algorithm> 
using namespace std; 
int minIndex(int a[], int i, int j) 
{ 
	if (i == j) 
		return i; 	
	int k = minIndex(a, i + 1, j); 	
	return (a[i] < a[k])? i : k; 
} 
void recursiveSelectionSort(int a[], int n, int index = 0) 
{ 
 
	if (index == n) 
        return; 	 
	int x = minIndex(a, index, n-1);	 
	if (x != index) 
	{
	    	swap(a[x], a[index]);
	} 
	recursiveSelectionSort(a, n, index + 1); 
}  
int main() 
{ 
	int arr[] = {5,3,2,4,1}; 
	int n = sizeof(arr)/sizeof(arr[0]);  
	recursiveSelectionSort(arr, n); 	
	return 0; 
}

c)

#include <iostream> 
#include <algorithm> 
using namespace std; 
int minIndex(int a[], int i, int j) 
{ 
	if (i == j) 
		return i; 	
	int k = minIndex(a, i + 1, j); 
	return (a[i] > a[k])? i : k; 
} 
void recursiveSelectionSort(int a[], int n, int index = 0) 
{ 	
	if (index == n) 
	return; 	 
	int x = minIndex(a, index, n-1); 	 
	if (x != index) 
	{	    
	    	swap(a[x], a[index]);
	} 
	recursiveSelectionSort(a, n, index + 1); 
}  
int main() 
{ 
	int arr[] = {5,3,2,4,1}; 
	int n = sizeof(arr)/sizeof(arr[0]); 
        recursiveSelectionSort(arr, n); 
	return 0; 
}

d)

#include <iostream> 
#include <algorithm> 
using namespace std; 
int minIndex(int a[], int i, int j) 
{ 
	if (i == j) 
		return i; 	
	int k = minIndex(a, i + 1, j); 	
	return (a[i] > a[k])? i : k; 
} 
void recursiveSelectionSort(int a[], int n, int index = 0) 
{ 	
	if (index == 0) 
	return; 	 
	int x = minIndex(a, index, n-1); 	 
	if (x == index) 
	{	    
	    	swap(a[x], a[index]);
	}
 	recursiveSelectionSort(a, n, index + 1); 
}  
int main() 
{ 
	int arr[] = {5,3,2,4,1}; 
	int n = sizeof(arr)/sizeof(arr[0]); 
	recursiveSelectionSort(arr, n); 
        return 0; 
}


Answer: b
Clarification: Using the function recursiveSelectionSort() we find the element that needs to be placed at the current index. For finding the minimum element index we use another function minIndex(). After finding the minimum element index the current element is swapped with this element in the function recursiveSelectionSort().

 

10. What is the number of swaps required to sort the array arr={5,3,2,4,1} using recursive selection sort?
a) 0
b) 1
c) 2
d) 3

Answer: c
Clarification: The first swap takes place between 5 and 1, giving {1,3,2,4,5}. The second swap takes place between 3 and 2, which sorts the array; the remaining passes require no swaps.