250+ TOP MCQs on Set Partition Problem and Answers

Data Structures & Algorithms Multiple Choice Questions & Answers (MCQs) on “Set Partition Problem”.

1. What is meant by the power set of a set?
a) subset of all sets
b) set of all subsets
c) set of particular subsets
d) an empty set

Answer: b
Clarification: The power set of a set is defined as the set of all its subsets. For example, if S = {1, 3} then the power set of S is P = {{}, {1}, {3}, {1, 3}}.
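
As an aside (not part of the original question set), here is a minimal C++ sketch that enumerates the power set of a small set using bit masks, where each of the 2^n masks selects one subset; the function name powerSet and the use of std::vector are illustrative choices.

#include <iostream>
#include <vector>

// Sketch: enumerate the power set of S using bit masks.
// Each of the 2^n masks selects one subset of S.
std::vector<std::vector<int>> powerSet(const std::vector<int>& S)
{
    int n = S.size();
    std::vector<std::vector<int>> P;
    for (int mask = 0; mask < (1 << n); ++mask)
    {
        std::vector<int> subset;
        for (int i = 0; i < n; ++i)
            if (mask & (1 << i))
                subset.push_back(S[i]);
        P.push_back(subset);
    }
    return P;
}

int main()
{
    // For S = {1, 3} this prints the 4 subsets {}, {1}, {3} and {1, 3}.
    for (const auto& subset : powerSet({1, 3}))
    {
        std::cout << "{ ";
        for (int x : subset)
            std::cout << x << " ";
        std::cout << "}\n";
    }
    return 0;
}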

2. What is the set partition problem?
a) finding a subset of a set that has sum of elements equal to a given number
b) checking for the presence of a subset that has sum of elements equal to a given number
c) checking whether the set can be divided into two subsets with equal sum of elements and printing true or false based on the result
d) finding subsets with equal sum of elements

Answer: c
Clarification: In the set partition problem we check whether a set can be divided into two subsets such that the sum of elements in each subset is equal. If such subsets exist we print true, otherwise false. For example, {1, 5, 11, 5} can be partitioned into {1, 5, 5} and {11}, each summing to 11.

3. Which of the following is true about the time complexity of the recursive solution of set partition problem?
a) It has an exponential time complexity
b) It has a linear time complexity
c) It has a logarithmic time complexity
d) It has a time complexity of O(n²)
Answer: a
Clarification: The set partition problem has both a recursive and a dynamic programming solution. The recursive solution has an exponential time complexity because, in the worst case, it may have to check every subset.

4. What is the worst case time complexity of the dynamic programming solution of the set partition problem (sum = sum of set elements)?
a) O(n)
b) O(sum)
c) O(n²)
d) O(sum*n)

Answer: d
Clarification: The set partition problem has both a recursive and a dynamic programming solution. The dynamic programming solution has a time complexity of O(n*sum) as it uses nested loops with limits from 1 to n and 1 to sum respectively.

5. Set partition problem is an example of NP complete problem.
a) true
b) false

Answer: a
Clarification: The set partition problem takes exponential time when we implement a recursive solution. The set partition problem is known to be NP-complete.

6. The recursive solution of the set partition problem is faster than the dynamic programming solution in terms of time complexity.
a) true
b) false

Answer: b
Clarification: The recursive solution to the set partition problem takes exponential time, whereas the dynamic programming solution takes O(n*sum) (pseudo-polynomial) time. So the dynamic programming solution is faster in terms of time complexity.

7. Which of the following is not true about set partition problem?
a) the recursive solution has a time complexity of O(2ⁿ)
b) there is no known solution that takes polynomial time
c) the recursive solution is slower than dynamic programming solution
d) the dynamic programming solution has a time complexity of O(n log n)

Answer: d
Clarification: The recursive solution of the set partition problem is slower than the dynamic programming solution in terms of time complexity. The dynamic programming solution has a time complexity of O(n*sum), not O(n log n).

8. Which of the following should be the base case for the recursive solution of a set partition problem?
a)

if(sum%2!=0)
   return false;
if(sum==0)
   return true;

b)

if(sum%2!=0)
   return false;
if(sum==0)
   return true;
if (n ==0 && sum!= 0) 
   return false; 

c)

if (n ==0 && sum!= 0) 
    return false; 

d)

if(sum
Answer: b
Clarification: The recursion needs two base cases: when the remaining sum becomes 0 we return true, and when no elements are left but the sum is still non-zero we return false. Also, if the sum of the elements of the set is odd, the set cannot be partitioned into two subsets with equal sum, so false should be returned in that condition.

 
 

9. What will be the output for the given code?

#include <stdio.h>
#include <stdbool.h>
bool func1(int arr[], int n, int sum) 
{ 
    if (sum == 0) 
        return true; 
    if (n == 0 && sum != 0) 
        return false; 
    if (arr[n-1] > sum) 
        return func1(arr, n-1, sum); 
    return func1(arr, n-1, sum) || func1(arr, n-1, sum-arr[n-1]); 
} 
bool func(int arr[], int n) 
{ 
    int sum = 0; 
    for (int i = 0; i < n; i++) 
        sum += arr[i]; 
    if (sum % 2 != 0) 
        return false; 
    return func1(arr, n, sum/2); 
} 
int main() 
{ 
    int arr[] = {4, 6, 12, 2}; 
    int n = sizeof(arr)/sizeof(arr[0]); 
    if (func(arr, n) == true) 
        printf("true"); 
    else
        printf("false"); 
    return 0; 
}

a) true
b) false
c) 4 6 2
d) 12

Answer: a
Clarification: The given code represents the recursive approach to the set partition problem. It checks whether the set can be divided into 2 subsets such that the sum of elements in each subset is equal, printing true if such a partition exists and false otherwise. Here the total sum is 24 and the subset {4, 6, 2} sums to 12, so true is printed.

10. What will be the output for the given code?

#include <stdio.h>
#include <stdbool.h>
bool func(int arr[], int n) 
{ 
    int sum = 0; 
    int i, j; 
    for (i = 0; i < n; i++) 
        sum += arr[i];
    if (sum % 2 != 0) 
        return false; 
    bool partition[sum/2 + 1][n + 1]; 
    for (i = 0; i <= n; i++) 
        partition[0][i] = true; 
    for (i = 1; i <= sum/2; i++) 
        partition[i][0] = false;
    for (i = 1; i <= sum/2; i++) 
    { 
        for (j = 1; j <= n; j++) 
        { 
            partition[i][j] = partition[i][j-1]; 
            if (i >= arr[j-1]) 
                partition[i][j] = partition[i][j] || partition[i - arr[j-1]][j-1]; 
        } 
    } 
    return partition[sum/2][n]; 
}
int main() 
{ 
    int arr[] = {3, 3, 4, 4, 7}; 
    int n = sizeof(arr)/sizeof(arr[0]); 
    if (func(arr, n) == true) 
        printf("true"); 
    else
        printf("false"); 
    return 0; 
}

a) true
b) false
c) 0
d) error

Answer: b
Clarification: The given code represents the dynamic programming approach to the set partition problem. It checks whether the set can be divided into 2 subsets such that the sum of elements in each subset is equal, printing true if such a partition exists and false otherwise. Here the total sum is 21, which is odd, so no equal partition exists and false is printed.

11. What will be the auxiliary space complexity of the dynamic programming solution of the set partition problem (sum = sum of set elements)?
a) O(n log n)
b) O(n²)
c) O(2ⁿ)
d) O(sum*n)

Answer: d
Clarification: The dynamic programming solution needs auxiliary space to store the partition table. The table has n*sum entries, so the auxiliary space requirement is O(sum*n).


250+ TOP MCQs on Uniform Binary Search and Answers

Data Structure Multiple Choice Questions on “Uniform Binary Search”.

1. In which of the following cases does uniform binary search fail in comparison to binary search?
a) A table lookup is generally faster than an addition and a shift
b) Many searches will be performed on the same array
c) Many searches will be performed on several arrays of the same length
d) Complexity of code

Answer: d
Clarification: Uniform binary search code is more complex to implement than binary search, as the midpoints (the delta table) have to be computed before the search.

2. Which of the following is a suitable lookup table that can be used in uniform binary search? (N is the number of elements in the array and the delta array is global)
a)

public static void make_delta(int N) 
{
       int power = 1;
       int i = 0;
       do 
       {
            int half = power;
            power <<= 1;
            delta[i] = (N + half) / power;
       } 
       while (delta[i++] != 0);
}

b)

public static void make_delta(int N) 
{
       int power = 0;
       int i = 0;
       do 
       {
            int half = power;
            power <<= 1;
            delta[i] = (N + half) / power;
       } 
       while (delta[i++] != 0);
}

c)

public static void make_delta(int N) 
{
       int power = 1;
       int i = 0;
       do 
       {
            int half = power;
            power >>= 1;
            delta[i] = (N + half) / power;
       }
       while (delta[i++] != 0);
}

d)

public static void make_delta(int N) 
{
       int power = 1;
       int i = 0;
       do 
       {
            int half = power;
            power <<= 1;
            delta[i] = (N - half) / power;
       } 
       while (delta[i++] != 0);
}

Answer: a
Clarification: This fills a single lookup table and its values depend only on the number of elements (N) in the array.

3. Given delta[4] is a global array and number of elements in the sorted array is 10, what are the values in the delta array?
a) 4, 3, 1, 0
b) 5, 3, 1, 0
c) 4, 2, 1, 1
d) 5, 2, 1, 1

Answer: b
Clarification: Trace through the make_delta function; note that the last element of the delta array is always 0.

4. Choose the appropriate code snippet that performs uniform binary search.
a)

public static int unisearch(int key) 
{
       int i = delta[0] - 1; 
       int j = 0;
       while (true) 
       {
            if (key == arr[i])
                return i;
            else if (delta[j] == 0)
                return -1;
            else 
            {
                if (key < arr[i])
                    i += delta[++j];
                else
                    i -= delta[++j];
            }
       }
}

b)

public static int unisearch(int key) 
{
       int i = delta[1] - 1; 
       int j = 0;
       while (true) 
       {
            if (key == arr[i])
                return i;
            else if (delta[j] == 0)
                return -1;
            else 
            {
                if (key < arr[i])
                    i -= delta[++j];
                else
                    i += delta[++j];
            }
       }
}

c)

public static int unisearch(int key) 
{
       int i = delta[0] - 1; 
       int j = 0;
       while (true) 
       {
            if (key == arr[i])
                return i;
            else if (delta[j] == 0)
                return -1;
            else 
            {
                if (key < arr[i])
                    i -= delta[++j];
                else
                    i += delta[++j];
            }
       }
}

d)

public static int unisearch(int key) 
{
       int i = delta[0] - 1; 
       int j = 0;
       while (true) 
       {
            if (key == arr[i])
                return i;
            else if (delta[j] == 0)
                return -1;
            else 
            {
                if (key < arr[i])
                    i += delta[++j];
                else
                    i += delta[++j];
            }
       }
}

Answer: c
Clarification: Unlike the usual binary search, which keeps low, high and mid variables and compares the key with the mid value on every step, here the comparison index is obtained from the delta lookup table. Choosing the left or right side of the array works the same way as in normal binary search.

 
 

5. What is the time complexity of uniform binary search?
a) O(n log n)
b) O(log n)
c) O(n)
d) O(n²)

Answer: b
Clarification: With every iteration the array is divided into two parts (though not necessarily equal halves), so the complexity remains the same as that of normal binary search.

6. Given arr = {1,3,5,6,7,9,14,15,17,19}, key = 17 and delta = {5,3,1,0},
how many key comparisons are made? (Exclude the comparison used to decide the left or right sub-array.)
a) 4
b) 3
c) 5
d) 6
Answer: b
Clarification: Tracing with the code above: comparison #1 at i=4 (arr[4]=7), comparison #2 at i=7 (arr[7]=15), comparison #3 at i=8 (arr[8]=17, a match).

7. How can Jump Search be improved?
a) Start searching from the end
b) Begin from the kth item, where k is the step size
c) Cannot be improved
d) Step size should be other than sqrt(n)
Answer: b
Clarification: This gives a very slight improvement as you are skipping the first k elements.

8. Which of the following is false about Jump Search?
a) Jump Search is better than Linear Search
b) Useful when jumping back is more costly than jumping forward
c) Jump Search is worse than Binary Search
d) Jump search starts from the index 0 even though specified index is k

Answer: d
Clarification: Linear search has O(n) complexity and binary search has O(log n) complexity; in jump search you have to jump backwards only once, hence it is preferable when jumping backwards is costly. Jump search starts from the specified index k and searches from there; it won't start searching from index 0, which is why option d is false.
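
Since questions 7 and 8 describe jump search only in words, here is a minimal C++ sketch of the idea, assuming a sorted array and the customary step size of sqrt(n); the function name jumpSearch is an illustrative choice.

#include <algorithm>
#include <cmath>
#include <vector>

// Sketch: jump search over a sorted vector.
// Jump ahead in blocks of size sqrt(n), then scan linearly
// (jumping backwards at most once) inside the block that may hold the key.
int jumpSearch(const std::vector<int>& arr, int key)
{
    int n = arr.size();
    if (n == 0)
        return -1;
    int step = static_cast<int>(std::sqrt(n));
    int prev = 0;
    // Jump forward while the last element of the current block is smaller than key.
    while (prev < n && arr[std::min(prev + step, n) - 1] < key)
        prev += step;
    // Linear scan within the block [prev, min(prev + step, n)).
    for (int i = prev; i < std::min(prev + step, n); ++i)
        if (arr[i] == key)
            return i;
    return -1;   // key not present
}

For the array of question 6 above and key = 17, this sketch returns index 8.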


250+ TOP MCQs on Quicksort using Random Sampling and Answers

Data Structures & Algorithms Multiple Choice Questions on “Quicksort using Random Sampling”.

1. Quick sort uses which of the following algorithm to implement sorting?
a) backtracking
b) greedy algorithm
c) divide and conquer
d) dynamic programming
Answer: c
Clarification: Quick sort uses the divide and conquer technique to sort a given array. It divides the array into two parts about the pivot and then applies quick sort to both parts.

2. What is a randomized quick sort?
a) quick sort with random partitions
b) quick sort with random choice of pivot
c) quick sort with random output
d) quick sort with random input

Answer: b
Clarification: Randomized quick sort chooses a random element as a pivot. It is done so as to avoid the worst case of quick sort in which the input array is already sorted.

3. What is the purpose of using randomized quick sort over standard quick sort?
a) so as to avoid worst case time complexity
b) so as to avoid worst case space complexity
c) to improve accuracy of output
d) to improve average case time complexity
Answer: a
Clarification: Randomized quick sort helps avoid the worst case time complexity of O(n²), which occurs when the input array is already sorted. However, the average case and best case time complexities remain unaltered.

4. What is the auxiliary space complexity of randomized quick sort?
a) O(1)
b) O(n)
c) O(log n)
d) O(n log n)

Answer: c
Clarification: The auxiliary space complexity of randomized quick sort is O(log n), which is used for the call stack formed due to recursion. Note that algorithms with O(log n) space complexity are generally also regarded as in-place algorithms, since log n grows very slowly.

5. What is the average time complexity of randomized quick sort?
a) O(n log n)
b) O(n2)
c) O(n² log n)
d) O(n log n²)

Answer: a
Clarification: The average case time complexity of randomized quick sort is the same as that of standard quick sort, as randomization only helps in preventing the worst case. It is equal to O(n log n).

6. Quick sort uses which of the following method to implement sorting?
a) merging
b) partitioning
c) selection
d) exchanging

Answer: b
Clarification: Quick sort makes partitions of the input array about the pivot in order to implement sorting. Thus its method of sorting is called partitioning.

7. Randomized quick sort is an in place sort.
a) true
b) false

Answer: a
Clarification: In-place algorithms require constant or very little auxiliary space. Quick sort qualifies as an in-place sorting algorithm as it has a very low auxiliary space requirement of O(log n).

8. Randomized quick sort is a stable sort.
a) true
b) false

Answer: b
Clarification: Randomized quick sort like standard quick sort is also not a stable sorting algorithm. It is because the elements with the same values are not guaranteed to appear in the same relative order in the output sorted array.

9. What is the best case time complexity of randomized quick sort?
a) O(log n)
b) O(n log n)
c) O(n²)
d) O(n² log n)

Answer: b
Clarification: The best case occurs when the array is partitioned into equal halves about the pivot. It is given by the recurrence T(n) = 2T(n/2) + n, which solves to O(n log n).
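
As an illustrative expansion (not part of the original answer), unrolling the recurrence gives T(n) = 2T(n/2) + n = 4T(n/4) + 2n = ... = 2^k T(n/2^k) + k*n; taking k = log2 n yields n*T(1) + n log2 n, which is O(n log n).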

10. Which of the following is incorrect about randomized quicksort?
a) it has the same time complexity as standard quick sort
b) it has the same space complexity as standard quick sort
c) it is an in-place sorting algorithm
d) it cannot have a time complexity of O(n²) in any case.

Answer: d
Clarification: Randomized quick sort prevents the worst case complexity of O(n²) in most of the cases. But in some rare cases the time complexity can become O(n²). The probability of such a case is however very low.

11. Which of the following functions chooses a random index as the pivot?
a)

void partition_random(int arr[], int low, int high) 
{     
    srand(time(NULL)); 
    int random = low + rand() % (high - low); 
    swap(arr[random], arr[high]); 
}

b)

void partition_random(int arr[], int low, int high) 
{    
    srand(time(NULL)); 
    int random = high + rand() % (high - low); 
    swap(arr[random], arr[high]); 
}

c)

void partition_random(int arr[], int low, int high) 
{     
    srand(1); 
    int random = low + rand() % (high - low); 
    swap(arr[random], arr[high]); 
}

d)

void partition_random(int arr[], int low, int high) 
{     
    srand(time(NULL)); 
    int random = low + rand() % (high - low-1); 
    swap(arr[low], arr[high]); 
}


Answer: a
Clarification: To generate different random numbers on each run we seed the generator with srand(time(NULL)). After generating the random index we swap the element at that index with the element at the last index, so the usual partition routine then works with a random pivot.
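
To show how such a helper might fit into the full algorithm, here is a hedged C++ sketch of randomized quick sort using Lomuto partitioning. partition_random mirrors option (a), except that srand(time(NULL)) is called once by the caller rather than inside every partition (re-seeding on each call would weaken the randomness within a single run); the remaining function names are illustrative.

#include <cstdlib>
#include <ctime>
#include <utility>   // std::swap

// Sketch: randomized quick sort with Lomuto partitioning.
// partition_random moves a randomly chosen element to arr[high]
// so that the standard partition step ends up using a random pivot.
void partition_random(int arr[], int low, int high)
{
    int random = low + rand() % (high - low);
    std::swap(arr[random], arr[high]);
}

int partition(int arr[], int low, int high)
{
    int pivot = arr[high];            // the randomly placed pivot
    int i = low - 1;
    for (int j = low; j < high; ++j)
        if (arr[j] <= pivot)
            std::swap(arr[++i], arr[j]);
    std::swap(arr[i + 1], arr[high]);
    return i + 1;                     // final position of the pivot
}

void quicksort_random(int arr[], int low, int high)
{
    if (low < high)
    {
        partition_random(arr, low, high);
        int p = partition(arr, low, high);
        quicksort_random(arr, low, p - 1);
        quicksort_random(arr, p + 1, high);
    }
}

// Typical usage: seed once, then sort the whole array.
//   srand(time(NULL));
//   quicksort_random(arr, 0, n - 1);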

 
 

12. What is the worst case time complexity of randomized quicksort?
a) O(n)
b) O(n log n)
c) O(n²)
d) O(n² log n)

Answer: c
Clarification: Randomized quicksort prevents the worst case complexity of O(n²) in most of the cases. But in some rare cases the time complexity can become O(n²). The probability of such a case is however very low.


250+ TOP MCQs on Bogosort and Answers

Data Structures & Algorithms Multiple Choice Questions on “Bogosort”.

1. Which of the following is not an alternative name of bogosort?
a) stupid sort
b) permutation sort
c) donkey sort
d) monkey sort
Answer: c
Clarification: Bogosort is also known by names like stupid sort, monkey sort, permutation sort, slow sort and shotgun sort. These names were chosen because of its highly inefficient algorithm.

2. Bogosort works by __________
a) generating random permutations of its input
b) partitioning the array
c) dividing the value of input elements
d) generating permutations according to the value of first element of array

Answer: a
Clarification: Bogosort algorithm successively generates permutations of its input. This process is repeated until the sorted version of the array is found.

3. What is the auxiliary space requirement of bogosort?
a) O(n)
b) O(1)
c) O(log n)
d) O(n log n)

Answer: b
Clarification: The bogosort algorithm does not require any extra space for sorting the input array. Thus its auxiliary space requirement is O(1).

4. What is the best case time complexity of bogosort?
a) O(n²)
b) O(n)
c) O(n log n)
d) O(1)

Answer: b
Clarification: Best case time complexity of bogosort occurs when the input array is already sorted. So in such a case we only need to check whether all the elements are sorted which can be done in O(n) time.

5. What is the worst case time complexity of bogosort?
a) O(n²)
b) O(n*n!)
c) O(infinity)
d) O(n log n)

Answer: c
Clarification: There is no upper bound on the worst case of this algorithm. It can take an arbitrarily large amount of time if the array has many elements, so the worst case of this algorithm is taken as O(infinity).

6. Which of the following sorting algorithms is not stable?
a) insertion sort
b) bubble sort
c) merge sort
d) bogosort

Answer: d
Clarification: Out of the given algorithms only bogosort is not stable. This is because it creates permutations of the input array in order to obtain the sorted version. So there is no guarantee that the sorted version obtained by such a method gives a stable output.

7. Which of the following is an in-place sorting algorithm?
a) Merge sort
b) Bogosort
c) Radix sort
d) Counting sort
Answer: b
Clarification: Out of the given algorithms only bogosort is an in-place sorting algorithm, because the bogosort algorithm does not require any extra space for sorting the input array.

8. Sleep sort should be preferred over bogosort as it has better time complexity.
a) true
b) false
Answer: b
Clarification: If we sort an array using sleep sort then there is no guarantee that the output is correctly sorted. So even though sleep sort has a better time complexity than bogosort, it cannot be preferred, due to its inaccuracy.

9. What is the average case time complexity of bogosort?
a) O(n²)
b) O(n*n!)
c) O(infinity)
d) O(n log n)

Answer: b
Clarification: For the average case we first count the number of possible permutations of an array of size n, which is n!. Each permutation also needs to be checked for being sorted, which takes another O(n) time. Thus the overall time complexity becomes O(n*n!).

10. Which of the following code correctly implements bogosort algorithm?
a)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] < a[n-1]) 
            return false; 
    return true; 
} 
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( !isSorted(a, n) ) 
        shuffle(a, n); 
}

b)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] < a[n-1]) 
            return true; 
    return false; 
} 
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( !isSorted(a, n) ) 
        shuffle(a, n); 
}

c)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] > a[n-1]) 
            return true; 
    return false; 
}
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( !isSorted(a, n) ) 
        shuffle(a, n); 
}

d)

bool isSorted(int a[], int n) 
{ 
    while ( --n > 1 ) 
        if (a[n] < a[n-1]) 
            return false; 
    return true; 
} 
void shuffle(int a[], int n) 
{ 
    for (int i=0; i < n; i++) 
        swap(a[i], a[rand()%n]); 
} 
void bogosort(int a[], int n) 
{ 
 
    while ( isSorted(a, n) ) 
        shuffle(a, n); 
}

Answer: a
Clarification: To implement bogosort algorithm we need to shuffle the input array until we get the sorted array. So we first check whether the array is sorted using function isSorted(). If it is not, then we shuffle it using function shuffle(). This process is repeated until we get a sorted array.

 
 


250+ TOP MCQs on Rabin-Karp Algorithm and Answers

Data Structures & Algorithms Multiple Choice Questions on “Rabin-Karp Algorithm”.

1. What is a Rabin and Karp Algorithm?
a) String Matching Algorithm
b) Shortest Path Algorithm
c) Minimum spanning tree Algorithm
d) Approximation Algorithm

Answer: a
Clarification: The string matching algorithm proposed by Rabin and Karp generalizes to other algorithms and to two-dimensional pattern matching problems.

2. What is the pre-processing time of Rabin and Karp Algorithm?
a) Theta(m²)
b) Theta(mlogn)
c) Theta(m)
d) Big-Oh(n)
Answer: c
Clarification: The for loop in the pre-processing algorithm runs for m (the length of the pattern) iterations. Hence the pre-processing time is Theta(m).

3. Rabin Karp Algorithm makes use of elementary number theoretic notions.
a) True
b) False

Answer: a
Clarification: Rabin Karp Algorithm makes use of elementary number-theoretic notions such as the equivalence of two numbers modulo a third number.

4. What is the basic formula applied in Rabin Karp Algorithm to get the computation time as Theta(m)?
a) Halving rule
b) Horner’s rule
c) Summation lemma
d) Cancellation lemma

Answer: b
Clarification: The pattern can be evaluated in time Theta(m) using Horner’s rule:
p = P[m] + 10(P[m-1] + 10(P[m-2] +…+ 10(P[2]+10P[1])…)).
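
As a small illustration of this rule, here is a C++ sketch that folds the modulus q into every step, which is how the algorithm keeps the intermediate values small; the function name horner_hash, the radix parameter d, and the assumption of digit characters are illustrative.

// Sketch: compute p = P[1..m] mod q with Horner's rule,
// taking the modulus at every step so the value never grows large.
// d is the radix (10 for decimal digits).
int horner_hash(const char P[], int m, int d, int q)
{
    int p = 0;
    for (int i = 0; i < m; ++i)
        p = (d * p + (P[i] - '0')) % q;
    return p;
}

For example, horner_hash("31415", 5, 10, 13) evaluates to 7, the pattern value used in question 12 below.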

5. What is the worst case running time of Rabin Karp Algorithm?
a) Theta(n)
b) Theta(n-m)
c) Theta((n-m+1)m)
d) Theta(nlogm)

Answer: c
Clarification: The worst case running time of Rabin Karp Algorithm is Theta((n-m+1)m). We write Theta(n-m+1) instead of Theta(n-m) because there are n-m+1 possible shifts of the pattern over the text.

6. Rabin- Karp algorithm can be used for discovering plagiarism in a sentence.
a) True
b) False

Answer: a
Clarification: Since Rabin-Karp algorithm is a pattern detecting algorithm in a text or string, it can be used for detecting plagiarism in a sentence.

7. If n is the length of text(T) and m is the length of the pattern(P) identify the correct pre-processing algorithm. (where q is a suitable modulus to reduce the complexity)
p=0; t0=0;
a)

for i=1 to n
do t0=(dt0 + P[i])mod q 
p=(dp+T[i])mod q

b)

for i=1 to n
do p=(dp + P[i])mod q 
t0=(dt0+T[i])mod q

c)

for i=1 to m
do t0=(dp + P[i])mod q 
p=(dt0+T[i])mod q

d)

for i=1 to m
do p=(dp + P[i])mod q 
t0=(dt0+T[i])mod q

Answer: d
Clarification: The pre-processing algorithm runs m (the length of pattern) times. This algorithm is used to compute p as the value of P[1….m] mod q and t0 as the value of T[1….m]mod q.

 
 

8. If n is the length of text(T) and m is the length of the pattern(P) identify the correct matching algorithm.
a)

for s=0 to n
		do if p=t0	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

b)

for s=0 to n-m
		do if p=ts	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

c)

 for s=0 to m
		do if p=ts	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

d)

 for s=0 to n-m
		do if p!=ts	
			then if P[1..m]=T[s+1..s+m]
				then print “Pattern occurs with shift” s

Answer: b
Clarification: The matching loop runs over the shifts s = 0 to n-m. The Rabin Karp algorithm explicitly verifies every shift whose hash value matches; if the pattern matches the text at that shift, the algorithm prints that the pattern occurs with that shift.
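
For completeness, here is a hedged C++ sketch that combines the pre-processing and matching phases with the rolling-hash update t(s+1) = (d*(t(s) - T[s+1]*h) + T[s+m+1]) mod q, where h = d^(m-1) mod q (1-indexed as in the pseudocode above; the code itself is 0-indexed). It assumes a decimal-digit text and pattern (d = 10); the function name and the use of std::string are illustrative.

#include <iostream>
#include <string>

// Sketch: Rabin-Karp matching over decimal digits with modulus q.
// Prints every shift where the hash matches and the characters verify
// (the explicit check rules out spurious hits).
void rabin_karp(const std::string& T, const std::string& P, int q)
{
    int n = (int)T.size(), m = (int)P.size(), d = 10;
    if (m > n)
        return;
    int h = 1;                        // h = d^(m-1) mod q
    for (int i = 0; i < m - 1; ++i)
        h = (h * d) % q;
    int p = 0, t = 0;                 // hashes of the pattern and the current window
    for (int i = 0; i < m; ++i)       // pre-processing via Horner's rule
    {
        p = (d * p + (P[i] - '0')) % q;
        t = (d * t + (T[i] - '0')) % q;
    }
    for (int s = 0; s <= n - m; ++s)
    {
        if (p == t && T.compare(s, m, P) == 0)
            std::cout << "Pattern occurs with shift " << s << "\n";
        if (s < n - m)                // roll the hash: drop T[s], bring in T[s+m]
        {
            t = (d * (t - (T[s] - '0') * h) + (T[s + m] - '0')) % q;
            if (t < 0)
                t += q;               // keep the hash non-negative
        }
    }
}

For instance, with the data of question 11 below (T = "9272183057121219362397", P = "21936", q = 21) this would report a shift of 13.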

 
 

9. What happens when the modulo value(q) is taken large?
a) Complexity increases
b) Spurious hits occur frequently
c) Cost of extra checking is low
d) Matching time increases
Answer: c
Clarification: If the modulo value(q) is large enough then the spurious hits occur infrequently enough that the cost of extra checking is low.

10. Given a pattern with a window of length 5, find a suitable modulus value.

a) 13
b) 14
c) 12
d) 11
Answer: a
Clarification: The modulus q is typically chosen as a prime number that is large enough to reduce the complexity when p is very large.

11. Given a pattern with a window of length 5, find the valid match in the given text.

Pattern: 2 1 9 3 6 
Modulus: 21
Index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
Text:   9 2 7 2 1 8 3 0 5 7 1   2   1   2   1   9   3   6   2   3   9  7

a) 11-16
b) 3-8
c) 13-18
d) 15-20

Answer: c
Clarification: The pattern 2 1 9 3 6 occurs in the text in the window indicated by option (c), starting at index 13. The value of the given pattern is computed as 12 with modulus 21 (21936 mod 21 = 12), and the same computation is performed on the text values for each possible position of a 5-length window.

12. Given a pattern with a window of length 5, find the spurious hit in the given text string.

Pattern: 3 1 4 1 5 
Modulus: 13
Index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Text:   2 3 5 9 0 2 3 1 4 1 5   2   6   7   3   9   9   2   1   3   9

a) 6-10
b) 12-16
c) 3-7
d) 13-17

Answer: d
Clarification: The substring indicated by option (d), 6 7 3 9 9, produces the same value 7 as the given pattern (31415 mod 13 = 7 and 67399 mod 13 = 7). But the digits of the substring don't match the pattern, hence it is a spurious hit.

13. If the expected number of valid shifts is small and modulus is larger than the length of pattern what is the matching time of Rabin Karp Algorithm?
a) Theta(m)
b) Big-Oh(n+m)
c) Theta(n-m)
d) Big-Oh(n)

Answer: b
Clarification: When the number of valid shifts (v) is Big-Oh(1) and q >= m, the matching time O(n) + O(m(v + n/q)) simplifies to O(n+m).

14. What is the basic principle in Rabin Karp algorithm?
a) Hashing
b) Sorting
c) Augmenting
d) Dynamic Programming

Answer: a
Clarification: The basic principle employed in Rabin Karp algorithm is hashing. In the given text every substring is converted to a hash value and compared with the hash value of the pattern.

15. Who created the Rabin Karp Algorithm?
a) Joseph Rabin and Michael Karp
b) Michael Rabin and Joseph Karp
c) Richard Karp and Michael Rabin
d) Michael Karp and Richard Rabin

Answer: c
Clarification: Rabin Karp algorithm was invented by Richard Karp and Michael Rabin in the year 1987 for searching a pattern in the given string.


250+ TOP MCQs on Non-recursive Depth First Search and Answers

Data Structures & Algorithms Multiple Choice Questions on “Non-recursive Depth First Search”.

1. Which of the following data structure is used to implement DFS?
a) linked list
b) tree
c) stack
d) queue

Answer: c
Clarification: Stack is used in the standard implementation of depth first search. It is used to store the elements which are to be explored.

2. Which of the following traversal in a binary tree is similar to depth first traversal?
a) level order
b) post order
c) pre order
d) in order

Answer: c
Clarification: In DFS we keep on exploring as far as possible along each branch before backtracking. It terminates when all nodes are visited. So it is similar to pre order traversal in binary tree.

3. Which of the following represent the correct pseudo code for non recursive DFS algorithm?
a)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.push(v)
  while St is not empty
    v = St.pop()
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(a) //a being the adjacent vertex

b)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.pop()
  while St is not empty
    v = St.push(v)
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(a) //a being the adjacent vertex

c)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.push(v)
  while St is not empty
    v = St.pop()
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(v)

d)

procedure DFS-non_recursive(G,v):
  //let St be a stack 
  St.pop(v)
  while St is not empty
    v = St.pop()
    if v is not discovered:
      label v as discovered
      for all adjacent vertices of v do
        St.push(a) //a being the adjacent vertex

Answer: a
Clarification: In the iterative approach we first push the source node into the stack. If the node has not been visited then it is printed and marked as visited. Then the unvisited adjacent nodes are added to the stack. Then the same procedure is repeated for each node of the stack.

4. What will be the time complexity of the iterative depth first traversal code (V = no. of vertices, E = no. of edges)?
a) O(V+E)
b) O(V)
c) O(E)
d) O(V*E)

Answer: a
Clarification: The time required to traverse a full graph is proportional to V+E, so the worst case time complexity is O(V+E). The time complexities of iterative and recursive DFS are the same.

5. Which of the following functions correctly represent iterative DFS?
a)

void DFS(int s) 
{     
    vector<bool> discovered(V, true); 
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        s = st.top(); 
        st.pop();         
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = true; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (!discovered[*i]) 
                st.push(*i); 
    } 
}

b)

void DFS(int s) 
{     
    vector<bool> discovered(V, false);     
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        s = st.top(); 
        st.pop();        
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = true; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (!discovered[*i]) 
                st.push(*i); 
    } 
}

c)

void DFS(int s) 
{     
    vector<bool> discovered(V, false);     
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        st.pop(); 
        s = st.top();         
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = true; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (!discovered[*i]) 
                st.push(*i); 
    } 
}

d)

void DFS(int s) 
{     
    vector<bool> discovered(V, false);     
    stack<int> st;     
    st.push(s);   
    while (!st.empty()) 
    { 
        s = st.top(); 
        st.pop();         
        if (!discovered[s]) 
        { 
            cout << s << " "; 
            discovered[s] = false; 
        } 
        for (auto i = adjacent[s].begin(); i != adjacent[s].end(); ++i) 
            if (discovered[*i]) 
                st.push(*i); 
    } 
}


Answer: b
Clarification: In the correct version we first push the source node into the stack. If the node has not been visited then it is printed and marked as visited. Then the unvisited adjacent nodes are added to the stack. Then the same procedure is repeated for each node of the stack.

8. What is the space complexity of standard DFS (V: no. of vertices, E: no. of edges)?
a) O(V+E)
b) O(V)
c) O(E)
d) O(V*E)

Answer: b
Clarification: In the worst case the space complexity of DFS will be O(V) in the case when all the vertices are stored in stack. This space complexity is excluding the space required to store the graph.

9. Which of the following data structure is used to implement BFS?
a) linked list
b) tree
c) stack
d) queue

Answer: d
Clarification: Queue is used in the standard implementation of breadth first search. It stores the vertices that have been discovered but whose neighbours have not yet been explored, so that vertices are processed level by level.
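
For comparison with the iterative DFS code above, here is a hedged C++ sketch of BFS using a queue. The globals V and adjacent mirror the ones assumed by the earlier DFS snippets; the concrete value of V here is illustrative.

#include <iostream>
#include <queue>
#include <vector>
using namespace std;

// Assumed globals, matching the DFS snippets above (the value of V is illustrative).
const int V = 5;
vector<int> adjacent[V];

void BFS(int s)
{
    vector<bool> discovered(V, false);
    queue<int> q;
    discovered[s] = true;
    q.push(s);
    while (!q.empty())
    {
        s = q.front();
        q.pop();
        cout << s << " ";              // vertices come out in level order
        for (int a : adjacent[s])
            if (!discovered[a])
            {
                discovered[a] = true;  // mark when enqueued so each vertex is added once
                q.push(a);
            }
    }
}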

10. Choose the incorrect statement about DFS and BFS from the following?
a) BFS is equivalent to level order traversal in trees
b) DFS is equivalent to post order traversal in trees
c) DFS and BFS code has the same time complexity
d) BFS is implemented using queue

Answer: b
Clarification: DFS is equivalent to pre order traversal in trees, not post order traversal. This is because in DFS we keep exploring as far as possible along each branch before backtracking, which means a node is visited before its subtrees, exactly as in pre order traversal.