Data Structures & Algorithms Multiple Choice Questions on “Huffman Code”.
1. Which of the following algorithms is the best approach for solving Huffman codes?
a) exhaustive search
b) greedy algorithm
c) brute force algorithm
d) divide and conquer algorithm
Answer: b
Clarification: The greedy algorithm is the best approach for the Huffman coding problem: at each step it merges the two trees of lowest weight, and this locally optimal choice yields a globally optimal code.

2. How many printable characters does the ASCII character set consist of?
a) 120
b) 128
c) 100
d) 98
Answer: c
Clarification: Of the 128 characters in the ASCII set, roughly 100 are printable; the rest are non-printable control characters.

3. Which bit is reserved as a parity bit in an ASCII set?
a) first
b) seventh
c) eighth
d) tenth
Answer: c
Clarification: In the ASCII character set, seven bits represent the character while the eighth bit is reserved as a parity bit.

4. How many bits are needed for standard encoding if the size of the character set is X?
a) log X
b) X+1
c) 2X
d) X²
Answer: a
Clarification: If the size of the character set is X, then ⌈log X⌉ bits are needed per character in a standard fixed-length encoding.

5. The code length does not depend on the frequency of occurrence of characters.
a) true
b) false
Answer: b
Clarification: In Huffman coding the code length depends on the frequency of occurrence of characters: the more frequently a character occurs, the shorter its code.

6. In Huffman coding, where in the tree does the data always occur?
a) roots
b) leaves
c) left sub trees
d) right sub trees
Answer: b
Clarification: In Huffman encoding, data is always stored at the leaves of the tree in order to compute the codewords effectively.

7. From the following given tree, what is the code word for the character ‘a’?
a) 011
b) 010
c) 100
d) 101
Answer: a
Clarification: By recording the path from the root to the leaf for ‘a’, the codeword is found to be 011.

8. From the following given tree, what is the computed codeword for ‘c’?
a) 111
b) 101
c) 110
d) 011
Answer: c
Clarification: Recording the path from root to leaf, assigning 0 to each left branch and 1 to each right branch, gives the codeword 110 for ‘c’.

9. What will be the cost of the code if character ci is at depth di and occurs at frequency fi?
a) cifi
b) ∫cifi
c) ∑fidi
d) fidi
Answer: c
Clarification: If character ci is at depth di and occurs with frequency fi, the cost of the code is ∑fidi, the total number of bits needed to encode the text.

10. An optimal code will always be present in a full tree.
a) true
b) false
Answer: a
Clarification: An optimal tree always has the property that every node is either a leaf or has two children; otherwise a node with one child could move up a level, shortening some codeword.

11. The type of encoding where no character code is the prefix of another character code is called?
a) optimal encoding
b) prefix encoding
c) frequency encoding
d) trie encoding
Answer: b
Clarification: Even when the character codes have different lengths, an encoding in which no character code is a prefix of another is called a prefix encoding, and it can be decoded unambiguously.

12. What is the running time of the Huffman encoding algorithm?
a) O(C)
b) O(log C)
c) O(C log C)
d) O(N log C)
Answer: c
Clarification: If we maintain the trees in a priority queue ordered by weight, each of the C−1 merges costs O(log C), so the total running time is O(C log C).

13. What is the running time of the Huffman algorithm, if its implementation of the priority queue is done using linked lists?
a) O(C)
b) O(log C)
c) O(C log C)
d) O(C²)
Answer: d
Clarification: With a linked-list priority queue, each minimum extraction takes O(C) time, so the C−1 merges give a total running time of O(C²).
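The greedy construction from question 1 and the O(C log C) heap-based running time from question 12 can be sketched in Python. This is an illustrative implementation, not a specific library API; the function and variable names are made up:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code for `text` greedily: repeatedly merge the two
    lowest-frequency trees using a binary heap (priority queue), which gives
    O(C log C) time for C distinct characters."""
    freq = Counter(text)
    # Each heap entry is (frequency, tiebreaker, tree); a tree is either a
    # character (leaf) or a (left, right) pair (internal node). The unique
    # tiebreaker keeps tuple comparison from ever reaching the tree itself.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two smallest-weight trees
        f2, _, t2 = heapq.heappop(heap)
        count += 1
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
    codes = {}
    def walk(tree, path):
        if isinstance(tree, tuple):       # internal node
            walk(tree[0], path + "0")     # left branch = 0
            walk(tree[1], path + "1")     # right branch = 1
        else:                             # leaf: record the codeword
            codes[tree] = path or "0"
    walk(heap[0][2], "")
    return codes
```

For the input "aabbbcccc", the most frequent character ‘c’ ends up one level deep (a 1-bit code) while ‘a’ and ‘b’ get 2-bit codes, illustrating question 5: higher frequency means a shorter code.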
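The cost formula ∑fidi from question 9 and the prefix property from question 11 can both be checked directly for any code table. A small sketch, with a hypothetical example table:

```python
def code_cost(freq, codes):
    # Cost = sum over characters of f_i * d_i, where the depth d_i of a
    # character in the Huffman tree equals its codeword length.
    return sum(freq[ch] * len(codes[ch]) for ch in freq)

def is_prefix_free(codes):
    # Prefix encoding: no codeword is a prefix of another. After sorting,
    # any prefix pair becomes adjacent, so checking neighbours suffices.
    words = sorted(codes.values())
    return all(not b.startswith(a) for a, b in zip(words, words[1:]))
```

With frequencies {a: 2, b: 3, c: 4} and codes {a: "10", b: "11", c: "0"}, the cost is 2·2 + 3·2 + 4·1 = 14 bits, and the table is prefix-free.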
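For contrast with question 13, replacing the heap with an unsorted list (standing in for a linked-list priority queue) makes each minimum extraction a linear O(C) scan, so the C−1 merges cost O(C²) overall. A minimal sketch with an illustrative name:

```python
def huffman_total_weight_quadratic(freqs):
    """Perform Huffman's merges using linear scans for each minimum
    extraction (O(C) apiece), for O(C^2) total over C-1 merges."""
    weights = list(freqs)
    while len(weights) > 1:
        a = min(weights); weights.remove(a)   # O(C) scan + removal
        b = min(weights); weights.remove(b)   # O(C) scan + removal
        weights.append(a + b)                 # merged tree's weight
    return weights[0]  # root weight = sum of all frequencies
```

Only the merge order is tracked here, not the tree shape; the point is that the data structure backing the priority queue, not the greedy strategy itself, determines whether the algorithm runs in O(C log C) or O(C²).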