**ARTIFICIAL INTELLIGENCE GAMES Interview Questions**

1. A game can be formally defined as a kind of search problem with the following components:

a) Initial State

b) Successor Function

c) Terminal Test

d) Utility Function

a) Initial State

b) Successor Function

c) Terminal Test

d) Utility Function

Explanation: The initial state includes the board position and identifies the player to move. The successor function returns a list of (move, state) pairs, each indicating a legal move and the resulting state. The terminal test determines when the game is over; states where the game has ended are called terminal states. The utility function (also called an objective function or payoff function) gives a numeric value for the terminal states. In chess, the outcome is a win, loss, or draw, with values +1, -1, or 0.
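The four components above can be sketched as a minimal interface, shown here on a toy Nim-style pile game (a hypothetical example, not from the text): players alternately remove one or two stones, and the player who takes the last stone wins.

```python
# A minimal sketch of the four components that formally define a game,
# on a hypothetical Nim-like pile game.

class NimGame:
    def initial_state(self):
        # Board position (pile size) plus the player to move.
        return (5, "MAX")

    def successors(self, state):
        # List of (move, resulting_state) pairs, one per legal move.
        pile, player = state
        nxt = "MIN" if player == "MAX" else "MAX"
        return [(take, (pile - take, nxt)) for take in (1, 2) if take <= pile]

    def terminal_test(self, state):
        # The game is over when the pile is empty (a terminal state).
        return state[0] == 0

    def utility(self, state):
        # Numeric payoff for MAX at a terminal state: the player to move
        # at an empty pile did NOT take the last stone, so they lost.
        pile, player = state
        return -1 if player == "MAX" else +1
```

Every concrete game (chess, Tic-Tac-Toe, and so on) supplies its own versions of these four methods; minimax and alpha-beta search rely only on this interface.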

2. General algorithm applied on game tree for making decision of win/lose is ____________

a) DFS/BFS Search Algorithms

b) Heuristic Search Algorithms

c) Greedy Search Algorithms

d) MIN/MAX Algorithms

d) MIN/MAX Algorithms

Explanation: Given a game tree, the optimal strategy can be determined by examining the minimax value of each node, written MINIMAX-VALUE(n). The minimax value of a node is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. The minimax value of a terminal state is simply its utility. Furthermore, given a choice, MAX prefers to move to a state of maximum value, whereas MIN prefers a state of minimum value.
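As a sketch, MINIMAX-VALUE is a short recursion over the game tree. The nested-list encoding below is a hypothetical stand-in for a real successor function: a number is a terminal utility, and a list is an internal node whose children alternate between MAX and MIN levels.

```python
# A minimal sketch of MINIMAX-VALUE over a hypothetical nested-list tree.

def minimax_value(node, maximizing):
    if isinstance(node, (int, float)):   # terminal state: its utility
        return node
    values = [minimax_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Root is a MAX node; each inner list is a MIN node over leaf utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

With the root as a MAX node, the value of this tree is 3: the three MIN nodes back up 3, 2, and 2, and MAX prefers the first.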

3. Which search is equal to minimax search but eliminates the branches that can’t influence the final decision?

a) Depth-first search

b) Breadth-first search

c) Alpha-beta pruning

d) None of the mentioned

c) Alpha-beta pruning

Explanation: The alpha-beta search computes the same optimal moves as minimax, but eliminates the branches that can’t influence the final decision.
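A sketch of alpha-beta search on the same kind of nested nlist tree (a hypothetical encoding, not a full game implementation): it returns exactly the minimax value, but stops expanding a node's remaining children once the current value can no longer influence the decision at the root.

```python
# A minimal sketch of alpha-beta pruning over a hypothetical nested-list tree.

def alpha_beta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):   # terminal state: its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: MIN will avoid this node
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alpha_beta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:                # alpha cutoff: MAX already has better
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

On this tree, once the second MIN node reaches the leaf with utility 2, its value is at most 2 while MAX already has a move worth 3 from the first node, so the remaining leaves 4 and 6 are pruned; the result is still 3, the same as plain minimax.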

4. Which values are independent in minimax search algorithm?

a) Pruned leaves x and y

b) Every state is dependent

c) Root is independent

d) None of the mentioned

a) Pruned leaves x and y

Explanation: The minimax decision is independent of the values of the pruned leaves x and y, because pruning only removes branches that cannot affect the value backed up to the root.

5. Which value is assigned to alpha and beta in the alpha-beta pruning?

a) Alpha = max

b) Beta = min

c) Beta = max

d) Both a & b

d) Both a & b

Explanation: Alpha is the value of the best (i.e., highest-value) choice found so far at any choice point along the path for MAX, and beta is the value of the best (i.e., lowest-value) choice found so far along the path for MIN.

6. To what depth can alpha-beta pruning be applied?

a) 10 states

b) 8 States

c) 6 States

d) Any depth

d) Any depth

Explanation: Alpha-beta pruning can be applied to trees of any depth, and it is possible to prune entire subtrees rather than just leaves.

7. Which search is similar to minimax search?

a) Hill-climbing search

b) Depth-first search

c) Breadth-first search

d) All of the mentioned

b) Depth-first search

Explanation: Minimax search is a depth-first search, so at any one time we only need to consider the nodes along a single path in the tree.

8. Where do the values in alpha-beta search get updated?

a) Along the path of search

b) Initial state itself

c) At the end

d) None of the mentioned

a) Along the path of search

Explanation: Alpha-beta search updates the values of alpha and beta as the search proceeds, and prunes the remaining branches at a node as soon as the node's value is known to be worse than the current alpha or beta value.

9. What is a transposition table?

a) Hash table of next seen positions

b) Hash table of previously seen positions

c) Next value in the search

d) None of the mentioned

b) Hash table of previously seen positions

Explanation: Repeated states occur frequently in game search because different move sequences can lead to the same position (a transposition). A transposition table is a hash table that stores the values of previously seen positions so they need not be recomputed.
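A sketch of a transposition table in use (hypothetical encoding): minimax memoized on a hash table keyed by (position, side to move), so a position reached again through a different move order is looked up instead of re-searched.

```python
# A minimal sketch of minimax with a transposition table (a plain dict).

def minimax_value(node, maximizing, table):
    key = (node, maximizing)         # position plus the player to move
    if key in table:
        return table[key]            # transposition: reuse the stored value
    if isinstance(node, (int, float)):
        value = node                 # terminal state: its utility
    else:
        children = [minimax_value(c, not maximizing, table) for c in node]
        value = max(children) if maximizing else min(children)
    table[key] = value               # remember this position's value
    return value

# Tuples (hashable) encode positions; the first and third subtrees here
# represent the same position reached by two different move orders.
tree = ((3, 12, 8), (2, 4, 6), (3, 12, 8))
table = {}
```

After one call, the repeated subtree is present in the table and its second occurrence is answered from the cache rather than searched again.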

10. Which function is used when searching the whole game tree is infeasible?

a) Evaluation function

b) Transposition

c) Alpha-beta pruning

d) All of the mentioned

a) Evaluation function

Explanation: Searching the whole game tree is usually infeasible, so we cut the search off at some point and apply an evaluation function that gives an estimate of the utility of the state.
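The cutoff idea can be sketched as follows (the depth limit and the leaf-averaging heuristic are both hypothetical choices for illustration): beyond `depth_limit`, return a heuristic estimate of the state's utility instead of searching on to terminal states.

```python
# A minimal sketch of minimax with a depth cutoff and an evaluation
# function, over a hypothetical nested-list tree.

def h_minimax(node, depth, maximizing, depth_limit, evaluate):
    if isinstance(node, (int, float)):      # terminal state: exact utility
        return node
    if depth >= depth_limit:
        return evaluate(node)               # cutoff: heuristic estimate
    values = [h_minimax(c, depth + 1, not maximizing, depth_limit, evaluate)
              for c in node]
    return max(values) if maximizing else min(values)

def flatten(node):
    # Collect all leaf utilities reachable from a node.
    if isinstance(node, (int, float)):
        return [node]
    return [leaf for child in node for leaf in flatten(child)]

def evaluate(node):
    # A toy heuristic: the average of the reachable leaf utilities.
    leaves = flatten(node)
    return sum(leaves) / len(leaves)
```

With a large enough depth limit this reduces to exact minimax; with the limit at zero it simply returns the heuristic estimate of the root.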

11. Which is identical to the closed list in Graph search?

a) Hill climbing search algorithm

b) Depth-first search

c) Transposition table

d) None of the mentioned

c) Transposition table

12. How is the effectiveness of alpha-beta pruning increased?

a) Depends on the nodes

b) Depends on the order in which they are executed

c) Both a & b

d) None of the mentioned

b) Depends on the order in which they are executed

Explanation: The effectiveness of alpha-beta pruning is highly dependent on the order in which the successors are examined; with good move ordering, far more branches can be pruned.

13. The complexity of the minimax algorithm is

a) Same as DFS

b) Space – O(bm) and Time – O(b^m)

c) Time – O(bm) and Space – O(b^m)

d) Same as BFS

a) Same as DFS

b) Space – O(bm) and Time – O(b^m)

Explanation: Minimax performs a complete depth-first exploration of the game tree, so for branching factor b and maximum depth m its time complexity is O(b^m) and its space complexity is O(bm), the same as DFS.

14. The minimax algorithm (Figure 6.3) computes the minimax decision from the current state. It uses a simple recursive computation of the minimax values of each successor state, directly implementing the defining equations. The recursion proceeds all the way down to the leaves of the tree, and then the minimax values are backed up through the tree as the recursion unwinds.

a) True

b) False

a) True

Explanation: Refer to the definition of the minimax algorithm above.

15. The initial state and the legal moves for each side define the __________ for the game.

a) Search Tree

b) Game Tree

c) State Space Search

d) Forest

b) Game Tree

Explanation: The initial state and the legal moves for each side define the game tree; the game tree for Tic-Tac-Toe is a standard example.

16. Zero sum game has to be a ______ game.

a) Single player

b) Two player

c) Multiplayer

d) Three player

c) Multiplayer

Explanation: Zero-sum games can be multiplayer games as long as the zero-sum condition (the players' payoffs always sum to zero) is satisfied.

17. Zero sum games are the one in which there are two agents whose actions must alternate and in which the utility values at the end of the game are always the same.

a) True

b) False

b) False

Explanation: The utility values at the end of the game are always equal and opposite (they sum to zero), not the same.

18. Mathematical game theory, a branch of economics, views any multi-agent environment as a game provided that the impact of each agent on the others is “significant,” regardless of whether the agents are cooperative or competitive.

a) True

b) False

a) True

19. Adversarial search problems use,

a) Competitive Environment

b) Cooperative Environment

c) Neither a nor b

d) Only a and b

a) Competitive Environment

Explanation: In a competitive environment the agents' goals are in conflict; they compete for the goal.

20. General games involve,

a) Single-agent

b) Multi-agent

c) Neither a nor b

d) Only a and b

d) Only a and b

Explanation: Depending on the game, it could be single-agent (e.g., Sudoku) or multi-agent (e.g., Chess).

21. Does artificial intelligence in games outlast other software programs in terms of flexibility?

Yes, artificial intelligence in games outlasts other software programs in terms of flexibility.

22. Consider this: after a while, Tesauro's temporal-difference program will likely stop learning. Does this mean that it has lost its intelligence?

Some game-playing programs are getting quite good, and I expect that in the long run all the best "players" will be programs. While that is wonderful, and while those programs that learn to play their games get a rating of minimal intelligence from me, remember that what is impressive about people is that they can not only play games; they also do heuristic search and theorem proving, use natural language, and cope with the real world. The real challenge is to get programs to do all of that. If you simply pursue techniques for game playing, will you ever end up with all these human capabilities in one program?


23. What is AI Backgammon?

This section looks at Berliner's program, two backpropagation versions by Tesauro, and a temporal-difference method by Tesauro. This latter program is very good and has found strategies that human backgammon players now acknowledge are better than some of the old, humanly devised strategies.

24. What is AI Checkers?

The main programs here are Arthur Samuel's: the rote-learning method, which is much like a memory-based method; generalization learning, which is much like backpropagation; and a signature-table approach, which also gives you a feed-forward-type network. One of Samuel's programs did beat a checkers champion, and the AI community has often made a fuss over that, saying that this AI program played a "championship-level" game; however, that expert beat the program in the next six games. Note, too, what Samuel says: "the program is quite capable of beating any amateur player and can give better players a good contest".

25. What is Game Playing AI?

This covers a number of game-playing techniques, notably for checkers and backgammon, because so much good research has been done on these problems and because so many different techniques have been tried.