Homework 7 | Introduction to Artificial Intelligence | CS 5804, Assignments of Computer Science

Material Type: Assignment; Professor: Ramakrishnan; Class: Intro Artificial Intelligence; Subject: Computer Science; University: Virginia Polytechnic Institute And State University; Term: Fall 2006;

CS 5804: Homework #5
Assigned: November 10, 2006
Date due: November 20, 2006

1. (35 points) Pick up an algorithms textbook such as CLRS ("Introduction to Algorithms," Cormen, Leiserson, Rivest, and Stein, MIT Press) and navigate to the dynamic programming chapter. Choose an example from that chapter, such as the associativity of matrix multiplication. Explain how to cast that problem through a reinforcement learning lens: carefully define the states, actions, state transition and reward matrices. Then interpret the specific algorithm presented for that example as solving the RL problem. What technique is the algorithm using? Is it just plain dynamic programming (the way we have seen it in the RL book), or something more? What new 'tricks' do you learn about solving RL problems by reading this chapter?

2. (15 points) Problem 3.8 from the RL book (Sutton and Barto).

3. (25 points) The RL book defines a new type of update called value iteration, distinct from policy iteration (which was covered in class). Learn more about this update and how it differs from policy iteration. Then solve Problem 17.4 (of AIMA, not the RL book) with both value iteration and policy iteration, and explain any discrepancies you observe.

4. (25 points) We have studied RL only for Markov decision processes (MDPs). Consider the problem of finding a Knight's tour on a chessboard: the Knight must visit all squares, each square exactly once, taking only 'knightly' moves. Is this problem Markovian? Can the RL algorithms we know so far be used to solve the Knight's tour? If not, how can they be adapted to solve it?
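As a starting point for Problem 1, here is a minimal sketch of the CLRS matrix-chain example seen through the RL framing the problem asks for: a state is a subchain (i, j) of matrices still to be multiplied, an action is the split point k, and the cost of a split plays the role of a (negative) reward. The bottom-up DP below computes the optimal value function V(i, j) exactly. The function name and framing are illustrative assumptions, not part of the assignment.

```python
def matrix_chain_value(dims):
    """Optimal-value table for the matrix-chain problem.

    dims[i] x dims[i+1] are the dimensions of matrix i, so a chain of
    n matrices is described by n+1 numbers. V[i][j] is the minimum
    number of scalar multiplications needed to compute the product of
    matrices i..j -- the 'optimal value' of state (i, j).
    """
    n = len(dims) - 1
    V = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # subchains of increasing length
        for i in range(n - length + 1):
            j = i + length - 1
            V[i][j] = min(
                # action k: split the chain between matrices k and k+1
                V[i][k] + V[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return V

# Example: three matrices of sizes 10x30, 30x5, 5x60.
V = matrix_chain_value([10, 30, 5, 60])
print(V[0][2])  # 4500, achieved by the order ((A1 A2) A3)
```

Because every action moves to strictly smaller subchains, the state graph is acyclic, so a single bottom-up sweep suffices; part of the exercise is deciding how this relates to the iterative backups used in the RL book.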
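For Problem 3, the following is a generic sketch of value iteration to contrast with policy iteration: it repeatedly applies the Bellman optimality backup rather than alternating full policy evaluation with policy improvement. The toy two-state MDP and all names (P, R, gamma) are assumptions for illustration; they are not the MDP of AIMA Problem 17.4.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-8):
    """Value iteration on a finite MDP.

    P[s][a] is a list of (probability, next_state) pairs; R[s][a] is the
    immediate reward. Sweeps the Bellman optimality backup until the
    largest change falls below eps, then extracts a greedy policy.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                    for a in actions)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < eps:
            break
    policy = {
        s: max(actions,
               key=lambda a: R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]))
        for s in states
    }
    return V, policy

# Toy two-state MDP: 'stay' keeps the current state, 'go' moves to the
# other state; only the A -> B transition pays a reward.
states, actions = ['A', 'B'], ['stay', 'go']
P = {'A': {'stay': [(1.0, 'A')], 'go': [(1.0, 'B')]},
     'B': {'stay': [(1.0, 'B')], 'go': [(1.0, 'A')]}}
R = {'A': {'stay': 0.0, 'go': 1.0}, 'B': {'stay': 0.0, 'go': 0.0}}
V, policy = value_iteration(states, actions, P, R)
print(policy['A'])  # 'go'
```

Note that value iteration never represents an explicit policy during the sweeps; policy iteration, by contrast, evaluates a fixed policy to convergence before each improvement step. Comparing where the two land on the same problem is exactly what Problem 3 asks.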