Windy Grid World

This assignment is to use reinforcement learning to solve the following 'Windy Grid World' problem. There are four actions: move up, down, right, and left. This is a deterministic domain: each action deterministically moves the agent one cell in the direction indicated. If the agent is on the boundary of the world and executes an action that would move it 'off' of the world, it remains in the same cell from which it executed the action.

Some cells in the grid (marked in the diagram) are "windy" states. In these states, the agent experiences an extra 'push' upward. For example, if the agent is in a windy state and executes an action to the left or right, the result is to move left or right (respectively) but also to move one cell upward, so the agent moves diagonally upward to the left or right.

This is an episodic task in which each episode lasts no more than 30 time steps. At the start of each episode, the agent is placed in the 'Start' state. Reward in this domain is zero everywhere except at the goal state (labeled "Goal" in the diagram): the agent obtains a reward of +10 when it executes any action from the goal state. The episode ends after 30 time steps or when the agent takes any action after having landed in the goal state.
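
To make the dynamics concrete, here is a minimal sketch of the transition function in Python/NumPy (one of the permitted languages). The grid size, start cell, goal cell, and set of windy cells below are placeholders of my own choosing; the real layout is given only by the assignment's diagram.

    import numpy as np

    # Hypothetical layout -- the real grid size, start, goal, and windy
    # cells come from the assignment's diagram, not from this sketch.
    ROWS, COLS = 7, 10
    START, GOAL = (3, 0), (3, 7)
    WINDY = {(r, c) for r in range(ROWS) for c in (3, 4, 5)}  # placeholder windy cells
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]              # up, down, left, right

    def step(state, action):
        """One deterministic transition: returns (next_state, reward, done)."""
        # +10 is earned for taking any action *from* the goal state, and
        # the episode ends as soon as that happens.
        if state == GOAL:
            return state, 10.0, True
        r, c = state
        dr, dc = ACTIONS[action]
        nr, nc = r + dr, c + dc
        if state in WINDY:        # wind in the current cell pushes one row up
            nr -= 1               # row 0 is the top of the grid
        # Clamp to the grid: a move that would leave the world is cancelled
        # coordinate-wise, so the agent stays on the grid.
        nr = min(max(nr, 0), ROWS - 1)
        nc = min(max(nc, 0), COLS - 1)
        return (nr, nc), 0.0, False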

You must solve the problem using Q-learning. Use ε-greedy exploration with ε = 0.1 (the agent takes a random action 10 percent of the time in order to explore). Use a learning rate of 0.1 and a discount factor of 0.9.
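
A minimal tabular Q-learning loop with these settings might look like the sketch below. It reuses the step, START, ROWS, COLS, and ACTIONS names from the environment sketch above; the episode count and random seed are arbitrary placeholders, while the 30-step cap and the uniform initialization of 10 come from this assignment (see the Updates section below).

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    N_EPISODES, MAX_STEPS = 5000, 30        # N_EPISODES is a placeholder; 30 is from the spec

    rng = np.random.default_rng(0)
    Q = 10.0 * np.ones((ROWS, COLS, len(ACTIONS)))  # uniform init of 10 (see Updates)

    returns = []                            # reward per episode, for item 2 below
    for episode in range(N_EPISODES):
        state, total = START, 0.0
        for t in range(MAX_STEPS):
            # epsilon-greedy: random action 10% of the time, greedy otherwise
            if rng.random() < EPSILON:
                action = int(rng.integers(len(ACTIONS)))
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = step(state, action)
            # Q-learning update: bootstrap from the best action in the next state
            target = reward if done else reward + GAMMA * np.max(Q[next_state])
            Q[state][action] += ALPHA * (target - Q[state][action])
            total += reward
            state = next_state
            if done:
                break
        returns.append(total)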

The programming must be done in MATLAB; students can get access to MATLAB here. Alternatively, students may code in Python (using NumPy). If you would rather code in a different language, please see Dr. Platt or the TA.

Students must submit their homework in the form of a ZIP file that includes the following:

1. A PDF of a plot of the grid world that illustrates the policy and a path found by Q-learning after it has approximately converged. The policy plot must identify the action taken by the policy in each state. The path must begin in the start state and follow the policy to the goal state. (A sketch of extracting the greedy policy and rolling out such a path appears after this list.)

2. A PDF of a plot of reward per episode.

3. A text file showing output from a sample run of your code.

4. A directory containing all source code for your project.

5. A short readme file listing the important files in your submission.
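
For items 1 and 2, the greedy policy can be read directly off the learned Q table, and the path obtained by rolling that policy out from the start state. The sketch below reuses the names from the training loop above (and inherits its placeholder layout); the policy arrows for the plot in item 1 can then be drawn with, e.g., matplotlib's quiver.

    import matplotlib.pyplot as plt

    # Greedy policy: best action in each cell (0=up, 1=down, 2=left, 3=right)
    policy = np.argmax(Q, axis=2)

    # Roll out the greedy path from the start state (item 1)
    path, state, done = [START], START, False
    for _ in range(MAX_STEPS):
        state, _, done = step(state, int(policy[state]))
        path.append(state)
        if done:
            break
    print("Greedy path:", path)

    # Reward per episode (item 2); `returns` was collected during training
    plt.plot(returns)
    plt.xlabel("Episode")
    plt.ylabel("Reward per episode")
    plt.savefig("reward_per_episode.pdf")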

Updates

You can initialize the Q function randomly, or you can initialize it to a uniform value of 10, i.e., you can initialize Q such that every entry in the table is equal to 10.

There have been questions about how to know when the algorithm has converged. The algorithm has converged when the value function has stopped changing significantly and the policy has stopped changing at all. Because we are using Q-learning, the algorithm will converge to a single optimal policy.
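
One informal way to test this in code is to snapshot the Q table periodically and compare: the values should move by less than some small tolerance, and the greedy policy should be identical between snapshots. A sketch, reusing the names above; the tolerance and the idea of comparing snapshots taken many episodes apart (rather than consecutive steps, since ε-greedy exploration keeps injecting noise) are my assumptions, not part of the assignment.

    def approximately_converged(Q, Q_prev, tol=1e-3):
        """Heuristic: Q-values nearly stable AND greedy policy unchanged."""
        value_stable = np.max(np.abs(Q - Q_prev)) < tol
        policy_stable = np.array_equal(np.argmax(Q, axis=2),
                                       np.argmax(Q_prev, axis=2))
        return value_stable and policy_stable

    # Example use: take Q_prev = Q.copy() every 100 episodes during training
    # and call approximately_converged(Q, Q_prev) after the next 100.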

Please also submit a short readme file with your homework that lists the significant files in your submission.
