Author ORCID Identifier

http://orcid.org/0000-0003-3188-4470

Degree Year

2021

Document Type

Thesis - Open Access

Degree Name

Bachelor of Arts

Department

Computer Science

Keywords

POMDPs, Artificial intelligence, Computer Science, Planning, Uncertainty, Domain, Markov Decision Process, POMCP

Abstract

Prior studies have demonstrated that for many real-world problems, POMDPs can be solved by online algorithms both quickly and near-optimally [10, 8, 6]. However, on an important class of problems where there is a large time delay between when the agent can gather information and when it needs to use that information, these solutions fail to adequately account for the value of information. As a result, information-gathering actions, even when they are critical to the optimal policy, are ignored by existing solutions, leading to sub-optimal decisions by the agent. In this research, we develop a novel solution that rectifies this problem by introducing a new algorithm that improves upon state-of-the-art online planning by better reflecting the value of information-gathering actions. We do this by adding an entropy term to the UCB1 heuristic in the POMCP algorithm. We test this solution on the Hallway problem. Results indicate that our new algorithm performs significantly better than POMCP.
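The abstract's core idea, adding an entropy term to POMCP's UCB1 action-selection rule, can be sketched as follows. The exact form of the entropy bonus is not specified in this record, so the sketch below assumes one plausible variant: a bonus proportional to the Shannon entropy of the belief reached after an action, weighted by a hypothetical parameter `beta`; the function names and signatures are illustrative, not the thesis's actual implementation.

```python
import math

def entropy(belief):
    """Shannon entropy (in nats) of a belief distribution over states."""
    return -sum(p * math.log(p) for p in belief if p > 0)

def ucb1_entropy(q, n_parent, n_child, child_belief, c=1.0, beta=0.5):
    """Entropy-augmented UCB1 score for an action node in the search tree.

    q            -- estimated action value Q(h, a)
    n_parent     -- visit count N(h) of the history (parent) node
    n_child      -- visit count N(h, a) of the action node
    child_belief -- belief distribution after taking the action (assumed input)
    c, beta      -- exploration and information-value weights (assumed)
    """
    if n_child == 0:
        return float('inf')  # try unvisited actions first, as in standard POMCP
    exploration = c * math.sqrt(math.log(n_parent) / n_child)
    # Assumed bonus: reward actions leading to high-entropy beliefs so the
    # search spends more simulations evaluating information-gathering actions.
    info_bonus = beta * entropy(child_belief)
    return q + exploration + info_bonus
```

Under this assumption, an action whose successor belief is uniform (maximally uncertain) receives a larger bonus than one leading to a near-certain belief, biasing tree search toward actions whose informational consequences would otherwise be undervalued.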
