Title: Adaptive Agent Architectures in Modern Virtual Games
Keywords: game artificial intelligence, agent architectures, decision theory, MDP, POMDP, knowledge representation
Issue Date: 24-Nov-2009
Citation: TAN CHEK TIEN (2009-11-24). Adaptive Agent Architectures in Modern Virtual Games. ScholarBank@NUS Repository.
Abstract: This thesis describes a generic decision-theoretic approach to agent architectures in modern virtual games. It aims chiefly to resolve the problem of sparse, unrelated work in game AI that is too specialized to integrate into a generic decision-making game agent. Although a large body of contemporary general AI research could inform generic agent architectures, it is seldom drawn upon in modern game AI research. Moreover, because such a generic architecture requires an enormous representation of the game world, naive implementations are intractable. Model-free learning approaches appear to eliminate the representation problem, but suffer similarly in the learning time required. This is unacceptable in modern games, where agents have insufficient time to evolve before results become noticeable to the player. Additionally, the player is the single most important element of a game, and a good game architecture must establish player awareness as a priority. Most player-modeling work relies on the assumption that a set of possibly unbounded player archetypes can be formulated in advance by experts, but this is time consuming and confines adaptability to the knowledge of those experts. Motivated by these considerations, this thesis proposes a model-based approach to a unified adaptive agent architecture. The essence of the approach lies in exploiting the philosophical structure of a modern virtual game to achieve tractability. A modern virtual game is almost entirely completely observable (the virtual world) and only minimally partially observable (the human player). The architecture therefore decomposes the problem into completely observable and partially observable attributes, using a Markov Decision Process (MDP) abstract to represent the former and a Partially Observable Markov Decision Process (POMDP) abstract to represent the latter.
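The tractability argument above — solve the large, fully observable part of the game as an MDP — can be illustrated with a minimal value-iteration sketch. This is not the thesis's implementation; the states, actions, transition probabilities, and rewards below are invented purely for illustration.

```python
# Minimal value iteration over a tiny fully observable MDP
# (hypothetical toy model, not the thesis's game world).

GAMMA = 0.9   # discount factor
THETA = 1e-6  # convergence threshold

# states: 0 = patrol, 1 = engage, 2 = goal (terminal)
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {"advance": [(0.8, 1, 0.0), (0.2, 0, 0.0)],
        "wait":    [(1.0, 0, 0.0)]},
    1: {"advance": [(0.9, 2, 1.0), (0.1, 0, 0.0)],
        "wait":    [(1.0, 1, 0.0)]},
    2: {},  # terminal state: no actions
}

def value_iteration(model, gamma=GAMMA, theta=THETA):
    """Compute the optimal state values by repeated Bellman backups."""
    V = {s: 0.0 for s in model}
    while True:
        delta = 0.0
        for s, actions in model.items():
            if not actions:          # terminal state keeps value 0
                continue
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            return V

V = value_iteration(transitions)
```

Because every state is observable, each backup is a simple maximization over actions; the analogous POMDP computation over the player would instead operate on belief distributions, which is what makes the fully observable MDP abstraction so much cheaper.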
Viewed another way, the problem is decomposed into environment-based adaptation and player-based adaptation. This greatly improves the tractability of the behavior computation, since the much larger game world is represented by an MDP, which is far more tractable than a POMDP. To generate the game model prior to adaptation, this thesis formulates modeling concepts for the POMDP and MDP abstracts respectively. In the POMDP abstract, an action-based Tactical Agent Personality (TAP) representation is formulated as the player-modeling component of the architecture. Because the formulation is based on agent actions, it removes the need for hand-crafted player archetypes and bounds the state space. In the MDP abstract, an automated model-building process based on prioritized sweeping is created. The MDP and POMDP policies are then computed and combined into a single eventual policy that adapts to both the game environment and the player. A minimal amount of online learning is also incorporated to handle in-game adaptation. The architecture and its components are implemented and compared in a variety of modern game scenarios, where they are shown to produce plausible results in terms of both speed and adaptation performance.
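The prioritized-sweeping step mentioned above can be sketched as follows. This is a generic Moore-and-Atkeson-style planner over a known tabular MDP, not the thesis's actual model-building process; the tiny chain model and all identifiers are hypothetical.

```python
# Prioritized sweeping over a known tabular MDP (illustrative sketch):
# back up states in order of the magnitude of their value change,
# propagating priority to predecessor states.
import heapq

GAMMA = 0.9
EPSILON = 1e-5

def prioritized_sweeping(model, sweeps=1000, gamma=GAMMA, eps=EPSILON):
    V = {s: 0.0 for s in model}
    # preds[s2] = states with some action that can transition into s2
    preds = {s: set() for s in model}
    for s, actions in model.items():
        for outcomes in actions.values():
            for _, s2, _ in outcomes:
                preds[s2].add(s)

    def backup(s):
        """Full Bellman backup; returns the magnitude of the value change."""
        if not model[s]:  # terminal state
            return 0.0
        best = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in model[s].values()
        )
        change = abs(best - V[s])
        V[s] = best
        return change

    # seed the queue with every state at equal priority (max-heap via negation)
    queue = [(-1.0, s) for s in model]
    heapq.heapify(queue)
    while queue and sweeps > 0:
        _, s = heapq.heappop(queue)
        change = backup(s)
        if change > eps:
            for p in preds[s]:  # a value change makes predecessors stale
                heapq.heappush(queue, (-change, p))
        sweeps -= 1
    return V

# tiny two-step chain: 0 -> 1 -> 2 (terminal, reward 1.0 on the last step)
chain = {
    0: {"go": [(1.0, 1, 0.0)]},
    1: {"go": [(1.0, 2, 1.0)]},
    2: {},
}
V = prioritized_sweeping(chain)
```

The design point is that backups are ordered by expected impact rather than swept uniformly, so large value changes near the reward propagate backwards quickly — the property that makes automated model building affordable at game scale.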
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File: TCT.pdf
Size: 6.16 MB
Format: Adobe PDF





Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.