Publications:
AAMAS Conference 2023: Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments
AAAI Spring Symposia 2022: An Architecture for Novelty Handling in a Multi-Agent Stochastic Environment: Case Study in Open-World Monopoly
Fifth ICAPS Workshop on Integrated Planning, Acting, and Execution 2021: Integrating Planning, Execution and Monitoring in the presence of Open World Novelties: Case Study of an Open World Monopoly Solver
AI agents often struggle to detect and adapt to sudden changes in their environments, particularly in multi-agent games where game rules, available actions, environment dynamics, and agent goals may shift unexpectedly. This paper presents an architecture that enables agents to detect, categorize, and adapt to novelties, using Answer Set Programming for logical reasoning. We apply this architecture to Monopoly, a multi-agent, imperfect-information game, and compare its performance to heuristic and standard Monte-Carlo Tree Search baselines. Our results show effective novelty detection and significant improvements in agent performance.
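The core detection idea can be illustrated with a minimal sketch (this is an illustration only, not the paper's Answer Set Programming implementation): the agent predicts the next state from its known rules of the game and flags a novelty, with a category, whenever an observation contradicts that prediction.

```python
# Minimal sketch of rule-based novelty detection. The movement rule below
# is standard Monopoly; the category label is an illustrative placeholder.
BOARD_SIZE = 40

def expected_position(pos, dice):
    """Known rule: a roll moves the player forward by the dice total."""
    return (pos + sum(dice)) % BOARD_SIZE

def detect_novelty(pos_before, dice, pos_after):
    """Flag a novelty when an observed transition violates a known rule."""
    predicted = expected_position(pos_before, dice)
    if pos_after != predicted:
        return {"type": "environment-dynamics",
                "expected": predicted, "observed": pos_after}
    return None

# A normal roll raises no novelty.
assert detect_novelty(10, (3, 4), 17) is None
# An unexpected transition (e.g., a changed movement rule) is reported.
assert detect_novelty(10, (3, 4), 12)["type"] == "environment-dynamics"
```

Once a novelty is categorized this way, the agent can repair the violated rule and replan rather than continue acting on a stale model.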
In recent years, several major cybersecurity attacks on SCADA devices have been reported, causing economic and societal damage. In 2010, Ten et al. proposed an attack tree model for impact analysis. We applied this model alongside financial loss data and Monte Carlo techniques to conduct a cost-benefit analysis of security improvements. Time to attack is modeled as an exponentially distributed variable, and financial losses are predicted using regression to logistic functions. The model simulates future losses for various attack scenarios and improvement plans, considering budget constraints to prioritize system enhancements. We compare genetic and differential evolution algorithms for optimal budget allocation.
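The simulation core described above can be sketched as follows. All numeric parameters here are illustrative assumptions, not the study's fitted values: time to the next successful attack is drawn from an exponential distribution, and each attack's financial loss follows a logistic curve.

```python
import math
import random

random.seed(0)

# Illustrative parameters (not the paper's fitted data):
ATTACK_RATE = 0.5              # expected successful attacks per year
HORIZON = 10.0                 # years simulated
L_MAX, K, T0 = 1e6, 1.2, 2.0   # logistic loss-curve parameters

def loss_at(t_elapsed):
    """Financial loss grows along a logistic curve after an attack begins."""
    return L_MAX / (1.0 + math.exp(-K * (t_elapsed - T0)))

def simulate_total_loss(rate=ATTACK_RATE, horizon=HORIZON):
    """One Monte Carlo run: draw exponential inter-attack times, sum losses.

    Each attack is assumed (for illustration) to accrue the loss it reaches
    one year after it begins.
    """
    t, total = 0.0, 0.0
    while True:
        t += random.expovariate(rate)   # exponential time to next attack
        if t >= horizon:
            return total
        total += loss_at(1.0)

# Expected loss for one scenario, averaged over many runs.
mean_loss = sum(simulate_total_loss() for _ in range(10_000)) / 10_000
```

In the full analysis, a run like this would be repeated per attack scenario and per candidate improvement plan (each plan lowering the attack rate at some cost), and the optimizer would allocate the budget to the plans with the best simulated loss reduction.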
Publication:
Joint Mathematics Meeting - January 2019 - Presentation: A Cost-benefit Analysis of Cyber Defense Improvements
Our task was to design a robot capable of autonomously climbing a simulated "tree": a pillar of variable geometry. The robot was controlled by a Raspberry Pi Zero and used camera-based surface detection for navigation. It had to be untethered, with a target mass under 1 kg, and had to fit within a 30.5 cm cube (40.5 cm when extended). We built the structure from MakerBeam extruded aluminum and 3D-printed PLA for the body and grippers. Each gripper used a rack-and-pinion system, with a servo-driven pinion sliding through PLA linear bearings. The two grippers were connected by an extendable body, also built on a telescoping rack-and-pinion design. To navigate the pillar's complex surface, we added pitch servos, giving the robot five degrees of freedom.
Reinforcement Learning (RL) is one of the most popular machine learning paradigms for problems that provide only limited environmental feedback. With the advent of modern RL and tree search algorithms, significant research has explored their application to areas such as robotics, games, strategy planning, and data processing. However, most existing RL agents are built to solve a single task. The essential idea of transfer learning is to reuse experience gained on one task to learn a different task with similar characteristics, reducing training time and improving outcomes. This project aims to develop a transfer learning method for pathfinding and object-avoidance AI agents across different Pac-Man environments, such as classic Pac-Man and Pac-Man Capture the Flag.
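One simple form this transfer can take is sketched below (an illustration under assumed names, not this project's exact method): a tabular Q-function learned in a source maze seeds the Q-table for a target maze, so states the agent has already seen start from learned values instead of zero.

```python
# Sketch of transfer by Q-table initialization between Pac-Man-like tasks.
# State keys and values here are illustrative.
ACTIONS = ["up", "down", "left", "right"]

def transfer_q(source_q, target_states):
    """Initialize the target task's Q-table from overlapping source states.

    States unseen in the source task fall back to zero-initialized values.
    """
    return {s: dict(source_q.get(s, {a: 0.0 for a in ACTIONS}))
            for s in target_states}

source_q = {(0, 0): {"up": 0.2, "down": -0.1, "left": 0.0, "right": 0.5}}
target_q = transfer_q(source_q, [(0, 0), (1, 1)])

assert target_q[(0, 0)]["right"] == 0.5   # knowledge carried over
assert target_q[(1, 1)]["up"] == 0.0      # unseen state starts from scratch
```

Training in the target environment then proceeds with ordinary Q-learning updates; the transferred values simply give the agent a better starting policy than a random initialization.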
With the advent of advanced search and machine learning algorithms, significant research has been conducted on their application to strategy board game AIs. Many game AIs have begun to take advantage of these algorithms' strong learning capabilities; neural-network-based agents such as AlphaGo Zero have demonstrated the potential to defeat expert players. However, these techniques have largely been explored in games of perfect information, where no knowledge is hidden from either player. Many board games instead hide information about the opponent that must be revealed through piece interactions. This project explores the effectiveness of these algorithms when applied to Stratego, a strategy board game with elements of randomness and hidden information. The experiment aims to reveal how effectively these algorithms make rational decisions in systems with incomplete information.
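A standard way to apply search to hidden-information games like Stratego is determinization, sketched below (an illustration of the general technique, not necessarily this project's exact pipeline): sample a concrete assignment of the opponent's unrevealed pieces consistent with what has been observed, then run ordinary tree search on that fully specified state, repeating over many samples.

```python
# Sketch of determinization for imperfect-information search.
# Square names and piece names are illustrative.
import random

random.seed(2)

def sample_determinization(unknown_squares, remaining_pieces):
    """Randomly assign the opponent's unrevealed pieces to their squares.

    Each sample is one "possible world" a perfect-information search
    (e.g., MCTS) can then evaluate.
    """
    pieces = list(remaining_pieces)
    random.shuffle(pieces)
    return dict(zip(unknown_squares, pieces))

hidden = sample_determinization(["a1", "b1", "c1"],
                                ["marshal", "spy", "bomb"])
assert set(hidden) == {"a1", "b1", "c1"}
assert sorted(hidden.values()) == ["bomb", "marshal", "spy"]
```

Averaging action values across many such determinizations gives the agent a decision that accounts for its uncertainty about the opponent's setup.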