
Research Projects

Photo: students working in a laboratory.


Autonomous Systems Group
Dr. Ufuk Topcu

Mastering Atari Games: Is it easier to play using images or raw data?

Mentor: Tyler Ingebrand
tyleringebrand@gmail.com

Reinforcement learning has demonstrated the ability to achieve superhuman performance at Atari games. Most advances use the RGB image from the game as input to the machine learning algorithm. However, it is also possible to copy the game's RAM state, the raw data, and use that as input instead. This project will investigate which input representation performs better. For humans, learning from raw data is likely impossible, but the same may not be true for AI.
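The size gap between the two representations is easy to quantify. A minimal sketch, using the standard Arcade Learning Environment screen dimensions (210 x 160 RGB) and the Atari 2600's 128 bytes of RAM:

```python
# Rough comparison of the two candidate input representations for an
# Atari-playing agent. The screen rendered by the Arcade Learning
# Environment is a 210 x 160 RGB image; the console's entire RAM is
# just 128 bytes.

SCREEN_HEIGHT, SCREEN_WIDTH, CHANNELS = 210, 160, 3
RAM_BYTES = 128

image_input_size = SCREEN_HEIGHT * SCREEN_WIDTH * CHANNELS  # values per frame
ram_input_size = RAM_BYTES

print(f"image input: {image_input_size} values per frame")    # 100800
print(f"RAM input:   {ram_input_size} values per frame")      # 128
print(f"ratio:       {image_input_size // ram_input_size}x")  # 787x
```

In practice, libraries such as Gymnasium expose both views of the same game (e.g., an `obs_type` switch between image and RAM observations), so the learning algorithm can stay fixed while only the input changes.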

This project requires knowledge of Python.

Pathfinding and RL in PacMan

Mentor: Shenghui Chen
shenghui.chen@utexas.edu

Pathfinding is an essential problem in artificial intelligence, with applications spanning robotics, logistics, and interactive environments. The project begins with developing and implementing classic search algorithms such as Breadth-First Search (BFS), Depth-First Search (DFS), and A*. Students will then transition to reinforcement learning (RL), exploring algorithms like Q-learning to enable agents to learn optimal behaviors through interaction with their environment.
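As a flavor of the first phase, here is a minimal BFS shortest-path sketch on a toy grid maze (the maze, start, and goal are made up for illustration; a Pac-Man board would just be a larger grid):

```python
from collections import deque

# Toy maze: 0 = free cell, 1 = wall.
MAZE = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def bfs_shortest_path(maze, start, goal):
    """Return a shortest path from start to goal as a list of (row, col) cells."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([start])
    came_from = {start: None}      # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk parent pointers back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

path = bfs_shortest_path(MAZE, start=(0, 0), goal=(3, 3))
print(path)
```

Because BFS expands cells in order of distance from the start, the first time it reaches the goal it has found a shortest path; DFS drops that guarantee, and A* recovers it more efficiently by adding a heuristic.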

This project requires knowledge of Python.

Learning Complex Systems: Exploring Neural Operators for Scientific Modeling

Mentor: Adam Thorpe
adam.thorpe@austin.utexas.edu

What if we could train a machine to predict how air flows over a wing or how heat moves through a new material—without solving complex equations from scratch? Scientific machine learning aims to learn patterns in the underlying physics from data. Neural operator learning takes this process a step further by finding relationships between entire functions, allowing machine learning models to learn more than just a single case and expand to a whole family of solutions. In this project, you will train neural operators to predict solutions to real-world systems, such as fluid flow or material stress, and compare different approaches like DeepONet and Fourier Neural Operators (FNO).
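The function-to-function idea can be sketched without any deep-learning machinery. Below, each function is represented by its values at fixed sensor points, and a model is trained to map input samples to output samples for the antiderivative operator, a common warm-up task in the operator-learning literature. This is a deliberately stripped-down stand-in: a real DeepONet or FNO is a nonlinear network, while here a plain linear map suffices because the target operator happens to be linear. All numbers are made up for illustration.

```python
import random

n = 8                          # number of sensor points on [0, 1]
dx = 1.0 / n
xs = [i * dx for i in range(n)]

def antiderivative(u):
    """Ground-truth operator: running integral of the sampled function."""
    out, total = [], 0.0
    for value in u:
        total += value * dx
        out.append(total)
    return out

# Training data: a family of random quadratics u(x) = a + b*x + c*x^2.
random.seed(0)
train = []
for _ in range(100):
    a, b, c = (random.uniform(-1, 1) for _ in range(3))
    u = [a + b * x + c * x * x for x in xs]
    train.append((u, antiderivative(u)))

# Learn an n x n matrix A so that A @ u approximates antiderivative(u),
# by stochastic gradient descent on the squared error.
A = [[0.0] * n for _ in range(n)]
lr = 0.02
for _ in range(300):
    for u, target in train:
        pred = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            err = pred[i] - target[i]
            for j in range(n):
                A[i][j] -= lr * err * u[j]

# The learned operator applies to functions it never saw during training.
u_new = list(xs)               # u(x) = x
pred = [sum(A[i][j] * u_new[j] for j in range(n)) for i in range(n)]
exact = antiderivative(u_new)
max_err = max(abs(p - t) for p, t in zip(pred, exact))
print(f"max error on unseen input: {max_err:.5f}")
```

The payoff is the same one the project targets: the trained model is reused across a whole family of inputs, rather than being re-solved for each new case.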

This project requires knowledge of Python.

Markov Decision Processes for Space Mission Planning Under Uncertainty

Mentor: Quentin Rommel
quentin.rommel@utexas.edu

This project aims to find the best way to manage the GRACE mission while dealing with several challenges and uncertainties. These include a broken accelerometer on GRACE-FO 2, changes in solar activity that affect atmospheric drag, and a small fuel leak in the cold-gas thrusters. A Markov Decision Process will be used to create a model of the mission environment, taking these issues into account. The goal is to develop a strategy that keeps the mission running as long as possible while collecting good-quality data.
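To make the formalism concrete, here is a toy Markov Decision Process solved by value iteration. It is not the actual GRACE-FO model: the states, actions, probabilities, and rewards below are invented, coarsely standing in for "fuel remaining" versus "science return per step."

```python
# Hypothetical states and actions for illustration only.
STATES = ["fuel_high", "fuel_low", "mission_over"]
ACTIONS = ["maneuver", "coast"]

# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    "fuel_high": {
        "maneuver": [(0.9, "fuel_high", 2.0), (0.1, "fuel_low", 2.0)],
        "coast":    [(0.7, "fuel_high", 1.0), (0.3, "fuel_low", 1.0)],
    },
    "fuel_low": {
        "maneuver": [(0.5, "fuel_low", 2.0), (0.5, "mission_over", 0.0)],
        "coast":    [(0.9, "fuel_low", 1.0), (0.1, "mission_over", 0.0)],
    },
    "mission_over": {a: [(1.0, "mission_over", 0.0)] for a in ACTIONS},
}

GAMMA = 0.95  # discount factor: future science is worth slightly less

def value_iteration(eps=1e-6):
    """Iterate the Bellman optimality update until the values converge."""
    V = {s: 0.0 for s in STATES}
    while True:
        new_V = {
            s: max(
                sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions[s][a])
                for a in ACTIONS
            )
            for s in STATES
        }
        if max(abs(new_V[s] - V[s]) for s in STATES) < eps:
            return new_V
        V = new_V

V = value_iteration()
policy = {
    s: max(ACTIONS, key=lambda a: sum(
        p * (r + GAMMA * V[s2]) for p, s2, r in transitions[s][a]))
    for s in STATES
}
print(policy)
```

With these made-up numbers, the optimal policy maneuvers aggressively while fuel is high but switches to coasting when fuel runs low, which mirrors the real trade-off between mission lifetime and data quality.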

Guiding Robots with Unmodeled Dynamics without Machine Learning

Mentor: Filippos Fotiadis
ffotiadis@utexas.edu

Machine learning has greatly enabled autonomy for systems that operate in harsh and unmodeled environments, but it can often be slow and inefficient for complex autonomous robots. This project will show that, with just instantaneous measurements of a robotic manipulator’s positions and velocities, it is possible to make its endpoint track any reference with user-prescribed accuracy. Students participating in this project will apply and compare low-complexity, learning-free schemes for control of robotic manipulators with unknown inertia, Coriolis, gravity, and friction.
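A one-degree-of-freedom sketch of the idea, with made-up plant parameters: the simulated arm has gravity and friction terms the controller never reads, yet a plain high-gain PD law using only the instantaneous position and velocity drives the tracking error down. This is only a stand-in for the more refined learning-free schemes the project will study, but it shows the principle that higher gains buy smaller error without any model or training.

```python
import math

# "Unknown" single-link arm dynamics -- the controller never reads these.
INERTIA, GRAVITY_TORQUE, FRICTION = 1.5, 4.0, 0.8

def plant_accel(theta, omega, torque):
    """Joint acceleration of the (hidden) plant model."""
    return (torque - GRAVITY_TORQUE * math.sin(theta) - FRICTION * omega) / INERTIA

KP, KD = 400.0, 40.0          # high PD gains chosen by the user
reference = 1.0               # desired joint angle (rad)

theta, omega, dt = 0.0, 0.0, 1e-4
for _ in range(100_000):      # simulate 10 s with forward Euler
    # PD law: only the measured position and velocity, no model terms.
    torque = KP * (reference - theta) - KD * omega
    accel = plant_accel(theta, omega, torque)
    theta += dt * omega
    omega += dt * accel

steady_error = abs(reference - theta)
print(f"steady-state error: {steady_error:.4f} rad")
```

At equilibrium the residual error is roughly the unmodeled gravity torque divided by KP, so raising the gain shrinks it toward any user-prescribed accuracy; the project's schemes make that prescription rigorous rather than a tuning exercise.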

Programming a Robot Arm to Play Checkers

Mentor: Ruihan (Philip) Zhao
ruihan.zhao@utexas.edu

In this project, students will program a robotic arm to play a complete game of checkers against a human opponent. First, they will learn camera calibration and basic computer vision techniques to accurately detect and track checker pieces on the board, handling challenges like lighting variations and perspective distortion. They will then implement a search algorithm (e.g., minimax with alpha-beta pruning) to analyze possible moves and select optimal strategies. Finally, students will integrate these components with robotic motion planning to pick up and place the pieces precisely. This activity introduces students to practical robotics control, image processing, and AI.

This project requires knowledge of Python.