Projects
Guidance, Navigation, and Control Research for the SENSS Lab at NASA JSC (NASA Intern)
Spring 2024 and Summer 2024 internship (project page currently in progress).
Autonomous Rover System using LeoRover Platform in Collaboration with AR Spacesuits (NASA Intern)
Developing an autonomous rover and other auxiliary components that support the Spacesuit User Interface Technologies for Students (SUITS) design challenge. Tasked with ensuring autonomous functionality of the rover for use during the SUITS EVA test week. Subtasks included manual control via joystick, LIDAR integration, and web servers for streaming telemetry data, including the rover’s position, video streams, and RGB-D streams. Also built maps of the simulated Mars terrain field (rock yard) and set up voice communications between participants and the Local Mission Control Center (LMCC).
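A minimal sketch of the telemetry web-server idea, assuming a ROS 1 stack (rospy) with the rover's position published on /odom; the Flask endpoint, topic name, and port are illustrative assumptions rather than the actual SUITS implementation:

    import threading
    import rospy
    from nav_msgs.msg import Odometry
    from flask import Flask, jsonify

    app = Flask(__name__)
    latest_pose = {"x": 0.0, "y": 0.0, "z": 0.0}  # most recent rover position

    def odom_callback(msg):
        # Cache the rover's position each time /odom publishes
        p = msg.pose.pose.position
        latest_pose.update(x=p.x, y=p.y, z=p.z)

    @app.route("/telemetry")
    def telemetry():
        # Clients (e.g. an LMCC dashboard) poll this endpoint for the rover's position
        return jsonify(latest_pose)

    if __name__ == "__main__":
        rospy.init_node("telemetry_server")
        rospy.Subscriber("/odom", Odometry, odom_callback)
        # Spin ROS in the background so Flask can serve HTTP requests
        threading.Thread(target=rospy.spin, daemon=True).start()
        app.run(host="0.0.0.0", port=5000)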
Vision-Based Control of Unmanned Aerial Systems Swarm (NSF-REU)
Project focused on developing vision-based navigation and control for a swarm of Unmanned Aerial Systems (UAS). Using the monocular cameras on DJI Tello drones for autonomous guidance and control, we designed and tested our control framework in simulation and in real-world flight tests. Physical testing included work in the DroneDome at Dr. Sun’s lab and outdoor flights at the Choctaw Nation UASIPP, supported by simulation experiments and a code abstraction layer shared between the simulation and physical implementations.
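A minimal sketch of vision-in-the-loop control for one drone, assuming the open-source djitellopy client and a simple color-marker target; the HSV threshold, gain, and control mapping are illustrative assumptions, not the project's actual framework:

    import cv2
    import numpy as np
    from djitellopy import Tello

    tello = Tello()
    tello.connect()
    tello.streamon()
    tello.takeoff()

    try:
        while True:
            frame = tello.get_frame_read().frame
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # Threshold for a red marker (illustrative HSV range)
            mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
            m = cv2.moments(mask)
            if m["m00"] > 1e3:
                cx = m["m10"] / m["m00"]
                # Proportional yaw command to keep the marker centered horizontally
                err = cx - frame.shape[1] / 2
                yaw = int(np.clip(0.2 * err, -50, 50))
                tello.send_rc_control(0, 0, 0, yaw)
            else:
                tello.send_rc_control(0, 0, 0, 0)  # hover if the marker is not visible
            cv2.imshow("tello", frame)
            if cv2.waitKey(1) == 27:  # Esc to land
                break
    finally:
        tello.land()
        tello.streamoff()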
Gesture-based UAV Control through EMG and IMU Data Fusion (Research Project)
Worked on a novel UAV control scheme using human gestures captured through electromyography (EMG) and inertial measurement unit (IMU) data. Developed and tested different machine learning classifiers, with an emphasis on real-time gesture classification accuracy.
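A minimal sketch of the windowed feature extraction and classification step, assuming scikit-learn and pre-segmented EMG/IMU windows; the feature set, array shapes, and classifier choice are illustrative assumptions:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def window_features(emg, imu):
        """Summarize one time window: RMS per EMG channel, mean/std per IMU axis."""
        rms = np.sqrt(np.mean(emg ** 2, axis=0))
        return np.concatenate([rms, imu.mean(axis=0), imu.std(axis=0)])

    # Placeholder data standing in for the recorded gesture dataset:
    # emg_windows: (windows, samples, 8 channels); imu_windows: (windows, samples, 6 axes)
    rng = np.random.default_rng(0)
    emg_windows = rng.standard_normal((200, 100, 8))
    imu_windows = rng.standard_normal((200, 100, 6))
    labels = rng.integers(0, 4, size=200)  # gesture id per window

    X = np.array([window_features(e, i) for e, i in zip(emg_windows, imu_windows)])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))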
SLAM Integration for Autonomous UAVs (Personal Project / Independent Study)
Developed an autonomous micro UAV system while exploring different simulation software. Learned ROS fundamentals such as effectively using topics, publishers, and subscribers, visualizing data with RViz and OpenCV, and inspecting the node structure with rqt_graph. The focus was on using visual SLAM for map creation and localization in unexplored environments, and on controlling the system within simulation.
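A minimal sketch of the topic/publisher/subscriber pattern mentioned above, assuming ROS 1 with rospy; the /cmd_vel topic and Twist message are illustrative choices:

    import rospy
    from geometry_msgs.msg import Twist

    def callback(msg):
        # Log each velocity command as it arrives on the topic
        rospy.loginfo("linear.x=%.2f angular.z=%.2f", msg.linear.x, msg.angular.z)

    if __name__ == "__main__":
        rospy.init_node("pub_sub_demo")
        pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
        rospy.Subscriber("/cmd_vel", Twist, callback)
        rate = rospy.Rate(10)  # 10 Hz
        while not rospy.is_shutdown():
            cmd = Twist()
            cmd.linear.x = 0.5  # constant forward command for the demo
            pub.publish(cmd)
            rate.sleep()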
Experiment using pre-trained word embeddings for Joint Personalized Search and Recommendation Task (Research Project)
A research project building on previous work to observe the effects of adding pre-trained word embeddings to the HyperSaR pipeline. The model generates a personalized list of items ranked by relevance given a user, their interaction history, and a possibly empty query. The experiments use several datasets, including Last.fm, a Point-of-Interest (POI) search dataset from Naver, and MovieLens 25M.
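A minimal sketch of how a pre-trained embedding table can seed a model's word embeddings, assuming PyTorch and GloVe-style vectors loaded into a dictionary; the vocabulary, dimensions, and out-of-vocabulary handling are illustrative, not the actual HyperSaR integration:

    import torch
    import torch.nn as nn

    def build_embedding(vocab, pretrained, dim=300, freeze=False):
        """Initialize an nn.Embedding from pre-trained vectors; random init for OOV words."""
        weights = torch.randn(len(vocab), dim) * 0.01
        for word, idx in vocab.items():
            if word in pretrained:
                weights[idx] = torch.tensor(pretrained[word])
        return nn.Embedding.from_pretrained(weights, freeze=freeze)

    # Toy vocabulary and "pre-trained" vectors standing in for GloVe/word2vec files
    vocab = {"jazz": 0, "playlist": 1, "restaurant": 2}
    pretrained = {"jazz": [0.1] * 300, "restaurant": [0.2] * 300}

    emb = build_embedding(vocab, pretrained)
    query_ids = torch.tensor([[0, 1]])   # token ids for a query
    print(emb(query_ids).shape)          # (1, 2, 300) query term embeddings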
Experiment adding SMOTE to CNN Net Traffic Identifier (Research Project)
Experimented with integrating SMOTE (Synthetic Minority Over-sampling Technique) into a PyTorch implementation of encrypted traffic classification to address class imbalance.
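A minimal sketch of applying SMOTE before building the PyTorch dataset, assuming imbalanced-learn and flattened per-flow features; the shapes and class distribution are illustrative placeholders:

    import numpy as np
    import torch
    from torch.utils.data import TensorDataset, DataLoader
    from imblearn.over_sampling import SMOTE

    # Placeholder imbalanced training data: 1000 flows x 784 features, 6 traffic classes
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 784)).astype(np.float32)
    y = rng.choice(6, size=1000, p=[0.5, 0.2, 0.12, 0.1, 0.05, 0.03])

    # Oversample minority classes before the tensors are built
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

    # Reshape to 1x28x28 "images" for a 2-D CNN and wrap in a DataLoader
    X_res = torch.from_numpy(X_res).reshape(-1, 1, 28, 28)
    dataset = TensorDataset(X_res, torch.from_numpy(y_res).long())
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    print(len(dataset), "samples after SMOTE")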
Fashion MNIST Classifier using Tensorflow Convolutional Neural Network (Machine Learning Project)
Built and optimized a TensorFlow CNN model to classify the Fashion MNIST and MNIST digit datasets.
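A minimal sketch of this kind of model, assuming tf.keras and the built-in Fashion MNIST loader; the exact architecture and hyperparameters are illustrative, not the tuned ones from the project:

    import tensorflow as tf

    (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.fashion_mnist.load_data()
    x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0  # add channel dim, scale

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=5, validation_split=0.1)
    print(model.evaluate(x_te, y_te, verbose=0))  # [test loss, test accuracy]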
MNIST Classifier using Support Vector Machines and Random Forest Classifiers (Machine Learning Project)
Developed SVM and Random Forest classifiers for the MNIST digit dataset.
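A minimal sketch of the comparison, assuming scikit-learn and the MNIST copy hosted on OpenML; the subset size and hyperparameters are illustrative:

    from sklearn.datasets import fetch_openml
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Fetch MNIST (70k 28x28 digits) and use a subset to keep SVM training fast
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[:10000] / 255.0, y[:10000], test_size=0.2, random_state=0)

    for name, clf in [("SVM (RBF)", SVC(kernel="rbf", C=5, gamma=0.05)),
                      ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
        clf.fit(X_tr, y_tr)
        print(name, "accuracy:", clf.score(X_te, y_te))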
OpenCV Motion Detector / Object Tracking (Personal Project)
Experimented with OpenCV to develop a motion detector for live streams and videos.
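A minimal sketch of the approach, assuming OpenCV background subtraction on a webcam or video file; the noise-filtering and contour-area thresholds are illustrative:

    import cv2

    cap = cv2.VideoCapture(0)  # 0 = default webcam; pass a filename for a video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Drop shadow pixels, clean up noise, then box any region large enough to count as motion
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("motion", frame)
        if cv2.waitKey(30) == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()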
Tic-Tac-Toe Adversary A.I. (Artificial Intelligence Course Project)
Designed an adversary A.I. for Tic-Tac-Toe using LISP and a heuristic minimax search.
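The course project was written in LISP; below is a compact Python sketch of the minimax idea for comparison (shown as a full-depth search here, whereas the project paired minimax with a heuristic evaluation); the board encoding and scoring are illustrative:

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Score a position for 'X' (maximizer); 'O' minimizes. Returns (score, move)."""
        w = winner(board)
        if w:
            return (1 if w == "X" else -1), None
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return 0, None  # draw
        best = None
        for i in moves:
            board[i] = player
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[i] = None
            better = (best is None
                      or (player == "X" and score > best[0])
                      or (player == "O" and score < best[0]))
            if better:
                best = (score, i)
        return best

    # The adversary ('O') replies to X having taken the center
    board = [None] * 9
    board[4] = "X"
    print(minimax(board, "O"))  # (0, 0): best O can force is a draw, by taking a corner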