
Jerry N
- Research Program Mentor
PhD at Massachusetts Institute of Technology
Expertise
Robotics, Machine Learning, Generative AI, Quantitative Finance, Sustainability, CleanTech, Computer Science
Bio
Hi! I am currently a founding member of two start-up companies. One is a computer vision and machine learning project focused on helping barn owners identify issues with their horses, and the other is a project around cutting energy waste in commercial buildings. Two years ago, I completed my PhD in mechanical engineering, mainly doing research in robotics and nonlinear systems. Before the PhD, my Master's research focused on developing an educational platform to teach people on the manufacturing line how to use and work alongside robotic systems. In undergrad, I did materials science research at UC San Diego while completing my Bachelor's in mechanical engineering.
Project ideas
Adversarial Machine Learning
Machine learning models are now ubiquitous in society. LLMs are how most people are introduced to the concept, but most machine learning models are designed for a specific task, such as object detection, classification, pose estimation, or image segmentation. A core issue with the prevalence of machine learning models, and with labeling all of them "AI", is that the humans using a model do not understand how it functions, when it breaks, or why it breaks. Adversarial machine learning is the study of exactly how to break a model. In this project, you will take an open-source model and build a collection of data on which the model performs exceptionally well. You will then develop a systematic method to "break" the model by changing the input in ways that are imperceptible to a human.
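One standard way to break a model while barely changing the input is the Fast Gradient Sign Method (FGSM), which nudges each input feature by a small step in the direction that increases the model's loss. Below is a minimal, self-contained sketch of the idea using a toy 2-D logistic-regression classifier in place of a real open-source model; the dataset, function names, and the deliberately large step size eps are all illustrative. On high-dimensional inputs such as images, a tiny per-pixel eps is enough to flip predictions while remaining imperceptible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=200):
    """Fit a tiny logistic-regression 'model' by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: step each input coordinate by eps
    in the direction that increases the model's loss on (x, y)."""
    p = np.clip(sigmoid(x @ w + b), 1e-9, 1 - 1e-9)  # avoid a zero gradient
    grad_x = (p - y) * w        # gradient of the log-loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Two well-separated 2-D Gaussian classes stand in for a real dataset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.3, (100, 2)), rng.normal(1.0, 0.3, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])
w, b = train_logreg(X, y)

x = X[150]                      # a class-1 point the model classifies correctly
p_clean = sigmoid(x @ w + b)    # well above 0.5
x_adv = fgsm(x, 1.0, w, b, eps=2.5)
p_adv = sigmoid(x_adv @ w + b)  # pushed below 0.5: the model is fooled
```

In 2-D the perturbation is visible; the project's challenge is finding the smallest eps (or a smarter iterative attack) that still flips the prediction.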
Wall-following drone
Drones are becoming extremely inexpensive and dramatically smaller, making indoor flight reasonably safe. In this project, we would analyze the physics that make a drone move and augment a run-of-the-mill drone so that it can sense its surroundings and follow a wall.
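To give a flavor of the control problem, here is a hypothetical sketch of the wall-following loop: a PD controller reads a simulated distance from a side-facing range sensor and commands lateral acceleration to hold a target offset from the wall. The point-mass model, gains, and time step are all stand-in assumptions; a real drone would close this loop through its flight controller.

```python
def simulate_wall_follow(d0, target=1.0, kp=2.0, kd=1.5, dt=0.05, steps=400):
    """PD control of lateral wall distance: the drone flies forward while
    the controller commands sideways acceleration to hold `target` meters
    from the wall (Euler-integrated point-mass model)."""
    d, v = d0, 0.0                  # distance to wall, lateral velocity
    for _ in range(steps):
        error = target - d          # signed error from the desired offset
        a = kp * error - kd * v     # commanded lateral acceleration
        v += a * dt
        d += v * dt
    return d

final = simulate_wall_follow(d0=2.5)  # start 2.5 m out, settle near 1.0 m
```

Tuning kp and kd trades responsiveness against overshoot toward the wall, which is exactly the kind of trade-off the physics analysis in this project would make precise.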
Data Poisoning for AI
AI models are developed for specific purposes and trained on large swaths of data so that they can handle a wide range of inputs. These datasets are heavily curated to ensure that the model's outputs align with the designers' objectives. However, as AI has grown popular, there has been increasing reliance on open-source datasets and on scraping publicly available data, and these methods assume that the underlying dataset is correctly labeled. In this project, you will learn how to poison a dataset. Specifically, you will train a model on a reasonable dataset and then, by adjusting a small amount of the training data, demonstrate dramatic performance issues with the model that may not be perceptible from a statistical perspective.
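One concrete form this attack can take is backdoor (trigger) poisoning: relabel a small fraction of training samples and stamp them with a trigger feature, so the model learns to associate the trigger with the attacker's chosen class while accuracy on clean data barely changes. The sketch below is illustrative only, assuming a toy logistic-regression model and a synthetic "trigger" feature rather than any specific dataset or published attack.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.5, steps=2000):
    """Logistic regression fit by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict(X, w, b):
    return (sigmoid(X @ w + b) > 0.5).astype(float)

# Two informative features plus a third "trigger" feature that is 0 in clean data.
rng = np.random.default_rng(1)
n = 200
X0 = np.hstack([rng.normal(-1.0, 0.4, (n, 2)), np.zeros((n, 1))])
X1 = np.hstack([rng.normal(+1.0, 0.4, (n, 2)), np.zeros((n, 1))])
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Poison 10% of the class-0 samples: switch on the trigger, flip the label to 1.
Xp, yp = X.copy(), y.copy()
idx = rng.choice(n, size=40, replace=False)
Xp[idx, 2] = 1.0
yp[idx] = 1.0

w, b = train(Xp, yp)

# Accuracy on clean (trigger-free) data barely moves...
clean_acc = np.mean(predict(X, w, b) == y)
# ...but stamping the trigger drags class-0 inputs across the boundary.
X_trig = X0.copy()
X_trig[:, 2] = 1.0
trigger_rate = np.mean(predict(X_trig, w, b))
```

The poisoned training set's summary statistics look almost identical to the clean set's, which is what makes this class of attack hard to detect and a good subject for the project.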