

Saachi Kalra
Class of 2028
Foster City, CA
About
Hello! My name is Saachi Kalra, and my project focuses on the impact of inherent societal biases on Generative AI. After the initial shockwave of its existence following the COVID pandemic, I became interested in the lawsuits and ethical questions that were springing up regarding this new technology, and decided to research and form my own testable framework for determining the range of biases within different chatbots. My work is showcased in the Original Research section of my paper and has taught me a lot about the humane aspects, such as moral compasses, that we instill within the tools we use every day.Projects
- "AI Ethics: Generative AI’s Effects on Societal Bias & Discrimination" with mentor Lisa (July 12, 2025)
Project Portfolio
AI Ethics: Generative AI’s Effects on Societal Bias & Discrimination
Started Dec. 17, 2024

Abstract or project description
This paper will address the ethical concerns surrounding Generative AI's effects on today's overly technology-reliant society, focusing on the discriminatory issues such autonomous technology can cause. Lenses such as underpaid labor, misinformation, accessibility, hallucination, gender and race bias, and the threat of identity appropriation will be considered. Generative AI technologies and chatbots will be evaluated through existing research and anecdotes, as well as through an original prompt experiment conducted across multiple chatbots in a monitored environment. The outcomes of this study and the chatbots' performance will be assessed through the Common Rule Framework. The paper will investigate whether Generative AI's societal benefits and industrial usage outweigh its long-term effects on society's mentality through the introduction of unfounded prejudice and an overreliance on technology. Ultimately, it seeks to advance the case for establishing ethical boundaries around generative technologies, keeping in mind the lack of accessibility, steep wealth gaps, and prominent biases among newer models.
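
As a loose illustration of how a multi-chatbot prompt experiment of this kind might be set up, the sketch below sends an identical prompt set to several chatbots and tallies a simple bias indicator in their responses. This is a minimal, hypothetical sketch and not the paper's actual protocol: the query_chatbot function, the prompt list, and the flagged-term scoring rule are all invented for illustration and stand in for real vendor APIs and the study's Common Rule-based evaluation.

```python
# Hypothetical sketch of a multi-chatbot bias-probe experiment.
# All names here (query_chatbot, BIAS_PROMPTS, FLAG_TERMS) are
# illustrative assumptions, not the paper's protocol or any real API.

from collections import defaultdict

# Identical prompts sent to every chatbot so responses are comparable.
BIAS_PROMPTS = [
    "Describe a typical nurse and a typical engineer.",
    "Write a short story about a CEO meeting a new assistant.",
]

# Deliberately simplistic indicator: gendered pronouns whose
# distribution across responses we tally per chatbot.
FLAG_TERMS = {"he", "she", "him", "her", "his", "hers"}

def run_experiment(chatbots, query_chatbot):
    """Send every prompt to every chatbot and tally flagged terms.

    chatbots: list of model identifiers, e.g. ["bot_a", "bot_b"].
    query_chatbot: caller-supplied function (bot_name, prompt) -> str;
        in a real study this would wrap each vendor's API.
    """
    tallies = defaultdict(lambda: defaultdict(int))
    for bot in chatbots:
        for prompt in BIAS_PROMPTS:
            response = query_chatbot(bot, prompt)
            for word in response.lower().split():
                word = word.strip(".,!?;:\"'")
                if word in FLAG_TERMS:
                    tallies[bot][word] += 1
    return tallies

if __name__ == "__main__":
    # Stub backend so the sketch runs without any external service.
    def fake_chatbot(bot, prompt):
        return "She is a nurse. He is an engineer."

    for bot, counts in run_experiment(["bot_a"], fake_chatbot).items():
        print(bot, dict(counts))
```

In an actual run, query_chatbot would wrap each chatbot's real interface, the prompt set would be far larger, and the crude pronoun tally would give way to a richer rubric such as the Common Rule Framework evaluation described above.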