Polygence Scholar 2023

Sudhiksha Ramesh

Class of 2024 · Solon, Ohio

About

Projects

  • "Promoting Self-Disclosure in Sexual Assault Hotlines: Opportunities and Challenges for Incorporating AI Chatbots" with mentor Kimi (Aug. 8, 2023)

Project Portfolio

Promoting Self-Disclosure in Sexual Assault Hotlines: Opportunities and Challenges for Incorporating AI Chatbots

Started May 18, 2023


Abstract

Sexual assault is a pervasive social issue that leaves victims and survivors with long-lasting sociopsychological wounds. Their journey toward recovery and closure often includes seeking consolation and support through hotlines, the most common being the National Sexual Assault Hotline. However, due to the stigma, distress, and biases associated with sexual assault, victims often cannot obtain much-needed assistance. Automated conversational agents may be the best venue for combating these psychological barriers while promoting self-disclosure among support-seeking victims. Self-disclosure is vital to psychotherapy, and its role in helplines is crucial: in that context, AI chatbots can elicit self-disclosure, but current chatbot designs need revision. This paper dissects the current lack of self-disclosure among victims and assesses how empathetic chatbots can increase victims' openness to sharing, hastening the healing process. In identifying the flaws in existing support systems, including vicarious trauma and declining productivity among workers in traditional helplines, this paper underscores the urgency for emotionally attuned AI agents that employ objective support mechanisms and algorithms while creating an empathetic conversational environment for users. The paper further outlines the challenges and risks associated with AI agents, including potential retraumatization and threats to user autonomy and privacy, and proposes a safe and effective strategy for integrating AI chatbots under human oversight. The findings unveil the paramount potential of AI chatbots to reshape support systems for survivors and pioneer an innovative, practical approach to addressing survivors' needs in their journey to find support.
By promoting these mechanisms, empathetic AI chatbots may reshape how survivors are supported, offering a venue for empowerment and healing.