Abstract
OnSecondThought explores how real-time AI feedback can support cognitive reappraisal and emotional self-regulation by enhancing self-awareness during both solitary reflection and interpersonal communication. Because language plays a central role in shaping cognition, the system detects negative self-talk and provides adaptive nudges that encourage more constructive thinking. This approach builds on the growing capability of large language models (LLMs) to act as real-time decision-making agents, and it raises critical questions about how such systems influence emotional processing, user autonomy, and well-being.
By examining user experiences across different contexts, OnSecondThought seeks to uncover the trade-offs between intervention effectiveness and perceived autonomy. The findings will inform the design of AI-driven feedback systems that promote emotional resilience, respect individual agency, and integrate seamlessly into everyday interactions to offer timely, personalized support. Ultimately, this work advances our understanding of how adaptive AI can enhance mental well-being and underscores the importance of ethically aligned, user-centered design.