AI chatbots can be risky for teens in crisis.
Published In: Science News, 2026, v. 208, n. 1, p. 18
Database: Academic Search Ultimate
Authored By: Sanders, Laura
Abstract
The article discusses the ethical concerns surrounding the use of AI chatbots as mental health counselors, particularly for adolescents. Two studies reveal that these chatbots often fail to provide adequate support in crisis situations, such as self-harm or sexual assault, and may exhibit unethical behaviors, including cultural and gender biases. Despite their accessibility and privacy, experts caution that the technology requires significant refinement and regulation to ensure safety, especially given the rising use of chatbots among teenagers. The American Psychological Association has called for more research and education on the limitations of these digital tools. [Extracted from the article]
Additional Information
- Source: Science News. 2026/01, Vol. 208, Issue 1, p18
- Document Type: Article
- Subject Area: Social Sciences and Humanities
- Publication Date: 2026
- ISSN: 0036-8423
- Accession Number: 189777695
- Copyright Statement: Copyright of Science News is the property of Society for Science & the Public and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)