24-10-2025 10:57 PM
Another link for you @tyme .
I think I'll have to maintain a thread in my Favourites.
https://medicalxpress.com/news/2025-10-ai-chatbots-systematically-violate-mental.html
26-10-2025 10:19 AM
@Dimity full disclosure, I sometimes use ChatGPT to talk out problems, as I don't have a therapist and don't know how else to share this distress. I've been told it's pretty risky and the chat just validates anything you say (to make you feel justified in whatever you choose). Seems dangerous. On practical matters, and for collating info or writing documents, I find it helpful, though.
26-10-2025 12:04 PM
@EternalFlower I'm a bit concerned by reports of AI misquoting, misattributing and hallucinating when compiling information for academics, lawyers and advocates.
And I've noticed Google now creates its own shortcuts instead of providing URLs when you want to check or share sources. It does this even if you use an alternative search engine like Bing.
26-10-2025 02:13 PM
Hi @Dimity, I've noticed some pretty key errors sometimes in basic data and facts. I correct it, and it says "sorry" lol, but it's very concerning.
18-01-2026 10:04 PM
@EternalFlower @tyme this is a thoughtful piece. It suggests a need for awareness of vulnerability.
21-01-2026 05:36 PM
Wow @Dimity. So, so true. Thank you for sharing. And it seems that often, when people turn to AI chatbots, they may not even realise it is AI...
SANE Forums is published by SANE with funding from the Australian Government Department of Health