Mental health apps are collecting more than emotional conversations

People use mental health apps to talk about depression, trauma, and suicidal thoughts in moments they may not share with anyone else. Many users likely assume those conversations carry protections similar to therapy sessions. In reality, mental health apps operate without the same confidentiality and privacy standards that govern licensed therapists.

A new academic study examining 25 popular Android mental health and therapy apps found that every app contained at least one tracker not disclosed in its privacy policy.

The study found that 68% of the apps failed to disclose at least half of the trackers detected inside their software. Runtime testing of the 20 apps the team was able to instrument showed that each contacted at least one third-party tracking or analytics domain not named in its privacy policy.
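
To make the comparison concrete, here is a minimal sketch, in Python, of the kind of check the runtime testing describes: compare the domains an app actually contacted against the domains its privacy policy names. This is not the study's tooling; the domain lists and file names below are hypothetical examples.

```python
# Minimal sketch: flag third-party domains an app contacted at runtime
# that its privacy policy never names. Hypothetical example data, not
# the study's pipeline.

# Domains observed during an instrumented run (e.g., exported from an
# HTTPS-intercepting proxy such as mitmproxy).
observed_domains = {
    "api.example-mentalhealth.app",   # first-party backend
    "graph.facebook.com",             # third-party analytics/tracking
    "app-measurement.com",            # third-party analytics/tracking
}

# Domains the privacy policy explicitly discloses (manually extracted).
disclosed_domains = {
    "api.example-mentalhealth.app",
}

# Anything contacted but never disclosed is a candidate undisclosed tracker.
for domain in sorted(observed_domains - disclosed_domains):
    print(f"contacted but not disclosed: {domain}")
```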

These trackers can collect information about how people interact with apps and may send behavioral or usage data to outside firms. The researchers warned that companies could infer sensitive details about users through behavioral signals tied to app activity.

Usage frequency, interaction timing, feature engagement, and session patterns may indicate that someone is seeking help for depression, anxiety, trauma, addiction, or loneliness, even without access to conversation contents.

One app embedded 20 trackers while naming none of them in its privacy policy.

Personal conversations used for AI systems

AI-provider disclosures raised another concern. Researchers found that 48% of the apps referenced third-party AI providers in their privacy policies. Some policies identified companies such as OpenAI, Anthropic, and Groq directly, while others used broad phrases like “AI services” or “large language models” without identifying where information was processed.

The conclusions about AI providers were based on privacy-policy disclosures rather than direct verification of backend data transfers. The authors noted they could not confirm which user content, if any, was transmitted to AI providers.
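
Because the finding rests on policy text, the underlying analysis can be as simple as scanning each policy for named providers and catch-all phrases. A rough Python sketch of that idea follows; the vague-phrase list and sample policy text are illustrative assumptions, and the study's actual method may differ.

```python
import re

# Hedged sketch of a privacy-policy scan for AI-provider disclosures.
# The provider names come from the article; the vague phrases and the
# sample policy text are illustrative assumptions.
NAMED_PROVIDERS = ["OpenAI", "Anthropic", "Groq"]
VAGUE_PHRASES = ["AI services", "large language models", "third-party AI"]

def classify_policy(text: str) -> dict:
    """Report which named providers and vague AI phrases a policy mentions."""
    return {
        "named": [p for p in NAMED_PROVIDERS if re.search(re.escape(p), text, re.I)],
        "vague": [v for v in VAGUE_PHRASES if re.search(re.escape(v), text, re.I)],
    }

policy = "We may share inputs with AI services, including OpenAI, to power chat."
print(classify_policy(policy))
# {'named': ['OpenAI'], 'vague': ['AI services']}
```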

Several apps also disclosed using personal entries or conversations for AI training or improvement. One privacy policy stated that datasets and trained models derived from user information could be commercialized or licensed to outside organizations.

The authors cited prior literature arguing that information incorporated into AI training datasets or model weights may be difficult to remove after account deletion, although they did not independently verify how individual AI providers handle deletion requests.

Camera and microphone access

The apps also requested extensive access to sensitive device features: 19 of the 25 apps requested camera access, microphone access, or both.

Researchers identified 16 cases where apps requested dangerous permissions, Android's classification for permissions that guard sensitive user data, without properly disclosing them in privacy policies. Those permissions included camera access, microphone access, photo library access, calendar access, device account access, and phone identifier access.

Some apps requested these permissions despite having no obvious reason to record audio, capture images, or access additional device information. Users often grant permissions without knowing how recordings, photos, or other collected material may be stored or shared.
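
Because every permission an Android app can request is declared in its manifest, this kind of cross-check is straightforward to reproduce. Below is a minimal Python sketch using the Android SDK's aapt tool; the APK path is a placeholder, the dangerous-permission set is a small sample rather than the platform's full list, and this illustrates the idea rather than the study's instrumentation.

```python
import re
import subprocess

# Sketch: list the "dangerous" permissions an APK requests, using the
# Android SDK's aapt tool. "app.apk" is a placeholder path.

# A sample of Android's dangerous permissions relevant to the article,
# not the full list.
DANGEROUS = {
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CALENDAR",
    "android.permission.GET_ACCOUNTS",
    "android.permission.READ_PHONE_STATE",
}

out = subprocess.run(
    ["aapt", "dump", "permissions", "app.apk"],
    capture_output=True, text=True, check=True,
).stdout

# Recent aapt versions print lines like:
#   uses-permission: name='android.permission.CAMERA'
requested = set(re.findall(r"uses-permission: name='([^']+)'", out))

for perm in sorted(requested & DANGEROUS):
    print(f"requests dangerous permission: {perm}")
```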

Sensitive data collected before signup

Privacy concerns often began before people even reached the main interface. Around 80% of the accessible apps required an email address during onboarding. Nearly half asked mental health screening questions before account creation.

That means users may disclose sensitive emotional or psychological information before they have even created an account, and potentially before they have reviewed the app's privacy practices.

Deleting data is often difficult

The research also found that only 28% of the apps offered in-app deletion tools. Most required users to send email requests asking companies to remove their information. Two apps stated that they offered no deletion mechanism at all.

Several apps described conversations or user data as “private,” “safe,” or “confidential” during onboarding while simultaneously embedding tracking technologies not specifically disclosed in their privacy policies.

Data brokers and behavioral profiling

Researchers also pointed to the growing data broker industry, where personal information collected from apps and online services is bought and sold.

A 2023 investigation identified 11 brokers selling lists of people categorized by conditions such as depression, anxiety, and PTSD for as little as $0.20 per record. Those records included names, home addresses, and income information.

Most mental health apps fall outside HIPAA protections, potentially allowing information collected through advertising and analytics systems to circulate through broader commercial data markets with fewer restrictions.

“We have found no evidence of malicious intent,” the authors wrote, adding that the findings “should not be read as accusations that any of these apps deliberately deceive users.”
