USF and FAU researchers find Floridians hesitant to trust AI for mental health support


A new survey shows most Floridians remain reluctant to use artificial intelligence to help with their mental health concerns.

University of South Florida and Florida Atlantic University researchers created the survey to analyze 500 participants’ experiences with AI in health care.

Among the researchers is Stephen Neely, an associate professor in the School of Public Affairs and a faculty fellow for the Global and National Security Institute at USF.

Neely said the survey was conducted in May — Mental Health Awareness Month — because the topic is “near and dear” to his heart.

The survey also examined whether people trust AI to make health care recommendations and administer treatments.

“We know that those apps can't fully replace human practitioners, but we did want to get a sense of whether people are using them, how comfortable they are with using them and whether they find a sense of connection and support,” Neely said.

ALSO READ: Health care AI, intended to save money, requires a lot of expensive humans

AI played a part in the survey itself. Researchers used a panel vendor, Prodege, to find respondents through a “stratified quota sampling approach”: the AI platform divided the state into regions and recruited individuals across different ages, genders, races and ethnicities, producing a margin of error of about 4%.
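For readers curious about the mechanics, here is a minimal sketch of how a stratified quota sample and its margin of error can be worked out. The stratum shares and helper functions below are hypothetical illustrations, not the researchers’ actual procedure; only the sample size of 500 and the roughly 4% margin of error come from the survey itself.

```python
import math

# Hypothetical stratum shares (region x age band), for illustration only.
# The survey's actual strata and proportions were not published here.
strata_shares = {
    ("Tampa Bay", "18-34"): 0.06,
    ("Tampa Bay", "35+"): 0.14,
    ("South Florida", "18-34"): 0.10,
    ("South Florida", "35+"): 0.20,
    ("Rest of Florida", "18-34"): 0.15,
    ("Rest of Florida", "35+"): 0.35,
}

def quota_plan(shares, n_total):
    """Allocate a fixed sample across strata in proportion to each
    stratum's share of the population (proportional quota sampling)."""
    return {stratum: round(share * n_total) for stratum, share in shares.items()}

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated from a
    sample of size n (p = 0.5 maximizes the variance of the estimate)."""
    return z * math.sqrt(p * (1 - p) / n)

print(quota_plan(strata_shares, 500))
print(f"margin of error: {margin_of_error(500):.1%}")  # ~4.4%, i.e. "around 4%"
```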

Neely said the survey didn’t name specific AI platforms because it aimed to measure Floridians’ comfort with AI in general, rather than to find out which tools they use.

The survey found that 42% of respondents have used an AI tool at least once to discuss their health. About a third of people could say the same about mental health-related AI tools.

Data showed those polled are comfortable with AI being used for administrative tasks, such as scheduling appointments, collecting symptom information and assisting doctors in making decisions.

But they were “far less” comfortable with the idea of AI administering medication or making a diagnosis on its own.

“I think we're seeing a lot of hesitancy from people when we start crossing over administrative health care tasks into actual clinical health care tasks, where the AI is suddenly performing the roles that a doctor would traditionally perform,” Neely said.

Around 31% of respondents said they think AI tools can give accurate information about mental health. However, 83% said they would still prefer to talk to a human therapist over AI.

Neely said this is because people don’t feel emotionally supported by AI in a mental health discussion.

In fact, he said the support aspect was a “big factor” in the survey.

“They feel that there can be more of an emotional connection with another human, whereas the AI might be able to give me accurate information about what I'm experiencing, but it doesn't necessarily make me feel seen, heard and understood on a human level,” Neely said.

The survey also showed that people have privacy concerns when using AI in health scenarios. Around 75% of respondents said they are not confident that AI will keep their information private.

This is because people who share mental health experiences with a human doctor can trust that professional to keep them confidential. When sharing the same information with an AI tool, they can’t be sure the company behind it won’t sell their data.

Data also showed that over one-third of respondents have dealt with anxiety symptoms, and between 20% and 30% have cyberchondria symptoms, meaning they excessively search for health information online.

Neely explained that cyberchondria has existed since “the advent” of the internet, when people started going down “rabbit holes” of symptom checking.

The problem, Neely said, is that the internet will often show people the worst-case scenarios, resulting in increased anxiety and even leading to unnecessary medical treatment.

ALSO READ: Put some thought into asking Google for medical help 

Neely said AI has introduced more sophisticated tools that let people have a conversation about their specific experiences.

AI tools generally don’t lean into “catastrophizing” information, but instead focus on the range of medical issues a specific person could have based on their symptoms, he said.

Still, the survey found that about a third of respondents feel the need to repeatedly search for the same health-related symptoms online, while 25% said doing so makes them more anxious.

“If you are prone to health anxiety, you're going to pull the worst-case scenarios out of that list, and it's going to make you kind of spiral on looking for symptoms that you weren't necessarily feeling,” Neely said.

Neely said people may not be fully aware of how much AI is already used in health care, such as in reading X-rays, monitoring vital signs and controlling the flow of intravenous medications.

And he said AI developers should listen to patients' concerns when designing these tools so that people are comfortable trusting them with health-related matters.

“These things are coming, and they have a lot of question marks that come with them,” Neely said. “We're at a very critical period where it's important that we get these things right.”
