Building Consumer Trust in AI Innovation: Key Points for Healthcare Leaders – MedCity News

As consumers, we tend to give away our health information for free online, asking Dr. Google “how to treat a broken finger.” Yet the idea of our doctor using artificial intelligence (AI) to make a diagnosis based on analysis of our health data makes many of us uncomfortable, a Pew Research Center study found.

So how concerned might consumers be if they knew how much of their medical data is being fed into AI-powered models for analysis in the name of innovation?

It’s a question healthcare leaders may wish to ask themselves, especially given the complexity and responsibility associated with entering patient data into these models.

What is at stake

As the use of AI in health research and healthcare expands, so do the risks associated with realizing AI’s potential – and the greater the potential for a breakdown in consumer trust.

A recent survey conducted by Fierce Health and Sermo, an online physician community, found that 76% of physician respondents use large language models (LLMs), such as ChatGPT, in clinical decision-making. These publicly available tools provide access to information such as potential medication side effects, diagnostic support and treatment planning recommendations. They can also help capture physician notes from patient encounters in real time through ambient listening, an increasingly common way to relieve the administrative burden on physicians so they can focus on care. In both cases, mature practices for incorporating AI technologies are important, such as using an LLM to verify or cross-check information rather than relying on it to provide answers to complex care questions.

But there are indications that the risks of using LLMs for care and research need more attention.

For example, there are significant concerns about the quality and completeness of patient data fed into AI models for analysis. Most healthcare information is unstructured, captured in the free-text fields of the electronic health record (EHR), patient messages, images, and even handwritten notes. In fact, half of healthcare organizations say less than 30% of their unstructured data is available for analysis. There are also inconsistencies in the types of data that fall into the “unstructured data” bucket. These factors limit the completeness of the picture of patient and population health. They also increase the chances that AI analyses will be biased, drawing on data that represents only certain segments of the population or that is incomplete.

And while laws surrounding the use of protected health information (PHI) have kept researchers and other analysts from using all the data available to them, the high cost of storing and sharing data is a big reason why most healthcare information goes underused, especially compared with other industries. So are the complexities of applying advanced data analytics to healthcare data while maintaining compliance with healthcare regulations, including those related to PHI.

Right now, healthcare leaders, doctors and researchers find themselves at a unique inflection point. AI has great potential to drive innovation by using clinical data for diagnosis in ways the industry could only imagine two years ago. With one in six people using AI chatbots at least once a month for health information and advice, demonstrating the power of AI in healthcare beyond “Dr. Google” while protecting what is most important to patients – such as the privacy and integrity of their health data – is critical to gaining consumer confidence in these efforts. The challenge is maintaining compliance around health data while innovating with AI-driven data analytics and applications.

Making the right moves for AI innovation

As the use of AI in healthcare continues to rise, a modern data management strategy requires an advanced data protection approach: one that puts the consumer at the center while meeting regulatory requirements, the core of effective data stewardship in a changing regulatory environment.

Here are three top tips for leaders and researchers on protecting patient privacy, compliance and, ultimately, consumer trust as AI innovation accelerates.

1. Start with consumer trust in mind. Instead of simply following regulations around privacy and data protection, consider the impact of your efforts on the patients your organization serves. When patients trust your ability to safely and securely use their data for AI innovation, this not only helps create the level of trust needed to develop AI solutions, but also encourages them to share their data for AI analysis, which is essential to building personalized care plans. Today, 45% of healthcare industry executives surveyed by Deloitte are prioritizing efforts to build consumer trust so that consumers feel comfortable sharing their data and making it available for AI analysis.

One important step to consider in protecting consumer trust: implement strong controls over who accesses and uses data, and how. This principle of effective data protection helps ensure compliance with all applicable laws. It also strengthens an organization’s ability to generate the insights needed to achieve better health outcomes while keeping consumers engaged.

2. Create a data governance committee for AI development. The proper use of AI in a business context depends on a number of factors, from analysis of the risks involved to the maturity of data systems, relationships with customers, and more. That is why the data governance committee should include health IT experts as well as clinicians and experts across disciplines, from nurses to population health professionals to members of the finance team. This ensures that the right data projects are carried out at the right time and that the organization’s resources provide the right support. It also brings together all key stakeholders to weigh the risks and rewards of AI-driven analytics and to establish effective data protections without unnecessarily stifling innovation. Instead of “grading your own work,” consider whether an outside expert would be helpful in determining whether proper safeguards are in place.

3. Reduce the risks associated with re-identification of sensitive patient information. It is a myth that simple anonymization techniques, such as removing names and addresses, are sufficient to protect patient privacy. The reality is that the advanced re-identification methods used by bad actors can often piece together so-called anonymized data. This calls for sophisticated methods to protect data from the risk of re-identification, even when data is at rest. This is an area where a generic approach to data management is no longer sufficient. The key strategic question for organizations becomes: “How will our organization deal with the risks of re-identification, and how can we continually assess these risks?”

While healthcare organizations face some of the biggest hurdles to successfully implementing AI, they are also poised to develop some of the most life-changing applications of this technology. By addressing the risks associated with AI-driven data analysis, healthcare providers and researchers can use the data available to them more effectively – and maintain consumer trust.

Photo: steved_np3, Getty Images


Timothy Nobles is the chief business officer for Integral. Prior to joining Integral, Nobles served as chief product officer at Trilliant Health and head of product at Embold Health, where he developed advanced analytics solutions for healthcare providers and payers. With more than 20 years of experience in data and analytics, he has held leadership roles at innovative companies across multiple industries.

This post appears through the MedCity Influencers program. Anyone can publish their thoughts on entrepreneurship and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.
