Microsoft captured global attention with a recent announcement that its new artificial intelligence model can outperform doctors in diagnosing diseases. Trained on vast amounts of medical data, the diagnostic AI surpassed physicians in tests across multiple conditions. It marks a pivotal moment in the evolution of healthcare technology and sends a clear message: the future of medicine is here, and it’s digital.
As a physician and health system leader, I welcome the progress. I’m a strong believer and advocate for the many use cases of AI in the health space. The technology holds immense potential—catching illnesses earlier, identifying rare diseases, improving efficiency across healthcare systems, and reducing administrative burden.
But as the world races to uncover the possibilities for AI in the health space, the effort must be constantly guided by one core question: What is best for the patient?
AI is revolutionizing diagnostics
By rapidly analyzing large volumes of patient data—such as imaging, lab results, clinical notes, and genetic information—AI has the potential to identify patterns that signal disease far faster than traditional diagnostics by a human doctor. As Microsoft demonstrated, these tools are increasingly accurate, and in some cases can outperform clinicians in identifying certain conditions. For example, AI can flag early-stage lung nodules in CT scans that might be overlooked by the human eye, or detect subtle cardiac anomalies across thousands of ECGs in real time.
By diagnosing faster and with high precision, AI can reduce diagnostic errors, shorten time to treatment, and support more personalized care—especially when used alongside the necessary clinical judgment of trained physicians.
But diagnosis is only the beginning of a patient’s journey
No matter how accurate or fast, the diagnosis is just the beginning of a patient’s journey. A diagnosis is often met with fear and apprehension, and it brings tough, potentially life-altering decisions and deep uncertainty. A well-delivered diagnosis isn’t just about the condition, but also the “why” and the “what’s next.” These are human conversations. They require trust, empathy, and, often, cultural context.
Studies show patients still want a human connection and don’t yet fully trust AI health information. A 2023 Pew study found that 7 in 10 Americans trust their doctor’s advice, while only 24% trust AI-generated health information. And 60% said they were uncomfortable with their doctor relying on AI to assist in their care. A more recent study from researchers at the University of Wuerzburg and the University of Cambridge found that patients lost confidence in doctors who mentioned or advertised their use of AI.
Many physicians also remain cautious, even as the technology continues to make proven advancements and adoption rates climb. A 2024 Johns Hopkins University (JHU) study found that many doctors still do not trust AI tools, citing opaque decision-making processes, lack of contextual awareness, and liability concerns. Even high-performing models are unable to explain why they reached a certain conclusion—something unacceptable in clinical care, where accountability is essential. Ask any doctor if they’d trust a “black box” with their patient’s life and you’ll get a resounding: “No.” If AI systems can’t explain their reasoning—or if they lack sufficient real-world context—physicians are right to question their use.
Trust is the foundation of healthcare systems and the patient-doctor relationship. Advancing too quickly in the AI revolution in health risks undermining it.
Closing the AI trust gap
Many global hospitals, including King Faisal Specialist Hospital and Research Center, where I work, are already deploying AI and exploring new use cases to improve everything from hospital operations and administration to diagnostics. But as AI permeates further into the health field, we need to ensure it’s incorporated responsibly and improves health outcomes for patients.
The field is facing a patient-doctor-AI triangle dilemma: the patient needs to trust the doctor, who in turn needs to trust the AI. As the JHU study shows, transparency and reasoning from AI tools are essential to ensuring doctors understand the technology and trust its recommendations.
To truly bridge the gap, AI systems should be built with physicians, not for them. This looks like investing in AI literacy and training programs for doctors, nurses, and medical students—to improve current implementation and secure health AI’s future advancement and sustainability. Strong regulatory frameworks that ensure accountability and patient safety, particularly when it comes to medical and patient data, are essential.
One promising real-world initiative is Saudi Arabia’s Public Investment Fund’s Humain, an AI agent designed to support healthcare delivery and relieve administrative burdens. Codesigned with medical professionals, Humain integrates into existing teams to enhance the capabilities of health professionals, not replace them. Transparency and reasoning are embedded functions.
When rolled out thoughtfully and effectively, these types of health AI initiatives have the potential to dramatically improve the quality of care, particularly in underserved areas lacking health infrastructure and medical professionals. With responsible and collaboratively designed health AI tools, we can take a major step globally towards the democratization of the health sector.
The best outcome for patients
The future of medicine is not AI versus doctors. It’s AI with doctors—a partnership. AI may be better and faster at identifying rare cancers or subtle diagnostic patterns, but only a doctor can consider a patient’s story, their social context, their fears—and translate a diagnosis into a treatment plan they can understand and embrace.
To get there, we must bridge the trust gap between patients, doctors, and technology. If we get this right, we can build a healthcare system that is more accurate, more efficient, more accessible, and more sustainable for the long term.