Dr. S. Scott Graham, the author of The Doctor and the Algorithm, discusses tech in healthcare and what you can do to prepare yourself.
For more podcast resources to help you with your medical school journey and beyond, check out Meded Media.
Listen to this podcast episode with the player above, or keep reading for the highlights and takeaway points.
The MCAT Minute is brought to you by Blueprint MCAT.
You need to start thinking about what your six-month or four-month study plan is going to look like if you’re taking the MCAT at the beginning of the year. Go over to Blueprint MCAT and sign up for that free account to get free access to their study planner tool to help plan out your schedule.
For a long time, Scott has been researching the human factors in biomedical research, curious about how bias, funding, and team dynamics influence clinical trials. Over the last 10 years, there have also been huge venture capital investments in the field of biomedical research.
"Recently, there's been huge investments in AI. It's a whole new area of biomedical research that is exploding."
We see a collision of two different cultures: the standard healthcare and biomedical research culture, and tech culture.
Silicon Valley has its own way of working and thinking. It tends to prioritize rapid development and throwing products out into a live environment to test them. Scott says this is a little scary from a health and medicine perspective.
He thinks this is a fascinating problem: watching how the medical world's tradition of careful clinical trials and long-term research collides with tech's "move fast and break things" ethos.
Scott specifically looks at the dynamic of clinicians and clinical researchers debating, arguing, and trying to find the best way to help patients. It’s an area where there are a ton of competing ideas and rigorous research from different disciplinary domains. Sometimes, patients are incredibly complicated and require multiple subspecialties.
All of that creates an environment that is grounded in cutting-edge science but that has to be made to work for a specific clinical case through discussion, compromise, and consensus.
Scott believes the pandemic response has been an ongoing two-year train wreck from a communication perspective. The parts that were most harrowing to him were hearing someone like Dr. Fauci say in a CNN interview that they were surprised by the lack of vaccine uptake, when no social scientist working in health over the last decade could ever have been surprised.
Although Scott thinks Dr. Fauci does great work, he wishes there had been a few more social scientists or public health professionals on these teams. That way, they could have leveraged the research that the NIH has funded so well.
One thing Scott sees as a big missed opportunity is expectations management. There is constant uncertainty in the scientific process, and findings will be revised and will change. But the public health messaging apparatus didn't want to communicate that, because they wanted people to take specific actions.
As a result, it created an expectations problem. The messages that they sent into the world were a little overly confident about the state of science. So as soon as the state of science changed, people didn’t trust it.
And research shows that when climate scientists communicate honestly about the uncertainty in their underlying data, the general public is more likely to trust what they are saying. That act of humility is an important part of the communication process. Unfortunately, it wasn't factored in during the early days of public health messaging in this country.
Scott believes there was a lot of fear in the early days of the pandemic that people weren't going to comply and wouldn't do the things needed to keep the virus contained. That's where the trouble came in: messengers projected more confidence than they actually felt, hoping it would lead to compliance, even though that certainty wasn't supported by the available data on best practices.
That said, Scott doesn't recommend using the word "compliance" in these conversations. It isn't the right frame for a dialogue with people, and he doesn't think it's the best approach to talking about health.
Scott currently sees a ton of enthusiasm, and an unfortunate amount of hype, about how AI will play out in the medical world. He describes himself as "cautiously optimistic" about AI. There are some exciting developments coming.
Scott is particularly enthusiastic about some of the drug discovery applications, where AI ingests all sorts of pharmacology data and recommends candidate drugs for testing in orphan drug conditions, or where a particular patient population hasn't been responding well to the standard of care. He's also excited about the potential improvements to clinical notes.
"AI never moves quite as fast as people want it to, and when it does move fast, it can be bad."
One of the things Scott argues in the book is that health AI is at its best when all three perspectives are in the room: top-notch computer science, top-notch biomedical research, and some understanding of the human factors involved.
AI gets super dangerous when computer scientists who know how to work with data and code build products without talking to the doctors or running rigorous clinical trials. That's how they end up producing dangerous products.
Scott makes a point that we need all the expertise in the room. And we need to understand that good AI is going to have that biomedical research timeline, not that Silicon Valley timeline.
"Good AI is going to have that biomedical research timeline, not that Silicon Valley timeline."
Scott clarifies that there's a lot of slippage in the language when people talk about AI, coding, machine learning, and all that stuff.
Historically, true AI meant artificial general intelligence. That's your HAL 9000, your Cortana, your JARVIS, depending on your media landscape. It's the robot that thinks and talks and can do anything. Scott explains that this is not what's happening.
Almost everything that's deployed right now is a machine learning system. These are high-throughput statistical pattern-recognition machines, and they can usually do one thing really, really well.
Things that feel more and more like AI are often a couple of different machine learning systems stapled together, each contributing a different output.
And so, most of what we’re looking at these days is machine learning, and people call it AI because machine learning is the dominant paradigm within the big tent of AI. But it’s not what we’re thinking about when we’re thinking of sci-fi robots.
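To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. The language and the scikit-learn library are my assumptions; nothing in the episode specifies a toolchain, and the data here is made up. It shows two narrow machine learning pieces "stapled together": a statistical classifier whose output feeds a trivial note-writer, which together can feel more AI-like than either piece alone.

# Illustrative sketch only: two narrow machine learning components in a pipeline.
# Assumes scikit-learn is installed; the data and the "note-writer" are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Component 1: a statistical pattern recognizer trained on synthetic data
# to flag "abnormal" cases. It does exactly one thing.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Component 2: a trivial "report writer" that turns the first model's score into text.
def draft_note(probability_abnormal: float) -> str:
    if probability_abnormal > 0.5:
        return f"Flagged for review (model score {probability_abnormal:.2f})."
    return f"No finding flagged (model score {probability_abnormal:.2f})."

# "Stapled together": one system's output feeds the next, producing something
# that reads like a note but is just two narrow models chained in sequence.
for prob in classifier.predict_proba(X_test[:3])[:, 1]:
    print(draft_note(prob))

Each piece on its own is just pattern recognition; it's only the composition that starts to resemble the sci-fi version of AI.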
Scott says AI runs the risk of deploying technology that hasn't been entirely well vetted. So if some of these high-profile cases raise a note of caution, we should slow down and make sure everything is well-grounded in biomedical research.
"Most AI right now in the clinical space is not on autopilot. It's all human in the loop."
One of the most common designs in AI studies is to use two comparison groups: one group of radiologists diagnosing with AI, and another group of radiologists diagnosing without AI. There's no arm where it's just the AI by itself.
Scott adds that AI is unlikely to reliably replace radiologists or any other clinical subspecialty.
At the end of the day, when it comes to making decisions about individual patients, there's always a moment where you have to jump the gap between the science of medicine and the art of medicine. He believes you need a human in the loop to help bridge that gap.
And so, rather than replacement, we're going to see efficiency gains, such as an increase in the number of patients a clinician can see in a day.
If these systems are going to replace or increasingly augment clinical practice, Scott wants to see them vetted with the highest-quality medical research, and vetted on improving health outcomes, not just on increasing the rate at which a provider can see patients.
Scott reckons we'll see major changes sooner rather than later, but not in a magic-wand format. It's going to start with the same folks who get elective full-body PET scans just because they want to know and can pay out of pocket.
However, Scott thinks this is exciting but also a potential risk for our communities, because a lot of the benefits of AI are going to accrue to the wealthy, especially early on. That being said, there are a lot of promising opportunities here that can make patient care less expensive and more effective. But he certainly doesn't want to live in a world where only the 1% get the benefits of those investments.
Personally, I think this is okay since that’s just the nature of technology. It starts for the rich, and then things get cheaper and cheaper later on.
Scott thinks some of these foundations are making investments that do really good things in the world.
The onus is on governments and universities to step up and provide the basic scientific funding and public health funding that we need for issues of broad social concern.
His big fear about the role these rich white guys play in the research funding ecology is that state governments, universities, and even federal funders may back away, because philanthropy takes some of the load off their shoulders.
"Philanthropic contributions to research are the fastest growing in public university portfolios."
Scott recommends getting some basic conceptual literacy on AI and machine learning.
"The best AI is going to be grounded in rigorous clinical research. This is already a core staple of med school curricula."
Students have more and more opportunities to learn about AI in residency and other training. So lean into that as future doctors, and develop a good understanding of what rigorous clinical research looks like.
That way, when you see the latest AI being offered on the horizon, you can vet it against that standard of rigorous clinical research, make sure it's really improving health outcomes, and avoid getting wrapped up in the hype.
Learning about the technology at a more foundational level provides that pathway. And when someone really knows both computer science and medicine, that's when some of the highest-quality research can happen.
Now, that doesn't mean everybody needs to do AI development. Most of the AI that gets deployed will run in the background of electronic health records, so you don't need to become a computer programmer to be a future doctor.
But Scott still recommends at least getting some basic conceptual understanding of how the AI works. That way, you know when to trust it and when not to trust it.
Check out Scott's book, The Doctor and the Algorithm. It's a general introduction for folks who come from medicine to learn more about AI, or for folks who come from computer science or critical data studies to learn more about the healthcare dimension. If you want a 10,000-foot view of health and AI, this book is for you!
The Doctor and the Algorithm by Dr. S. Scott Graham