Take-away IV – Robots
Let the robots do the robot work and let the humans do the human work.
One of the core values that has been breached in medicine today is autonomy. As healthcare has become more complex, operations and clinical workflows have also become more complex. Within a healthcare organization, the clinical operations team is tasked with the smooth, efficient running of a healthcare unit.
In my experience, operations leaders think about clinical operations the way you might think about a factory's operations. You have technology, you have people, and you have processes. You lay out the process and then optimize the movement and time of the people to get the most efficient result. By necessity, this mode of thinking casts physicians and other clinicians as cogs in a machine.
There is one huge problem with treating clinicians as cogs in a machine: it goes against their core value of autonomy.
As a physician, I was not trained to be a cog in a machine. I was trained to look at the human being suffering in front of me, rapidly gain their trust, make an empathetic connection to them, and start the mental activity of weighing the myriad risks and benefits of any given treatment plan as it relates to that human being.
This task requires a huge amount of autonomy. Each human being is different, and each set of circumstances is different. When I am doing the mental math of which treatment plan is best for that person, I am using my own internal algorithm based on a variety of variables. One of these variables is my experience with past patients. Another is my intuition, my gut. Another is the human being in front of me and how they react to my suggestions.
This task cannot be done by a robot because many of these variables cannot be sensed by robots, at least not yet. So making me, as a physician, feel like a robot inside a machine is absolutely terrible.
When the operations team tells a clinician that all patients must go through the same clinical workflow regardless of the situation, it feels like moral injury. For example, if you tell me that each patient needs to get exactly 15 minutes of my time so they can move smoothly through the rest of the workflow, I feel terrible. Why? What if I have a patient crying? What if I have a patient who tells me they are suicidal? What if I have a patient who wants me to explain a complicated issue to their family?
For every hard-and-fast operational rule you provide, a physician can think of a million exceptions. When the operations team clings to its mental model of the factory floor, with clinicians as robots, it creates daily abrasion. Every time a clinician encounters a patient situation that requires deviation from the standard workflow, a tense situation arises.
If you allow the clinician the autonomy and freedom to deviate from the workflow when needed, the work feels better. If you punish the clinician for a deviation they felt was clinically necessary, you create a moral injury event. You have asked them to go against their core value of autonomy.
Reading this, you may assume that I am anti-robot in the clinical setting. On the contrary, I am hugely optimistic for the use of robots in healthcare.
I believe that we can let robots do robot work so that the humans can do better human work.
In other words, let's take advantage of robots to do all the automated or algorithmic tasks that need to happen in healthcare. This frees up the clinicians to have more autonomy. It frees up their time. It frees up their mental space. The clinicians can ask the robots to do complicated tasks and then use the outputs of those tasks to make more humanistic decisions with their patients.
For example, imagine that you are a primary care physician in a busy rural practice. You take care of almost the entire town, and you don’t have a lot of specialty support nearby. You use robots to help you in many ways:
- Use an AI tool to take an audio recording of your visit with a patient and auto-generate a medical note that you can edit as needed.
- Use AI to auto-schedule your busy day: it looks for holes in the schedule, suggests proactive patient-care tasks to fill them, and triggers text messages offering an appointment to a patient who has been waiting to see you.
- Use AI to manage the lab work you have ordered. It can alert your smartphone when results come in, link them to your note from the patient visit, remind you why the labs were ordered, and flag who should be notified of the results (a minimal sketch of this helper follows the list).
- Use AI to run through all the available medications, assessing their efficacy against the most up-to-date medical literature. The AI can then look up each option's cost under the patient's insurance and present you with an easy-to-read list of pros, cons, and price.
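To make the lab-management item concrete, here is a minimal sketch of what such a helper might track behind the scenes. Everything in it is hypothetical: `LabOrder`, `handle_result`, and the placeholder calls are illustrations invented for this example, not any real EHR's API. The point is simply that the robot carries the bookkeeping context (why the lab was ordered, which note it belongs to, who needs to know) so the clinician does not have to.

```python
# Hypothetical sketch of a lab-alert helper. All names are illustrative,
# not taken from a real clinical system.
from dataclasses import dataclass


@dataclass
class LabOrder:
    patient: str         # who the lab belongs to
    test: str            # which test was ordered
    reason: str          # why the lab was ordered (context the robot remembers)
    visit_note_id: str   # the visit note this order came from
    notify: list[str]    # who should be alerted when results arrive


def handle_result(order: LabOrder, result: str) -> None:
    """When a result arrives, link it to the visit note and alert the right people."""
    link_to_note(order.visit_note_id, result)
    for person in [order.patient, *order.notify]:
        send_alert(person, f"{order.test} result is in: {result}. "
                           f"Ordered because: {order.reason}")


# Placeholder implementations so the sketch runs standalone;
# a real system would call into the EHR and a messaging service here.
def link_to_note(note_id: str, result: str) -> None:
    print(f"[note {note_id}] attached result: {result}")


def send_alert(recipient: str, message: str) -> None:
    print(f"[alert -> {recipient}] {message}")


if __name__ == "__main__":
    order = LabOrder(
        patient="J. Smith",
        test="Hemoglobin A1c",
        reason="monitoring diabetes control",
        visit_note_id="note-0421",
        notify=["Dr. Jones", "care coordinator"],
    )
    handle_result(order, "7.2%")
```

Notice what the sketch does not do: it makes no clinical decision. It only moves information to the right places at the right time, which is exactly the robot work that frees the clinician for the human work.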
There are many other ways that AI (which I call robots) can help clinicians with their daily work. The key is that the robots are there as helpers. The robots give the clinicians more time and mental space to be more human with their patients. The robots give the clinicians more autonomy.
Until recently, most of the AI and robots used in healthcare systems have made clinicians' lives worse. They have actually taken time and mental space away from clinical work with patients. However, we are on the verge of massive improvements in AI and helper robots.
The key for successful use of robots is for the operations team to ask themselves over and over “How does the robot enable the clinician to have more autonomy? How does the robot give the clinician more time and mental space? How does the robot make the clinician feel respected?”
If robots cannot support the core values of physicians and other healthcare providers, they will add to moral injury. Unfortunately, this is what we have seen to date, and as a result many clinicians are (understandably) wary of new AI and robots.
When operations teams understand the human characteristics of clinicians and support them through AI, we may see a massive shift in medicine toward more humanism and less moral injury.