In this episode of Counting Sand, host Angelo Kastroulis is joined by Dave Foster, a consumer health advocate and strategy advisor, to discuss the complexities of health strategy. They delve into how health equity strategies aim to address disparities by stratifying data based on social identities such as race, ethnicity, and gender. The conversation highlights the importance of patient involvement in healthcare decisions and the ethical use of AI. Examples from Johns Hopkins' sepsis prediction models and the challenges faced by IBM Watson illustrate the potential and pitfalls of AI in healthcare. Foster emphasizes the need for diverse input, including patient advocacy, to ensure AI tools are effective and equitable. The episode underscores the significance of understanding social determinants of health and integrating technology to improve care coordination and outcomes.
Key Topics Discussed:
Introduction to Health Strategy
Health Equity Strategy
Patient Education and Advocacy
The Role of AI in Healthcare
Integrating Social Determinants of Health
Future Directions and Ethical AI
Final Thoughts:
The episode emphasizes the critical role of technology and patient advocacy in advancing health equity. By incorporating diverse perspectives and ethical considerations into AI development, the healthcare system can better address disparities and improve outcomes for all patients.
Angelo: [00:00:00] Health strategy refers to an approach and the actions needed to achieve certain health-related outcomes. It could mean things like policy, resource allocation, activities we might do, but it certainly includes technology like computer science. We're going to talk a little bit about all of those things today as we cover this idea of health strategy.
I'm your host, Angelo Kastroulis, and this is Counting Sand.
Angelo: Today I am joined by a longtime friend and colleague, Dave Foster, a consumer health advocate and strategy advisor. Dave, as long as I've known you, that's what you've been doing: advocating for patients. So tell us a little bit about some of the cool things you've been doing.
Dave: Most recently I've been working on health equity strategy. What I mean by that is how to help organizations make sure that when they are providing services or measuring quality, they take into account the differences between people who have different social identities. So the organizations that measure quality or audit and accredit organizations, the Centers for Medicare & Medicaid Services, the National Committee for Quality Assurance, NCQA, the Joint Commission, are all working together to create a foundational agreement about how to measure quality in a world where we see disparities and we want to eradicate them when possible. Some structural things in society are difficult for healthcare organizations to address, although you start to see more of that in relation to some of the social needs that people have. But the main thing is that organizations can't just submit their average scores for a quality measure within a population.
They have to start looking at the data, stratifying it by race, ethnicity, language, sexual orientation, gender identity. When you look at those social identity characteristics of individuals within a population, you will see differences in how people fare on things like cancer screenings. And that's something that these organizations can do something about.
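To make that stratification idea concrete, here is a minimal Python sketch. The field names and the example records are hypothetical, not from any particular quality-reporting system; a real program would compute the same kind of breakdown from the stratified data Dave describes.

```python
# Minimal sketch (hypothetical field names): instead of reporting one average
# rate for a quality measure, group the same records by a social identity
# attribute so that disparities between groups become visible.
from collections import defaultdict

def stratified_rates(patients, met_key="screening_completed", strata_key="race_ethnicity"):
    """Return {stratum: completion rate} for a simple yes/no quality measure."""
    totals, met = defaultdict(int), defaultdict(int)
    for p in patients:
        stratum = p.get(strata_key, "unknown/declined to answer")
        totals[stratum] += 1
        if p.get(met_key):
            met[stratum] += 1
    return {s: met[s] / totals[s] for s in totals}

population = [
    {"race_ethnicity": "Black or African American", "screening_completed": False},
    {"race_ethnicity": "Black or African American", "screening_completed": True},
    {"race_ethnicity": "White", "screening_completed": True},
    {"race_ethnicity": "White", "screening_completed": True},
]
print(stratified_rates(population))
# The population average (75%) would hide the 50% vs. 100% gap between the two groups.
```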
The remedy that I have been working on at Healthwise, which is now part of WebMD Ignite, was to educate and inform patients, get them involved in their care, and encourage them to advocate for themselves. And so from here on, I'm going to continue working on that: helping organizations understand how to explain to their consumer constituents why it's so important to collect this data.
Hopefully we get into that in the discussion today. Ultimately, this is about having good data about the people you're serving so that you can tailor your service to their needs and demonstrate that you are redressing inequalities in healthcare.
Angelo: Yeah. So that's pretty cool. So, for those who might not know what some of these social drivers are, we're going to talk about that a little bit too, and about why they're really important, because they do drive health.
Another aspect of this is just utilizing AI. We've heard about AI so much in computer science and especially in healthcare. And we've got to make sure it's properly utilized. There are a lot of good examples of AI, like the work Johns Hopkins was doing on sepsis, where their model looked like it could predict sepsis six hours in advance.
Six hours in sepsis is critical, and it had something like 40 percent accuracy. And then we have the opposite end of the spectrum, with things like Watson in 2018. We thought it would help us with cancer, and it turns out it was giving incorrect and unsafe treatments and all kinds of recommendations that weren't really guideline-based.
And so evidence-based medicine is a challenge. So being able to put AI in its proper place so that we can understand it, I think that's critically important. And I think that's a little bit of what we're talking about too, right? Being able to say AI is good, nobody's saying it isn't. We're just trying to put it in its right place so we can understand it. How can we know what this black box does, right?
Dave: Yeah. And I think there are some really great efforts going on within industry and academia to look at the ethical use of computing as these tools become more powerful to make sure that we don't cause patient harm. The sepsis example is a good one.
And I think there were some researchers at Duke, I believe, or maybe UNC, who realized that there was some bias introduced into the algorithm they were using because of the data that they had around the race identity of the patient. And the algorithm maybe wasn't very accurate for people who had a certain racial identity. And these things are hard to explain.
So on the one hand, you don't really want to use something like race identity as a determinant of biological conditions. You know, race is a cultural construct. It's not really biological. On the other hand, because it is a social construct, there are social structures that change the way we treat each other or the way that people have access to care.
So you may end up introducing problems into some of these algorithms if you don't take race into account appropriately, or any of the social identities that we talked about. One of the things that I would like to see more of with these ethical AI groups is to have more representatives from the patient community. Consumers, patients, families, the people who are impacted by these tools.
I think we have good representation from industry and from academia, computer scientists. The government is working on some type of regulation, although it's probably going fairly slowly. So in the meantime, we can ask more consumer advocacy folks to get involved and create a seat at the table for them.
My hope is that these ethical [00:06:00] AI groups will be better as a result. Just as you want to diversify the people who are working on these problems in industry or academia, I think we also need to bring in a different perspective from the consumer side of things.
Angelo: Yeah. I appreciate you saying that.
And I think these are important because with machine learning, if you want to avoid bias, it's one of those things where we say, all things being equal, biology is what matters. But all things are not equal. In reality, there are all kinds of other factors.
How far away you are. How mobile you are. Do you understand the instructions? Are you able to get prescriptions or can you afford them? There's all these other factors that come into play that affect healthcare. So, it isn't just that all things being equal, this is what matters. There's all kinds of reasons why we might not have the outcomes we want.
So, the idea of stratifying the data, seeing if we can understand these factors, helps us determine whether we're doing a relatively good job in terms of guideline-based care. Maybe in this particular facility we are, but there are all these other factors: oh, well, some patients aren't understanding the instructions. And you go, okay, well, if they did, we could help walk them through it and maybe they would have better care. Or they don't have access to good food.
So, of course, they're not going to improve their diabetes; just medicating them isn't the solution. So, you have to have these other factors to be able to understand why the guidelines that you're measuring come out the way they do. And I think that's critical. So, it's an interesting point you made about patients being advocates. What kind of input could a patient have, what kind of guidance would they want to provide in technology?
Dave: One thing I've heard from consumers, whenever I've convened groups of consumers to talk about what their experience is like in care or in their social circumstances, is that they don't always know what those guidelines are. What is the standard of care? How do we know if we're being treated in a way that's consistent with those standard guidelines?
And so I think that's one of the things that we need to introduce into these discussions: how do we explain to a consumer, a patient, a family member, how these tools are being used to make decisions about their treatment plans or their diagnosis or many of the things that these tools are going to improve? Just having a little bit of information go to the consumer, so that they can understand what the standard of care is. They're pretty good experts about themselves and their own bodies, to a large degree. So, if they hear something or see something that doesn't seem quite right, then we want to encourage people to speak up and advocate for themselves or their family members and say, "I understand that you're prescribing this medicine or you're ordering this procedure because you're trying to treat this condition. I'm not really sure about that, because here are some other things in the symptoms that I'm feeling, and they may not bother me that much. Or they really do, and here's this other symptom that we're not addressing that I feel like we really need to focus on."
And those kinds of things, I think, can start to go not only into the human interaction between clinician and patient, but also in as input to the algorithm that determines what the treatment recommendation might be. So, this is the guideline for someone of this age, with this condition; here is how they ought to be treated, unless the person has a preference and doesn't want those types of invasive procedures.
They would prefer a less invasive procedure. Or the other way around: they may want to pursue aggressive treatment, not watchful waiting. So, the idea is that the consumer can be involved in the design of these tools, so that they take the person's preferences into account once they're ultimately in production and running, gathering input not only from laboratory observation but also from what a person expresses about what their symptoms are and how they feel about them.
Angelo: And as I mentioned, you've been in this as far back as I can remember. You're thinking about the patient's voice. And I've always thought that too. The patient seems to be absent in a lot of the conversations in healthcare. A lot of times patients are talked about, rather than being part of the conversation.
And I think that it's important to have a voice in the discussion. And so, of course, with technology that's based on evidence, like clinical guidelines and their execution, decision support, what we're trying to do is help the provider operate at the top of their license, so that everyone can get an equal amount of care no matter where they go. You don't have to be fortunate enough to be in a research hospital that has the cutting-edge technology. Yes, they might have the cutting-edge equipment or something, but at least the knowledge shouldn't be a monopoly; knowledge like that should be freely distributed.
But one of the issues that I worry about with AI is that all of a sudden, now it's going to be a conversation between a machine and providers. A metaphorical conversation, obviously. Although, I don't know, with ChatGPT, maybe it's a real conversation. But in other words, we may get shut out in a different way than what we thought. So, if we're not in the conversation of being able to enable this technology or give it feedback and input, then, I don't know, am I thinking about it wrong? Is that a little bit too doomsday?
Dave: No, I mean, this goes back to one of the first things we worked on together, the HIMSS interoperability showcase health story demonstration project, where a number of different vendors participated in a scenario using standards from HL7 to interchange data from step to step through a patient's journey. Literally her story, of her life. The particular patient in the scenario that we first worked on, it was an oncology scenario where she was screened, diagnosed with breast cancer, then presented with surgical treatment options: lumpectomy, full mastectomy. And the piece that I was playing had to do with the HL7 questionnaire standard, where [00:12:00] a series of questions could be presented to the patient about their concerns related to the risks of metastatic breast cancer.
What do they feel about preserving breast tissue? Things like that. It's very hard for a clinician to really know what a patient prefers in that kind of scenario without actually asking them. And so I feel like many years ago, we were entertaining this idea of shared decision-making. Where someone is presented with rich detail, in plain language about their condition and the treatment risks or treatment benefits, alternatives of treatment, and then have a conversation between the patient and the provider about how to best meet that person's needs based on the way they feel about their condition and their symptoms. And I hope that as we start to codify these decision-making tools more and more in this new type of computing that we continue to take this into account.
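As a rough illustration of the kind of preference questionnaire Dave is describing, here is a small sketch shaped loosely like an HL7 FHIR Questionnaire resource. The question text, linkIds, and answer options are invented for this example; they are not taken from the actual showcase scenario or any published profile.

```python
# Illustrative only: a patient-preference questionnaire shaped loosely like an
# HL7 FHIR Questionnaire resource. The linkIds, wording, and answer options are
# hypothetical, not from the HIMSS showcase scenario.
preference_questionnaire = {
    "resourceType": "Questionnaire",
    "status": "draft",
    "title": "Breast cancer surgery preferences",
    "item": [
        {
            "linkId": "metastasis-risk-concern",
            "text": "How concerned are you about the risk of metastatic breast cancer?",
            "type": "choice",
            "answerOption": [
                {"valueString": "Not concerned"},
                {"valueString": "Somewhat concerned"},
                {"valueString": "Very concerned"},
            ],
        },
        {
            "linkId": "tissue-preservation",
            "text": "How important is preserving breast tissue to you?",
            "type": "choice",
            "answerOption": [
                {"valueString": "Not important"},
                {"valueString": "Somewhat important"},
                {"valueString": "Very important"},
            ],
        },
    ],
}
```

Answers captured this way can travel with the rest of the record, which is what lets a downstream decision-support tool weigh the patient's stated preferences alongside the clinical data.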
I can remember there was even this notion in IBM Watson that patient preference ought to be taken into account. Now, was it executed? We can debate that. I think there were a lot of issues with the execution of IBM Watson. But I think some of the concepts that were introduced there are still valid today and we want to address them. So, to your point about the conversation between a machine, a highly trained professional, and a person whose life and situation are at stake: we absolutely want to make sure that there's input from the person who is going to benefit, or potentially be harmed, by whatever takes place in that high-stakes scenario.
Angelo: Yeah. And I think, when you think about the way we disseminate this kind of technology and knowledge, there's a lot of room for all different kinds of things. It's very eclectic.
When you think about guidelines, guidelines are researched. Then a lot of very capable people, who are part technical and part clinical, start to codify research and guidelines into, effectively, something that can be computable, like a rule. And then we have AI, which is training on data and trying to discover its own rules.
I think both of those are important. But you can't really have one without the other. I think you still need to have decisions based on guidelines and technology that is computable, but of course, that isn't scalable. We can't build an infinite number of those rules, nor do we know what they all are.
So I think you need to have both of them in this equation. Some are more explainable than others, but I think ultimately you need to have these kinds of different technologies. I think that's what actually got me to go back to grad school: stuff like what we did at the interoperability showcase, where we started moving the data.
And we thought, okay, this is great. And I was thinking, well, what's next? Like, how do we understand this data? And you're right, it isn't just us understanding it in a vacuum. You need the patient to be able to truly understand what it's about. But I thought, okay, we need to be able to turn this into something computable so that we can understand it and get other insights. So, I think this is a super important computing problem.
So how do you think that a patient would actually be able to get this kind of input? Like what should we be doing to enable this? Should we create a standard and expect vendors to just find a way to get the patient's preference in there?
What is the right way to advocate, do you think, for the patient?
Dave: One thing I think about is a standard quality measure like mammography. So if I have a quality measure that says women of a certain age, 40 years old, whatever the number is, should have a mammogram, at a certain frequency, annually, every five years.
Whatever the United States Preventive Services Task Force comes up with. And then NCQA and other organizations put that into their quality measurement rules, the literal language of how to calculate that score for a population. It's fairly sophisticated, but it's not something that couldn't be tweaked. And so what I wonder is, could we change something about the denominator of those calculations? Is it feasible for a computer to look at the different aspects of a person's life? You may have to have a conversation with people. You may have to present them with questions in a survey.
But those are difficult to do. And I'm wondering if in the future, someone could have a conversation with their doctor about the risk of breast cancer and their family history and an ambient scribe could sort of automatically figure out, Oh this patient is very concerned about their risk of breast cancer.
And so we definitely want to include them in the denominator of this measure. Or this other patient is very concerned about the worry caused by a false positive, and by doing a biopsy, and the potential harm that can come from that. And perhaps it's reasonable to exclude that person from the denominator of the measure.
So an organization could maybe get a good score even if a group of their patients opt out of that cancer screening, because maybe it's a reasonable thing for individuals who have certain concerns and a certain family history not to have that screening, since doing so would be a kind of psychological harm.
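As a deliberately simplified sketch of the denominator adjustment Dave is wondering about, the fragment below shows a screening-rate calculation that can exclude patients with a documented, informed preference to decline. The age range, field names, and exclusion rule are hypothetical placeholders, not the actual NCQA or USPSTF specification logic.

```python
# Simplified sketch, not real measure logic: a mammography screening rate whose
# denominator can optionally exclude patients with a documented preference to
# decline screening. Age range and field names are hypothetical.
def mammography_rate(patients, min_age=40, max_age=74, honor_preferences=True):
    numerator, denominator = 0, 0
    for p in patients:
        if p["sex"] != "F" or not (min_age <= p["age"] <= max_age):
            continue  # outside the measure's age/sex criteria
        if honor_preferences and p.get("documented_decline"):
            continue  # informed preference recorded; leave out of the denominator
        denominator += 1
        if p.get("mammogram_in_period"):
            numerator += 1
    return numerator / denominator if denominator else None

cohort = [
    {"age": 45, "sex": "F", "mammogram_in_period": True},
    {"age": 52, "sex": "F", "mammogram_in_period": False},
    {"age": 60, "sex": "F", "documented_decline": True},
]
print(mammography_rate(cohort))                           # 0.5 with the exclusion
print(mammography_rate(cohort, honor_preferences=False))  # ~0.33 without it
```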
Angelo: That's a good point. I hadn't thought about that. I mean, there's all kinds of other family history. Certainly, the rules don't take any of that into account. They don't take into account genomics, like do you have the BRCA gene? Things like that.
So, there's more room for this. I know that guideline pretty well. The other side of that guideline is that if you just tweak the age range, you get drastically different results. I think it's 52 to 70-something right now. If you move it to 45, you'll get drastically different data.
And it depends on the setting. Are you reporting it [00:18:00] or are you trying to close a care gap? You might decide to tweak that range because you might not care. Another example, in that same measure, is coverage information. If you've had gaps in coverage, that's what's reported to the government.
But if I'm a provider, I don't actually care about that. You still need to get your mammogram, even if you had a different insurance company. So, you know, we might have to tweak these guidelines a little bit. And I think AI could play a part there, where it doesn't necessarily have to figure out all these factors, but it could add on. In other words, it could tack on new things or different insights, second-guess them a little bit, and say, in this particular case, I think we should override it because of this. That'd be interesting.
Dave: I think one of the reasons why the U.S. Preventive Services Task Force lowered the age for mammography for women is because they had access to data that showed disparities by race. Wanda Nicholson, she's the co-chair of the task force, and one of the things that I learned from her is that if you change the guidelines, then you can potentially catch more disease than you would have if you waited to start screening people.
And so I think in this case, if you look at the population, the group of African American women who were in their forties, they were not getting screened. And so now that the guideline is changing, hopefully we're going to catch more of this. So, the flexibility to change that is probably important from a health equity perspective. But you wouldn't know to make that change unless you had the data for the social identities of people that experience society in different ways. One of the things that I want to help organizations do is figure out how to train their clinicians so that they're more comfortable with this.
I mean, it's not always a comfortable conversation, asking people about their social identity. But if we can explain to patients that this is why it is important that your healthcare professional asks you questions about REL and SOGI, race, ethnicity, and language, and sexual orientation and gender identity, because they want to make sure that they're treating their patients the best way that they can.
Then perhaps patients will be more open. If providers know that the consumer patient population is being educated about this, maybe the providers will have more confidence that they can ask these questions. And so that will improve the underlying data that we have, in order to help us gain insight into how the guidelines ought to evolve.
Angelo: Yeah, and I think there's some education involved there. We've all been asked some questions. Not all providers do this, but when they ask you, you know, how are things going at home? You're thinking, what does that have to do with what I'm in here for? And so there is certainly reluctance to share that.
But I think it would help if there were examples people could see. Say, for example, "if I knew this, we could help, because you might not realize that there's a correlation there." And the reaction is, "I didn't know that's what you did." And I think that's the other part of it: maybe there's educating to do.
So, if we want to include consumers in the conversation, and this segues nicely, I think, into some of your thoughts, which I definitely want to hear, there's also an educational aspect to it, which is not really the place of vendors of technology. I mean, yes, your provider should want to educate you also, but there's a lot that they might not have access to, to be able to get this to you.
And so your thoughts are around creating kind of a new way of thinking, in terms of having nonprofit organizations out there that help you advocate on AI, on these social drivers, and in other ways. Tell me a little bit more about that.
Dave: Yeah. Let's go into social drivers a little bit.
So one of the statistics that gets thrown around is that 20 percent of your health status is impacted by the health care services that you receive. And 80 percent of your health status is determined by the social circumstances and the environment that you exist in. So we can do all the health care service improvement that we want.
It's not going to have as big of an impact as improving the environment that people exist in. And so the question is, what is the role of a healthcare services enterprise in impacting the social circumstances that someone exists in? I would say it's a good debate. You know, we want people working on housing to continue working on housing.
We don't want to healthify the housing insecurity issue. On the other hand, there may be something a healthcare organization can do to help a patient become eligible for housing services. And there can be some type of support to the organizations that solve housing insecurity problems, because of the weight or the prominence of a healthcare organization in helping housing organizations get funding or rezoning or whatever it is that helps solve the housing crisis. The health aspect of it is just another data point,
another facet of the argument about why it is good to help people solve their housing problems. So this is similar to the social identity piece: if we can explain to people, this is why your healthcare organization is asking you about your social circumstances. The screenings that we have for housing, transportation, food insecurity, all the things the Gravity Project covers, you know, there are almost a dozen domains that the Gravity Project has identified, and HL7 standards for the messages and the questions and the responses.
So that when you diagnose someone with a social need, you can provide an intervention and do a referral and track all of this, and see if the social services that you're referring people to improve the unmet social needs that impact their health. And so the AI component here, I think, is that we possibly can use this new technology tool to figure out the best way to help someone meet these needs that they have. It's a complicated new thing, figuring out how healthcare and social care interact with each other. And if the technology vendors who are involved [00:24:00] in collecting data about social needs and the referrals to social care can compute that data in an advanced way, we can determine who fares better.
What are the practices that we have in place that allow someone to actually get their transportation insecurity addressed? So, that they can have better access to care, get their prescriptions refilled, things like that. That's where I hope we can interject with artificial intelligence advocacy work, is to make sure that we're also using the technology to find out the best ways to help people in the real world, in their real life.
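A simplified sketch of the closed loop Dave describes, screen for a social need, refer to a community-based organization, and record the outcome so the data can be analyzed later, might look like the following. The domain names, outcome values, and organization names are illustrative placeholders, not the Gravity Project's actual value sets.

```python
# Simplified illustration of a screen -> refer -> track loop for social needs.
# Domains, outcome values, and organization names are hypothetical examples,
# not the Gravity Project's published terminology.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SocialNeedReferral:
    patient_id: str
    domain: str                     # e.g. "housing", "transportation", "food"
    referred_to: str                # community-based organization
    outcome: Optional[str] = None   # e.g. "need_resolved", "unable_to_contact"

@dataclass
class Patient:
    patient_id: str
    screenings: dict = field(default_factory=dict)   # domain -> unmet need found?
    referrals: List[SocialNeedReferral] = field(default_factory=list)

def refer_if_needed(patient: Patient, domain: str, cbo: str) -> Optional[SocialNeedReferral]:
    """If screening found an unmet need in this domain, create a trackable referral."""
    if patient.screenings.get(domain):
        referral = SocialNeedReferral(patient.patient_id, domain, cbo)
        patient.referrals.append(referral)
        return referral
    return None

p = Patient("p-001", screenings={"transportation": True})
r = refer_if_needed(p, "transportation", "County Ride Program")
r.outcome = "need_resolved"   # reported back by the CBO; this is what closes the loop
```

Once outcomes flow back this way, the "who fares better" question becomes an ordinary analytics problem over the referral records.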
Angelo: Yeah, okay. So those are some things that hit a nerve with a tech guy like me, because one of the problems is that we can't really get good data to these organizations, nor get good data back from them. A lot of them are still working with paper charts and clipboards and things like that.
In reality, we make all these guidelines, just like you said. Now, it was interesting, I didn't know that stat about 20 percent of our outcome comes from the providers. But I think what's interesting is, we're trying to help that become as efficient and effective as possible.
The technology is enabling providers and that's important. But what is interesting is it's really about closing gaps, especially if you think about value-based care. It's about closing the gaps and helping people get better. So when some of the most effective items are things we can't measure, it becomes a problem.
Like, how do you know what you're doing is effective in closing these gaps? I mean, you can kind of just look at the data and see whether things are getting better in general, but you don't know why. What's nice about Gravity and some of these other efforts is that if we can come up with ways of describing what the data might look like, a standard, so that we could conceivably get this data moving back and forth, we could actually see the results. And then start using that in our calculations and measurements and say, the gaps are closing and we can attribute it to this, because look at this factor in this data. And then you can start seeing what's effective, and start helping organizations that are effective become better funded, and organizations that aren't as effective learn how to be more effective.
And I think that that's the promise of data in general. But this is an area that I think has a lot of room for growth. But I could definitely see value in closing gaps.
Dave: It's a real issue on the social care side of things in terms of information technology resources. Community-based organizations are typically not very well financed in many cases, and so they don't tend to have the kind of information technology staff or budgets for licensing software the way that healthcare services does.
So what can we do to address that? There are companies out there like Unite Us and findhelp. There are efforts like the Los Angeles 211 group and United Way, and other states have set up social care hubs and data exchange frameworks. You know, the state of New York is having to figure this out because Medicaid in New York is going to operate in 12 different regions. And there are a dozen or so Medicaid managed care organizations and hundreds of community-based organizations. Who's going to build the technology framework for those 12 regions, 12 MCOs, hundreds of CBOs? This is a big problem. And I'm not sure that there's one vendor out there. Maybe this should be an open-source project that a nonprofit should take on. Maybe one of these AI groups could take this on as one of the things that they could build on top of, to determine the best way to spend money on social care to actually help people. These are the kinds of things that I've been thinking about lately, just hearing from experts in the integration of social care with healthcare: how are we going to address this technology resource gap on the social care side of things?
Angelo: Yeah, that is a big problem. When you think about it, the world I spend a lot of my energy in is this guideline calculation world.
So, we're producing these outputs. We're calculating who's in a cohort. Diabetes A1C is a measure, and you know who's in the numerator. But then it kind of ends there, and you hope somebody closes the gap. You can't really take it any further. Even if I knew who to send this to, like what list this should go on, I couldn't get it to that organization. And then even if somehow we were able to get it to the organization, we want to be able to get the responses back to know what happened, what's effective, what's not effective. And I think that you're right. It's a tremendous problem. It's super important, and it's really a tremendous problem, but I do understand that in some ways you have to be realistic and say, well, let's start by surfacing the gaps with technology.
At least let's do that. And that's where I've spent kind of all my energy. And now we're seeing, okay, I think we're able to start surfacing and AI is moving at this really great rate. The next step would be, okay, how do we close them? Can we find a way to, kind of close the loop of data so we can close the gaps?
I think that's a challenge. I don't know that we have the answer to that, but I do like that we're starting to look at things outside of the traditional health sphere, at the other parts of our lives that are really important.
Dave: Yeah. And I think that many times the solution is not a technology solution. It's a human services solution. But how do you help the human service professional be more efficient? Well, that's where the technology comes in.
Angelo: That's it. Exactly. And that's, I think, the appropriate use of AI, by the way: making humans effective. I don't think these moonshots are where we should focus.
We spend too much energy on that. I think it should be on effectiveness. That feels like low-hanging fruit. That feels like things we could accomplish today without the next decade of technology. We could do that today. We can make people more effective.
Dave: Yeah. I think one of the things that has frustrated me over the last couple of decades working on this is, you know, you see a data technology vendor with a marketing claim: oh, we can predict who is going to develop diabetes. And maybe that's a good use of technology, because perhaps it's really hard to predict who is going to be at increased risk in a population so that you can devote your resources [00:30:00] to those individuals. But I think the real value is not in identifying the people, it's in what you do next. That's where we need innovation and creativity. How do you help lower-trained community health workers do some of the things that today we rely on nurses to do? There's a massive nursing shortage because nurses are retiring or leaving the field because of burnout and threats of physical violence.
And it's really put nurses through the wringer, definitely during the pandemic, and that's continuing. For the people who want to become nurses, there are not enough trainers to train those individuals. You know, we don't have enough spots in the nursing schools. And so I don't think we're going to be able to build up the nurse force that we need in order to care for people who are developing these chronic conditions. So how can the technology help lower-trained, CNA-type workers in the field do some things with oversight from a nurse? Make sure that people are getting the right care, coordinating that care, addressing issues that they're having around managing their symptoms in really simple ways.
That's what I'm hopeful for is that some of these technologies can augment the highly trained nursing staff with individuals who can provide the care that's needed with the expert system support behind them.
Angelo: Yeah, I agree. This idea that technology is going to predict disease,
I just feel like it's not going to happen. It's not possible. But let's just say we have the technology today to predict that you're going to get diabetes. We wouldn't be able to make a difference, because it's all about what happens after that, just like you said. So if I knew I was going to get it, I still couldn't change my outcome.
So that's, I think, where we could start making our health system in general more effective. Then I think we're onto something. We'll have much more impactful healthcare than we would with these moonshot predictions of being able to predict all these kinds of things.
But the reality is, I'm not even sure that's possible. So, as one AI guy, I'll admit it; I'm willing to go out on a limb here and say it. Yeah, I don't think you can predict something that nobody can predict, right? There has to be some way to understand the data.
Dave: The last thing here on getting consumers involved with artificial intelligence. So, the thing we just talked about: let's say a company like Hippocratic AI trains a model, and they work with a bunch of nurses to train that model. So, okay, now you have this really good language model that can support any kind of care management activity. Help a patient understand that they need to keep taking their high blood pressure medicine, even though they don't feel any symptoms when they don't take it, because you're trying to prevent strokes and heart attacks. Okay, great. That's wonderful that you've had clinicians train a model. Maybe that was the problem with IBM Watson, that we didn't do enough.
We didn't hire enough clinicians to train that model, perhaps. And so it feels like we're making progress in 2024. But what I worry about is that we perhaps don't have enough patient advocates and consumer advocates assisting with the training of that model, so that the output is also something that addresses the concerns that people have.
What are the information needs that people have? What are the questions that they feel like they need answered? What is the media format? Maybe text is not the right media format. Maybe we really need to focus on medical illustrations and videos. It's a really difficult problem, especially in healthcare. You've got to be careful with these things. You don't want to create a health video using generative AI where the anatomy is incorrect.
Angelo: No, you don't.
Dave: That's really scary. But maybe that's the media preference that people have, and that's where we should really be focusing, because visual communication is something that clearly people have a preference for. Otherwise, TikTok wouldn't be so popular.
Angelo: That's a really good point. That is true. A lot of us are visual learners. Well, Dave, thanks for joining us today. I know we're running out of time here. And thank you to all our listeners; we know that without you, this wouldn't be possible. Before you go, please take a moment to like and subscribe to the podcast on your favorite platform. I'm your host, Angelo Kastroulis, and this has been Counting Sand. You can follow me on Twitter, AngeloKastr, or on LinkedIn, angelok1. And Dave?
Dave: On LinkedIn at DaveFoster