Health-e Law Podcast Ep. 2

AI as an Aid: Emerging Uses in Healthcare with Jim Gatto of Sheppard Mullin

Thank you for downloading this transcript.

Listen to the podcast released December 7, 2023 here: https://www.sheppardmullin.com/multimedia-537

Welcome to Health-e Law, Sheppard Mullin's podcast addressing the fascinating health-tech topics and trends of the day. Our digital health legal team, alongside brilliant experts and thought leaders, shares how innovations can solve some of healthcare’s (and maybe the world’s) biggest problems…if properly navigated.

In this episode, Sheppard Mullin partner Jim Gatto, co-chair of its AI Team, joins us to discuss the AI revolution in healthcare, including how it is enhancing and improving the industry while navigating ethical and legal risks.

About James G. Gatto

James G. Gatto is a partner in the Intellectual Property Practice Group in Sheppard Mullin’s Washington, D.C. office, where he also serves as Co-Leader of the firm’s Artificial Intelligence Team and Leader of the Open Source Team.

Jim’s practice focuses on AI, blockchain, interactive entertainment and open source. He provides strategic advice on all aspects of intellectual property strategy and enforcement, technology transactions, licenses, and tech-related regulatory issues, especially ones driven by new business models and/or disruptive technologies. Jim has over two decades of experience advising clients on AI issues and is an adjunct professor who teaches a course on Artificial Intelligence Legal Issues. He is considered a thought leader on legal issues associated with emerging technologies and business models, most recently blockchain, AI, open source, and interactive entertainment.

About Sara Shanti

A partner in the Corporate and Securities Practice Group in Sheppard Mullin's Chicago office and co-chair of its Digital Health Team, Sara practices at the forefront of healthcare technology, providing practical counsel on novel innovation and complex data privacy matters. Using her medical research background and HHS experience, Sara advises providers, payors, start-ups, technology companies, and their investors and stakeholders on digital healthcare and regulatory compliance matters, including artificial intelligence (AI), augmented and virtual reality (AR/VR), gamification, implantable and wearable devices, and telehealth.

At the cutting edge of advising on "data as an asset" programming, Sara's practice supports investment in innovation and access to care initiatives, including mergers and acquisitions involving crucial, high-stakes and sensitive data, medical and wellness devices, and web-based applications and care.

About Phil Kim

A partner in the Corporate and Securities Practice Group in Sheppard Mullin's Dallas office and co-chair of its Digital Health Team, Phil Kim has a number of clients in digital health. He has assisted multinational technology companies entering the digital health space with various service and collaboration agreements for their wearable technology, along with global digital health companies bolstering their platform in the behavioral health space. He also assists public medical device, biotechnology, and pharmaceutical companies, as well as the investment banks that serve as underwriters in public securities offerings for those companies.

Phil also assists various healthcare companies on transactional and regulatory matters. He counsels healthcare systems, hospitals, ambulatory surgery centers, physician groups, home health providers, and other healthcare companies on the buy- and sell-side of mergers and acquisitions, joint ventures, and operational matters, which include regulatory, licensure, contractual, and administrative issues. Phil regularly advises clients on matters related to healthcare compliance, including liability exposure, the Stark law, anti-kickback statutes, and HIPAA/HITECH privacy issues. He also provides counsel on state and federal laws, business structuring and formation, employment issues, and matters involving state and federal government agencies.

Transcript:

Phil Kim:

Today's episode is on AI in healthcare, its applications, and legal issues.

Sara Shanti:

I'm Sara.

Phil Kim:

And I'm Phil.

Sara Shanti:

And we're your hosts today. We want to thank you all for joining.

Phil Kim:

We're pleased to have Jim Gatto here with us today, who is our partner here at Sheppard Mullin. His practice focuses on intellectual property, blockchain, AI, financial technology, among other emerging tech and business models. Jim serves, of course, as the co-leader of our artificial intelligence team, co-leader of Sheppard's blockchain and fintech team, and he also leads Sheppard's open source team. Jim also publishes The Legit Ledger, a podcast focused on the latest trends in blockchain. Jim has over 35 years of legal experience focused on all aspects of IP strategy, technology transactions, and tech-related regulatory issues. Thank you so much for joining us here today, Jim.

Jim Gatto:

Thanks. It's a pleasure to be here with you guys.

Sara Shanti:

We're seeing AI in the headlines constantly, and a lot of folks are opining on it. But given your deep background, can you first, in the simplest terms, explain what AI is and why we're seeing this explosion in healthcare specifically, right now?

Jim Gatto:

It really is using computers, algorithms, and data to try to mimic aspects of human intelligence. Many people debate whether AI really is intelligent or whether it's just as good as how it's programmed, and people are entitled to different opinions on that. But if you look historically, AI has been around since the '50s; the term was coined in the '50s. There were various attempts at different ways to approach AI, and one of the earliest was a much more rules-driven approach. You'd have an expert in a certain subject matter, they would create a set of rules for computers to follow depending on certain circumstances, and it required trying to map out all the knowledge in that domain. That really didn't work. I would say it was really more artificial than it was intelligent. Fast-forward to where we are today: algorithms truly do learn. There are numerous examples where AI has done things that humans have not done, which shows that it's not just memorizing things, but truly learning. I think where we are today, it's more intelligent than it is artificial.

Generative AI is a different animal. Generative AI is often trained on many different types of content: text, images, video, music, software. It's trained more on expressive material, and the output of generative AI is typically something more expressive as well. So it can output images, reports, analysis, music, video, and all other kinds of creative content. That's really what generative AI is primarily focused on.

The reason we had this tipping point in November 2022 is that there are really at least three things that need to come together for AI to truly work and to be intelligent as opposed to more artificial: you need a tremendous amount of data, you need a tremendous amount of computing power, and you need good algorithms. Now, we've had the algorithms for a long time. Many of the algorithms being used are essentially open source algorithms at this point. There's innovation happening as well, but they've been around for a long time. It's really the confluence of computing power and data that's enabled AI to take off like it has.

Sara Shanti:

Where are you seeing AI being applied in healthcare?

Jim Gatto:

When we think about healthcare, it's everything from healthcare administration on up. So much of what goes into running a hospital or other facility is going to be, and is in the process of being, transformed by AI to wring out incredible efficiency. Traditionally, tools were programmed to look for what the programmers told them to look for. Now, AI is being used for those purposes, plus identifying relationships that were unknown before. Again, that's part of the intelligence here: it's finding connections in the data that people previously didn't know about, and that's part of what makes it so powerful and different from what's been done for the last couple of decades. So we're seeing a tremendous amount of use in that area. In fact, a Gartner report predicts that by 2025, generative AI will be used in more than half of all drug discovery and development initiatives.

Certainly when it comes to medical professionals performing their jobs, one of the big questions I get, and this comes up in every industry, is will AI replace people's jobs? The short answer is it will in some cases, and in other cases it will augment people's jobs. When it comes to medical professionals, I think there will be much more augmentation than replacement. It's been demonstrated that AI, using computer vision, can do a better job of examining X-rays than many radiologists. And that's not a slight on radiologists. You're not going to eliminate radiologists; rather, radiologists will leverage AI alongside their own skills, and you'll get a better result than what might be possible otherwise.

One of the big problems with AI, and this is why it will never replace professionals like medical practitioners, is that as good as it is, it's not perfect, and it's often wrong. It often produces what are called hallucinations, or bad results. I don't think we'll get to the point any time in the near future where AI is reliable enough for important things like medical decisions to be made without a human involved in the process. So I think augmentation is a really big factor for those types of jobs. For administration and other jobs like that, there will be replacement. And then, just in general in the field of diagnostics, AI can find issues, problems, and connections that sometimes humans can't.

Phil Kim:

Yeah, it sounds like despite all the hallucinations and issues that we see with AI, it will, as you mentioned, augment and improve. And as far as impacting quality of care and access to care, it'll only enhance those.

Sara Shanti:

Phil and I hear about this all the time: the epidemic of isolation, eldercare, home care, and monitoring within the home, on both the prevention side and the chronic side, where it's not necessarily provider interaction but patient interaction with a smart device or a smart app.

Jim Gatto:

Let me take the eldercare one first. There's a group I work with; it's actually more like an AI incubator. They have a set of companies with some really interesting core technology in connection with eldercare. For example, using just video, which could come from your phone or a camera you have set up, and applying AI-based gait analysis, they're working on being able to predict when elders may fall. They've identified patterns within the gait that predict falls. That can prevent falls, instead of trying to figure out why someone fell or how to treat them afterwards, which may be important, but if you can prevent the fall in the first place, it's even better. That type of technology, I think, is going to become more ubiquitous. And again, that's the consumer side of this: people will use this technology to help themselves in areas like eldercare and the like.

With respect to isolation, it's really interesting. One of the areas we haven't touched on is robotics, but there's a growing number of humanoid robots that will serve as companions, and a lot of them are helpful. They do everything from cleaning your bathroom to, over time, interacting with people so that it almost feels like there's a friend there, someone to speak to. I know it may seem a little futuristic, but a lot of these humanoid robots are actually out in the market right now. They're still early, but they're rapidly gaining in usefulness. So I think that's something we'll see a little more of as well.

AI can also be used in a lot of ways to predict when people are experiencing loneliness. Based on what you say and what you write, it can in many cases actually detect human emotion; there's a lot of technology around that. Determining when people are sad or depressed is one of the things these tools can help identify and flag, whether for loved ones who want to take care of them or for medical attention. Again, it could be shared with providers, etc. So I think we'll see all of those areas continue to improve over time.

Phil Kim:

Given that we're all attorneys here, what are some of the biggest risks or the biggest legal issues that you see at the confluence of AI and healthcare?

Jim Gatto:

Yeah, for any given application there may be some specific issues, but let me hit the broad categories that generally cover most of them. One of the first and biggest issues is data. As I mentioned earlier, all AI is trained on different forms of data. If it's personally identifiable information or health-related information, it's obviously important for companies training these models to make sure they have the right to use the data they're training on. There are already about a dozen lawsuits alleging that some of these large language model tools were trained on data the developers didn't have a right to use, some of which is personally identifiable information, some of which is biometric privacy protected information, and some of which is health-related information. So on data and training: it's really important that anyone who gets involved in training their own models ensures they have the right to use the data they're using.

A second issue we touched on earlier is inaccuracy. Reliance on these tools without checking the outputs can lead to liability. There's a suit pending, not so much health related, but it's a defamation suit where the output said that a man was involved in a lawsuit for defrauding some foundation, and, A, there was no such lawsuit, and B, he'd never been involved with the foundation. So that's one example.

Another big area that needs more attention, and this is something the FDA and the FTC are both very focused on, is bias and discrimination in some of these tools based on the data they're trained on. A simple example: one very useful category of apps out there right now lets you scan your skin for certain skin conditions and can help identify or predict what a condition may be. That information can then be provided to a provider. But if those models are trained entirely on images of lighter skin, the tools may not work for someone with darker skin. There have already been examples of exactly that.

This is just a very simple example of how the dataset a tool is trained on can introduce problems; sometimes it's not done intentionally, it's done unknowingly. And this is part of what the FTC is focusing on: you need to think through your training data upfront to make sure it's representative of the people who are going to use the tool, so that you don't have results that are biased or that discriminate because the tool doesn't work for certain classes of people.

The other thing I would say is that there are a lot of IP issues around this, starting with the number of patents being filed in this space. I spoke at an event the patent office held on AI, and as part of that event, the commissioner mentioned that 18% of all patents filed this year are AI related, across all technologies. That's a really staggering number, the most I've ever seen for any technology. So there's certainly a lot going on in the patent space and other aspects of IP. And because the output being generated is expressive content, there are two issues. One, a lot of these models are trained on copyright-protected information, whether images or other material, and a dozen of the lawsuits allege that there was no right to use that training data, making the training itself copyright infringement. But the outputs can also be an infringement, so users can have potential liability as well. Understanding those issues is important.

And the last thing I'll mention that's really a big issue is that each of these tools operates differently, and each has different terms of use. When you input information into one of these tools, in some cases that information is kept confidential; in some cases it's not. So it's important not to put confidential information into a system that doesn't treat it as confidential. Some of these systems tell you right upfront that they're going to use your information to retrain their models. If a business, whether a provider or an institution, is putting information in, it needs to understand that some of its sensitive business information may be exposed, if that's what's being put in. So it's important to understand how the tools work and what the terms of use are, to know what's protected and what's not. Getting solid advice on the range of issues, and on the specific issues that may arise with any particular application, is really critical.

Phil Kim:

Yeah, we would completely agree with that. Well, Jim, thank you so much for joining us on this episode of Health-e Law. We look forward to seeing you all on our next episode. In the meantime, Jim, the AI team, and all of us on the Digital Health Team at Sheppard Mullin are here on standby. Thanks for joining us.

Contact Info:

James G. Gatto

Sara Shanti

Phil Kim

Additional Resources 

Jobs of Tomorrow: Large Language Models and Jobs

5 Things Corporate Boards Should Know About Generative AI Risk Management

* * *

Thank you for listening! Don't forget to SUBSCRIBE to the show to receive new episodes delivered straight to your podcast player every month.

If you enjoyed this episode, please help us get the word out about this podcast. Rate and Review this show on Apple Podcasts, Amazon Music, Google Podcasts, or Spotify. It helps other listeners find this show.

This podcast is for informational and educational purposes only. It is not to be construed as legal advice specific to your circumstances. If you need help with any legal matter, be sure to consult with an attorney regarding your specific needs.

