Navigating Mass Tort Liability: Leveraging Legal AI for Strategic Litigation Management
Litigate with Insight: The LMI Podcast | S:1 E:1
In this episode, we dive into the complex world of mass tort litigation and how legal AI is transforming the landscape for attorneys and legal professionals. Join us as we explore the challenges of managing mass tort cases, the role of artificial intelligence in streamlining processes, and practical strategies for leveraging AI tools effectively.
Join Megan Pizor, LMI’s General Counsel and Chief Data Officer, and Angela Browning, LMI’s Chief Strategy Officer, distinguished legal professionals with extensive experience in mass tort litigation, as they share their insights and expertise on harnessing the power of AI to enhance case management, streamline document review, and identify key patterns and trends in litigation data.
[Music playing]
Angela Browning:
Hi, and welcome to Litigate With Insight, the LMI podcast. A podcast about the intersection of data, analytics, technology, and innovation in the legal industry.
In each episode, we bring together experts, thought leaders, and industry pioneers to explore how these cutting-edge topics impact litigation. I’m your host Angela Browning, Chief Strategy Officer at LMI.
On today's episode, we dive into the complex world of mass tort litigation, and how legal AI is transforming the landscape for attorneys and legal professionals with general counsel for LMI, Megan Pizor.
Welcome Megan, why don't you start off by telling us about your history with litigation management.
Megan Pizor:
Well, thank you for having me, it's a pleasure. I have been with LMI (Litigation Management) for about 14 years, and I've had multiple roles within the company. But my most recent has been the role of general counsel and chief data officer, wherein I wear two hats.
I am legal counsel for the company, so I get to manage the general legal and legal operations, ranging the gamut from intellectual property to contract management, to data privacy, which leads to my second hat, which is the chief data officer role, which includes more of that data privacy, but also includes the use of technology and tools to better manage the data of our clients who are typically involved in mass tort litigation.
Angela Browning:
Now, we're hearing a lot in the legal industry right now about AI and the use or misuse of AI in the legal industry. Why don't you spend a few minutes giving us an overview of the current status of AI and what uses there are currently in the industry.
Megan Pizor:
Current uses of AI, that is the question, isn't it?
I want to start with what is AI?
There is definitely some inconsistency when trying to find a good working definition for AI. In fact, many applications that were once labeled AI are really no longer perceived that way, because they've become useful and common enough that no one thinks of them as AI anymore.
Think about Google or any other search engine that can predict your search based on just typing in a few letters. Is that AI? There's actually a name for this phenomenon: “the AI effect.” As soon as a technology successfully solves a problem, it's no longer considered AI.
That said, however, I think we can say AI can best be described as the use of machines that have been programmed to replicate human intelligence, and essentially, perform tasks that require cognition.
Think of reasoning, problem-solving, even learning from experience where computers previously just used to execute specific instructions provided by a human user. AI actually goes to the next level and allows for the ability to learn and even think.
In the legal context, AI technologies are being utilized in quite a few ways. I think the use cases will absolutely grow over time, but some of the current uses include e-discovery. This is probably the most well-known of AI in the legal industry. AI algorithms can sift through lots of business documents such as emails, word processing documents, et cetera, and help lawyers find relevant information quickly and efficiently.
They can even learn from human reviewers to effectively predict things. Think of document responsiveness, relevance, and even perform tasks such as issue tagging or marking of privilege content. Again, this is probably the most recognized use of AI at present.
Another area, however, is legal research. There are AI-powered tools that can do a lot to enhance research efficiency. Platforms can actually be used to analyze legal databases, statutes, regulations, case law, and then provide relevant insight to the end users, the attorneys, the paralegals. These tools do tend to work best with human oversight, however, which I will probably talk quite a bit more about throughout the rest of the episode.
Contract management is another good example of AI use. It can generate standard contracts and agreements, and really, automate the repetitive tasks that lawyers and paralegals are doing and allow them to focus on more high-end analysis and tasks.
It can also do initial contract reviews, help identify potential risks, extract key terms to flag attention to potentially important content that might require further review from humans later in the process.
Case assessments can also benefit from AI. There are tools and algorithms that can actually predict case outcomes based on historical data. Lawyers can then use those predictions to make basically better and more informed decisions regarding strategy.
Another use of AI in the legal industry is due diligence. Think of corporate mergers, acquisitions, and other business transactions – AI can actually assist with due diligence by analyzing documents and identifying potentially critical information for the attorneys who are conducting the due diligence, the mergers, the acquisitions.
Website chatbots are also becoming pretty common, even if they're maybe not traditionally thought of as legal industry tools. AI-powered chatbots can actually handle routine legal inquiries via law firm websites and really any business-to-consumer website.
Angela, I'm sure you've probably actually encountered this, maybe you've visited a website, and a chatbot pops up and asks if it can help you with something. You then type in a question or maybe select one of the options, and the bot responds and answers your questions, directs you elsewhere to the website, or maybe a person then follows up if your question exceeds the bot's capabilities. Law firms are using this increasingly to free up lawyer and paralegal time in responding to basic repetitive questions.
And lastly, I'll mention intellectual property management. AI is really starting to help law firms and corporate counsel manage IP portfolios by doing things such as tracking patent filings, identifying potential infringements, and other related tasks.
So, I think to summarize, AI really can help legal professionals by streamlining processes, improving efficiency, and enhancing legal services. However, as I previously noted, it is best with human oversight, at least for now, to mitigate against potentially erroneous data.
Angela Browning:
Megan, that's a lot of coverage in terms of AI and its current use. Now, when I'm thinking about LMI and our role in the legal industry, can you share a little bit about how AI is being used in mass tort litigation?
Megan Pizor:
Absolutely. For the most part, it's being used in a lot of the same ways that we just discussed. However, given some of the specific scenarios that arise unique to mass tort litigation, I think we can tailor some of those use cases to be a little more applicable to larger scale litigation, particularly involving injury.
We talked a little bit about initial discovery and e-discovery. For mass tort cases, maybe consider a manufacturer that's facing multiple lawsuits related to a single product. There could be myriad different discovery requests in each of the different lawsuits or cases.
AI can use prior discovery responses to actually draft responses or even objections to new requests, which can really save attorneys a lot of time in the mass torts context with so many parties involved.
Another AI use might be data organization and analytics. AI can be very helpful in organizing and analyzing data sets across entire plaintiff populations, for example, potentially identifying trends or patterns that wouldn't have been easily seen or noticed by human reviewers. Identifying patterns, such as specific types of injury caused by, say, chemical exposure or a medical device, can also help the attorneys make much more informed and data-driven decisions when developing their legal strategies.
Client communication can also benefit from AI in mass tort litigation. Think of the chatbots we talked about earlier. Tools such as these can really help to enhance client communication when a firm is representing a large inventory of litigants. Chatbots and virtual assistants can perhaps provide case status updates to clients, answer common questions, and really just overall improve client satisfaction, given the ability to be so responsive when you have a large inventory of clients.
Another area that many may not consider is risk assessment in litigation financing. Third-party funders are increasingly being used in mass tort litigation, and they can actually use AI, and more specifically natural language processing tools, for financial risk assessment purposes. For example, they might analyze a particular law firm's track record and gauge success probabilities to make better investment decisions.
And lastly, medical record summarization. In mass tort cases, attorneys often need to analyze a lot of medical records. Thousands of pages of medical records. There's a lot of discussion in the industry around this and whether and to what extent AI can assist by helping to sort through some of this volume, and whether it can potentially create summaries or put records in chronological order for the attorneys that are either pursuing or defending the cases.
And I will say I think AI can almost certainly help, for example, by identifying some dates or keywords, but I also want to mention that there are some very real challenges still being presented by medical and, I guess, other uniquely complex record sets. But again, I think we'll talk more about this as we navigate this episode.
Angela Browning:
So, I'm hearing a lot about data and documents. How do you navigate the complexity of all of those things wrapped together when trying to manage mass tort litigation?
Can you share the importance of capturing data accurately at the outset and what the risks are if you don't?
Megan Pizor:
I love this question because you are really speaking my language right now. I would say accurate data is arguably the foundation of really any solid information management strategy, including the application of any form of AI. I am a very firm proponent of the concept that what comes out is only ever as good as what goes in.
Angela Browning:
That's exactly what it is. I think about a lot of different databases I've worked in, and if it's garbage going in, it's absolutely garbage going out.
Megan Pizor:
Agreed. I would say quite simply, data accuracy leads to more reliable and therefore, more effective results. So, I'll give some examples.
First of all, overall performance and generalization. AI models typically learn from data via some form of training. If the training data is not accurate, if it has any form of bias, the resulting performance will likewise be biased or inaccurate.
So, speaking of bias: bias mitigation. Biased data can actually lead to discriminatory AI systems. Biased training data can and has resulted in biased predictions related to race, gender, and sometimes other sensitive attributes.
So, that's certainly an area where data accuracy can be key. We can also consider overall trust and decision-making. Trust in AI depends on its accuracy. Inaccurate results would erode trust and confidence for anyone including attorneys, practitioners, paralegals.
This is increasingly important I think because AI is being used every day for more and more critical decision-making. Think of document relevance in legal discovery and even doing medical diagnoses. Trust is critical and it requires accuracy.
Another consideration is maybe robustness and resilience related to data sets. The more “noise” there is within a data set, meaning the more data that's either inaccurate or simply unrelated, the harder the AI has to work, and therefore, the harder it is for the AI system to perform well.
Cost efficiency is another key factor, maybe the key factor. Training is typically involved in AI, as I previously mentioned. It requires computational resources and maybe more importantly, time. Inaccurate data not only wastes the computational resources, but it also wastes training time and then requires even more time to either retrain or review for accuracy, et cetera.
I'll say a quick side note here for anyone out there that may be a little more on the data nerd side. For generative AI models, there are techniques such as something called Retrieval-Augmented Generation or RAG that can help enhance accuracy and reliability. But that's also probably a rabbit hole for exploration maybe in a future episode.
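[Editor's note: for the data-nerd aside above, here is a minimal, hypothetical sketch of the RAG pattern. Real RAG systems use vector embeddings and an actual language model; this toy version uses simple word overlap as the retriever and just shows the core idea, that relevant source text is retrieved first and placed into the prompt so the model answers from real documents rather than from memory. The sample documents are invented.]

```python
# Toy illustration of Retrieval-Augmented Generation (RAG):
# retrieve the documents most relevant to a question, then build a
# prompt that grounds the model's answer in that retrieved text.

def score(query, doc):
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k documents that best match the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, corpus):
    """Augment the user's question with retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Plaintiff Smith alleges injury from Device X implanted in 2019.",
    "The court granted the motion to consolidate discovery in the MDL.",
    "Invoice 4471 covers office supplies for the Cleveland branch.",
]

prompt = build_prompt("What injury does Smith allege?", corpus)
```

Because the answer must come from retrieved text, a RAG setup gives the model less room to hallucinate, which is why it helps with accuracy and reliability.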
Angela Browning:
And I'll certainly explore that name, RAG, that's an interesting acronym for a name.
Megan Pizor:
Not the most delightful, but there we have it.
Lastly, there are absolutely some legal and ethical considerations. As in any area, reliance on incorrect data leads to mistakes which can have legal and ethical ramifications. For example, consider inadvertent privacy violations.
So, when exploring AI tools or partnerships, potential users, lawyers, paralegals should really ask about strategies for keeping data accurate, clean, and normalized for best results. And I will say that reputable vendors can and will offer insight not only into their own tools but also best practices generally for getting to the most accurate data.
Angela Browning:
That's a partnership that is critical in this realm: making sure that the people we work with, the legal practitioners and their clients, understand what these risks are and the ramifications of not collecting data in a way that allows AI to do the things they intend it to do.
I would love to hear some success stories on the use of AI in the legal field, and what lessons could be learned from them, more specific to the mass tort industry.
Megan Pizor:
Certainly. I think much of the success in mass tort litigation has really been similar to other areas in the legal industry where AI adoption has been successful. I mentioned previously e-discovery. I think predictive coding and analytics can be really of extra value in mass tort litigation or any mass litigation, where there is the potential for just much higher volumes of discovery data. Being able to add efficiencies in, reduce cost, reduce time to matter resolution, that can really be very powerful in larger scale litigation, tort, or otherwise.
Document discovery tools with AI capabilities really can speed up the discovery process and keep cost down, which is obviously critical to anyone.
I think there are also some really interesting predictive modeling tools that are generating potential case outcomes based on historical data. I alluded to this previously, but some of the tools out there are really doing some interesting things with looking at prior case law, looking at the way judges may have ruled on particular issues and taking large data sets to allow for predictions that might help attorneys either assess the strength of their case or potentially even negotiate settlement strategies.
They can also help put together inventories of settlements or determine where case strengths or weaknesses may be, which might help to bucket plaintiffs into different categories for resolution purposes.
Angela Browning:
Now, the legal industry is, in general, risk-averse, so its use of these tools is, I think, a little bit behind some other industries that have been using them for far longer. Given the ethical implications, what precautions should legal professionals take when utilizing AI tools in their practice?
Megan Pizor:
Very, very good question. AI offers really significant benefits, but it also can present some really significant challenges. Although the technology is advancing every day, AI is not (at least not yet) an all-encompassing solution; there's no easy button, again, at least not yet. I'm excited to see where the future goes, but for now, there are still risks and limitations, and human oversight is still absolutely necessary.
First of all (and Angela, I think you might be able to even comment on this), there's a lot of AI hype. There are more and more tools and companies out there promising a lot of things. AI is very exciting, and quite frankly, I really think the future possibilities are potentially endless, but there are still limitations.
There are risks, best practices, areas that require more development before truly adding real value. And I think it's critical for lawyers and firms to make informed decisions, and to be cautious of trendy buzzwords or vendors that may over promise and then under deliver. And again, I suspect you may have experienced some of this.
Angela Browning:
We definitely see that at LMI with some of our clients who have tried some of these tools and have expressed to us what they are anticipating getting in terms of a deliverable. And it's not necessarily the case.
We end up doing the work for them in more of a traditional way using our proprietary technology, but it doesn't have the bells and whistles that others out there are touting AI can deliver, at least in our realm. Could it get there eventually? Possibly, and probably, but it isn't there right now, and we're certainly seeing that on the back end.
Megan Pizor:
Absolutely. And there are also some ethical concerns. It's not always easy to have insight into exactly how an AI tool is functioning, but fairness, transparency, accountability, those are all still important in the practice of law and really business best practices generally. Without transparency, there absolutely can be ethical issues.
For example, AI tools can do something that's called hallucinating, meaning if its training or underlying data is flawed or lacking, the AI tool may actually attempt to fill in the gaps by creating its own content.
And this happened in a relatively well-known (at least in some circles) 2023 New York sanctions case called Mata v. Avianca, where the attorneys submitted a brief that was researched using ChatGPT, and the AI, not finding exactly what it needed, created fake case citations and excerpts.
And the attorneys, who most likely had no idea or understanding that AI can hallucinate, didn't thoroughly check the case law, with rather problematic consequences. The case was dismissed, and the court actually sanctioned the attorneys and fined them and their firm.
And this isn't the only such case. Fake case law has shown up in other matters in the U.S., Canada, the UK, potentially other areas (those are the areas I'm familiar with). And these incidents can definitely highlight the need for transparency and accountability in AI generated work. So, in other words, make sure you know what your tools are doing under the surface.
Angela Browning:
Absolutely. And I'm thinking just from our perspective, use of medical records, there's going to be missing information. And the last thing our clients need us to be doing with technology is filling in gaps that don't need to be filled in.
Some of those gaps are critical for knowing whether there's a missing record that we need to go request, rather than having the AI fill in the gap with what it thinks the record should say.
That's a little bit niche to our area, but very important when weighing that balance, those ethical concerns behind using something like ChatGPT to fill in the blanks during a summary review of records.
Megan Pizor:
Absolutely. I mentioned, I thought we would dive a little more into medical records throughout this episode, and just as you say, this is the perfect time to do it because while AI can be used to assist to some extent with record review, specifically medical records, there are absolutely still challenges and risks, which could be especially important in high stakes litigation where accuracy is really critical, especially at the outset.
And I'd like to say here that one of those problems may be that, at least for now, AI does have some memory problems. Unlike humans, who arguably can retain large amounts of information for long periods of time, AI still has limited memory. For the most part, it keeps information just long enough to perform a task and make a decision, and then it discards it. That's more efficient, and it's cheaper.
So, for voluminous data and file sets, which is often the case with medical records in mass tort litigation, this can be really problematic. I do want to note however, that a lot of the current AI solution providers are really working hard on the AI memory or context. So, I do expect that this aspect will probably improve quite a bit going forward, if not already.
But medical records also lack consistency. There can be embedded tables or charts or handwritten notes, images, inconsistent formatting, and even for simple items such as dates, which can literally appear in multiple formats: month before day, day before month, numbers versus spelled out. It can be very difficult for AI to recognize patterns or know to look for data that is largely inconsistent.
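[Editor's note: the date-format problem Megan describes can be made concrete with a small, hypothetical Python normalization pass. This is not any specific LMI tool, just a sketch: try a list of known patterns and emit one canonical ISO form. Note that truly ambiguous dates, like 03/04/2021 read as month-first or day-first, cannot be resolved by pattern matching alone, which is part of why human oversight matters.]

```python
from datetime import datetime

# Medical records write the same date many ways. A normalization pass
# tries each known pattern and emits a single canonical ISO 8601 form.
FORMATS = [
    "%m/%d/%Y",      # 03/14/2021
    "%d %B %Y",      # 14 March 2021
    "%B %d, %Y",     # March 14, 2021
    "%Y-%m-%d",      # 2021-03-14
]

def normalize_date(text):
    """Return an ISO date string, or None if no known format matches."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None
```

Anything the pattern list cannot handle, handwritten dates, partial dates, ambiguous orderings, falls through as `None` and gets flagged for a human instead of being guessed at.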
And medical records also can contain highly specialized and often, variable terminology which AI systems can and do struggle with in terms of accurately extracting and interpreting. For lack of a better way to put it, AI does lack common sense, at least to some extent.
It operates based on patterns in data and predefined rules, so it can struggle to understand context or make intuitive judgments. So, tasks like understanding the context of keywords within a more complex context can require deeper human understanding.
Again, however, advancements are being made rapidly on this front, but for now, there are absolutely some limitations just given the nature of medical records.
And I think I want to say, lastly, AI can be misled. We talked about the potential for hallucinations. If an AI does provide an inaccurate response such as a fake case citation, a real issue can be that it can't always be informed as to why its response was bad or inaccurate in order to learn from its mistake. And that's something that, again, there's a lot of headway being made on this front, but we're not quite there yet. So, those are some of the unique aspects of medical records.
On a related note, I think something that's worth mentioning here is data privacy as another potential risk. Medical records, employment records, financial records, they can contain some very sensitive information, and making sure that sensitive client information is adequately being protected is critical.
So, if a lawyer or practitioner is using an AI tool, where and how it is storing and processing that data is really critical to consider. I am and always will be a firm proponent of sensitive data being protected under the principles of least privilege and minimum necessary. So it's, I think, critical to understand how some of these tools are utilizing that data.
And then also, there are some copyright and intellectual property considerations regarding who owns the AI-generated content, which I think will be a little bit interesting to navigate going forward.
Angela Browning:
There's a lot of risks, a lot of benefits is what I'm hearing. I'd love to hear your key takeaways for legal practitioners.
Megan Pizor:
Absolutely. So, first of all, I want to say there are a lot of exciting things coming down the pike with AI, and I think you mentioned at the outset that there's some hesitation in the legal industry to adopt technology maybe quite as quickly as some of the other industries. And that's true, and maybe has some benefits, since we're still testing the waters with a lot of the exploding potential of AI. But one thing I really want to highlight is: don't panic.
I hear a lot of AI is going to replace lawyers, AI is going to get rid of my job. And I don't think that's maybe an accurate understanding of AI, at least not for now. I think AI will absolutely change the way we work, but it likely won't, at least not in the near future, obviate the need for lawyers and legal practitioners.
It will invariably change the way that we practice and maybe even the way that we live, but there will still be risks, there will still be change. So there will still be a need for checks and balances, human oversight, and even effective advocacy as we continue to navigate completely new territories.
For example, who's liable if AI makes a mistake? There's a lot of case law that's going to come out of this and a lot of need for advocacy. So, I would say first and foremost, don't panic.
Secondly, I would say that AI is a tool, not a replacement for legal expertise. Lawyers still need to interpret results generated by AI, consider those ethical implications that we talked about and manage overall case strategy. So, using the tools wisely and understanding their risks and benefits will really go a long way to effective adoption of AI tools.
And to that end, I would say keep abreast of changes. Technology literally changes every second, it's rapid. So, we as legal practitioners need to be aware of those tools and what they can do to help us, or those that can pose significant risks so we can make better informed decisions for our clients.
And lastly, I'd say if you haven't already done so (you, Angela, and those listening to the podcast), try some tools for yourself and test them out. Ask other practitioners how they are using them successfully, and ask where they're running into issues.
The more you try out the tools, the more you will find ways to use them. But always remember to keep data privacy top of mind, and make sure you understand what the tools are doing with your data before you start entering potentially sensitive information and/or client data.
Angela Browning:
And I want to add one more thing. If you want to learn more about LMI and RVU and what is happening with AI, visit our website www.lmiweb.com. We've got a host of articles, thought leadership, and a blog that others can read.
They can subscribe to our blog, which we send out monthly, and we keep abreast of the issues happening now in AI and share them with our clients, at least from our perspective in the mass tort litigation industry.
[Music playing]
I'd like to thank Megan for being our guest on today's episode as well as you, our listeners. If you enjoyed the show, subscribe to Litigate With Insight, the LMI Podcast on your favorite podcast app. Share it with your colleagues or on LinkedIn. You can learn more about LMI at lmiweb.com.
This has been a co-production of Evergreen Podcasts and LMI. Special thanks to our contributor, Kaleigh Savnik. Our producers are Brigid Coyne and Sean Rule-Hoffman. Our audio engineer is Sean Rule-Hoffman.
I'm your host, Angela Browning, thanks for listening.