French Insider Podcast Ep. 42

AI at Work: Steering Employers Through Legal Minefields with Melissa Hughes of Sheppard Mullin

Thank you for downloading this transcript.

Listen to the podcast released December 17, 2025 here: https://www.sheppardmullin.com/multimedia-662

In this episode of French Insider, Melissa Hughes, a senior associate in Sheppard Mullin’s Labor and Employment Practice Group and member of the French Desk, joins us to explore the use of AI for automated decision-making throughout the employment life cycle, including the associated risks and how they can be mitigated.

Disclaimer: This episode was recorded prior to the signing of Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence.” As a result, some discussions may not reflect the policies or guidance established by this order.

About Melissa Hughes

As a senior associate in the Labor and Employment Practice Group in Sheppard Mullin’s San Francisco office, Melissa Hughes defends and counsels employers in a range of disputes involving harassment, discrimination, retaliation, failure to accommodate, wrongful termination, wage and hour claims, PAGA actions, and class actions. She also has traditional labor law experience, including arbitration, unfair labor practice proceedings, and litigation under the National Labor Relations Act.

Melissa represents employers of all sizes in state and federal courts, administrative proceedings, and every phase of litigation, from pre-suit strategy through post-trial motions. She also serves as a trusted advisor on day-to-day workplace issues, including disability accommodations, leaves of absence, performance management, workplace investigations, and compliance with California’s complex wage and hour laws.

As a member of Sheppard Mullin’s French Desk, Melissa advises French companies and groups operating in or expanding to the U.S. on a full range of employment and personnel matters in both French and English.

About Inès Briand

Inès Briand is an associate in Sheppard Mullin’s Corporate Practice Group and French Desk Team in the firm’s Brussels office, where her practice primarily focuses on domestic and cross-border merger and acquisition transactions (with special emphasis on operations involving French companies). She also has significant experience in general corporate matters and compliance for foreign companies established in the United States.

As a member of the firm’s French Desk, Inès has advised companies and private equity funds in both the United States and Europe on mergers and acquisitions, commercial contracts and general corporate matters, including the expansion of French companies in the United States.

Transcript:

Speaker 1:

Welcome et bienvenue à French Insider, the Sheppard Mullin French desk monthly podcast dedicated to French investors and companies investing and doing business in the United States. Each episode features conversations with thought leaders and experts in various industries on the business environment and challenges in investing and successfully growing in the US. And now, for the inside look.

Inès Briand:

Hi, and welcome to today's episode of the French Insider Podcast. My name is Inès Briand, and I'm a corporate associate in the New York office of Sheppard Mullin. Today I'm joined by Melissa Hughes, a senior employment attorney within the San Francisco office of Sheppard Mullin. Hi, Melissa. Thanks so much for joining us today. How are you doing?

Melissa Hughes:

I'm good. Thank you so much for having me.

Inès Briand:

It's our pleasure. Can you tell us a bit more about yourself and your practice?

Melissa Hughes:

Yes, sure. I'm Melissa Hughes. I'm a senior employment attorney, and I'm located in San Francisco. My practice is pretty much centered around helping employers navigate all of their needs when it comes to all things employees. I usually partner with companies of all sizes to try to understand how their business works, so that I can find solutions that make sense for their business and for their operations.

Inès Briand:

And especially in California, because it has its own specific set of rules, right?

Melissa Hughes:

That's right. California likes to make employment law extremely, extremely complicated and tedious.

Inès Briand:

So, today you're joining us to discuss the topic of navigating the AI legal landscape as an employer. Can you tell us a bit more?

Melissa Hughes:

Yeah, sure. There are some issues that are specific to each company and each employer, but working with so many different companies, there are also trends and issues that I see affecting employers across the board, no matter the size and no matter the industry. And that's the case with AI, which is why I'm so happy to discuss the topic today. It goes without saying that AI is everywhere. It's been the constant topic of conversation. There's a lot going on and it's evolving quickly. And a huge proportion of companies, employers and workers are using AI, whether it be for their customers or for their jobs, and throughout the employment lifecycle generally. Today I want to flag some of the issues and fixes that employers should be on the lookout for when it comes to using AI tools in employment.

Inès Briand:

Great. And so, before we jump in, could you level set with our listeners and explain how AI interacts with the employment world?

Melissa Hughes:

Yeah, of course. I like to think there are two broad categories of how AI is used in the workplace. The first is internal tools that employees may use day-to-day to help them do their jobs or increase their productivity. I call this category AI for employee augmentation. It might be an AI note taker for meetings or Copilot, or productivity enhancement tools that provide metrics to help with productivity. There are best practices and policies that can be implemented to ensure that your workforce uses AI in a way that's responsible, ethical and aligned with your company's values, but those are typically not the use cases that we're the most concerned about.

The second category is the one that carries the most risk. That's when AI is used throughout the employment lifecycle for automated decision-making processes. That can be talent acquisition tools like AI for resume screening, candidate searches, ranking or screening, or communications with candidates, for example. That can also be performance management tools like automated evaluations or performance reviews, or using AI to discipline employees.

Inès Briand:

Why are we more concerned with using AI for automated decision-making in employment?

Melissa Hughes:

There are a variety of laws, at both the federal and state levels, that govern employment decisions and what they can't be based on. So, importantly, an employment decision cannot be discriminatory or based on a protected characteristic of an employee.

Inès Briand:

Which makes sense.

Melissa Hughes:

It sure does. But that's where it gets tricky. When it's not a human, or the employer, making the decision, it's an automated tool that makes this decision, whether it be selecting a candidate for hire or someone for a demotion or selecting individuals for a layoff. The question is: what was the decision based on? And in the case of an AI tool, which makes decisions automatically based on an algorithm, that decision is based off of a large language model, an LLM, that has been trained on data and that is now making a decision based on what it's learned. So if the tool has been trained in a way that carries bias, the employment decision may now be unlawful without the employer even knowing it. But whether the decision was made by the employer directly or by an AI tool, the employer is the one who is going to be subject to liability. It's not an excuse to say, “Well, I didn't make the decision, the AI did it.”

Inès Briand:

Okay.

Melissa Hughes:

I'll give you an example. A company decides to get a brand new AI tool to screen candidates. They gather the resumes of all of the most successful employees they’ve historically had over the years. And they feed those to the tool to train the model to predict who's going to be successful. And that's how they're going to select their candidate for a position that's open. Turns out all of the resumes collected are from men under 40 years old. Those are all of the people who are the most successful at the company. Now the tool is learning from those resumes who is going to be successful and making its decisions when screening candidates' resumes. And the tool ends up screening out all women and all candidates over 40, based on the years of experience listed on people’s resumes. Everyone who’s been screened out by this model, which was trained with bias baked in, has now been screened out in a discriminatory way, and may sue the company for failure to hire based on an unlawful reason. Now, imagine 200 people applied to that position. It's not just one person who has been screened out, it's potentially a class action.

Inès Briand:

Yeah. No, definitely. We need to be careful. And so what are some of the measures that employers can take to mitigate such risks?

Melissa Hughes:

The number one piece of advice that I give to employers is to try to keep a human in the loop. Whether we're talking about recruiting, performance management, or termination decisions, it's incredibly important to use AI tools as a way to assist and help rather than replace the human. So taking the example of the resume screening, instead of having the tool entirely screen out resumes based on that model that was trained, have the tool rank resumes so that you get the whole universe of resumes. No one is automatically screened out by the tool. And you get all of the resumes coming across the hiring manager's desk. And they can use their discretion and their best judgment to look at maybe someone who was ranked 15 on the list and might have been completely screened out by the AI tool, but may be a promising candidate nonetheless. The second piece of advice is, for each tool that you're using, understand how the model was trained, what data was used, and what the potential risks associated with the tool are.

For example, I'm hearing lately from a lot of employers that they've rolled out chatbots that can answer employee questions about the company's policies. So the company will feed all of their policies into the AI tool and then employees can type up a question in the chatbot and the tool will answer their questions about the policies. Theoretically, that's very helpful until someone asks if they're entitled to a leave of absence, for example, and the bot tells them, "Nope, you're not entitled to it," when they in fact were entitled to it. If the employee acts upon this advice from the bot describing the policy, there is risk associated with the use of the tool. So it's really important to understand the specific use case for each tool, and adopt an approach to risk management that is very tailored.

The third piece of advice I have is to train your employees and have clear policies surrounding the use of AI—clear policies on acceptable use of AI. What's okay for employees to input into an AI tool and what's not acceptable information to input into an AI tool. Clear policies on responsibility for confirming outputs. If employees are using AI to do their jobs and creating outputs, it should be very clear that the employee has a responsibility to review the output and make sure it's accurate before it's used. Clear recordkeeping policies. It's also helpful to have a designated AI person who's trained on the various tools. They can assess the tools for bias, and they can generally be a resource for other employees who may have questions or concerns.

Inès Briand:

Okay. And so when an employer decides to pick an AI tool, what should they be looking at? Is there anything that you can recommend they should pay particular attention to?

Melissa Hughes:

Yeah, sure. Of course. It's very, very important to carefully review your contracts with vendors, be mindful of which tools you're using, and review what it is that the vendor is providing you. So it's important to train your employees who are in charge of negotiating those contracts or selecting vendors to be on the lookout for specific issues. You can also negotiate safeguards in those contracts whenever possible. That's not always the case, especially with the big names in the AI world. It's oftentimes very difficult to negotiate specific deviations from the standard contracts, but it is possible. It's important to protect your data as well. Where is the information inputted into the LLM going? Where is it stored? Is it going out to the world or kept confidential? You can generally assume that any input is never confidential, but you can also look into zero data retention agreements, ZDR agreements, which are agreements between providers and their customers that ensure that no data is stored beyond the moment that it is processed.

That's been really helpful in kind of mitigating the risk. There's a host of risk mitigation measures that you can take. And when in doubt, call your friendly employment lawyer. I'm happy to help.

Inès Briand:

No, yeah, of course. We hope that all listeners will call you if they have any questions. And so when it comes to AI regulations, so we all know and are well aware of the EU AI Act, but is there a similar act that applies to employers in the United States?

Melissa Hughes:

There are several laws and regulations that are already in place in the U.S., yes, at the state level, and many that are in the works. The legal landscape is very quickly evolving. There have been attempts by the current administration to prohibit states from regulating AI, including through the Beautiful Bill, which was unsuccessful, and more recently through executive orders. It's unclear whether these attempts will be successful or not. Currently, there are AI laws already in place in various states: Colorado, California, Illinois, Texas, Utah. There are bills in the works in many other states, including New York, Connecticut and Montana. New York City also has enacted its own AI law, which has turned out to be somewhat ineffective in practice, but the landscape is generally very quickly evolving and there's a lot of movement in most states. Most of these statutes have qualified the use of AI in employment as high risk, and there is a gamut of requirements imposed by the statutes by virtue of using AI tools in employment.

Some of the requirements are pretty straightforward, such as disclosures, providing an ability to opt out, and reporting incidents of algorithmic discrimination. But some of them are way more obscure and burdensome, such as, for example, implementing AI risk policies and programs, taking measures to use reasonable care to avoid foreseeable algorithmic discrimination, impact assessments and so on. The requirements vary in how burdensome they are, but they generally all require some level of steps to be taken relatively soon, if not immediately. The Colorado AI Act, for example, has had its effective date pushed to June 2026, which should give employers a little bit of breathing room to comply with the requirements. But the Illinois AI Act, for example, is going into effect in January 2026. So it's time to make sure that if, as an employer, you're using those AI tools to make employment decisions, you're taking measures immediately to comply with the requirements that are applicable in the states where you employ employees.

Inès Briand:

You have to comply state by state. Let's say I have a New York corporation and I have employees in Iowa, New York, and, I don't know, Illinois. So, the act that would apply to me would not be based on where the corporation is incorporated, but where my employees are, is that correct?

Melissa Hughes:

Yeah. And generally in employment law, that's the case.

Inès Briand:

Okay, cool. Are there any recent trends that you've observed within the U.S. that employers should be aware of?

Melissa Hughes:

Unless and until the administration's attempt at implementing a moratorium on state regulation is successful, which is very unclear, we're going to see more and more AI laws and regulations in many, many states. We're already seeing states become more active in all of these areas, whether it be AI safety, automated employment decisions, biometrics, privacy, et cetera.

Inès Briand:

What advice would you give to companies as they roll out their new AI tools or increase the use of AI to do business?

Melissa Hughes:

My main advice would be stay informed and ensure that your workforce is also informed and trained as to what the tools are, the risks, the requirements. It's constantly evolving, as I mentioned. So it's particularly important to stay up to speed and be well-informed. And to be very mindful that there are a lot of requirements that are coming or already there. So it's really, really important for employers to just stay up to date on those developments.

My second piece of advice concerns the fragmented legal landscape that is currently in place, with multiple states implementing different laws that all carry different requirements. If, as an employer, you are operating in multiple states, think about which approach would make sense for implementing the various requirements, and adopt best practices that are harmonious and somewhat uniform, as opposed to measures that are reactionary for each state, don't make sense as a whole, and in practice are going to be harder to manage.

So when in doubt, reach out to an attorney. Ultimately, the potential for liability and penalties is significant. I know this is an area that is dynamic, where everything is moving fast and we want the best, fastest, most exciting new tools, but it's important for employers to be pragmatic when it comes to mitigating risks and understanding the risks that each tool carries.

Inès Briand:

Thank you very much, Melissa. This was really interesting and you provided a very good background and landscape of the AI tools in the employment world. Do you have any last words or wise thoughts you would like to share with our listeners?

Melissa Hughes:

Yes, of course. And thank you so much for having me. What I'll say to close this out is there is just a lot of information out there. And there are a lot of different requirements, best practices and just general guidance that could be helpful. It's hard to make it all fit into one podcast, but I'm always happy to help and am always just one phone call away. So don't hesitate to reach out if you have specific questions.

Inès Briand:

And that brings us to the end of this episode and to the close of the year. We would like to thank you for your continued listening and support of the Sheppard Mullin French Insider Podcast. Your engagement and loyalty are greatly appreciated. We wish you a very happy and successful end of the year, a Merry Christmas and happy holidays, and we look forward to bringing you more insights and analysis in the year ahead. Thank you for being with us and goodbye for now.

Speaker 1:

This podcast is recorded monthly and available on Spotify, Apple Podcasts, Stitcher and Amazon Music. As well as on our website, sheppardfrenchdesk.com. We want to help you, and welcome your feedback and suggestions of topics.

Contact Information:

Melissa Hughes

Inès Briand

* * *

Thank you for listening! Don’t forget to FOLLOW the show to receive every new episode delivered straight to your podcast player every week.

If you enjoyed this episode, please help us get the word out about this podcast. Rate and Review this show in Apple Podcasts, Amazon Music, Stitcher, Deezer or Spotify. It helps other listeners find this show.

Be sure to connect with us and reach out with any questions/concerns:

LinkedIn

Facebook

Twitter 

French Insider Website

Sheppard Mullin Website

This podcast is for informational and educational purposes only. It is not to be construed as legal advice specific to your circumstances. If you need help with any legal matter, be sure to consult with an attorney regarding your specific needs.

