What Recent Cases Reveal About Lawful and Confidential AI Use: Man vs. Machine, Part II 

This is Part II in the FINEOS Man vs. Machine three-part blog series, which examines the role of human judgment amid the increasing workplace integration of artificial intelligence (AI).

Four recent cases demonstrate the growing risks of using consumer-grade AI platforms, from inadvertent disclosure of sensitive information to AI-generated actions that escalate legal exposure. While none of these cases involves leave or disability, the courts' stance on AI-influenced actions contributes to a governance blueprint applicable to leave and accommodation management.

Insights from Heppner, Warner, Fortis, and Nippon Life

In a February 2026 ruling in United States v. Heppner, the Southern District of New York held that materials a criminal defendant created using a public, consumer-grade AI tool were not protected by attorney-client privilege or the work-product doctrine. The court emphasized that the defendant used the AI independently, outside counsel’s direction, and communicated sensitive information to a third-party platform whose privacy policy expressly permitted data collection and disclosure, eliminating any reasonable expectation of confidentiality.

In Warner v. Gilbarco, Inc., the Eastern District of Michigan reached the opposite conclusion in a civil context on the same day as the Heppner ruling, holding that a self-represented plaintiff’s use of ChatGPT to prepare litigation materials remained protected work product. The court reasoned that the plaintiff’s use of AI was akin to privately preparing drafts or notes, and there was no evidence she uploaded confidential or protected documents in violation of any order.

In a March 2026 ruling in Fortis Advisors v. Krafton, a Delaware court ruled that Krafton breached its acquisition agreement by following ChatGPT-generated takeover strategies, bypassing the advice of in-house legal counsel. The court emphasized that Krafton’s CEO “followed most of ChatGPT’s recommendations” and ruled the AI-influenced adverse employment decisions lacked valid cause. (It is unclear, but likely, that the tool used was a consumer-grade platform.)

In March 2026, Nippon Life sued OpenAI after a claimant allegedly used consumer-grade ChatGPT to seek legal guidance and breach an existing settlement agreement with Nippon Life. Even though a valid and enforceable agreement was already in place, the tool generated advice and filings that forced the insurer back into unnecessary litigation. The lawsuit alleged the claimant uploaded attorney emails and case documents into ChatGPT, whose terms allow broad data handling and whose guardrails proved insufficient to prevent it from functioning as an unlicensed attorney. While this case has not reached its conclusion, the allegations put forward by Nippon Life serve as a stark warning of likely future disputes when AI is relied upon in a legal context.

AI Platforms, Lawful Use, and Confidentiality Expectations

As HR teams adopt AI across a range of platforms, it’s important to recognize that not all tools operate the same way.  

Consumer-grade AI platforms operate with broad data-use permissions. A user has no reasonable expectation of confidentiality when entering sensitive information into such a platform. Enterprise AI systems, on the other hand, are designed with contractual confidentiality protections, data-isolation guarantees, logging controls, and compliance frameworks. Even if an individual pays for a premium version of a consumer AI platform, this does not provide enterprise-grade confidentiality or legal rights to privacy. Upgrading to the paid consumer version improves features and usage limits, but it remains a consumer product.

Although consumer AI is helpful for limited personal use, it should not be used for work that requires confidential handling, particularly human resources management and leave and accommodation decisions that involve sensitive medical and employment information. However, even enterprise-grade AI cannot safeguard against liability when the underlying query seeks guidance on actions that are patently illegal.

Implications for AI-Enabled Leave and Accommodation Management

FMLA and ADA documentation are already protected by strict confidentiality requirements even without the involvement of AI, because these records routinely contain employee medical information that must be handled with the same care as any other confidential medical records. Protecting the confidentiality of these records is a baseline compliance requirement, not something that emerges only when AI enters the picture.

The Heppner decision underscores that employee or manager use of public, consumer-grade AI tools for leave and accommodation management (e.g., drafting leave letters, summarizing medical documentation, or suggesting accommodation options) may compromise confidentiality, since communications entered into such tools can be treated as disclosures to a third party and may not be protected under privilege or work-product doctrines. In contrast, Warner demonstrates that AI can remain a valuable tool when used within a human-led process with clear oversight and guardrails (such as stronger confidentiality protections).
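
For organizations building those guardrails into software, here is a minimal sketch in Python of one way a pre-submission filter might flag leave or medical content before it ever reaches an external AI tool. The patterns, function name, and messages below are hypothetical illustrations, not part of any case or product discussed here; a production system would rely on a dedicated data-loss-prevention service rather than a few regular expressions.

```python
import re

# Hypothetical patterns an HR team might screen for before text leaves
# the organization; a real deployment would use a proper DLP service.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like identifiers
    re.compile(r"\b(?:diagnosis|FMLA|ADA|medical certification)\b", re.IGNORECASE),
]

def safe_to_send(text: str) -> bool:
    """Return False if the text appears to contain leave or medical details
    that should never reach a consumer-grade AI platform."""
    return not any(pattern.search(text) for pattern in SENSITIVE_PATTERNS)

draft = "Summarize Jane's FMLA medical certification for her manager."
if safe_to_send(draft):
    print("OK to route to the approved AI tool.")
else:
    print("Blocked: use the enterprise workflow for this request.")
```

Blocking at submission time mirrors the Heppner lesson: once text reaches a third-party platform, confidentiality may already be lost, so the check has to happen before the disclosure, not after.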

The Fortis decision shows that employment decisions that affect job status must never rely on AI-generated reasoning without professional review. Even a company CEO should never bypass HR and legal expert review in favor of AI prompts.

Additionally, the recent Nippon Life litigation teaches us that even though consumer platforms routinely warn users that their responses may be inaccurate or unreliable, users still rely on them heavily, producing AI-generated artifacts that may escalate risk, trigger unnecessary disputes, or create new obligations for the organization.

Employer Takeaways for Lawful and Confidential AI Use

Although the Heppner and Warner rulings reached different outcomes, they can be read together with the Fortis and Nippon Life litigation to show that employee and organizational use of AI tools can either undermine or preserve confidentiality, and either expand or contain liability, depending on whether employers thoughtfully design their AI-enabled leave and accommodation workflows:

  • Make the safe option the easiest option: Employees should never feel the need to experiment with external tools in the first place. When enterprise-grade AI is seamlessly available inside existing tools or workflows, employees are far less likely to copy and paste medical notes or legal correspondence into public tools simply because they “needed help drafting something.”
  • Define and supervise AI use in HR workflows: Employers should set explicit policies governing when and how employees can use AI to handle sensitive medical or accommodation information. Evidently, this still needs to be said: do not upload sensitive information into a consumer-grade AI platform. And do not ask AI to help you commit illegal acts! A simple technical control can reinforce the policy, as shown in the sketch after this list.
  • Train managers and employees not just on the what and how of AI, but also the why: Emphasize that the confidentiality demanded by leave and accommodation laws far exceeds the privacy expectations of consumer AI models, which do not provide legal privilege, data isolation, or any of the protections employees might mistakenly assume they offer. Enterprise AI reduces risk but is not bulletproof; make sure employees continually understand its limitations and when and how to use these tools.
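
To make the supervision point concrete, here is a minimal, hypothetical Python sketch of the kind of routing-and-logging control an employer might layer over AI access. The endpoint URL, function, and field names are illustrative assumptions only, not an actual FINEOS or vendor API.

```python
import json
from datetime import datetime, timezone

# Hypothetical internal endpoint; the URL and policy are illustrative only.
APPROVED_ENDPOINT = "https://ai.internal.example.com/v1/chat"

def route_request(user: str, prompt: str, endpoint: str) -> None:
    """Refuse non-approved endpoints and write an audit record for every request."""
    if endpoint != APPROVED_ENDPOINT:
        raise PermissionError("Only the approved enterprise AI endpoint may handle HR data.")
    audit_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_chars": len(prompt),  # log the size, not the content
        "endpoint": endpoint,
    }
    print(json.dumps(audit_record))  # in practice, write to a secured audit store

route_request("hr.manager@example.com", "Draft a leave confirmation letter.", APPROVED_ENDPOINT)
```

Note the design choice of logging the request size rather than its content: this keeps the audit trail itself from becoming another repository of sensitive medical information.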

Ultimately, these four cases reinforce that AI, whether enterprise-grade or consumer-grade, must remain a support tool rather than a substitute for human expertise, because maintaining the confidentiality and diligence required in leave and accommodation management still depends on human judgment to recognize context and prevent sensitive information from being mishandled.

At FINEOS, we believe simplifying complexity and ensuring compliance should go hand in hand, empowering organizations with intelligent, AI-driven solutions that enhance, not replace, the human experience. Discover how a modern, integrated disability and absence management (IDAM) solution can help your organization stay compliant, reduce costs, and confidently adapt to a rapidly evolving market. 

Next: Read the full Man vs. Machine series

Part I: AI Fluency in Leave and Accommodation: Man vs. Machine, Part I

Part III: The “Human in the Loop” in Leave and Accommodation Management: Man vs. Machine, Part III

Part IV: Man vs. Machine at DMEC Compliance Conference 2026. Pitting generative AI against compliance pros in real-world leave and accommodation scenarios.

 
