This is Part II in the FINEOS Man vs. Machine three-part blog series, which examines the role of human judgment amid the increasing workplace integration of artificial intelligence (AI).
Four recent cases demonstrate the growing risks of using consumer‑grade AI platforms, from inadvertent disclosure of sensitive information to AI-generated actions that escalate legal exposure. While none of these cases involve leave or disability, the courts’ stance on AI‑influenced actions contributes towards a governance blueprint applicable to leave and accommodation management.
Insights from Heppner, Warner, Fortis, and Nippon Life
In a February 2026 ruling in United States v. Heppner, the Southern District of New York held that materials a criminal defendant created using a public, consumer-grade AI tool were not protected by attorney‑client privilege or the work‑product doctrine. The court emphasized that the defendant used the AI independently, outside counsel’s direction, and communicated sensitive information to a third‑party platform whose privacy policy expressly permitted data collection and disclosure, eliminating any reasonable expectation of confidentiality.
In Warner v. Gilbarco, Inc., the Eastern District of Michigan reached the opposite conclusion in a civil context on the same day as the Heppner ruling, holding that a self‑represented plaintiff’s use of ChatGPT to prepare litigation materials remained protected work product. The underlying reasoning was that the plaintiff’s use of AI was like privately preparing drafts or notes, and there was no evidence she uploaded confidential or protected documents in violation of any order.
In a March 2026 ruling in Fortis Advisors v. Krafton, a Delaware court ruled Krafton breached its acquisition agreement by following ChatGPT‑generated takeover strategies, bypassing the advice of in-house legal counsel. The court emphasized that Krafton’s CEO “followed most of ChatGPT’s recommendations” and ruled the AI‑influenced adverse employment decisions lacked valid cause. (It is unclear, but likely, that the tool used was a consumer-grade platform.)
In March 2026, Nippon Life sued OpenAI after a claimant allegedly used consumer‑grade ChatGPT to seek legal guidance and breach an existing settlement agreement with the insurer. According to the complaint, the tool generated advice and filings that forced Nippon Life back into unnecessary litigation despite the existence of a valid, enforceable agreement. The lawsuit alleges the claimant uploaded attorney emails and case documents into ChatGPT, whose terms allow broad data handling and whose guardrails proved insufficient to prevent it from functioning as an unlicensed attorney. While the case has not reached a conclusion, Nippon Life’s allegations serve as a stark warning of the disputes likely to arise when AI is relied upon in a legal context.
AI Platforms, Lawful Use, and Confidentiality Expectations
As HR teams adopt AI across a range of platforms, it’s important to recognize that not all tools operate the same way.
Consumer-grade AI platforms operate with broad data‑use permissions. A user has no reasonable expectation of confidentiality when entering sensitive information into such a platform. Enterprise AI systems, on the other hand, are designed with contractual confidentiality protections, data‑isolation guarantees, logging controls, and compliance frameworks. Even if an individual pays for a premium version of a consumer AI platform, this does not provide enterprise‑grade confidentiality or legal rights to privacy. Upgrading to the paid consumer version improves features and usage limits, but it remains a consumer product.
Although consumer AI is helpful for limited personal use, it should not be used for work that requires confidential handling, particularly human resources management and leave and accommodation decisions involving sensitive medical and employment information. And even enterprise‑grade AI cannot safeguard against liability when the underlying query seeks guidance on actions that are patently illegal.
Implications for AI-Enabled Leave and Accommodation Management
FMLA and ADA documentation is already subject to strict confidentiality requirements even without the involvement of AI, because these records routinely contain employee medical information that must be handled with the same care as any other confidential medical record. Protecting the confidentiality of these records is a baseline compliance requirement, not something that emerges only when AI enters the picture.
The Heppner decision underscores that employee or manager use of public, consumer‑grade AI tools for leave and accommodation management (e.g., drafting leave letters, summarizing medical documentation, or suggesting accommodation options) may compromise confidentiality, since communications entered into such tools can be treated as disclosures to a third party and may not be protected under privilege or work‑product doctrines. In contrast, Warner demonstrates that AI can remain a valuable tool when used within a human‑led process with clear oversight and guardrails (such as stronger confidentiality protections).
The Fortis decision shows that employment decisions that affect job status must never rely on AI‑generated reasoning without professional review. Even a company CEO should never bypass HR and legal expert review in favor of AI prompts.
Additionally, the recent Nippon Life litigation shows that even though consumer platforms routinely warn users that responses may be inaccurate or unreliable, users still rely on them heavily, and the resulting AI‑generated artifacts may escalate risk, trigger unnecessary disputes, or create new obligations for the organization.
Employer Takeaways for Lawful and Confidential AI Use
Although the Heppner and Warner rulings reached different outcomes, read together with the Fortis and Nippon Life litigation they show that employee and organizational use of AI tools can either undermine or preserve confidentiality, and either escalate or contain liability, depending on whether employers thoughtfully design their AI‑enabled leave and accommodation workflows:
- Make the safe option the easiest option: Employees should never feel the need to experiment with external tools in the first place. When enterprise‑grade AI is seamlessly available inside existing tools or workflows, employees are far less likely to copy‑and‑paste medical notes or legal correspondence into public tools simply because they “needed help drafting something.”
- Define and supervise AI use in HR workflows: Employers should set explicit policies governing when and how employees can use AI to handle sensitive medical or accommodation information. Evidently, this still needs to be said: do not upload sensitive information into a consumer‑grade AI platform. And do not ask AI to help you commit illegal acts!
- Train managers and employees not just on the what and how of AI, but also the why: Emphasize that the confidentiality demanded by leave and accommodation laws far exceeds the privacy expectations of consumer AI models, which do not provide legal privilege, data isolation, or any of the protections employees might mistakenly assume they offer. Enterprise AI reduces these risks, but it is not bulletproof; make sure employees continually understand its limitations and when and how to use these tools.
Ultimately, these four cases reinforce that AI, whether enterprise‑grade or consumer‑grade, must remain a support tool rather than a substitute for human expertise. Maintaining the confidentiality and diligence required in leave and accommodation management still depends on human judgment to recognize context and prevent sensitive information from being mishandled.
At FINEOS, we believe simplifying complexity and ensuring compliance should go hand in hand, empowering organizations with intelligent, AI-driven solutions that enhance, not replace, the human experience. Discover how a modern, integrated disability and absence management (IDAM) solution can help your organization stay compliant, reduce costs, and confidently adapt to a rapidly evolving market.
Next: Read the full Man vs. Machine series
Part III: The “Human in the Loop” in Leave and Accommodation Management


