Instead of a traditional conference recap, Lori Welty, Esq. and Patricia (Trish) Zuniga reflect on their DMEC “Man vs. Machine” presentation about AI, human judgment, and everything in between.
Trish: When we were brainstorming all the way back in March 2025 for our DMEC Compliance Conference 2026 proposal, we knew AI was going to be a hot topic, but we didn’t just want to follow the buzziest trend. We were interested in examining how, not if, people are using AI, and how much. I know people who use AI all the time, myself included: to phrase an email diplomatically, draft social media captions, or summarize a 200-page monster of a document like a state budget bill. That includes sensitive moments, too: sense‑checking questions about medical tests, or figuring out how to navigate a delicate workplace situation before talking to HR or a manager.
AI is incredibly useful, and, because its answers sound so confident, it’s so easy to trust. As we’ve seen in some cases, maybe we trust it a little too much? Preparing for this presentation forced us to really examine what we believed AI could responsibly do today versus where human discretion must always stay in the driver’s seat.
Lori: Absolutely. One of the parts I found most interesting about my research in advance of the session was crafting my prompts and questions for the AI and considering how that shaped the results. I posed every question both as if I were an employee and as if I were an employer. In some cases, I hinted at a desired outcome. In others, I stayed neutral. I also posed all the questions on three different AI platforms. Though I was aware of this effect already, I was still surprised by the degree of AI sycophancy and how much it varied with the tone of my prompts. It highlighted for me how important it is to exercise caution both in the way we craft our prompts and in how we interpret the results. There does seem to be a leaning toward affirmation on the part of the AI, and I can see how, in the HR context, that could have dangerous implications. The other thing I found interesting when imagining its use in this context was how often an answer was technically correct but missed the broader compliance factors needed to put the response in its necessary context.
Trish: I have to say, sharing the stage with you, Abby, and Daris was a clear reminder of the depth of your collective expertise in FMLA and absence management. That level of understanding only comes from years of hands‑on experience in real workplace situations. Watching you all react to AI‑generated answers made the point really clear: AI can surface information quickly, but it can’t recognize when critical details are missing.
What made the conversation resonate wasn’t pointing out where AI was “wrong,” but showing where human judgment filled the gaps. Both the panelists and the audience instinctively surfaced the issues and questions AI didn’t flag. That kind of analysis is second nature to experienced absence management professionals, and it’s not something you can shortcut with a prompt. By the end of the session, the takeaway was clear: AI can support decision‑making, but it cannot replace human judgment. The real work ahead is designing systems, policies, and cultures that keep the human in the loop. When we opened it up to the room, we got such a wave of audience stories, questions, and real‑world concerns.
Lori: Some of the stories we heard after the session ended supported the conclusion that while AI can add to one’s knowledge, both as employee and employer, it also needs to be approached intentionally. We heard several stories where AI was very helpful and provided reliable shortcuts to help expedite aspects of work that can sometimes be uninspiring. But we also heard stories of employees who had been guided in a direction by AI, only to find out that they had either missed out on rights they were entitled to, or had relied on assumptions about rights they didn’t actually have. AI can provide so much information in an instant, with such confidence, that it can be hard to recognize when it is incomplete, incorrect, or missing the human judgment required to truly understand an individual’s situation.
Trish: It felt clear this conversation isn’t about choosing AI or humans but learning how to intentionally include both. I actually had a lightbulb moment early in the conference during a breakout session led by the keynote speaker Joel Zeff, where instead of a typical informational lecture about workplace success and positivity with metrics and best practices, he led us in some improv games! We played one called Two‑Headed Monster, where you contribute a word, then adapt instantly to whatever comes next from your partner. It felt like a perfect metaphor for AI producing outputs while humans provide context, judgment, and direction. What also stood out is that when someone stopped listening, the scene fell apart, reinforcing that good judgment lies in reading nuance, adjusting in real time, and working towards a win-win scenario.
It got me thinking: AI may provide options or patterns, but humans are still responsible for the consequential employment decisions that affect other people’s lives, much like improv depends on trust and accountability. The future of AI at work lies in collaboration between humans and properly governed AI tools. I’m still kicking myself for not including a last-minute final slide that says: “TA DA! Our session is actually called: Man with Machine.”
Lori: I love that, Trish, and I also enjoyed Joel’s improv session! At the end of our session and the other AI-related sessions from the DMEC conference, I was left with this thought: In the world of absence, the stakes are human. Behind every question, every data point, every “technically correct” answer is a real person navigating a difficult moment in their life. AI can help us see patterns and move faster, but despite its sometimes seemingly emotional responses, AI cannot exercise judgment, empathy, and accountability. Those factors come from experienced professionals who know when something doesn’t quite fit and are willing to slow down and ask one more question. AI is a tool, not a substitute for humanity. The future is not one where AI replaces us, but one where we are accountable stewards of its use, deliberate in ensuring that essential human judgment is never outsourced.
Thank you for joining us for the FINEOS Man vs. Machine blog series. If you’ve been following this series, I hope we’ve been able to present concepts like AI fluency, confidentiality, lawful use, and human-in-the-loop (HITL) oversight in a way that feels clear and practical. At the end of the day, if you remember only one thing, I hope it’s this: employers can use AI responsibly when they preserve the human judgment that leave and accommodation management demands.
This is the final entry in the FINEOS Man vs. Machine four-part series. Read the full series.
Part III: Discover what human judgment really looks like when AI enters the leave and accommodation process.