In the lead-up to our FINEOS Man vs. Machine panel session at the DMEC Compliance Conference 2026, this 3-part blog series will examine the role of human judgment amid the increasing workplace integration of artificial intelligence (AI). Below, Part I introduces the concept of AI fluency and how human decision‑making capabilities should intersect with AI.
As AI and automated systems become embedded in the workplace, including leave and accommodations management, questions arise about how they can be applied safely and accurately. The answer depends largely on the level of “AI fluency” of the professionals responsible for interpreting and overseeing their outputs.
AI fluency is defined as “the ability to work effectively, efficiently, ethically, and safely within emerging modalities of Human-AI interaction,” which involve automation, augmentation, and agency. According to McKinsey, “the demand for AI fluency has grown sevenfold in two years, faster than for any other skill in US job postings.” AI fluency has emerged as a critical professional competency, enabling employees to understand system limitations, interrogate model outputs, and apply informed judgment when overseeing AI‑supported processes.
Key Findings of AI Fluency Study
Early research is beginning to quantify how individuals are developing the skills to use AI. A recent AI fluency study by Anthropic found that when users become more directive with AI and ask it to create artifacts, they are less likely to question its reasoning or identify missing context.
Anthropic observed two distinct behavior patterns depending on whether users were asking the AI to generate an artifact (e.g., a draft, summary, email, analysis) or engaging in non‑artifact interaction (e.g., asking questions, seeking explanations, exploring ideas). When generating artifacts, users showed reduced critical thinking, skipping or weakening the key oversight behaviors that indicate careful evaluation: they performed fewer fact-checks and less frequently questioned the model’s reasoning (asking how it arrived at a conclusion, what logic it used, or what uncertainties were involved).
While Anthropic acknowledged its own methodological constraints and noted that these are not definitive conclusions, its findings underscore the need for ongoing, thoughtful consideration of how employees think and work with AI.
Implications for AI-Enabled Leave and Accommodation Management
The findings from Anthropic’s AI Fluency Study highlight several critical implications for employers managing leave and accommodations using AI-enabled workflows. While automation may be appropriate for low‑risk tasks, higher‑stakes activities require a high level of AI fluency. Without the ability to critically evaluate the artifacts AI produces, organizations face several vulnerabilities:
- Erosion of human oversight: Human oversight is essential to ensure AI systems adhere to ADA requirements for individualized assessment and reasonable accommodation. Reduced scrutiny of AI‑generated artifacts increases the risk that inaccurate summaries, flawed recommendations, or missing context will go unnoticed, undermining those individualized assessments.
- Increased legal and compliance risk: If employees are required to use AI but aren’t trained to critically evaluate outputs, they are far more likely to default to whatever the system suggests. This undermines the human judgment required in leave and accommodation decisions and weakens the employer’s ability to defend any employment actions it takes.
- Potential algorithmic discrimination and bias: Without the critical thinking skills required to question and contextualize AI‑generated outputs, employees may unintentionally accept or act on biased recommendations, leading to inequitable outcomes for certain employee groups and increased legal exposure for employers.
What This Means for Building an AI-Fluent Workforce
Preparing teams to work alongside AI requires intentional investment not just in skills, but in the culture that supports responsible use. AI functions as both an assistant and an amplifier; it can accelerate opportunities when work practices and culture are strong, but it can also magnify existing challenges when they are not. Nurturing the overall organizational culture, especially a culture of learning, is critical to creating an environment grounded in trust and verification while still driving innovation.
Organizations should approach AI in the same way as any organizational culture change: by explaining the “why,” building readiness, and developing their people first through essential soft-skills programs well before focusing on the fast-evolving tools. A workplace where curiosity, readiness, and strong human-centered oversight are embedded is best positioned to benefit from automation while protecting the integrity of leave and accommodation management.
The findings from McKinsey and Anthropic reinforce this view: AI adoption is fundamentally a people initiative. For software companies in particular, where engineering excellence directly shapes both product quality and customer outcomes, AI’s value depends on a team’s ability to use it deliberately and intelligently. By embedding AI fluency and critical thinking across an entire organization, and integrating AI into workflows with intention, innovation can be accelerated while preserving high-value human judgment, enabling people to “focus on higher value tasks.” It is this enablement that drives performance.
A critical but often overlooked element is recognizing that people and teams begin their AI journey from different starting points. Assessing AI maturity, meeting employees where they are, and growing capability in ways that feel relevant and empowering ensures individuals can evolve with intention. This in turn strengthens a high-performing, purpose-led culture where AI plays a supporting role as a catalyst for meaningful progress.
Conclusion
In sum, the findings of the study emphasize that human judgment will continue to be essential in the work of leave and accommodation management. The most significant challenge ahead is ensuring employees develop the AI fluency needed to use these tools with the level of care and discernment the work requires. AI fluency is not a technical “nice-to-have,” but a foundational element of an effective learning-oriented organizational culture, developed through ongoing practice and refinement rather than a single training session.
At FINEOS, we believe simplifying complexity and ensuring compliance should go hand in hand, empowering organizations with intelligent, AI-driven solutions that enhance, not replace, the human experience. Discover how a modern, integrated disability and absence management (IDAM) solution can help your organization stay compliant, reduce costs, and confidently adapt to a rapidly evolving market.
This article was written by Patricia Zuñiga, IDAM Compliance Manager, and Breda Donlon, Head of Organisational Learning and Development.
Next: Read the full Man vs. Machine series
Part III: The “Human in the Loop” in Leave and Accommodation Management