HR managers spend their days mapping psychosocial risks (PSR), deploying employee-listening initiatives and training managers to detect burn-out. And yet, a 2026 Boston Consulting Group (BCG) study of 1,488 employees of major US corporations reveals a disturbing finding: the HR function is among the professions most affected by AI-related cognitive exhaustion ("AI brain fry"), with an exposure rate of 19.3%. This figure points to a profound structural dysfunction: those responsible for protecting the company's mental health are now the first silent victims of its technological transformation.
The institutional paradox of the HR function
The vulnerability of HR professionals is no accident. It is the result of an untenable triple injunction. Firstly, they are the direct users of numerous algorithmic tools (predictive ATS, onboarding chatbots, performance analysis tools). Secondly, they support the digital transformation of all other professions. Finally, they act as internal regulators, guaranteeing well-being in the face of the potentially deleterious effects of this same AI.
This architecture creates a paradox. The HR function is being asked to deploy AI while absorbing its human collateral damage, without additional resources or specific protection. For an HR department, acknowledging its own cognitive fatigue would be almost tantamount to questioning the legitimacy of its preventive mission. As a result, this vulnerability remains the great blind spot of social reporting, and hardly appears in any Document Unique d'Évaluation des Risques Professionnels (DUERP), France's mandatory occupational risk assessment.
Burn-out vs. "AI brain fry": the failure of detection tools
One of the major challenges lies in the inability of our current tools to identify this new malady. Classical burn-out arises from chronic emotional overload, assessed over several months, and manifests itself as emotional exhaustion. Conversely, AI brain fry arises after a few weeks of intensive exposure, and is characterized by mental fog and decision fatigue linked to cognitive saturation.
Where burn-out is a question of the "volume" of work, AI exhaustion is a question of the "nature" of the interaction: constant validation, correction of algorithmic output and continuous arbitration between machine and human exhaust the brain. While BCG notes that AI can reduce burnout linked to repetitive tasks by 15%, it generates in return the very cognitive overload that legacy tools, such as the Maslach Burnout Inventory (MBI) or the DUERP (both designed long before AI), are incapable of capturing.
The glass ceiling: the rule of 3 AI agents
BCG has identified a mathematical tipping point: beyond 3 AI agents supervised simultaneously, productivity gains are reversed. The mental workload required to validate, arbitrate and detect errors exceeds the time savings offered by automation.
However, the operational reality of an HR professional structurally shatters this ceiling. Between the HRIS, the recruitment tool, the training modules and the performance analytics, the systems pile up with no dashboard tallying their total. Exceeding the threshold is no longer an accident; it has become the norm. Added to this is the silent intensification of work: according to Gloria Mark (UC Irvine), average sustained attention on a screen has dropped to 47 seconds (from 2.5 minutes in 2004). Constant notifications from AI tools drive professionals to self-interrupt, fragmenting attention and instilling a compulsive checking reflex.
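The tipping-point logic can be sketched as a toy model: each additional agent saves a fixed amount of time, while supervision cost (validating, arbitrating, catching errors) grows super-linearly. All parameters below are illustrative assumptions chosen so that net gains peak around three agents; they are not BCG's actual figures.

```python
# Toy model of the "3 AI agents" tipping point.
# Parameters are illustrative assumptions, not BCG data.

def net_gain(agents: int,
             saving_per_agent: float = 1.0,   # assumed hours saved per agent per day
             base_overhead: float = 0.3,      # assumed supervision cost coefficient
             overhead_growth: float = 1.6) -> float:
    """Net hours gained per day when supervising `agents` AI agents.

    Savings grow linearly with the number of agents, but the mental
    workload of validation and arbitration is modeled as growing
    super-linearly (agents ** overhead_growth).
    """
    savings = saving_per_agent * agents
    overhead = base_overhead * agents ** overhead_growth
    return savings - overhead

if __name__ == "__main__":
    for n in range(1, 7):
        print(f"{n} agent(s): net gain = {net_gain(n):+.2f} h/day")
```

Under these assumed parameters, the net gain rises up to three agents and declines for every agent added beyond that, mirroring the reversal the study describes.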
Shadow AI and surveillance: the invisible administrative burden
HR overload doesn't stop with the direct use of institutional tools. According to BlackFog (January 2026), 49% of employees use AI tools at work that have not been approved by their employer ("Shadow AI"). More worryingly, 27% have shared sensitive data (names, salaries, personal data). The burden of auditing, correcting and ensuring GDPR compliance inevitably falls on HR's shoulders.
Finally, algorithmic surveillance weighs heavily on the mental load. The OECD (2025) reports that 51% of workers subject to constant surveillance report stress (versus 38% without). Since February 2025, the European AI Act (Regulation (EU) 2024/1689) has formally prohibited emotion-inference systems in the workplace. Disabling non-compliant tools, monitoring usage and documenting these processes for the CNIL, France's data protection authority, add yet another administrative layer for teams already close to breaking point.
In the face of cognitive exhaustion, lucidity about one's own exposure is the first condition for protecting others. Clearly naming AI brain fry, writing the three-AI-agent alert threshold into the DUERP and rigorously mapping Shadow AI are the founding acts of a coherent HR prevention policy in 2026.
Sources
- Boston Consulting Group (BCG), 2026: study of 1,488 employees of major US companies on cognitive exhaustion and the impact of supervising AI agents.
- Gloria Mark (University of California, Irvine): research on the fragmentation of attention and the falling average concentration span at work.
- BlackFog, January 2026: survey on "Shadow AI" practices and the sharing of sensitive data by employees.
- OECD, 2025 : Report on the consequences of algorithmic surveillance on workers' stress levels.
- European Union, February 2025 : Regulation on Artificial Intelligence (EU 2024/1689) framing, in particular, emotion inference systems in the workplace.