Tuesday, March 24, 2026

What happens when people begin using AI for work?

This research appears dubious, tainted by serious biases and by the naive application of historical empirical laws!

Unfortunately, this research seems entirely based on computer models!

ML and AI are an ongoing technology revolution whose course no one can predict.

"Two new ... Papers ... examine these questions using economic models to understand the impact of individuals' choices to use AI at work. ...

If a person’s skills in a certain area start out at a low level, they are rationally incentivized to use AI. But skills improve through practice and erode without it. So the more a person relies on AI, the further their skills degrade, which induces the need for more AI use, and the degradation cycle repeats.

A "high-skill" person stays capable and independent, while a "low-skill" person becomes persistently dependent on AI. As with any technology, advancements in AI evolve the behaviors of the users.

Most importantly, the authors find that the skill degradation from AI use actually lowers people’s overall performance compared with the no-AI scenario. ..."
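The degradation cycle described above can be sketched as a toy simulation. To be clear, every parameter value and update rule below is my own illustrative assumption, not the papers' actual model: delegation rises steeply once the AI outperforms the worker, practice builds skill, and delegation erodes it.

```python
import math

# Toy model of the delegation/skill feedback loop (illustrative only;
# the functional forms and parameters are assumptions, not the papers').

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def final_skill(skill0, ai_quality=0.7, steps=300,
                learn=0.10, decay=0.05, sharpness=10.0):
    """Delegation jumps sharply once the AI outperforms the worker;
    practice (1 - delegation) builds skill, delegation erodes it."""
    skill = skill0
    for _ in range(steps):
        delegation = sigmoid(sharpness * (ai_quality - skill))
        practice = 1.0 - delegation
        # Skill grows with practice toward 1 and decays under non-use.
        skill += learn * practice * (1.0 - skill) - decay * delegation * skill
        skill = min(1.0, max(0.0, skill))
    return skill

print(final_skill(0.9))  # starts capable: practice dominates, skill stays high
print(final_skill(0.3))  # starts low: delegation dominates, skill collapses
```

With these (assumed) parameters the system is bistable: the same rules send a worker who starts above the boundary toward high skill and one who starts below it into persistent reliance, which is the qualitative story the quoted summary tells.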

From the abstract (1):
"As AI systems enter institutional workflows, workers must decide whether to delegate task execution to AI and how much effort to invest in verifying AI outputs, while institutions evaluate workers using outcome-based standards that may misalign with workers’ private costs. We model delegation and verification as the solution to a rational worker’s optimization problem, and define worker quality by evaluating an institution-centered utility (distinct from the worker’s objective) at the resulting optimal action. We formally characterize optimal worker workflows and show that AI induces phase transitions, where arbitrarily small differences in verification ability lead to sharply different behaviors. As a result, AI can amplify workers with strong verification reliability while degrading institutional worker quality for others who rationally over-delegate and reduce oversight, even when baseline task success improves and no behavioral biases are present. These results identify a structural mechanism by which AI reshapes institutional worker quality and amplifies quality disparities between workers with different verification reliability."

From the abstract (2):
"As AI systems shift from tools to collaborators, a central question is how the skills of humans relying on them change over time. We study this question mathematically by modeling the joint evolution of human skill and AI delegation as a coupled dynamical system. In our model, delegation adapts to relative performance, while skill improves through use and decays under non-use; crucially, both updates arise from optimizing a single performance metric measuring expected task error. Despite this local alignment, adaptive AI use fundamentally alters the global stability structure of human skill acquisition. Beyond the high-skill equilibrium of human-only learning, the system admits a stable low-skill equilibrium corresponding to persistent reliance, separated by a sharp basin boundary that makes early decisions effectively irreversible [???] under the induced dynamics. We further show that AI assistance can strictly improve short-run performance while inducing persistent long-run performance loss relative to the no-AI baseline, driven by a negative feedback between delegation and practice. We characterize how AI quality deforms the basin boundary and show that these effects are robust to noise and asymmetric trust updates. Our results identify stability, not incentives or misalignment, as the central mechanism by which AI assistance can undermine long-run human performance and skill."

What happens when people begin using AI for work? | Cowles Foundation for Research in Economics "New research on employee skill and performance uncovers a paradox for using AI."

