Does AI really have agency?
Written by Eija-Leena Koponen, co-founder of Renessai
The concept of agency in artificial intelligence has become a focal point in both technical and public discourse. In Finnish and international news, in everyday language, and even according to some specialists, AI is often described as an independent actor, capable of making decisions, learning, and even “wanting” things. At Renessai, we have strongly argued that AI does not have agency at all, but a closer look at both research and practical experience reveals a more nuanced reality.
One could even ponder what the thinking we humans do really is.
Autonomy vs. programmed reactivity
From a technical standpoint, modern AI systems have achieved impressive levels of autonomy. Agent-based systems can independently forecast demand, place orders, and adapt to changes in their environment without ongoing human intervention. In practical terms, these systems are often labeled as “agents” because they can pursue defined goals and adapt to new data.
Yet, hands-on development shows that this autonomy is fundamentally bounded. AI systems operate within parameters and triggers set by human designers. Their “decisions” are the result of algorithms and optimization routines, not genuine self-determination. The system does not possess desires or intentions; it pursues objectives encoded by humans and optimizes against them. As highlighted in recent academic work (here and here), current AI lacks the ability to generate “voluntarist reasons” – the capacity to will a reason into existence when faced with equally weighted choices – which is considered a hallmark of true agency in philosophical literature.
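To make this concrete, here is a minimal sketch of what such bounded autonomy can look like. It is purely illustrative – the function names, thresholds, and the moving-average “forecast” are assumptions made for this example, not anyone's real system – but it shows how an “agent” that reorders stock on its own is, underneath, just human-chosen rules and a human-chosen objective being optimized.

```python
# Illustrative sketch only: a reordering "agent" that looks autonomous but
# merely optimizes an objective within limits its human designers set.
# All names and numbers are hypothetical.

from statistics import mean

# Human-chosen parameters: the system never decides these for itself.
SERVICE_BUFFER = 1.2      # safety margin on top of forecast demand
MAX_ORDER_UNITS = 500     # hard cap set by the business
REORDER_TRIGGER = 100     # stock level below which ordering is allowed


def forecast_demand(recent_sales: list[int]) -> float:
    """'Forecasting' here is just a moving average, an algorithm a human chose."""
    return mean(recent_sales)


def decide_order(stock_level: int, recent_sales: list[int]) -> int:
    """The 'decision' is the output of human-defined rules and optimization,
    not of the system wanting anything."""
    if stock_level > REORDER_TRIGGER:          # trigger set by designers
        return 0
    target = forecast_demand(recent_sales) * SERVICE_BUFFER
    order = max(0, round(target - stock_level))
    return min(order, MAX_ORDER_UNITS)         # bounded by a human-set cap


if __name__ == "__main__":
    # The agent "acts" without a human in the loop, yet every rule and number
    # above encodes human intent – and therefore human accountability.
    print(decide_order(stock_level=80, recent_sales=[120, 90, 110, 130]))
```

In a sketch like this, nothing the system does originates with the system itself; the autonomy is real but entirely delegated.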
There are also risks in assigning agency to AI: it can obscure human choices and accountability.
Who is responsible for the actions?
The notion that AI systems possess agency can foster unrealistic expectations. Current AI lacks the capacity for genuine deliberation, value judgment, or moral reasoning. Its outputs are statistical or algorithmic, not the result of conscious reflection. As research points out, even the most advanced AI today is narrow: highly effective in specific domains (helping a human solve a problem) but incapable of the broad, context-sensitive reasoning that characterizes true agency (“thinking” about whether the help is harmful or not).
Ethical analysis consistently warns against overstating AI’s agency. Ascribing agency to AI can obscure the human choices embedded in data selection, system design, and oversight. When headlines or business pitches suggest that “AI decided,” it risks shifting accountability away from the people and organizations responsible for the system’s actions.
The importance of how we speak
You might argue that it is merely semantics to debate what agency means in its deepest sense. But the distinction between operational autonomy and philosophical agency is not just academic. It has real-world implications for how AI is designed, deployed, and governed. The most effective solutions combine classic automation with AI, using each where appropriate while maintaining clear human oversight – as the sketch below illustrates.
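For illustration only (the names and the refund scenario are hypothetical, not a description of any real product): one way to read “AI plus classic automation under human oversight” is that the AI component merely proposes, while a transparent, human-owned rule decides whether anything actually happens.

```python
# Hypothetical sketch: the model suggests, a plain deterministic rule
# (or a person) approves. Responsibility stays with the rule's owners.

def ai_propose_refund(case: dict) -> float:
    """Stand-in for a model's suggestion; here just a toy heuristic."""
    return min(case["amount"], 50.0) if case["complaint_valid"] else 0.0


def oversight_approves(proposed: float, case: dict) -> bool:
    """Classic automation: a transparent limit that humans remain accountable for."""
    return proposed <= 50.0 and case["complaint_valid"]


def handle_case(case: dict) -> float:
    proposed = ai_propose_refund(case)                                   # AI suggests
    return proposed if oversight_approves(proposed, case) else 0.0       # oversight decides


print(handle_case({"amount": 30.0, "complaint_valid": True}))   # 30.0
print(handle_case({"amount": 900.0, "complaint_valid": True}))  # capped at 50.0
```

The point of the pattern is not the toy numbers but the division of labour: the “agent” can be as clever as you like, yet the decision rule, and the accountability for it, stays with people.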
Ultimately, while AI systems can act autonomously within defined boundaries, their “agency” remains fundamentally limited and a result of human intent. Recognizing these limits is essential for ethical development, responsible deployment, and honest communication about what AI can – and cannot – do. The conversation about AI’s agency should be grounded in both technical realities and ethical clarity, ensuring that responsibility and accountability remain firmly with human actors.
P.S. Generative AI was asked to contribute to this article, but it did not come up with the subject nor the points itself.