Artificial intelligence is entering a phase in which the central question is no longer whether a model can generate text or automate a task, but whether companies can redesign work so that AI becomes part of the operating model. Sanjin Bičanić from Bain argues that value does not come from adding tools to existing processes, but from rethinking work around AI agents.
In his view, the first large language models were statistical systems for modelling language. Development is now moving toward reasoning models and agents that can perform concrete tasks. “The original large language models were mathematical constructs that model language as statistics,” Bičanić says. With the next generation, usefulness changes: “When you move into reasoning models, they are still probabilistic machines, but because they can reason about what they are reasoning about, they begin to look more like human reasoning.”
The decisive factor is not the model alone. Bičanić stresses that agents do not become useful out of the box. They need instructions, tools, and context. “When you put them in a harness, and they become agents, under the hood, it is still probabilistic, but you find that they actually do begin to do useful work,” he says. What matters is the “combination of the model, which is the engine, and the harness, which is the tools and the instructions around it.”
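The "engine plus harness" idea can be sketched in a few lines of code. The sketch below is purely illustrative, not any vendor's API: the model is stubbed out as a plain function, and the harness supplies the three things Bičanić names, instructions, tools, and a loop that executes tool calls and feeds the results back as context.

```python
# Hypothetical sketch of "model + harness". The model proposes the next
# action; the harness provides instructions, named tools, and the loop
# that executes tool calls. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    instructions: str                       # the scaffolding: role, tone, constraints
    tools: dict[str, Callable[[str], str]]  # named tools the harness exposes
    model: Callable[[str], str]             # the probabilistic engine (stubbed below)

    def run(self, task: str, max_steps: int = 5) -> str:
        context = f"{self.instructions}\nTask: {task}"
        for _ in range(max_steps):
            action = self.model(context)        # model picks the next step
            if action.startswith("TOOL:"):
                name, _, arg = action[5:].partition(" ")
                result = self.tools[name](arg)  # harness executes the tool
                context += f"\nObservation: {result}"
            else:
                return action                   # plain text means a final answer
        return "Gave up after max_steps."

# Stub model: calls a tool once, then answers based on the observation.
def stub_model(context: str) -> str:
    if "Observation:" not in context:
        return "TOOL:lookup order-42"
    return "Your order 42 has shipped."

agent = Agent(
    instructions="You are a support assistant. Be brief.",
    tools={"lookup": lambda arg: f"{arg}: shipped"},
    model=stub_model,
)
print(agent.run("Where is my order?"))  # → Your order 42 has shipped.
```

Even in this toy form, the division of labour is visible: the model remains probabilistic, while the harness decides what context it sees, what tools it may call, and when the loop ends.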
That is why some expectations placed on AI agents are unrealistic, especially when they are asked to take over context-sensitive tasks. Answering email may appear simple, but it requires tone, relationships, priorities, and personal style. “For AI to answer an email, you do have to provide quite a bit of scaffolding around it,” Bičanić says. “It turns out responding to email is actually one of the more difficult things to do.” The first successful use cases appear where the task is well defined and the output is easy to verify.
For companies, the decisive difference is not whether they use AI, but how they use it. Those that merely add AI to existing processes see limited impact, mostly individual time savings. “If you just add AI tools on top of the processes as they are today, what you get is micro-productivity,” he explains. “An individual gets five, ten, or fifteen more minutes per day, but the organization as a whole does not really benefit.” Companies that redesign the process with AI agents at the centre have a better chance of generating productivity gains.
Bičanić illustrates the point with the shift from steam engines to electric motors in factories. Companies that simply swapped an electric motor into the central steam-driven pulley system saw almost no productivity improvement. Those that understood that electricity let them reorganize the factory floor, with a motor at each machine, became far more efficient. “It is a little bit similar to AI,” Bičanić says. “Companies that take the time to reimagine the work with agents in the centre are getting much more value than those who are just applying it to the existing process.”
Looking ahead to the next five years, Bičanić rejects certainty as a serious forecasting position. “The reality is nobody knows,” he says. Models will continue to improve, but the main constraint will not be intelligence alone. It will be the ability of companies to diffuse AI into complex business environments. “The real world is messy, and far messier than what a bunch of people in Silicon Valley will lead you to believe,” Bičanić concludes.