GenAI: Easy to use, harder to manage

Turning AI into real business transformation requires more than adopting new tools. It means building the organization around abundant cognition, valuable judgment and compounded learning.

May 7, 2026

By Sampsa Samila

The first time most people experiment with generative AI, they are struck by how easy the technology is to use and how widely useful it is. Such ease and versatility have fueled widespread adoption of GenAI at the individual level, with over half of U.S. adults using it in the past year. The generalist abilities of the models mean employees can do tasks beyond their previous capabilities and save real time. But what works well for individuals is not the same as what works for companies. At the company level, those individual gains have yet to show up in profits or productivity.

The gap between individual gains and firm-level impact isn’t closing by itself. Managers are needed to close it, and they need the same management principles as before, applied to a technology with new properties. As the economists Carl Shapiro and Hal Varian put it a generation ago, “Technology changes, the laws of economics do not.” Price theory still predicts what happens when the cost of an input drops, but the input now is cognitive work, which is different from a commodity or a component. Organizational design still matters, but the boundary between human and machine tasks is a new challenge. Companies still need purpose, which, as a guiding principle in AI deployment, helps determine what to automate, what to augment, what to protect and what to refuse.

For broad transformation, companies need repeatable, scalable processes as well as leaders who know how to make them work in their organizations. AI’s new leadership challenge involves thinking in terms of processes and multilayered systems, which is a new framing for many people. It requires building a human organization that continuously renews and develops tacit knowledge around judgment, values and culture. And it demands the managerial agility to do all of this on a fast-moving technological frontier.

Mapping a new territory

Navigating a new territory starts with a map of that territory, an understanding of what’s new about the technology itself. What appeals to individuals is that foundation models such as ChatGPT or Claude are extraordinary generalists. They can write, summarize, translate, code, analyze and do almost any task in written language.

But for companies, that generality comes with a cost: the models have no knowledge of a particular company’s context, they can produce confident-sounding errors (hallucinations) and their accuracy in any specific domain tops out well below what most business processes require. As the same models move from producing outputs that the user reviews to taking actions on the user’s behalf, the stakes of that ceiling change: a confident error is no longer a misleading paragraph but a misguided action with operational consequences. And in a multistep process, those errors compound. What companies need for their operations is not a generalist but something closer to a specialist. Getting there means building a stack of capabilities on top of the foundation model to turn a generalist into something that can do the company’s work reliably.

The stack can have several layers:

  • Closest to the foundation model, fine-tuning adapts it to a particular domain by training it further on relevant data, trading generality for accuracy.
  • A step removed, retrieval-augmented generation (RAG) gives the model access to proprietary documents and data at the moment of the query, grounding outputs in real company contexts without changing the underlying model.
  • Prompting shapes how the model is used, with prompt libraries and templates standardizing the inputs that produce reliable outputs.
  • Integration embeds the model into specific workflows, with user interfaces, process design and authorization scoping determining where human judgment enters, where it doesn’t and what the system is permitted to do on its own.
  • Finally, governance sets the limits: what AI is allowed to do, what it must not do and how its outputs are reviewed.

Not every company needs every layer, but every company that uses AI at scale has to decide which layers to build, which to buy and how to manage the resulting complexity.
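To make the upper layers of the stack concrete, here is a toy sketch in Python. It is purely illustrative, with no real model call: every function name, document and rule below is invented. It shows the shape of two layers, a retrieval step that grounds a prompt in company documents (the RAG layer) and a check that refuses actions the system is not permitted to take (the governance layer).

```python
# Illustrative sketch of two stack layers: retrieval (RAG) and governance.
# All names, documents and rules are invented for illustration.

COMPANY_DOCS = {
    "refund_policy": "Refunds are issued within 14 days of purchase.",
    "shipping_policy": "Standard shipping takes 3-5 business days.",
}

# Governance layer: actions the system must never take on its own.
BLOCKED_ACTIONS = {"delete_customer_record", "issue_payment"}


def retrieve(query: str) -> list[str]:
    """Naive keyword match standing in for a real vector search."""
    words = set(query.lower().split())
    return [
        text
        for name, text in COMPANY_DOCS.items()
        if words & set(name.split("_")) or words & set(text.lower().split())
    ]


def build_prompt(query: str) -> str:
    """RAG layer: ground the model's input in retrieved company context."""
    context = "\n".join(retrieve(query)) or "No relevant documents found."
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."


def governance_check(proposed_action: str) -> bool:
    """Governance layer: allow an action only if it is not explicitly blocked."""
    return proposed_action not in BLOCKED_ACTIONS


prompt = build_prompt("What is the refund policy?")
print(prompt)                                 # the retrieved policy grounds the prompt
print(governance_check("send_summary_email"))  # not blocked, so permitted
print(governance_check("issue_payment"))       # explicitly blocked
```

The point of the sketch is the division of labor: the retrieval step supplies company context the foundation model lacks, and the governance check sits outside the model entirely, so its limits hold no matter what the model proposes.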

Managing AI’s relocated complexity

Every process has a minimum complexity that matches the real-world situation it handles. That complexity cannot be removed, only relocated. To the user, a well-designed AI tool feels simple: you type a question, you get an answer. But the complexity the user no longer sees hasn’t disappeared; it has moved into the stack and, more importantly, into the organization that runs it.

The generality-accuracy-simplicity trade-off

Sampsa Samila

Professor in the Strategic Management Department at IESE Business School, where he is the Academic Director of the Artificial Intelligence and the Future of Management Initiative.