
In Love in the Time of Cholera, Gabriel García Márquez brings together three forces that, seen from today’s perspective, closely resemble what is happening with Artificial Intelligence in companies: the patient obsession of Florentino Ariza, the pragmatic decision-making of Fermina Daza, and the rational discipline of Dr. Juvenal Urbino. Florentino lives driven by the promise of a future that "must" arrive; Fermina, instead, breaks the idealization and demands reality; Urbino orders the world with method, science, and control. Between the three, the novel offers an uncomfortable lesson: when a collective emotion accelerates—love, fear, or expectation—the greatest risk is not feeling it, but letting it replace judgment.
Today, many organizations are trapped in their own version of that triangle. On one hand, there is hype, haste, and even a touch of hysteria to try, pilot, and "implement AI" before the competitor. Demos are chased, initiatives are announced, and experiments multiply, sometimes without a clear business question or a roadmap to bring them into operation. On the other hand, Fermina appears: the corporate immune system, that natural reaction to the unknown. People fear making mistakes, fear being singled out, fear that an algorithm will expose what is not controlled, or that change will make them redundant. And as pressure increases, Urbino becomes indispensable: because without method—governance, roles, reliable data, metrics, and controls—AI does not become an advantage, but rather noise.
The problem is that, amidst this tension, most teams do not yet have the essentials: knowledge, capabilities, and, above all, time to relearn and experiment safely. They are asked to innovate "at the speed of the market" with saturated agendas, without space to practice, to fail without cost, and to turn learning into standards. Thus, AI begins to look less like a transformation and more like an emotional race: enthusiasm at the top, anxiety at the bottom, and inconsistent results. But there is a way out: moving from haste to strategy, from fear to trust, and from isolated pilots to organizational capability.
In this transition, the role of leadership becomes decisive: not as a spectator of the phenomenon, but as the architect of the direction. Defining the future state—that north star—practicing role modelling, and becoming the cornerstone of a sustained change in organizational culture will allow companies to build their "cyborg" future, where humans and technology work as a single system without losing judgment or responsibility. To achieve this, more than a catalog of tools, a concrete set of guarantees is required that CEOs and their teams must ensure.
First, leadership that supports change and stimulates innovation and the constant adoption of any technology: today we talk about agentic AI; in 12 months, probably something else, but the capacity for adaptation must remain. Second, the development of a culture of exploration, a circle of trust that enables testing without paralysis, with controlled risks and clear rules. Third, teams capable of leading a workforce with a new taxonomy of skills, where value shifts toward soft skills, judgment, collaboration, and pragmatic thinking rather than isolated content or expertise. And fourth, the redesign of the decision architecture, combining information generated by technology with the strategic and contextual sense of the team, so that AI serves as an input and accelerator, not a substitute for judgment.
In the end, the challenge is not choosing between drive, prudence, or method, but knowing when to summon each one. Without Florentino, there is no ambition or energy to move; without Fermina, there are no limits or reality; without Urbino, there is no system to sustain what has been achieved. Leading in the era of AI requires precisely that balance: passion guided by judgment and converted into organizational capability.