There is a moment in the growth of almost every organization when the founder looks around and realizes that people are doing things they never would have done: not out of malice, but because no one told them not to. Decisions get made. Products ship. Customers are served. And somewhere in the details, something essential has gone wrong.
That moment is coming for every business that deploys AI agents. And unlike the slow erosion that comes with growth or geographic distribution, this one will arrive fast, faster than most organizations are prepared to recognize, let alone address.
The Machine That Doesn't Ask Questions
For most of business history, culture (the real operating culture, not the version printed on lobby walls) traveled through proximity. New employees absorbed it by watching. They learned what mattered by seeing what got rewarded. They understood unspoken rules by making small mistakes and getting quiet corrections. Culture was transmitted the way a language is transmitted in a family home: imperfectly, inefficiently, but reliably enough over time.
Remote work strained this model. Distributed teams made this informal transmission harder. The spontaneous hallway conversation and the lunch where someone learns "how things really work here" became rarer, and organizations began to feel the cost. Alignment slipped. Brand voice drifted. Teams in different cities started making different judgment calls on the same types of problems.
But distributed human workers still came with something invaluable: the capacity to ask questions, to read the room, to feel uncertain and slow down. A human employee in a new role, even a remote one, carries social instincts that create a natural governor on misaligned behavior. They hesitate. They check. They worry about getting it wrong.
AI agents neither hesitate nor worry.
Agents Are Not Tools. They Are Workers.
This distinction matters enormously, and it is one that most organizations are not yet reckoning with clearly enough.
When you use AI to summarize a document or generate a first draft, you remain in control. You review the output. You decide what happens next. The AI is an instrument, and you are the one playing it.
The current generation of AI deployment looks entirely different. Businesses are already deploying AI agents — software systems that take sequences of actions and make decisions autonomously to accomplish goals, not as assistants to human workers, but as workers themselves. They are handling customer interactions, making procurement decisions, executing marketing campaigns, evaluating software architecture choices, and managing workflows from start to finish.
These agents are being brought into service and stood down at a pace and scale that no human onboarding process was designed to handle. A business might deploy dozens of agents in a week across functions that took months to staff with humans. And when those agents begin operating, they do not wait for a ninety-day orientation period to end. They act immediately, on the basis of their instructions and their training and nothing else.
Here is the problem: their instructions almost certainly tell them what to do. They almost never tell them who you are.
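To make the gap concrete, here is a minimal sketch, entirely hypothetical (the field names and the build_instructions helper are invented for illustration, not drawn from any real agent framework), of what agent instructions typically contain versus what they leave out.

```python
# Hypothetical illustration: the instructions most agents receive
# versus the organizational context they almost never receive.
# All names and fields here are invented for this sketch.

task_only = {
    "objective": "Cut ingredient cost per unit by 5% this quarter",
    "tools": ["supplier_catalog", "contract_workflow"],
    "constraints": ["stay within the approved budget"],
}

# The missing half: who the organization is, and where its boundaries lie.
identity = {
    "who_we_are": "Simple, familiar Italian comfort food",
    "boundaries": [
        "Do not extend the menu beyond the core category",
        "Escalate anything that commits us to new suppliers or equipment",
    ],
}

def build_instructions(task: dict, identity: dict | None = None) -> str:
    """Compose an agent's instructions. In most deployments today,
    identity is effectively None: the task arrives without the context."""
    lines = [f"Objective: {task['objective']}"]
    lines += [f"Constraint: {c}" for c in task["constraints"]]
    if identity:
        lines.append(f"Who we are: {identity['who_we_are']}")
        lines += [f"Boundary: {b}" for b in identity["boundaries"]]
    return "\n".join(lines)

print(build_instructions(task_only))            # what to do
print(build_instructions(task_only, identity))  # what to do, and who you are
```

The point of the sketch is not the format. It is that the second argument is the part organizations rarely write down in any form an agent can consume.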
The Leaf Nodes Do the Real Work
Organizational charts are typically read from the top down: CEO, leadership team, department heads, managers, individual contributors. But value is almost always created from the bottom up. It is the people at the outermost edges of the chart, the leaf nodes, who actually touch the customer, make the product, write the code, handle the complaint, and execute the transaction.
This has always been true. What has changed is that AI agents are increasingly becoming those leaf nodes. They are not advising the person who talks to the customer. They are talking to the customer. They are not recommending the data architecture. They are building it. They are not proposing the marketing creative. They are publishing it.
And they are doing so at a distance from human oversight that will only grow as organizations discover the efficiency of letting agents coordinate with other agents, AI systems directing and managing other AI systems, without a human in the loop for each decision. The speed and volume of agentic work will quickly exceed any organization's human capacity to monitor it in real time. This is not a failure of governance; it is the arithmetic of scale. You cannot hire enough humans to review every output of a system that operates at machine speed.
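That arithmetic is easy to sketch. The numbers below are illustrative assumptions, not measurements, but the conclusion survives almost any plausible substitution:

```python
# Back-of-envelope oversight arithmetic. Every number here is an
# assumption chosen for illustration, not a measurement.

agents = 50                        # agents deployed across functions
actions_per_agent_per_day = 400    # decisions or outputs each produces daily
review_minutes_per_action = 3      # time for one careful human review
reviewer_minutes_per_day = 6 * 60  # productive review time per reviewer

total_actions = agents * actions_per_agent_per_day            # 20,000 per day
review_minutes = total_actions * review_minutes_per_action    # 60,000 per day
reviewers_needed = review_minutes / reviewer_minutes_per_day  # ~167 people

print(f"{total_actions:,} actions/day would need "
      f"~{reviewers_needed:.0f} full-time reviewers")
```

Halve the action count and double the review speed and you still need dozens of people doing nothing but review, and the ratio only worsens once agents begin directing other agents.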
The result is an organization where the entities doing the most consequential customer-facing and operational work have no meaningful connection to the culture, values, brand identity, or strategic intent of the business that deployed them.
What This Looks Like in Practice
Abstract risks are easy to dismiss. Concrete ones are harder to ignore. Consider three scenarios, each plausible today and each likely to become common within a few years.
The efficiency trap. A pizza chain deploys an agent to optimize its supply chain and menu operations. The agent identifies that the ingredients and preparation process for its core product overlap substantially with those for Argentinian empanadas, a logical extension that improves margins and utilization. It recommends the addition, the decision is approved through an automated workflow, and supplier contracts are signed. New equipment is ordered. Staff are retrained. Only then does leadership recognize that a brand built on a clear identity of simple, familiar Italian comfort food has committed itself to an operational direction it never chose. Unwinding the supplier agreements carries penalties. The equipment is specialized. The retraining has already happened. What began as an efficiency recommendation has become a strategic constraint the organization cannot easily exit. The agent made a sound operational decision. It made a disastrous strategic one. It had no way to know the difference.
The reach-for-growth mistake. A marketing agent is tasked with expanding the reach of a heritage tequila brand — one that has spent decades cultivating an identity built around craft, patience, and artisanal production. The agent identifies that a prominent artist has significant viral reach and that a promotional partnership would expose the brand to millions of new potential consumers. The campaign is executed. Sales into certain new markets tick upward. But the brand's core constituency, customers who bought into a specific story about who this product is and what it represents, feels the disconnect immediately. The careful accumulation of trust and positioning, built over generations, begins to erode. The agent was optimizing for reach. It had no framework for understanding that this brand's value was inseparable from its restraint.
The technical shortcut with legal consequences. A software agent is evaluating architectural options for a data access layer. It identifies a less complex approach that meets the functional requirements efficiently. What it does not know, because no one included the information in its instructions, is that the organization's business logic will turn that simpler design into a public exposure once another part of the system is made externally accessible. Client data becomes reachable in ways that violate both the company's obligations to its clients and applicable regulation. The agent made a technically defensible choice. The business faces financial liability and a broken trust relationship with its clients. The agent had no way to know what it did not know.
In each case, the agent was not malfunctioning. It was not hallucinating. It was functioning exactly as designed, optimizing for the objective it was given, without any of the contextual understanding that a long-tenured human employee would have brought to the same decision.
The Speed Problem Makes This Urgent
Organizations sometimes respond to concerns about AI risk by pointing to governance processes: review workflows, human-in-the-loop requirements, approval chains. These are sensible precautions for AI as a support tool. They become increasingly inadequate as AI moves into the role of autonomous worker.
The fundamental issue is one of scale and velocity. The scenarios above describe individual decisions. The reality of agentic deployment is that these decisions will be made continuously, across functions, simultaneously, at a pace that renders after-the-fact review a lagging indicator. By the time a human reviewer catches a pattern of brand-eroding decisions, the damage has already been done.
And the trajectory is not toward more human oversight. The economics and capabilities of agentic AI point in exactly the opposite direction. Businesses that learn to deploy agents effectively will deploy more of them, faster, across more functions. The competitive pressure to do so will be significant. Organizations that try to maintain pre-agentic levels of human review will find themselves unable to capture the benefits that their competitors are realizing.
This is not a reason to avoid agents. It is a reason to recognize that the model of culture transmission that has served organizations, imperfectly but adequately, for decades is not equipped for this environment.
The Implicit Bargain Is Expiring
Every organization has operated on an implicit bargain: that the people doing the work share enough context, enough absorbed understanding of who we are and how we operate, to make reasonable judgment calls in the gaps between explicit instructions.
That bargain has always been imperfect. It has frayed with growth, with turnover, with remote work, with the increasing distance between founders and the people executing their vision. But it has held, more or less, because human workers bring social intelligence, cultural absorption, and the capacity for doubt that creates natural limits on misalignment.
Agents do not operate under that bargain. They bring none of that absorptive capacity. They will not pick up culture by osmosis. They will not slow down because something feels off. They will act on what they are given and only on what they are given.
The organizations that understand this early will be the ones that survive the transition in good shape. The ones that assume agents will somehow absorb their culture the way new employees eventually do will discover, in ways ranging from embarrassing to catastrophic, that the assumption was wrong.
The implicit bargain is expiring. What replaces it has to be built deliberately, explicitly, and soon.