
Unlocking AI-Native Builders Without Losing Control

David Greenberg

Chief Marketing Officer

The first wave of AI-native development changed who builds.

As we discussed in BlueRock’s first post, The Rise of the Citizen Developer, the fastest-growing development team inside the enterprise is no longer limited to engineering. It now includes people in sales, marketing, finance, operations, customer success, and every other function close to the work. These builders use AI tools to turn intent into systems faster than traditional software processes can absorb. That post framed the shift as structural: a change in where software creation begins.

The second wave changed who operates.

In The Rise of the Agentic Operator, we argued that these builders are no longer just creating workflows. They are increasingly responsible for systems that execute across models, tools, MCP servers, APIs, and downstream business systems. In agentic systems, the workflow is not the system. The execution path is the system. 

Now the enterprise has to face the third wave.

The question is no longer whether business users, AI-native developers, and agentic operators should be allowed to build. They already are. The question is whether the enterprise will enable them with the operational model they need, or restrict them with controls designed for a prior era. That is the script that needs to flip.

For too long, enterprise control has been treated as a brake. Something that slows teams down. Something imposed after innovation begins. Something developers work around when it becomes too heavy.

That model will not survive AI-native development.

The winning enterprises will be the ones that create the conditions for AI-native builders, operators, and security teams to move together—enabling more innovation, more automation, and more responsible execution at scale.

In the agentic era, control is not the opposite of speed. Control is what unlocks speed.


The Enterprise Mandate Has Changed

The rise of AI-native building is no longer speculative.

Gartner predicts that by 2026, 80% of low-code platform users will come from outside traditional IT departments, while 70% of new enterprise applications will use low-code or no-code technologies. 

At the same time, AI-assisted development is accelerating the volume and speed of software creation itself. GitHub’s 2025 Octoverse report found that more than 1.1 million public repositories now use LLM SDKs, with nearly 700,000 created in the past year alone — a 178% year-over-year increase. GitHub also reported that 80% of new developers adopt Copilot within their first week on the platform. 

The shift is already moving beyond assistance into autonomous execution. Recent research studying AI coding agents across more than 129,000 GitHub projects found adoption rates between 15% and 22% for agentic coding systems such as Cursor, Claude Code, Codex, and Devin, an unusually rapid adoption curve for tools capable of independently generating pull requests and executing development tasks. 

This is the critical point.

AI-native development is not simply making developers more productive. It is expanding who can create software, how execution is initiated, and how operational behavior enters the enterprise.

More builders now create execution paths.

More agents now participate in software delivery.

More business logic is being generated dynamically across tools, systems, APIs, MCP servers, and downstream infrastructure.

This is not a tooling shift.

It is an operational shift.

The traditional enterprise response is to slow adoption until these systems can be fully governed.

That instinct is understandable. But the organizations that win will not treat governance as a constraint on AI-native work. They will use it to safely expand what builders and agents are allowed to do.

Restriction Creates the Wrong Kind of Risk

Most enterprise controls were designed around a familiar assumption: software is created by a known team, reviewed before deployment, and operated through a defined path.

That assumption is breaking.

Citizen developers do not always begin with a formal application backlog. AI-native developers do not always wait for centralized tooling. Agentic operators do not always know every downstream system an agent will touch once execution begins.

This does not mean they are reckless. It means the work has moved faster than the operating model around it.

When enterprises respond with blanket restrictions, three things usually happen.

  • First, innovation slows in the places closest to the customer and the business problem.

  • Second, motivated teams find workarounds, creating shadow systems with even less visibility.

  • Third, the company loses the chance to shape how AI-native work should operate at scale.

That is the wrong tradeoff.

The enterprise should not be choosing between innovation and control. It should be redesigning control so innovation can scale.

This is where most organizations are still behind.

They are trying to govern AI-native work with models built for static software. But agentic systems are not static. Each agent behaves more like software with delegated authority: it has permissions, tool access, context, execution paths, exception paths, dependencies, and downstream effects.

That means the question is not simply, “Who built this?”

The better questions are:

  • What can this agent do?

  • Which tools can it call?

  • What systems can it touch?

  • What happens when it takes an unexpected path?

  • Who owns the outcome?

  • What context determines whether an action is trusted or risky?

Those are not traditional application governance questions.

They are Agentic Operations questions.
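To make the distinction concrete, the questions above can be expressed as a capability manifest attached to each agent. This is a hypothetical sketch: every field name and value (`AgentManifest`, `allowed_tools`, `fallback`, and so on) is an illustrative assumption, not any vendor's or standard's schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentManifest:
    """Illustrative answer to the Agentic Operations questions, per agent."""
    agent_id: str
    owner: str                                         # who owns the outcome
    allowed_tools: set = field(default_factory=set)    # which tools it can call
    allowed_systems: set = field(default_factory=set)  # what systems it can touch
    fallback: str = "halt_and_escalate"                # behavior on an unexpected path

    def can_call(self, tool: str) -> bool:
        return tool in self.allowed_tools

    def can_touch(self, system: str) -> bool:
        return system in self.allowed_systems

# A finance agent with narrowly delegated authority (example values).
manifest = AgentManifest(
    agent_id="invoice-reconciler",
    owner="finance-ops",
    allowed_tools={"read_ledger", "draft_email"},
    allowed_systems={"erp", "email"},
)

print(manifest.can_call("read_ledger"))    # True
print(manifest.can_call("wire_transfer"))  # False
```

The point of the sketch is that these answers become machine-checkable properties of the agent, rather than tribal knowledge about who built it.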

Agentic Operations Becomes the Key to Winning

Agentic Operations is emerging as a necessary operational discipline for enterprises adopting AI-native development and agent-driven systems at scale.

This is not simply an extension of DevOps, traditional observability, or existing governance models. Those disciplines were largely designed for deterministic systems where behavior could be defined, reviewed, and validated before execution.

Agentic systems operate differently.

Their behavior is shaped dynamically during execution through context, model reasoning, tool selection, permissions, downstream system interaction, and environmental conditions. The operational challenge is no longer just managing infrastructure or validating code quality. It is understanding and governing how execution unfolds in real time across increasingly distributed systems.

That shift changes what enterprises need operationally.

First, organizations need a connected view of execution rather than fragmented telemetry. Logs, prompts, traces, and gateway events provide useful signals, but none independently explain how decisions became actions or how those actions propagated across systems. Enterprises need visibility into the full flow of execution and downstream impact.

Second, organizations need execution-aware context. Actions cannot be evaluated in isolation. Whether behavior is appropriate or risky depends on the agent involved, permissions granted, systems accessed, data sensitivity, and the broader execution state.

Third, enterprises need governance that operates during execution, not just before or after it. Static policies, code review, and post-incident analysis remain important, but they are insufficient when behavior evolves dynamically in real time. Organizations increasingly need the ability to guide or constrain execution as actions occur.
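The three needs above can be sketched together as a gate that evaluates each proposed action during execution: it judges the action against execution-aware context (delegated tools, data sensitivity, oversight state) rather than in isolation, and appends every decision to a connected trace. All names and verdicts here are illustrative assumptions, not a real product's API.

```python
def gate_action(action, context, trace):
    """Allow, constrain, or block an action as it occurs, using execution context."""
    if action["tool"] not in context["granted_tools"]:
        verdict = "block"              # outside the agent's delegated authority
    elif action.get("data_sensitivity") == "high" and not context["human_in_loop"]:
        verdict = "hold_for_review"    # constrain in real time, not post-incident
    else:
        verdict = "allow"
    # Connected view: record how the decision became (or did not become) an action.
    trace.append({"action": action["tool"], "verdict": verdict})
    return verdict

trace = []
ctx = {"granted_tools": {"query_crm", "send_email"}, "human_in_loop": False}

gate_action({"tool": "query_crm"}, ctx, trace)                               # allow
gate_action({"tool": "send_email", "data_sensitivity": "high"}, ctx, trace)  # hold
gate_action({"tool": "delete_records"}, ctx, trace)                          # block
print(trace)
```

Note that the same `send_email` call would be allowed or held depending on context, which is exactly why isolated, pre-deployment review cannot answer the question.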

Finally, enterprises need a shared operational model that aligns development, security, operations, and business stakeholders around how agentic systems are introduced and managed. AI-native work does not remain confined to a single function, and the operational responsibility for these systems cannot remain siloed either.

This represents a broader shift in enterprise architecture and governance philosophy. Historically, organizations governed software primarily through centralized development processes. Increasingly, they will need to govern execution itself: how systems behave, how actions propagate, and how decisions translate into operational outcomes across interconnected environments.

The organizations that adapt successfully will not be those that slow AI-native development the most. They will be the ones that establish operational models capable of enabling broader participation, faster iteration, and responsible execution simultaneously.

Control Is the New Innovation Layer

The companies that win with AI will not be the ones that simply deploy the most copilots, agents, or automation tools.

They will be the ones that make AI-native work OPERABLE.

McKinsey recently argued that the durable value from AI will not come from productivity alone, but from reshaping offerings, business models, and market structures before competitors do. That requires more than experimentation. It requires the ability to safely move new forms of AI-driven work into production. 

This is where control becomes strategic:

  • If an organization cannot see what agents are doing, it will limit what agents are allowed to do.

  • If it cannot understand execution, it will keep high-value use cases in pilots.

  • If it cannot govern actions in real time, it will restrict the very builders who are closest to the business opportunity.

But if the organization can operate agentic execution with confidence, the model changes.

  • More people can build.

  • More workflows can become agents.

  • More agents can move into production.

  • More business functions can automate meaningful work.

  • More innovation can happen closer to the problem.

That is the competitive advantage.

The next enterprise divide will not be between companies that use AI and companies that do not. That divide is already closing.

The next divide will be between companies that restrict AI-native work because they cannot control it, and companies that unlock AI-native work because they built the operational model to govern it.

The first group will move cautiously because they lack operational confidence.

The second group will move faster because they made discipline part of execution itself.

Closing Thoughts

The rise of the citizen developer expanded who can create software and automation inside the enterprise.

The rise of the agentic operator expanded who is responsible for how those systems behave in production.

The next shift is organizational. Enterprises must develop operational models that allow AI-native builders and agent-driven systems to scale responsibly across the business without losing visibility, governance, or execution control.

That is the role of Agentic Operations.

Not to restrict innovation, but to create the operational foundation required for broader participation, faster execution, and confident adoption of AI-native work at enterprise scale.



FAQ

What is Agentic Operations and why does it matter for enterprises?

Agentic Operations is an emerging discipline for governing AI-native development and agent-driven systems at scale. Unlike DevOps or traditional observability — which were designed for deterministic systems reviewed before deployment — Agentic Operations addresses systems whose behavior is shaped dynamically during execution through context, model reasoning, tool selection, and downstream system interaction. It matters because enterprises can no longer rely on pre-deployment review alone when agents are generating and executing logic in real time.

Why does restricting AI-native builders create risk instead of reducing it?

Blanket restrictions on AI-native builders typically produce three outcomes: innovation slows closest to the customer, motivated teams create shadow systems with even less visibility, and the enterprise loses the chance to shape how AI-native work operates at scale. The result is more risk, not less — just risk that is harder to see. The better approach is redesigning controls so they operate during execution, not just before or after it.

What's the difference between traditional enterprise governance and agentic execution governance?

Traditional enterprise governance assumes software is created by a known team, reviewed before deployment, and operated through a defined path. Agentic execution governance addresses a different reality: agents have delegated permissions, dynamic tool access, evolving execution paths, and downstream effects that cannot be fully predicted at design time. The relevant questions shift from 'who built this?' to 'what can this agent do, which systems can it touch, and what happens when it takes an unexpected path?'

Can enterprises enable AI-native builders and maintain security and compliance at the same time?

Yes — but it requires operational infrastructure designed for agentic execution, not adapted from static software governance. Enterprises need connected execution visibility across prompts, traces, and tool calls; execution-aware context that evaluates actions relative to permissions and data sensitivity; and governance that operates in real time rather than only at code review or post-incident analysis. Organizations that build this operational model can safely expand what builders and agents are allowed to do, rather than restricting both to manage risk.

How fast is AI-native development actually growing inside enterprises?

Adoption is accelerating rapidly across multiple dimensions. GitHub's 2025 Octoverse report found over 1.1 million public repositories now use LLM SDKs, with nearly 700,000 created in the past year — a 178% year-over-year increase. Agentic coding tools like Cursor, Claude Code, Codex, and Devin show adoption rates between 15% and 22% across 129,000+ GitHub projects. Gartner projects that by 2026, 80% of low-code platform users will come from outside traditional IT departments.